Visual Quality Assessment (VQA)
We design algorithms that predict the visual quality of images and videos with respect to technical and perceptual aspects, e.g. quality of experience (QoE). The tools of our trade include crowdsourcing, machine learning (in particular deep networks), and eye-tracking. To support this work, we are creating massive multimedia databases suitable for training generic and accurate VQA models.
The visual quality of images and videos is typically assessed by human observers, yielding mean opinion scores (MOS). To reduce cost and time, automated methods for image/video quality assessment (IQA/VQA) are highly desirable for the multimedia and signal processing communities. Current VQA methods are designed solely to capture aspects of the technical quality of displayed video streams. Beyond such visual quality, we aim at methods that characterize images and videos in terms of other perceptual aspects.
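A MOS is simply the average of the individual opinion scores collected for one stimulus; a minimal sketch (the 5-point ACR scale and the example ratings are illustrative assumptions, not data from our databases):

```python
import statistics

def mean_opinion_score(ratings):
    """Aggregate individual opinion scores (e.g. on a 1-5 ACR scale)
    into a mean opinion score (MOS).

    Returns the mean and the sample standard deviation, which is often
    reported alongside the MOS as a measure of rater agreement.
    """
    mos = statistics.mean(ratings)
    sd = statistics.stdev(ratings) if len(ratings) > 1 else 0.0
    return mos, sd

# Hypothetical ratings from five observers on a 5-point ACR scale
mos, sd = mean_opinion_score([4, 5, 3, 4, 4])
```

In practice the raw scores are additionally screened for unreliable raters before averaging; the standard deviation (or a confidence interval derived from it) indicates how much observers disagreed.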
These aspects include the number and magnitude of eye movements required for viewing the content, the viewer's appreciation of the use of color, and the degree of interestingness. Several such components are combined with visual quality into an overall rating of perceptual quality. Moreover, by investigating the human perceptual process and by understanding psychophysical phenomena, we will develop a saliency model based on a Markov model of eye movements.
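To illustrate the Markov-model idea: image regions can be treated as states of a Markov chain whose transitions mimic gaze shifts, with the stationary distribution serving as a saliency map (this is the principle behind graph-based visual saliency; the toy features and transition rule below are our own simplifying assumptions, not the model from the project):

```python
import numpy as np

def markov_saliency(features):
    """Toy saliency model: image regions are states of a Markov chain
    whose transitions favour dissimilar regions; the stationary
    distribution of the chain serves as the saliency map.

    features: (n,) array with one scalar feature (e.g. mean intensity)
    per region.
    """
    n = len(features)
    # Transition weights proportional to pairwise feature dissimilarity
    w = np.abs(features[:, None] - features[None, :]) + 1e-9
    np.fill_diagonal(w, 0.0)
    p = w / w.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    # Lazy power iteration (the 0.5 damping avoids periodic oscillation)
    s = np.full(n, 1.0 / n)
    for _ in range(200):
        s = 0.5 * s + 0.5 * (s @ p)
    return s / s.sum()

# The outlier region (feature 0.9) attracts the most "gaze" mass
sal = markov_saliency(np.array([0.1, 0.1, 0.9, 0.1]))
```

A real model would estimate the transition probabilities from recorded eye-tracking data rather than from a hand-picked dissimilarity rule.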
Additionally, we will bring the state of the art a step closer to reality by setting up and applying media databases with authentic distortions and diverse content, in contrast to current scientific datasets, which contain only a small variety of content and 'artificial' distortions.
Given an image or video whose visual quality is to be assessed, the question arises which IQA/VQA algorithm should be applied. Instead of choosing an algorithm based on its performance on a fixed test database, a better quality assessment can be expected when the algorithm is chosen based on a test database consisting only of images/videos similar to the query image/video.
Several questions arise:
- What type of similarity is the most appropriate for this application?
- What statistical/perceptual features should be extracted to express similarity?
- How can the statistical/perceptual similarity of the input image and the test images be estimated?
- Should algorithms be combined to get more robust results?
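One simple way to operationalise similarity-based algorithm selection is a nearest-neighbour vote: for a query image, find the most similar database images in some feature space and apply whichever algorithm performed best on them. A minimal sketch, in which the feature space, the algorithm labels, and the distance metric are all illustrative assumptions:

```python
import numpy as np

def select_algorithm(query_feats, db_feats, db_best_algorithm, k=3):
    """Pick an IQA algorithm for a query image by majority vote among
    its k nearest neighbours in a feature space.

    query_feats: (d,) feature vector of the query image
    db_feats: (n, d) feature vectors of the test database images
    db_best_algorithm: length-n list naming the best-performing
        algorithm for each database image (hypothetical labels)
    """
    dists = np.linalg.norm(db_feats - query_feats, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = [db_best_algorithm[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Hypothetical 2-D features and per-image best-algorithm labels
db_feats = np.array([[0, 0], [0, 1], [5, 5], [5, 6], [0, 0.5]])
labels = ["brisque", "brisque", "niqe", "niqe", "brisque"]
choice = select_algorithm(np.array([0.1, 0.2]), db_feats, labels)
```

Combining the predictions of several algorithms (e.g. weighting them by neighbour distance) is a natural extension of the same scheme.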
The MMSP VQA Database Collection
The KoNViD-1k Database
Subjective video quality assessment (VQA) strongly depends on semantics, context, and the types of visual distortions. Most existing VQA databases cover small numbers of video sequences with artificial distortions. Newly developed quality of experience (QoE) models and metrics are commonly evaluated against subjective data from such databases, obtained in perception experiments. However, since the aim of these QoE models is to accurately predict the quality of natural videos, such artificially distorted video databases are an insufficient basis for learning. Additionally, their small sizes make them only marginally usable for state-of-the-art learning systems such as deep learning. To provide a better basis for the development and evaluation of objective VQA methods, we have created a large dataset of natural, real-world video sequences with corresponding subjective mean opinion scores (MOS) gathered through crowdsourcing.
The KonIQ-10k Database
The main challenge in applying state-of-the-art deep learning methods to predict image quality in the wild is the relatively small size of existing quality-scored datasets. The reason for the lack of larger datasets is the massive effort required to generate diverse and publishable content. To this end, we have created a large IQA database of natural, real-world images with corresponding mean opinion scores (MOS) gathered through crowdsourcing.
The IQA-Experts-300 Database
Experts and naive observers differ considerably in their judgments of aesthetics. Does this apply to image quality assessment as well? If it does, should we care more about expert-like opinions or those of laypeople? In our paper we propose a screening approach to find reliable and effectively expert crowd workers for image quality assessment (IQA). Our method measures the users' ability to identify image degradations by using test questions, together with several relaxed reliability checks.
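The core of any test-question screening is straightforward: score each worker against questions with known answers and keep those above an accuracy threshold. A minimal sketch (the data layout and the 75% threshold are illustrative assumptions, not the relaxed checks from the paper):

```python
def screen_workers(answers, ground_truth, threshold=0.75):
    """Screen crowd workers by their accuracy on test questions with
    known correct answers; workers below the threshold are rejected.

    answers: dict mapping worker id -> list of that worker's answers
    ground_truth: list of correct answers, same order as each list
    """
    accepted = []
    for worker, given in answers.items():
        correct = sum(a == g for a, g in zip(given, ground_truth))
        if correct / len(ground_truth) >= threshold:
            accepted.append(worker)
    return accepted

# Hypothetical workers: w1 answers all 4 test questions correctly, w2 only 2
kept = screen_workers({"w1": [1, 2, 3, 4], "w2": [1, 2, 0, 0]}, [1, 2, 3, 4])
```

In a crowdsourcing study the test questions would be interleaved with real rating tasks so that workers cannot distinguish them.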
The KonPatch-30k Database
Image quality assessment (IQA) has been studied almost exclusively as a global image property. It is common practice for IQA databases and metrics to quantify this abstract concept with a single score per image. In an attempt to extend the notion of quality to spatially restricted sub-regions of images, we designed a novel database of individually quality-annotated image patches.
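The prerequisite for patch-level annotation is cutting each image into sub-regions that can be scored independently. A minimal sketch of non-overlapping patch extraction (the patch size and grid layout are illustrative assumptions, not the sampling scheme used for the database):

```python
import numpy as np

def extract_patches(image, size):
    """Cut an image into non-overlapping square patches so that each
    patch can receive its own quality annotation; remainders at the
    right and bottom edges are discarded.

    image: (H, W) or (H, W, C) array; size: patch side length in pixels.
    """
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            patches.append(image[y:y + size, x:x + size])
    return patches

# A hypothetical 64x64 image yields a 4x4 grid of 16x16 patches
patches = extract_patches(np.zeros((64, 64)), 16)
```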
- Hosu, V., Lin, H., Sziranyi, T., Saupe, D. - KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment, arXiv:1910.06180 [cs.CV], October 2019.
- Wagner, M., Lin, H., Li, S., Saupe, D. - Algorithm selection for image quality assessment, arXiv:1908.06911 [cs.CV], August 2019.
- Fan, C., Lin, H., Hosu, V., Zhang, Y., Jiang, Q., Hamzaoui, R., Saupe, D. - SUR-Net: Predicting the satisfied user ratio curve for image compression with deep learning, International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany, June 2019, IEEE Press.
- Hosu, V., Goldlücke, B., Saupe, D. - Effective aesthetics prediction with multi-level spatially pooled features, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Press, pp. 9375-9383, L.A., USA, June 2019.
- Men, H., Lin, H., Hosu, V., Maurer, D., Bruhn, A., Saupe, D. - Visual quality assessment for motion compensated frame interpolation, International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany, June 2019, IEEE Press.
- Lin, H., Hosu, V., Saupe, D. - KADID-10k: A large-scale artificially distorted IQA database, International Conference on Quality of Multimedia Experience (QoMEX), Berlin, Germany, June 2019, IEEE Press.
- Saupe, D., Kaup, A., Ohm, J. (eds.) - 5th ITG/VDE Summer School on Video Compression and Processing (SVCP), Institutional Repository of the University of Konstanz (KOPS), June 2019.
- Men, H., Lin, H., Hosu, V., Maurer, D., Bruhn, A., Saupe, D. - Technical report on visual quality assessment for frame interpolation, arXiv:1901.05362 [cs.CV], 2019.
- Spicker, M., Hahn, F., Lindemeier, T., Saupe, D., Deussen, O. - Quantifying visual abstraction quality for computer-generated illustrations, ACM Transactions on Applied Perception (TAP), December 2018, in press.
- Jenadeleh, M., Pedersen, M., Saupe, D. - Realtime quality assessment of iris biometrics under visible light, IEEE Computer Society Workshop on Biometrics (CVPR), 2018.
- Varga, D., Sziranyi, T., Saupe, D. - DeepRN: A content preserving deep architecture for blind image quality assessment, IEEE International Conference on Multimedia and Expo (ICME), 2018. (method code, reimplemented)
- Wiedemann, O., Hosu, V., Lin, H., and Saupe D. - Disregarding the big picture: Towards local image quality assessment, 10th International Conference on Quality of Multimedia Experience (QoMEX), 2018.
- Hosu, V., Lin, H., Saupe, D. - Expertise screening in crowdsourcing image quality, 10th International Conference on Quality of Multimedia Experience (QoMEX), 2018.
- Men, H., Lin, H., and Saupe D. - Spatiotemporal feature combination model for no-reference video quality assessment, 10th International Conference on Quality of Multimedia Experience (QoMEX), 2018.
- Lin, H., Hosu, V., Saupe, D. - KonIQ-10K: Towards an ecologically valid and large-scale IQA database, arXiv preprint arXiv:1803.08489, 2018.
- Egger-Lampl, S., Redi, J., Hoßfeld, T., Hirth, M., Möller, S., Naderi, B., Keimel, Ch., Saupe, D. - Crowdsourcing quality of experience experiments, Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, Springer-Verlag, 2017.
- Gadiraju, U., Möller, S., Nöllenburg, M., Saupe, D., Egger-Lampl, S., Archambault, D., Fisher, B. - Crowdsourcing versus the laboratory: Towards human-centered experiments using the crowd, Evaluation in the Crowd. Crowdsourcing and Human-Centered Experiments, Daniel Archambault, Helen Purchase, Tobias Hossfeld (eds.) , Springer-Verlag, 2017.
- Spicker, M., Hahn, F., Lindemeier, T., Saupe, D., Deussen, O. - Quantifying visual abstraction quality for stipple drawings, Symposium on Non-Photorealistic Animation and Rendering, Best Paper Award, 2017.
- Jenadeleh, M., Masaeli, M. M., Moghaddam, M. E. - Blind image quality assessment based on aesthetic and statistical quality-aware features, Journal of Electronic Imaging, 2017.
- Hosu, V., Hahn, F., Jenadeleh, M., Lin, H., Men, H., Szirányi, T., Li, S., Saupe, D. - The Konstanz natural video database (KoNViD-1k), 9th International Conference on Quality of Multimedia Experience (QoMEX), 2017.
- Men, H., Lin, H., Saupe, D. - Empirical evaluation of no-reference VQA methods on a natural video quality database, 9th International Conference on Quality of Multimedia Experience (QoMEX), 2017.
- Hosu, V., Hahn, F., Wiedemann, O., Jung, S.-H., Saupe, D. - Saliency-driven image coding improves overall perceived JPEG quality, IEEE Picture Coding Symposium (PCS), 2016.
- Hosu, V., Hahn, F., Zingman, I., Saupe, D. - Reported attention as a promising alternative to gaze in IQA tasks, 5th International Workshop on Perceptual Quality of Systems (PQS), 2016.
- Saupe, D., Hahn, F., Hosu, V., Zingman, I., Rana, R., Li, S. - Crowd workers proven useful: A comparative study of subjective video quality assessment, Eighth International Workshop on Quality of Multimedia Experience (QoMEX), 2016.
- Zingman, I., Saupe, D., Penatti, O., Lambers, K. - Detection of fragmented rectangular enclosures in very high resolution remote sensing images, IEEE Transactions on Geoscience and Remote Sensing (IEEE), 2016.
- Zingman, I., Saupe, D., Lambers, K. - Detection of incomplete enclosures of rectangular shape in remotely sensed images, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.