
Project Description

The visual quality of images and videos is typically assessed in subjective studies with human observers, yielding mean opinion scores (MOS). To reduce cost and time, automated methods for image/video quality assessment (IQA/VQA) are desirable for the multimedia and signal processing community at large. Current VQA methods are designed solely to capture aspects of the technical quality of displayed video streams. Beyond such visual quality, we aim at methods that characterize images and videos in terms of other perceptual aspects.
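As a minimal illustration of the starting point above, the MOS of an item is simply the average of the subjective ratings collected for it (the rating values below are made up for illustration, assuming the common 5-point ACR scale):

```python
def mos(ratings):
    """Mean opinion score: the average of the subjective ratings
    (here assumed to be on a 1-5 absolute category rating scale)."""
    return sum(ratings) / len(ratings)

# Hypothetical ratings from eight observers for one video.
ratings = [4, 5, 3, 4, 4, 5, 3, 4]
print(mos(ratings))  # → 4.0
```

In practice, subjective studies additionally screen out unreliable observers and report confidence intervals alongside the MOS; this sketch omits those steps.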

These aspects include the number and magnitude of eye movements required for viewing the content, the viewer's appreciation of the use of color, and the degree of interestingness. Several such components, together with visual quality, are combined into an overall rating of perceptual quality. Moreover, by investigating the human perceptual process and by understanding psychophysical phenomena, a saliency model will be developed that is based on a Markov model of eye movements.
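One simple way to connect a Markov model of eye movements to saliency is to treat fixation transitions between image regions as a Markov chain and read off saliency as its stationary distribution. The sketch below illustrates this idea only; the region partition, the toy scanpath, and the smoothing choice are assumptions, not the project's actual model:

```python
import numpy as np

def saliency_from_fixations(transitions, n_regions):
    """Estimate per-region saliency as the stationary distribution of the
    fixation-transition Markov chain (illustrative sketch)."""
    counts = np.ones((n_regions, n_regions))  # add-one smoothing
    for a, b in transitions:
        counts[a, b] += 1
    # Row-stochastic transition matrix P[i, j] = P(next region j | now in i).
    P = counts / counts.sum(axis=1, keepdims=True)
    # Power iteration converges to the stationary distribution pi = pi @ P.
    pi = np.full(n_regions, 1.0 / n_regions)
    for _ in range(200):
        pi = pi @ P
    return pi / pi.sum()

# Toy scanpath over 3 regions: fixations keep returning to region 1.
scanpath = [(0, 1), (1, 1), (1, 2), (2, 1), (1, 1)]
sal = saliency_from_fixations(scanpath, 3)
print(sal.argmax())  # → 1 (region 1 is most salient)
```

Regions that attract and retain fixations accumulate probability mass in the stationary distribution, which is the intuition behind using a Markov chain of eye movements as a saliency model.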

Additionally, we will bring the state of the art a step closer to reality by setting up and applying media databases of authentic distortions and diverse content, in contrast to current scientific data sets, which contain only a small variety of content and 'artificial' distortions.

Given an image or video whose visual quality is to be assessed, the question arises which IQA/VQA algorithm should be applied. Instead of choosing an algorithm based on its performance on a fixed test database, a better quality assessment can be expected by selecting the algorithm that performs best on a test set consisting only of images/videos similar to the query image/video.

This raises a number of questions:

  • What type of similarity is the most appropriate for this application? 
  • What statistical/perceptual features should be extracted to express similarity? 
  • How can the statistical/perceptual similarity of the input image and the test images be estimated? 
  • Should algorithms be combined to get more robust results?
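One possible instantiation of this content-adaptive selection is a nearest-neighbor scheme: find the database items most similar to the query in some feature space, then pick the algorithm that performed best on that subset. Everything in the sketch below is an assumption for illustration, including the random features and the choice of candidate algorithms (BRISQUE and NIQE are merely example names of existing no-reference methods):

```python
import numpy as np

def select_algorithm(query_feat, db_feats, db_scores, k=3):
    """Pick the IQA algorithm that performed best on the k database
    items most similar to the query (illustrative sketch).

    db_feats:  (n, d) content features of the database items
    db_scores: dict mapping algorithm name -> (n,) per-item performance
    """
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    nearest = np.argsort(dists)[:k]  # indices of the k most similar items
    # Choose the algorithm with the best mean performance on the neighbors.
    return max(db_scores, key=lambda alg: db_scores[alg][nearest].mean())

# Toy data standing in for real content features and per-item scores.
rng = np.random.default_rng(0)
db_feats = rng.random((10, 4))
db_scores = {"BRISQUE": rng.random(10), "NIQE": rng.random(10)}
query = rng.random(4)
print(select_algorithm(query, db_feats, db_scores))
```

The open questions above map directly onto this sketch: the feature extractor determines what "similar" means, the distance function determines how similarity is estimated, and the selection step could be replaced by a weighted combination of algorithms for more robust results.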





Former Project Members



  • Masud Rana

Links and References

Financial Support:


AFF Universität Konstanz


Gadiraju, U., Möller, S., Nöllenburg, M., Saupe, D., Egger-Lamperl, S., Archambault, D., Fisher, B., Crowdsourcing versus the laboratory: Towards human-centered experiments using the crowd, Information Systems and Applications, incl. Internet/Web, and HCI, Daniel Archambault, Helen Purchase, Tobias Hossfeld (eds.) , pp. 7-30, Springer-Verlag, September 2017.

Egger-Lamperl, S., Redi, J., Hoßfeld, T., Hirth, M., Möller, S., Naderi, B., Keimel, Ch., Saupe, D., Crowdsourcing Quality of Experience Experiments, Information Systems and Applications, incl. Internet/Web, and HCI, pp. 173-212, Springer-Verlag, September 2017.

Hosu, V., Hahn, F., Wiedemann, O., Jung, S.-H., Saupe, D., Saliency-driven image coding improves overall perceived JPEG quality, Picture Coding Symposium (PCS), Nürnberg, December 2016.

Spicker, M., Hahn, F., Lindemeier, T., Saupe, D., Deussen, O., Quantifying visual abstraction quality for stipple drawings, NPAR (Expressive), pp. 8:1 - 8:10, Los Angeles, USA, August 2017.

Jenadeleh, M., Masaeli, M. M., Moghaddam, M. E., Blind image quality assessment based on aesthetic and statistical quality-aware features, Journal of Electronic Imaging, doi: 10.1117/1.JEI.26.4.043018, article 043018, July 2017.

Hosu, V., Hahn, F., Jenadeleh, M., Lin, H., Men, H., Szirányi, T., Li, S., Saupe, D., The Konstanz Natural Video database (KoNViD-1k), 9th International Conference on Quality of Multimedia Experience (QoMEX), 2017.

Men, H., Lin, H., Saupe, D., Empirical evaluation of no-reference VQA methods on a natural video quality database, 9th International Conference on Quality of Multimedia Experience (QoMEX), 2017.

Hosu, V., Hahn, F., Zingman, I., Saupe, D., Reported Attention as a Promising Alternative to Gaze in IQA Tasks, 5th International Workshop on Perceptual Quality of Systems 2016 (PQS 2016), Berlin, August 2016.

Saupe, D., Hahn, F., Hosu, V., Zingman, I., Rana, R., Li, S., Crowd workers proven useful: A comparative study of subjective video quality assessment, Eighth International Workshop on Quality of Multimedia Experience (QoMEX 2016), Lisbon, June 2016.

Previous work on IQA/VQA:

Zhu, K., Li, C., Asari, V., Saupe, D., No-reference video quality assessment based on artifact measurement and statistical analysis, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 25, No. 4, pp. 533-546, April 2015.

Zhu, K., Barkowsky, M., Shen, M., Le Callet, P., Saupe, D., Optimizing feature pooling and prediction models of VQA algorithms, IEEE International Conference on Image Processing (ICIP), Paris, France, October 2014.

Zhu, K., Hirakawa, K., Asari, V., Saupe, D., A no-reference video quality assessment based on Laplacian Pyramids, IEEE International Conference on Image Processing (ICIP), Melbourne, Australia, September 2013.

Zhu, K., Asari, V., Saupe, D., No-reference quality assessment of H.264/AVC encoded video based on natural scene features, SPIE-IS&T Electronic Imaging (VDA), SPIE, Vol. 8755, Baltimore, Maryland, USA, May 2013.

Zhu, K., Saupe, D., Performance evaluation of HD camcorders: measuring texture distortions using Gabor filters and spatio-velocity CSF, SPIE-IS&T Electronic Imaging (VDA), SPIE, Vol. 8653, Burlingame, CA, USA, February 2013.

Zhu, K., Li, S., Saupe, D., An objective method of measuring texture preservation for camcorder performance evaluation, Image Quality and System Performance IX, IS&T/SPIE Electronic Imaging 2012, Vol. 8293, Burlingame, CA, USA, January 2012, SPIE.


Rana, M., Subjective video quality assessment in the crowd, University of Konstanz, Master's thesis, March 2017.