Feng-GUI - Science | Effective Visuals

Science & Quality

Feng-GUI reports closely (92%) resemble a 5-second eye-tracking session of 40 people.
Get instant, unbiased feedback at a fraction of the cost and time of traditional eye tracking.

The Feng-GUI analysis employs dozens of algorithms drawn from neuroscience research on natural vision processing, computational attention, eye-tracking studies, and human perception and cognition.
Or, in plain English: "What are people looking at?"
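
To illustrate what a computational-attention algorithm computes, the sketch below derives a toy saliency-like map from intensity center-surround contrast (a difference of Gaussians), one classic ingredient that such models combine. This is not Feng-GUI's actual algorithm; the function name, scales and normalization are chosen only for this example.

```python
# A minimal, illustrative bottom-up attention signal: intensity center-surround
# contrast (difference of Gaussians). NOT Feng-GUI's algorithm, only a sketch
# of one classic ingredient that computational-attention models combine.
import numpy as np
from scipy.ndimage import gaussian_filter

def toy_saliency(image_rgb: np.ndarray) -> np.ndarray:
    """Return a normalized [0, 1] saliency-like map for an H x W x 3 RGB array."""
    intensity = image_rgb.astype(np.float64).mean(axis=2)   # simple luminance proxy
    center = gaussian_filter(intensity, sigma=2)             # fine spatial scale
    surround = gaussian_filter(intensity, sigma=16)          # coarse spatial scale
    contrast = np.abs(center - surround)                     # center-surround response
    contrast -= contrast.min()
    return contrast / (contrast.max() + 1e-12)               # normalize to [0, 1]
```

Regions with strong local contrast stand out in the resulting map; full attention models combine many such channels (color, orientation, faces, text) rather than intensity alone.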



Motorola eye-tracking (left) vs. Feng-GUI (right)

How accurate is it?

The Feng-GUI algorithmic model incorporates live eye-tracking results. This makes our analyses almost as accurate as live eye tracking (92%, as reviewed by MIT) at a fraction of the cost. The analysis resembles a 5-second eye-tracking session of 40 people.
We measure, benchmark, compare and tune the analysis with eye-tracking data sets containing thousands of images viewed by hundreds of participants; a minimal comparison sketch follows the list below.

  • Cutting edge: our algorithmic model is regularly updated
  • Precise: algorithms informed by the latest live eye-tracking results
  • Accurate: not skewed by test conditions or unreliable subjects
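
To make the "measure, benchmark, compare and tune" step concrete, here is a minimal sketch of one standard way a predicted attention map is scored against an eye-tracking heat map: the linear correlation coefficient (CC). It is an illustrative example, not Feng-GUI's internal benchmark code; it assumes both maps are same-sized 2-D numpy arrays.

```python
# Score a predicted attention map against an eye-tracking heat map with the
# linear correlation coefficient (CC), a standard saliency-evaluation metric.
# Illustrative sketch only; both inputs are assumed to be same-sized 2-D arrays.
import numpy as np

def correlation_coefficient(predicted: np.ndarray, eyetracking: np.ndarray) -> float:
    """Pearson correlation between two attention maps (1.0 = perfect agreement)."""
    p = (predicted - predicted.mean()) / (predicted.std() + 1e-12)
    g = (eyetracking - eyetracking.mean()) / (eyetracking.std() + 1e-12)
    return float((p * g).mean())
```

A coefficient near 1.0 means the predicted map rises and falls in the same regions where real viewers looked; averaging such scores over many images and viewers is how accuracy figures of this kind are typically reported.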

The science behind Feng-GUI

Cognitive science deals with how the brain works. It helps us understand how people perceive and process external stimuli. An understanding of basic cognitive principles is essential for analyzing how your users perceive, process and respond to your design. Each report highlights an aspect of user brain activity and demonstrates how cognitive principles can be applied to your design to ensure it communicates effectively.

Our testing methodology is based on over 30 years of scientific research.
  • Live eye tracking: analyzing the eye paths of actual users through webpages
  • Neuroscientific research: how the brain analyzes data
  • Cognitive science: how human behaviour affects our perception of visual design
Visual feature examples
Nissan eye-tracking (left) vs. Feng-GUI (right)
Moschino eye-tracking (left) vs. Feng-GUI (right)
Sisley eye-tracking (left) vs. Feng-GUI (right)

Test and Tune

We measure, benchmark, compare and tune the analysis with public eye tracking data sets.
The data sets contain thousands of images viewed by hundreds of participants.
The Massachusetts Institute of Technology (MIT) has reviewed the accuracy of the Feng-GUI attention algorithm as part of its Saliency Benchmark.
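
For reference, one of the metrics reported by the MIT Saliency Benchmark is Normalized Scanpath Saliency (NSS): the predicted map is normalized to zero mean and unit standard deviation, then averaged at the pixels people actually fixated. The sketch below shows the idea under the assumption that fixations are given as (row, col) pixel coordinates; it is a simplified illustration, not the benchmark's official evaluation code.

```python
# Normalized Scanpath Saliency (NSS), one of the metrics used by saliency
# benchmarks such as MIT's: z-score the predicted map, then average its values
# at the recorded fixation points. Simplified sketch; fixations are assumed to
# be (row, col) pixel coordinates within the map.
import numpy as np

def nss(predicted: np.ndarray, fixations: list[tuple[int, int]]) -> float:
    """Higher NSS means predicted attention concentrates where people actually looked."""
    z = (predicted - predicted.mean()) / (predicted.std() + 1e-12)
    return float(np.mean([z[r, c] for r, c in fixations]))
```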

Here is a partial list of the data sets and their citations:
  • MIT data set: Tilke Judd, Krista Ehinger, Fredo Durand, Antonio Torralba. Learning to Predict where Humans Look [ICCV 2009]
  • FIGRIM Fixation Dataset: Zoya Bylinskii, Phillip Isola, Constance Bainbridge, Antonio Torralba, Aude Oliva. Intrinsic and Extrinsic Effects on Image Memorability [Vision Research 2015]
  • MIT Saliency Benchmark: Z. Bylinskii, T. Judd, A. Borji, L. Itti, F. Durand, A. Oliva, and A. Torralba. MIT Saliency Benchmark.
  • MIT300: T. Judd, F. Durand, A. Torralba. A Benchmark of Computational Models of Saliency to Predict Human Fixations [MIT tech report 2012]
  • CAT2000: Ali Borji, Laurent Itti. CAT2000: A Large Scale Fixation Dataset for Boosting Saliency Research [CVPR 2015 workshop on "Future of Datasets"]
  • Coutrot Database 1: Antoine Coutrot, Nathalie Guyader. How saliency, faces, and sound influence gaze in dynamic social scenes [JoV 2014]; Antoine Coutrot, Nathalie Guyader. Toward the introduction of auditory information in dynamic visual attention models [WIAMIS 2013]
  • SAVAM: Yury Gitman, Mikhail Erofeev, Dmitriy Vatolin, Andrey Bolshakov, Alexey Fedorov. Semiautomatic Visual-Attention Modeling and Its Application to Video Compression [ICIP 2014]
  • Eye Fixations in Crowd (EyeCrowd) data set: Ming Jiang, Juan Xu, Qi Zhao. Saliency in Crowd [ECCV 2014]
  • Fixations in Webpage Images (FiWI) data set: Chengyao Shen, Qi Zhao. Webpage Saliency [ECCV 2014]
  • VIU data set: Kathryn Koehler, Fei Guo, Sheng Zhang, Miguel P. Eckstein. What Do Saliency Models Predict? [JoV 2014]
  • Object and Semantic Images and Eye-tracking (OSIE) data set: Juan Xu, Ming Jiang, Shuo Wang, Mohan Kankanhalli, Qi Zhao. Predicting Human Gaze Beyond Pixels [JoV 2014]
  • VIP data set: Keng-Teck Ma, Terence Sim, Mohan Kankanhalli. A Unifying Framework for Computational Eye-Gaze Research [Workshop on Human Behavior Understanding 2013]
  • MIT Low-resolution data set: Tilke Judd, Fredo Durand, Antonio Torralba. Fixations on Low-Resolution Images [JoV 2011]
  • KTH Kootstra data set: Gert Kootstra, Bart de Boer, Lambert R. B. Schomaker. Predicting Eye Fixations on Complex Visual Stimuli using Local Symmetry [Cognitive Computation 2011]
  • NUSEF data set: Subramanian Ramanathan, Harish Katti, Nicu Sebe, Mohan Kankanhalli, Tat-Seng Chua. An eye fixation database for saliency detection in images [ECCV 2010]
  • TUD Image Quality Database 2: H. Alers, H. Liu, J. Redi and I. Heynderickx. Studying the risks of optimizing the image quality in saliency regions at the expense of background content [SPIE 2010]
  • Ehinger data set: Krista Ehinger, Barbara Hidalgo-Sotelo, Antonio Torralba, Aude Oliva. Modeling search for people in 900 scenes [Visual Cognition 2009]
  • A Database of Visual Eye Movements (DOVES): Ian van der Linde, Umesh Rajashekar, Alan C. Bovik, Lawrence K. Cormack. DOVES: A database of visual eye movements [Spatial Vision 2009]
  • TUD Image Quality Database 1: H. Liu and I. Heynderickx. Studying the Added Value of Visual Attention in Objective Image Quality Metrics Based on Eye Movement Data [ICIP 2009]
  • Visual Attention for Image Quality (VAIQ) Database: Ulrich Engelke, Anthony Maeder, Hans-Jurgen Zepernick. Visual Attention Modeling for Subjective Image Quality Databases [MMSP 2009]
  • Toronto data set: Neil Bruce, John K. Tsotsos. Attention based on information maximization [JoV 2007]
  • Fixations in Faces (FiFA) database: Moran Cerf, Jonathan Harel, Wolfgang Einhauser, Christof Koch. Predicting human gaze using low-level saliency combined with face detection [NIPS 2007]
  • Le Meur data set: Olivier Le Meur, Patrick Le Callet, Dominique Barba, Dominique Thoreau. A coherent computational approach to model the bottom-up visual attention [PAMI 2006]
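
Data sets like those above typically ship raw fixation points per image. A common pre-processing step before any comparison is to turn those points into a smooth ground-truth density map by Gaussian blurring; the sketch below shows the idea, with the sigma value chosen arbitrarily for illustration rather than taken from any particular data set.

```python
# Convert raw (row, col) fixation points from an eye-tracking data set into a
# smooth, normalized ground-truth density map by Gaussian blurring.
# Illustrative sketch; sigma is an arbitrary example value, not a data-set spec.
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_density_map(shape: tuple[int, int],
                         fixations: list[tuple[int, int]],
                         sigma: float = 25.0) -> np.ndarray:
    """Turn fixation points into a [0, 1] density map of the given image shape."""
    fmap = np.zeros(shape, dtype=np.float64)
    for r, c in fixations:
        if 0 <= r < shape[0] and 0 <= c < shape[1]:
            fmap[r, c] += 1.0                      # accumulate fixation counts
    fmap = gaussian_filter(fmap, sigma=sigma)      # spread counts into a smooth density
    return fmap / (fmap.max() + 1e-12)             # normalize for comparison
```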