Dashboard Help

Quick Guide

Upload: Click [Upload] to select an image or a video file.
You can upload up to 10 files at once. Wait until the files are uploaded and appear in the images list.
Learn more about input image and input video

Add AOIs: Click [AOIs > Add area] and create a few AOIs to allow the analysis engine to output AOI statistics for the Fixations & Visual Features report.

Generate Reports: Click [Analyze] to analyze the image or video.
Wait a few seconds for the reports to load.
Learn more about Analysis Options and Reports

Download Reports: Click [Download > All reports as zip] to download and save the report files to your computer.
Data storage: The availability of your uploaded files and reports depends on your account plan. Files are automatically deleted after 3 months for a monthly subscription, after 2 years for an annual subscription, and after 24 hours for an API account.

Heatmap report

The Heatmap report visually represents viewer attention dynamics within an image or video.
It overlays a color gradient onto the original content, highlighting areas that attract more attention than others.
This report provides insights into viewer behavior, showcasing which elements within the image or video draw the most attention.
This visualization aids in understanding audience engagement and allows for informed decisions in content creation and optimization.


 Use the Heatmap report to:
Visualize which elements attract more attention than others.

 Interpreting the results:
The Heatmap report closely (92%) correlates with traditional eye tracking of 40 viewers, illustrating how areas of the original image or video attract attention.
The heatmap colors range from green through yellow to red, representing low, medium, and high levels of attention, respectively.
Areas with no color are likely to be overlooked.
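
For example, a minimal sketch of how a normalized attention value could be mapped to that green-to-yellow-to-red gradient. The exact colormap used by the service is not published, so this is only an illustration:

    # Sketch: map a normalized attention value (0.0-1.0) to the
    # green -> yellow -> red gradient described above.
    # The service's exact colormap is an assumption; this only illustrates the idea.

    def attention_to_rgb(value: float) -> tuple[int, int, int]:
        """Interpolate green (low) -> yellow (medium) -> red (high)."""
        value = max(0.0, min(1.0, value))
        if value < 0.5:
            # green (0, 255, 0) to yellow (255, 255, 0)
            t = value / 0.5
            return (int(255 * t), 255, 0)
        # yellow (255, 255, 0) to red (255, 0, 0)
        t = (value - 0.5) / 0.5
        return (255, int(255 * (1 - t)), 0)

    print(attention_to_rgb(0.1))   # mostly green -> low attention
    print(attention_to_rgb(0.9))   # mostly red   -> high attention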


Gaze Plot report

The Gaze Plot report visualizes the scan paths and the viewing order between elements inside the image.
It displays the movement sequence, order, and duration of gaze fixations.
The report consists of a series of short stops (fixations) and fast eye movements (saccades).
A Gaze Plot report is also referred to as a 'Scanpath' report.

 Use the Gaze Plot report to:
Visually demonstrate fixation order and gaze paths.

 Interpreting the results:
Fixations are marked with circles, along with a number that states the order in which the eyes move between fixations.
The average time between each fixation is 250 milliseconds.
The report presents up to 30 fixations in 3 color groups: red (0-2.5 seconds), yellow (2.5-5 seconds) and green (5-7.5 seconds).
The gaze point circle size correlates to the Visibility Score value of that gaze point.
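
A minimal sketch that assigns a fixation to its color group by its onset time, using the time bands above. With roughly 250 milliseconds per fixation, the fixation index gives an approximate onset time:

    # Sketch: assign a fixation to the Gaze Plot color group by its onset time.
    # Time bands follow the report description: red 0-2.5 s, yellow 2.5-5 s, green 5-7.5 s.

    def fixation_color(onset_seconds: float) -> str:
        if onset_seconds < 2.5:
            return "red"
        if onset_seconds < 5.0:
            return "yellow"
        if onset_seconds <= 7.5:
            return "green"
        return "not shown"   # the report displays up to 30 fixations within 7.5 seconds

    # With roughly 250 ms per fixation, fixation number 12 starts around 3 seconds:
    print(fixation_color(12 * 0.250))   # -> "yellow"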

Focus Map report

The Focus map (Opacity) report tones down information that is not attractive and visually displays what your viewers may perceive during the first few seconds of visual inspection.
The report uses a transparency gradient.
The Focus Map report is also referred to as an 'Opacity' or 'Fog Map' report.

 Use the Focus report to:
Emphasize which areas are being perceived and which are being ignored.

 Interpreting the results:
The report shows the parts of the media with the highest counts or longest fixations and shades the rest of the media.
The most transparent areas are those that attract more attention.

Areas of Interest report

The Areas of Interest (AOIs) report reveals a score for the predicted probability that a person will look at the area, along with detailed diagnostics that show why the region is likely to get attention.
An Area of Interest (AOI) is a user-drawn area for which metrics are calculated, such as Visibility Score, Time To First Fixation, and Fixation Count.
You can click and drag boxes to create AOIs surrounding specific areas inside your stimuli. Tagging these areas allows the analysis engine to output AOI statistics for the Fixations & Visual Features report.
Areas of Interest are also referred to as 'Regions of Interest'.

 Use the Areas of Interest report to:
  • Define areas inside the image to determine the Visibility Score of each area.
  • Every AOI is reported with detailed data about fixations and visual features.
  • Compare results between different versions of your design and layout.

 Interpreting the results:
The Visibility Score is a summary metric that indicates how salient the visuals inside this region are. The score is calculated from the highest heatmap value found inside the AOI region. 0% means no attention at all, and 100% means this area will be highly noticeable.
Most attention is located in areas with more or longer fixations.
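
As a minimal sketch of this calculation, assuming the attention heatmap is available as a normalized 2-D array and the AOI is a pixel bounding box (the actual implementation is internal to the service):

    import numpy as np

    # Sketch: Visibility Score as the highest heatmap value inside the AOI region,
    # as described above. Assumes a heatmap normalized to 0.0-1.0 and an AOI given
    # as a pixel bounding box (x, y, width, height).

    def visibility_score(heatmap: np.ndarray, aoi: tuple[int, int, int, int]) -> float:
        x, y, w, h = aoi
        region = heatmap[y:y + h, x:x + w]
        return float(region.max()) * 100.0   # 0% = no attention, 100% = highly noticeable

    heatmap = np.random.rand(768, 1024)      # placeholder attention map
    print(round(visibility_score(heatmap, (100, 200, 300, 150)), 1))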

To generate an AOIs report, click "Add Area" to add a new AOI area.
Resize and move the AOI area to adjust its location and dimensions.
Click "Analyze" to update the AOIs report with the latest AOIs details.

Fixations & Visual Features

This AOIs data table holds the Fixations and Visual Features data for each Area of Interest (AOI).
Fixations

Visibility Score (percentage)
Visibility Score is a summary metric that indicates how salient the visuals inside this region are. The score is calculated from the highest heatmap value found inside the AOI region.
A high value (>= 25) means the area is noticeable.
Time To First Fixation (milliseconds)
The time in milliseconds from when the stimulus was shown until the start of the first fixation within an Area Of Interest (AOI).
Also referred to as 'TTFF' and 'Time Until Noticed'.
A low value (<= 1500) means that viewers reached this area very quickly.
Fixations Before (number)
The number of fixations before the participant fixated within an AOI for the first time.
A low value (<= 5) means that viewers reached this area very quickly.
Fixation Length (milliseconds)
The length of the fixations, in milliseconds, within an AOI.
A high value (>= 500) means that viewers spend more time inside this area.
Fixation Count (number)
The number of fixations within an AOI.
A high value (>= 2) means that viewers fixated several times inside this area.
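
A minimal sketch that flags these fixation metrics against the rule-of-thumb thresholds listed above; the metric names used here are illustrative and may differ from the exact column names in the downloaded data files:

    # Sketch: flag AOI fixation metrics against the thresholds listed above.
    # The metric names are illustrative, not the exact column names of the data files.

    THRESHOLDS = {
        "visibility_score":       ("noticeable",        lambda v: v >= 25),
        "time_to_first_fixation": ("noticed quickly",   lambda v: v <= 1500),  # ms
        "fixations_before":       ("noticed quickly",   lambda v: v <= 5),
        "fixation_length":        ("long dwell",        lambda v: v >= 500),   # ms
        "fixation_count":         ("several fixations", lambda v: v >= 2),
    }

    def flag_aoi(metrics: dict) -> dict:
        return {name: label for name, (label, check) in THRESHOLDS.items()
                if name in metrics and check(metrics[name])}

    aoi = {"visibility_score": 62, "time_to_first_fixation": 900,
           "fixations_before": 3, "fixation_length": 320, "fixation_count": 2}
    print(flag_aoi(aoi))   # flags everything except fixation_length
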
Visual Features
Visual Features are major contributors to the calculation of AOI score.
Drawing attention requires the object to have a significant contrast from surrounding objects and background. The Percentage value represents how much of the AOI area is covered by this visual feature.
The Visual Features report shows: intensity (top left), orientation and edges (top right), red/green color contrast (bottom left) and blue/yellow color contrast (bottom right).


 Interpreting the results:
If the AOI Visibility Score is low (the object is not visible enough) and you wish to increase the score and make the object more prominent, the visual features give you cues about how to achieve that.
Inspect which visual feature of this AOI is low and adjust the design to raise it.
Design changes can include altering intensity (dark/light), orientation (the angle of the object), adding or reducing edges (texture and text), adjusting color contrast (red/green or blue/yellow), and adding or removing faces.

Intensity Features
The percentage level of intensity.
High attention is equal to or above 50%.
Edges Features
The percentage of edges, texture, text and orientation.
Edges define the boundaries between regions in an image by locating sharp discontinuities in pixel values.
Edge detection helps with segmentation and object recognition.
High attention is equal to or above 50%.
Red Green Contrast
The percentage of Red-Green contrast.
Red and green are complementary colors.
When placed next to each other, they create the strongest contrast for those two colors.
High attention is equal to or above 50%.
Blue Yellow Contrast
The percentage of Blue-Yellow contrast.
Blue and yellow are complementary colors.
When placed next to each other, they create the strongest contrast for those two colors.
High attention is equal to or above 50%.

 Improving the results:
A high contrast between an element and its surroundings will increase the element's visibility.
Intensity - Increasing brightness increases intensity.
Edges - Adding edges, texture and text will increase the Edges value.
Color - Measure the color contrast of an AOI element and alter it to reduce or increase its visibility.
  1. Identify the dominant color of the element (foreground) and the dominant color of its surrounding (background).
  2. Measure the color contrast ratio of the two colors (a calculation sketch follows below).
  3. Change the element or its background luminance to increase or decrease the color contrast ratio.

Color Contrast ratios can range from 1 to 21 (commonly written 1:1 to 21:1).
The minimum contrast for reading text is 4.5:1.
The contrast of a black foreground over a white background is 21:1.
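
A minimal sketch of the contrast ratio measurement in step 2, using the standard WCAG relative-luminance formula (this shows the general calculation, not the service's own code):

    # Sketch: WCAG contrast ratio between a foreground and a background color.
    # The ratio ranges from 1:1 (identical colors) to 21:1 (black on white).

    def relative_luminance(rgb: tuple[int, int, int]) -> float:
        def channel(c: int) -> float:
            c = c / 255.0
            return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
        r, g, b = (channel(v) for v in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
        lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
        return (lighter + 0.05) / (darker + 0.05)

    print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))        # 21.0 -> black on white
    print(round(contrast_ratio((118, 118, 118), (255, 255, 255)), 2))  # ~4.54 -> near the 4.5:1 text minimum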

Color Contrast ratio calculators
https://contrastchecker.com (this tool also includes color quantization)
https://webaim.org/resources/contrastchecker/
Color picker browser extension
https://www.colorzilla.com/chrome/

Color Quantization tools:
To identify the colors inside the image, you can use a color quantization tool, such as the one included in the contrastchecker.com tool above.

Aesthetics report

The Aesthetics report enables you to examine the emotional aspects of aesthetics.

Based on several large-scale studies with millions of ratings of visual appeal collected from nearly 10 thousand participants, we developed computational models that accurately measure perceived visual complexity and excitingness.
  • Viewers will judge design as beautiful or not within 1/50th to 1/20th of a second.
  • Visually complex designs are consistently rated as less beautiful than their simpler counterparts.

Facial Expressions Report

Facial coding is the process of measuring human emotions through facial expressions.
Feng-GUI detects faces in images or videos and analyzes their facial expressions.
The Facial Expressions report describes the level of eight expressions for each detected face.
The expressions are: Neutral, Happy, Surprise, Sad, Anger, Disgust, Fear and Contempt.
When multiple faces are detected, the expression values are aggregated.
The Feng-GUI service is trained using public emotion datasets containing 0.5 million images and videos, manually labeled for the presence of facial expressions along with the intensity of valence and arousal.

Emotions Chart

Analyzing a video frame by frame involves extracting Focus, Complexity, Approach, and Withdraw scores to understand viewer attention and engagement. This process segments the video, applies eye tracking algorithms to each frame, and extracts scores indicating where viewers focus, how they engage with complexity, navigate through the content, and when they withdraw attention. These scores are then plotted on a timeline graph, with seconds along the x-axis and scores on the y-axis. This visualization helps identify viewer behavior patterns and key moments, aiding in the optimization of video design and storytelling techniques for better viewer experience.

The data values are available in the "Gaze Plot Data" csv file.
Focus and Complexity are taken from the Focus report.
Emotions and Expressions data are aggregated into two simpler scores: Approach and Withdraw.
Expressions are: Neutral, Happy, Surprise, Sad, Anger, Disgust, Fear and Contempt.
Approach = Happy + Surprise + Anger + Contempt
Withdraw = Sad + Disgust + Fear
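
A minimal sketch of this aggregation and of plotting it on a timeline; the per-second expression values below are placeholders (in practice they come from the downloaded data files, whose column names may differ), and Neutral is not part of either aggregate:

    import matplotlib.pyplot as plt

    # Sketch: aggregate expression scores into Approach and Withdraw per video second,
    # following the formulas above, and plot them on a timeline.

    frames = [
        {"second": 0, "Happy": 0.2, "Surprise": 0.1, "Anger": 0.0, "Contempt": 0.0,
         "Sad": 0.1, "Disgust": 0.0, "Fear": 0.0},
        {"second": 1, "Happy": 0.5, "Surprise": 0.2, "Anger": 0.0, "Contempt": 0.1,
         "Sad": 0.0, "Disgust": 0.0, "Fear": 0.1},
    ]

    def approach(f):
        return f["Happy"] + f["Surprise"] + f["Anger"] + f["Contempt"]

    def withdraw(f):
        return f["Sad"] + f["Disgust"] + f["Fear"]

    seconds = [f["second"] for f in frames]
    plt.plot(seconds, [approach(f) for f in frames], label="Approach")
    plt.plot(seconds, [withdraw(f) for f in frames], label="Withdraw")
    plt.xlabel("seconds")
    plt.ylabel("score")
    plt.legend()
    plt.show()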

 AI Insights and Recommendations

OpenAI integration enhances your visual attention reports with AI-driven insights and recommendations.
After generating your report, it is analyzed by OpenAI’s language models, which provide personalized suggestions on how to improve your design based on the visual data.
This means you can quickly identify areas for optimization, such as layout adjustments or enhancing key elements like CTAs or headlines, to better capture attention.
With this integration, you’ll not only see where users are focusing but also receive expert, data-backed recommendations for improving your design.
Your data is never used to train OpenAI models, and is deleted from OpenAI servers within 30 days.


Scores

Visibility Score

The Visibility Score is a summary metric that indicates how salient the visuals inside this region are. The score is calculated from the highest heatmap value found inside the AOI region.
It represents the percentage of participants who fixated at least once within an AOI, i.e. the probability of a visual fixation occurring inside this AOI region within the first few seconds.
Visibility Score is also referred to as 'Participant percentage' and 'Seen By (%)'.

The value ranges from 0 to 100.
  • Above 75 is excellent and it means that this area will be mostly noticeable.
  • Between 50-75 is good and some attention is drawn to this area.
  • Under 25 is low, and 0 means no attention at all within the View Duration.


Clarity Score

Clarity is the opposite of Complexity.
Complexity is measured by the number of regions, corners and features in an image.
High visual clutter increases complexity and reduces clarity.
Ratings of appeal are significantly negatively affected by an increase in visual complexity. The more complicated the stimulus is, the less attractive people perceive it to be.

A high score means that the design is clean, simple and clear.
A low score means a highly confusing, complex and cluttered design.

The value ranges from 0 to 100.
  • Above 75 is excellent and clear.
  • Between 25-75 is good.
  • Under 25 is bad and cluttered.

 Improving the results:
Some designs are less clear than others simply because they contain too much content.
You should aim to increase clarity. A clear design reduces cognitive load and ensures that viewers will see what you want them to see.

To improve clarity, you need to reduce clutter:
  • Reduce unnecessary text.
  • Use more images. Human perception of clutter is much more forgiving of imagery than of text.
  • Increase the amount of whitespace or padding around content.
  • Use images with less texture and fewer lines.
  • Organize the design into easily distinguishable content blocks.
  • Reduce the number of tasks the user has to perform.
  • Reduce the amount of information the user has to keep in mind.


Complexity Score

Complexity is the opposite of Clarity.
Complexity is measured by the number of regions, corners and features in an image.
High visual clutter increases complexity and reduces clarity.
Complexity is also referred to as 'Cognitive Demand'.

A high score means a highly confusing, complex and cluttered design.
A low score means that the design is clean, simple and clear.

The value ranges from 0 to 100.
  • Above 75 is bad and cluttered.
  • Between 25-75 is good.
  • Under 25 is excellent.




Focus Score

Focus is calculated from the distribution of fixations, as a measure of mental workload and our sensitivity to cognitive demand.
The Focus score depends on View Duration: a longer duration increases the number of fixations and reduces the Focus score.

The value ranges from 0 to 100.
  • Above 75 is excellent and focused.
  • Between 25-75 indicates different levels of clustering; attention is spread across more gaze points.
  • Under 25 is bad and random.

Excitingness Score

How exciting and colorful is the design.
A high score means that the design is exciting, interesting and colorful.

 Improving the results:
There is no right or wrong in high or low score value.
Many successful websites have calm design and get a low Exciting score.
A high Excitingness value for websites is above 25 and for Ads is above 50.
Excitingness is affected by colors and how vivid and rich the visual is.
The value ranges from 0 to 100.
  • Above 75 could affect Clarity if there is a large amount of contrasting elements and not enough white space.
  • Between 25-75 is a healthy excitingness value.
  • Under 25 indicates a calm or dull design.


Balanced Score

A high score means that the design is symmetrical, balanced and harmonic.
A low score means an unbalanced and asymmetrical design.
The cyan grid outlines Color vertical and horizontal symmetry.
The magenta grid outlines Intensity vertical and horizontal symmetry.
The Balance score is the arithmetic mean of the above four values.
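
A minimal sketch of that mean, assuming each of the four symmetry values is already expressed on a 0-100 scale:

    # Sketch: Balance score as the arithmetic mean of the four symmetry values
    # (color vertical/horizontal and intensity vertical/horizontal), each on a 0-100 scale.

    def balance_score(color_v, color_h, intensity_v, intensity_h):
        return (color_v + color_h + intensity_v + intensity_h) / 4

    print(balance_score(70, 65, 80, 55))   # 67.5 -> above 60, considered balanced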

The value ranges from 0 to 100.
  • Above 60 is balanced.

 Improving the results:
  • Compartmentalize your design by using grids.
  • Pick two or three base colors at most for your design.
  • Make elements stand out by adding white space around them.
  • Use consistent fonts and typography.
  • Keep all elements visually connected.

Input Image

  • Input Name, Size and Format
    • File Name: The characters allowed for file name are a-z A-Z 0-9 - (dash) . (dot) _ (underscore)
      The maximum supported file name length is 100 characters.
      Illegal file names will be renamed by the service into a random GUID file name.
    • The supported input file formats are: png, jpg, jpeg
    • Image file maximum size is 5MB.
    • Transparent pixels with no color are treated as black pixels.

  • Dimensions
    • Recommended image dimensions: 1024x768, 1280x720, 1920x1080
    • Minimum 800x600. Using smaller dimensions may reduce the accuracy of the analysis.
    • Maximum 1920x1080. Using larger dimensions will not improve the analysis accuracy and could increase the processing time.
    • Dimensions over 1920x1080 are automatically scaled down.
    • Prefer using landscape orientation (wide) over portrait orientation (tall).
    • Aspect ratio should not exceed 2:1 for landscape or 1:2 for portrait.

  • Quality
    Use your highest quality images. We recommend using PNG files.
    You can use JPG format, as long as the image was created using no compression and 100% quality.
    As JPG is a lossy compression method, it can add compression artifacts into the image and affect the analysis results.


  • Product and Package image
    • Prefer landscape or square images.
    • Add at least 10% of blank margins around the product.
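
As a convenience, a minimal sketch (using the Pillow library) that pre-checks an image against the constraints listed above before uploading; the thresholds mirror this section and the file path is a placeholder:

    import os
    import re
    from PIL import Image  # Pillow

    # Sketch: pre-check an image against the constraints listed above before uploading.

    MAX_SIZE_BYTES = 5 * 1024 * 1024        # 5 MB
    ALLOWED_FORMATS = {"PNG", "JPEG"}       # png, jpg, jpeg
    NAME_PATTERN = re.compile(r"^[A-Za-z0-9._-]{1,100}$")

    def check_image(path: str) -> list[str]:
        problems = []
        if not NAME_PATTERN.match(os.path.basename(path)):
            problems.append("file name will be renamed to a random GUID")
        if os.path.getsize(path) > MAX_SIZE_BYTES:
            problems.append("file is larger than 5 MB")
        with Image.open(path) as img:
            if img.format not in ALLOWED_FORMATS:
                problems.append(f"unsupported format: {img.format}")
            w, h = img.size
            if w < 800 or h < 600:
                problems.append("smaller than 800x600; accuracy may be reduced")
            if w > 1920 or h > 1080:
                problems.append("larger than 1920x1080; will be scaled down")
            if max(w / h, h / w) > 2:
                problems.append("aspect ratio exceeds 2:1")
        return problems

    print(check_image("banner.png"))   # an empty list means the file passes all checks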

Input Website

To quickly create a website screenshot image and upload it, use the menu action Upload > Web Address
Be aware that website capture can be inaccurate due to page speed or behavior.
For an accurate screenshot, you should create the screenshot manually, using a browser add-on, and then upload the image using the menu Upload > Image File
  • To create a web snapshot, you can use a browser add-on such as:
    * Awesome Screenshot
    * Fireshot
    * Paparazzi for Mac OSX
  • Capture the visible part of the webpage.
  • Create a website screenshot which contains only the webpage itself. Do not include the surrounding browser UI elements.

Input Video

  • Input Name, Size and Format
    • File Name: The characters allowed for file name are a-z A-Z 0-9 - (dash) . (dot) _ (underscore)
      The maximum supported file name length is 100 characters.
      Illegal file names will be renamed by the service into a random GUID file name.
    • The supported input file formats are: mp4
    • Video file maximum size is 100 MB.
    • The supported video codec is: H264 - MPEG-4 AVC (part 10) (avc1)
    • Video duration is limited to 240 seconds.

  • Dimensions
    • Recommended video dimensions: 1280x720 (720p), 1920x1080 (1080p)
    • Prefer using landscape orientation (wide) over portrait orientation (tall).
    • Dimensions over 1280x720 are automatically scaled down to 720p.

  • Video Analysis: Analyzing a video takes several minutes, depending on the video duration.
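
Similarly, a minimal sketch that inspects a video with ffprobe (part of FFmpeg, which must be installed) and compares it against the limits above; this is a pre-upload convenience, not part of the service:

    import json
    import os
    import subprocess

    # Sketch: inspect a video with ffprobe and compare it against the limits above:
    # mp4 container, H264 codec, up to 100 MB, up to 240 seconds, up to 1280x720.

    def check_video(path: str) -> list[str]:
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "v:0",
             "-show_entries", "stream=codec_name,width,height:format=duration",
             "-of", "json", path],
            capture_output=True, text=True, check=True).stdout
        info = json.loads(out)
        stream = info["streams"][0]
        problems = []
        if not path.lower().endswith(".mp4"):
            problems.append("container should be mp4")
        if stream["codec_name"] != "h264":
            problems.append(f"codec is {stream['codec_name']}, expected H264 (avc1)")
        if float(info["format"]["duration"]) > 240:
            problems.append("longer than 240 seconds")
        if os.path.getsize(path) > 100 * 1024 * 1024:
            problems.append("larger than 100 MB")
        if stream["width"] > 1280 or stream["height"] > 720:
            problems.append("over 1280x720; will be scaled down to 720p")
        return problems

    print(check_video("spot.mp4"))   # an empty list means the file passes all checks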

Analysis Options

Specifies which sub algorithms to include in the analysis and how the reports are presented.

  • View Type
    Set the visual context of the input image or video.
    Available options are: Any, Screen, Ad, Package, Outdoor and Indoor
    Applies to image and video analysis.

  • View Distance
    Set the viewer distance from this image or video. Default is a computer screen.
    Available options are: Any, Desktop Screen, Mobile, Print, Indoor Signage, Package Design and Outdoor Billboard.
    Applies to image and video analysis.

  • View Duration
    Set the viewer duration time in seconds. Default is 5 seconds.
    Available options are: 0-2.5 seconds, 0-5 seconds, 0-7.5 seconds.
    Applies only to image analysis.

  • Draw map legend and scores on reports
    A map legend is the little box in the corner of the report. It contains parameters information.
    Applies only to image analysis.

  • Merge input image as report background
    Uncheck this if you wish to generate the heatmap image without a merged layer of the original image.
    Applies only to image analysis.

  • White Focus (opacity) map report
    Create the Focus report with a white overlay instead of the default black overlay.
    Applies only to image analysis.

  • Auto create AOIs
    Automatically create and add AOIs to the AOIs list.
    Applies only to image analysis.
    Automatic creation of Areas of Interest (AOIs) using object detection algorithms works by identifying key objects or elements in an image, such as faces, products, or text, using machine learning models. These models scan the image, draw bounding boxes around detected objects, and automatically designate these areas as AOIs. The process is fast and accurate, removing the need for manual AOI creation, and focuses on elements that naturally attract user attention, such as faces or important design elements like buttons or headlines. This automation helps optimize designs by ensuring that the most critical areas are highlighted.
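
As an illustration of this general idea (not the service's actual detection models), a minimal sketch that proposes face AOIs with OpenCV's bundled Haar cascade; the image path is a placeholder:

    import cv2  # opencv-python

    # Illustration only: detect faces with OpenCV's bundled Haar cascade and turn
    # each bounding box into a candidate AOI, mimicking automatic AOI creation.

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    image = cv2.imread("banner.png")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    aois = [{"name": f"face_{i}", "x": int(x), "y": int(y), "w": int(w), "h": int(h)}
            for i, (x, y, w, h) in enumerate(faces)]
    print(aois)   # candidate AOIs to draw around each detected face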

Share and Compare

Share action enables you to copy a link to the reports and share it externally with your colleagues and clients.

Compare action enables you to compare several reports in a single view.
Select a few images from the gallery while holding the CTRL key, and click Compare.


Customization

More output customization settings are available at dashboard > Settings > Customization.

  • Custom Watermark
    Upload your logo and we will embed it into the reports (watermark.png).
    To test your change, click "Analyze" and examine the watermark at the bottom right corner of each image report.
    To have no watermark at all, click the "Empty" button or upload a small transparent png image file.

  • Custom Report
    Upload a custom report template file (template.htm or template.mdddl). Using a custom report template you can add your logo, remove Feng-GUI branding, choose your report’s visual style, and edit the content to fit the audience. To test your change, click "Download > Report as pdf" and examine the new report design.
    Download examples of default and custom template files

Team Sharing

Share your plans with other team members.
You can add team members to share your plans with, at no additional cost.
Team members are required to have a Feng-GUI account.


Team members on this list are able to use your active plans.
A team member on your team will see your plans inside their Invoices list marked with the sign.
The team member can see the invoice id but cannot download the invoice or see its details.

Affiliate

Affiliate dashboard enables you to see:
* Your Network - Feng-GUI customers who currently have your affiliate code set in their profile.
Your affiliate code will appear in their Account details.
* Transactions from your Network - Transactions that used your affiliate code.