The Science Behind Our Predictions

Built on the world's largest dataset of real eye-tracking sessions, with industry-leading accuracy

✓ 2.2M+ Tests ✓ 41.5M+ Fixations ✓ Low MAE Scores

Dataset Foundation

Our model's accuracy stems from training on the most comprehensive real-world webcam eye-tracking dataset ever assembled, collected from people viewing content on desktop and laptop screens.

2.2M+

Eye-Tracking Tests

Comprehensive dataset from real webcam eye-tracking sessions, ensuring robust attention pattern recognition across diverse viewing scenarios

14,400+

Distinct Images

Wide variety of designs, advertisements, websites, and packaging viewed on desktop and laptop screens in real-world studies

41.5M+

Eye Fixations

Over 830 million raw gaze samples captured via webcam eye-tracking on desktop and laptop devices, reflecting authentic visual behavior

Proven Predictive Accuracy

We measure performance using Mean Absolute Error (MAE) against real webcam eye-tracking data from desktop and laptop viewing sessions: the lower the score, the more closely our predictions match actual human gaze patterns.
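MAE itself is a simple metric: the average absolute difference between predicted and observed values. A minimal sketch, with purely illustrative numbers (not real model output or study data):

```python
import numpy as np

def mean_absolute_error(predicted, actual):
    """Mean Absolute Error between predicted attention values and
    ground-truth gaze measurements, both normalized to [0, 1]."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.mean(np.abs(predicted - actual)))

# Illustrative values only:
pred = [0.20, 0.55, 0.80, 0.10]
truth = [0.25, 0.50, 0.75, 0.20]
print(mean_absolute_error(pred, truth))  # 0.0625
```

On this toy input the score is 0.0625, i.e. predictions are off by about six percentage points on average; the component scores reported below are read the same way.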

Hold Component Accuracy (MAE)

0-0.5s: 0.105
0-1s: 0.090
0-3s: 0.068
0-5s: 0.064
0-10s: 0.062

Speed Component Accuracy (MAE)

0-0.5s: 0.102
0-1s: 0.083
0-3s: 0.062
0-5s: 0.060
0-10s: 0.059

Reach Component Accuracy (MAE)

0-0.5s: 0.118
0-1s: 0.095
0-3s: 0.071
0-5s: 0.070
0-10s: 0.068

Understanding the Accuracy Metrics

Hold Component: Measures how long people fixate on areas (0.062-0.105 MAE). Accuracy improves over longer viewing periods as gaze patterns stabilize.

Speed Component: Predicts time-to-first-fixation (0.059-0.102 MAE). The most accurate of the three components at determining which areas capture immediate attention.

Reach Component: Predicts proportion of users who notice areas (0.068-0.118 MAE). Strong performance in forecasting overall visibility and coverage patterns across different viewing durations.
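To make the three components concrete, here is a toy sketch of how each could be computed from raw fixation records. The record format, area names, and numbers are hypothetical, chosen only to illustrate the definitions above:

```python
# Hypothetical fixation records: (viewer_id, area, start_time_s, duration_s).
# Illustrative numbers only, not real study data.
fixations = [
    ("v1", "logo", 0.3, 0.40),
    ("v1", "cta",  1.1, 0.25),
    ("v2", "logo", 0.5, 0.60),
    ("v3", "cta",  0.9, 0.30),
]
viewers = {"v1", "v2", "v3"}

def hold(area):
    """Hold: mean fixation duration across fixations on an area."""
    durations = [d for _, a, _, d in fixations if a == area]
    return sum(durations) / len(durations)

def speed(area):
    """Speed: earliest time-to-first-fixation on an area."""
    return min(start for _, a, start, _ in fixations if a == area)

def reach(area):
    """Reach: share of viewers with at least one fixation on an area."""
    seen = {v for v, a, _, _ in fixations if a == area}
    return len(seen) / len(viewers)

print(hold("logo"), speed("logo"), reach("logo"))
```

For the "logo" area this yields a mean hold of 0.5 s, a first fixation at 0.3 s, and a reach of two out of three viewers. The model predicts these quantities from the image alone, and the MAE tables above score those predictions against measurements like these.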

Deep Learning Architecture

Input Image → Feature Extraction → Multi-scale Processing → Attention Prediction → VAI/Hold/Speed/Reach

Technical Implementation

Deep Neural Networks

State-of-the-art convolutional neural networks trained specifically on webcam eye-tracking data from real desktop and laptop viewing sessions

Multi-Scale Analysis

Processes images at multiple resolutions to capture both fine-grained details and overall composition patterns that influence visual attention
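The multi-scale idea can be sketched in a few lines. This is not the production model, just a minimal numpy illustration of the structure: process the image at several resolutions, bring each result back to full size, and combine them into one map. All function names and the brightness-based "feature" are assumptions for the sake of the example:

```python
import numpy as np

def downsample(img, factor):
    """Average-pool a 2D array by an integer factor (a coarser scale)."""
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    return img[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor).mean(axis=(1, 3))

def multi_scale_map(img, scales=(1, 2, 4)):
    """Toy multi-scale pipeline: analyze at several resolutions,
    upsample each result, and average into a single map."""
    h, w = img.shape
    maps = []
    for s in scales:
        coarse = downsample(img, s)
        # Nearest-neighbour upsample back to full resolution.
        up = np.repeat(np.repeat(coarse, s, axis=0), s, axis=1)
        maps.append(up[:h, :w])
    return np.mean(maps, axis=0)

img = np.random.rand(64, 64)  # stand-in for an input image
print(multi_scale_map(img).shape)  # (64, 64)
```

The fine scales preserve local detail while the coarse scales capture overall composition; a real network learns what to extract at each scale rather than simply averaging pixel values.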

Real-World Training

Trained exclusively on actual human viewing data from webcam eye-tracking studies, not synthetic or laboratory-controlled environments

Experience the Accuracy Yourself

See how our scientifically validated predictions can enhance your design process