Attention Predictor Web App – User Manual
Welcome to the Attention Predictor™ Web App user manual. This guide explains how to use the RealEye Attention Predictor to analyze visual designs and make data-driven design decisions. It covers the tool’s features, how to interpret its outputs (heatmaps, gaze paths, and metrics), and provides examples (with a focus on package design) for applying these insights. Recommendations are grounded in industry best practices and research on visual attention and design effectiveness.
Contents
1. Introduction to Attention Predictor
2. Getting Started with the Web App
3. Features & Metrics Explained
4. Interpreting Results for Design Decisions
5. Use Cases & Examples
1. Introduction to Attention Predictor
Attention Predictor™ is an AI-powered tool that predicts where people’s eyes will focus on your design – without needing a live eye-tracking test 1. It generates results in seconds, helping you iterate fast. The predictions are built on RealEye’s extensive eye-tracking data, giving about 91% accuracy compared to real human eye-tracking studies 2. In practical terms, this means you can trust the tool to highlight visual attention patterns similar to those observed with real viewers.
Why use Attention Predictor? In visual design, capturing user attention is critical. Research shows that what users look at influences their perceived product quality, brand recall, and even purchase intent 3. By understanding where and what draws attention in your design, you can refine layouts to better communicate your message and improve user engagement.
Key benefits of using Attention Predictor in your design process:
- Instant Insights, Early in Design: Evaluate design concepts early and often. Get attention heatmaps and metrics within seconds to inform decisions at the concept stage 4. For example, you can pre-test multiple packaging ideas and quickly narrow down to the best few, instead of physically testing all 10 concepts 5.
- Data-Driven Decisions: Move beyond subjective opinions. The tool provides objective metrics (like how quickly the logo is seen or what percentage of viewers likely notice the tagline) so you can back up design changes with data.
- Iterative Improvement: Treat the predictor as a design optimization loop – identify potential attention issues, tweak the design, and re-run the analysis. This funnel approach helps filter out weaker designs and saves time and cost by focusing only on strong contenders for any subsequent in-depth user testing 6.
- Align with Best Practices: The metrics tie closely to known design principles. For instance, we know visual hierarchy (the arrangement of elements by importance) guides the eye. Attention Predictor will reveal if your intended hierarchy is working by showing the likely viewing order and emphasis on elements. If users aren’t seeing what they should first, you’ll spot it in the results and can adjust.
By integrating Attention Predictor into your workflow, you ensure that critical elements (brand logos, call-to-action buttons, headlines, etc.) are positioned and designed to grab attention, and nothing important gets overlooked.
2. Getting Started with the Web App
This section walks you through using the Attention Predictor web application – from uploading your design to interpreting the output.
2.1 Uploading Your Design
Step 1: Access the Tool. Go to the Attention Predictor web app (e.g., via the RealEye dashboard or the direct link). Ensure you have an account and access rights if required.
Step 2: Select or Upload Visuals. You can upload images of the designs you want to test. These could be:
- Product packaging artwork (e.g., a PNG/JPG of your label or 3D pack shot),
- Advertisements or banners (digital ads, print ad designs),
- UI screenshots or mockups,
- Any visual where knowing attention distribution is valuable.
Step 3: Define Areas of Interest (AOIs). Once your design is loaded, you’ll typically identify key elements on it as Areas of Interest. An AOI (Area of Interest) is a region you mark on the design that you want to analyze specifically. For example, on a package you might draw AOIs around:
- The Brand Logo,
- The Product Name or headline text,
- Key Claims or Taglines (e.g., “Organic” or “High Protein” badge),
- The Main Image (product photo or illustration),
- Price or Promotional Stickers if present,
- Any other element you care about (e.g., a CTA button on a web UI, a featured icon, etc.).
These AOIs allow the system to compute metrics for each defined element (we’ll explain metrics in Section 3). If you prefer, you can initially use the automatic heatmap without defining AOIs, but defining them unlocks the detailed quantitative analysis per element.
Tips for AOIs: Make each AOI just big enough to cover the element of interest, and avoid overlapping AOIs where possible. You can usually draw rectangles or polygons on the interface to set an AOI. Label AOIs meaningfully (the tool may auto-label common ones like “Logo” when using a template, or you can name them yourself). In short, AOIs should correspond to the key pieces of information or design components you want to evaluate.
Step 4: Run the Prediction. After uploading the design and setting AOIs, initiate the analysis (often by clicking a “Run” or “Analyze” button). The system will process the image through the AI model. In a few seconds, you’ll get a report or dashboard of results.
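Conceptually, each AOI is just a labeled region on the uploaded image. The following is a minimal sketch of how such regions might be recorded before analysis; the class, fields, and coordinates are illustrative only, not the RealEye API:

```python
from dataclasses import dataclass

@dataclass
class AOI:
    """A labeled rectangular Area of Interest on the uploaded design.

    Coordinates are in pixels, measured from the top-left corner of the image.
    """
    label: str
    x: int       # left edge
    y: int       # top edge
    width: int
    height: int

    def contains(self, px: int, py: int) -> bool:
        """True if a point (e.g., a predicted fixation) falls inside this AOI."""
        return (self.x <= px < self.x + self.width
                and self.y <= py < self.y + self.height)

# AOIs for a hypothetical 800x1200 px package design
aois = [
    AOI("Logo", x=300, y=50, width=200, height=100),
    AOI("Tagline", x=250, y=400, width=300, height=60),
    AOI("Promo Badge", x=550, y=150, width=150, height=150),
]

# A predicted fixation at (350, 90) lands inside the Logo region
hits = [a.label for a in aois if a.contains(350, 90)]
```

Per-AOI metrics such as Reach and Hold are then computed from how predicted gaze falls inside each region.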
2.2 Understanding the Output Overview
The Attention Predictor provides two types of outputs:
- Visual Outputs – Heatmaps and Scan Paths over your design.
- Quantitative Outputs – Attention Metrics (scores and values for each AOI, including composite indexes).
After running an analysis, the interface will typically show your design with colored overlays and some charts/tables. Let’s break down each component:
Predicted Heatmap: A color-coded overlay on your design indicating intensity of attention. “Hot” colors (red, orange, yellow) mark areas that attract a lot of attention, whereas “cool” colors (green, blue) mark areas with less predicted attention 7 8. This heatmap can be toggled for different metrics (overall vs specific metrics like Speed, Reach, Hold – explained in Section 3). It answers “Where on my design are people likely to look the most?” at a glance.
Attention Scan Path (Gaze Order): A sequence overlay (usually numbered pins and arrows on the design) showing the predicted order in which a viewer’s eyes might move across key elements 9. The numbers (1, 2, 3, …) indicate the rank order of fixations: 1 being where the eye likely goes first, then 2, and so on. Arrows may connect them to visualize the path. This reveals the visual hierarchy and flow: e.g., Do people look at the image first then the title, or vice versa? Is the first fixation on the brand logo as intended, or is it getting looked at later?
Attention Distribution Summary: Some reports include a chart or summary showing how attention is divided among your AOIs 10. For example, it might say the logo got 30% of the total attention, the main image 50%, tagline 10%, etc. This helps compare the relative draw of each element directly.
AOI Metrics Table: A table listing each defined AOI (element) and its scores on key attention metrics 11. Typically, you’ll see columns for:
- VAI (Visual Attention Index) – an overall score from 0 to 1 indicating combined attention effectiveness,
- Speed – how fast the AOI attracts attention (perhaps given in seconds or a normalized 0–1 score or as “Time to First Fixation” in milliseconds),
- Reach – the percentage of viewers expected to notice that AOI (0–100%),
- Hold – how long viewers would look at it, once noticed (could be given as a percentage of total viewing time or an average duration).
There may also be statistics like average, max, min attention values for the AOI for advanced users, but the main four metrics above are the most actionable for design decisions.
Later sections will dive deeper into how to interpret these visualizations and metrics. The next section will explain what each metric means and why it’s important.
3. Features & Metrics Explained
In this section, we explain the core attention metrics and features that the Attention Predictor provides. Understanding these will help you interpret the results properly and make informed design choices.
3.1 Heatmaps – Visual Attention Distribution
Heatmaps are one of the most intuitive outputs. They use color intensity to show predicted attention distribution on your design:
- Warm colors (Red/Orange/Yellow) indicate high attention areas 12. These are spots the model predicts will draw a lot of viewer gaze.
- Cool colors (Green/Blue) indicate low attention areas 13. These parts are likely to be overlooked or only glanced at briefly.
Heatmaps can usually be generated for overall attention and for specific metrics:
- A Composite or VAI Heatmap (sometimes just the default heatmap) shows overall attention “pull” combining factors. It highlights generally which areas are most eye-catching overall 14.
- A Speed Heatmap might use colors to show which areas are noticed first (hot colors = seen quickly) 15 16.
- A Reach Heatmap highlights areas visible to the majority of viewers (hot = most people notice that area) 17.
- A Hold Heatmap highlights where people will spend the most time looking (hot = eyes dwell there longest) 18.
How to use heatmaps: Scan your design with the heatmap overlay:
- Verify if the brightest (hottest) spots align with what you want to draw attention. For example, on a product package, you’d hope the brand name or product image is among the hottest spots.
- Note any important element that is a “cool” spot – that’s a warning sign the element isn’t getting enough attention. Conversely, if an unimportant background element is bright red, it might be too visually dominant, distracting from the key message.
Heatmaps are great for a quick visual gut-check of your design’s attention hotspots. However, they are aggregate and qualitative. That’s where the metrics come in to quantify specifics.
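The hot-to-cool banding described above amounts to thresholding a normalized attention map. A minimal sketch, with made-up thresholds and a toy grid rather than the model’s actual full-resolution saliency output:

```python
# Sketch: bucketing a 0-1 attention map into the color bands a heatmap
# overlay uses. The thresholds and the 3x4 grid are illustrative only.
def color_band(value: float) -> str:
    """Map a 0-1 attention value to a heatmap color band."""
    if value >= 0.75:
        return "red"      # very high attention
    if value >= 0.5:
        return "orange"   # high
    if value >= 0.25:
        return "green"    # low
    return "blue"         # very low / likely overlooked

attention_grid = [
    [0.9, 0.8, 0.3, 0.1],
    [0.7, 0.6, 0.2, 0.1],
    [0.4, 0.3, 0.1, 0.0],
]

bands = [[color_band(v) for v in row] for row in attention_grid]
```

In this toy grid the upper-left region reads as a “hot spot” while the right side would appear cool, which is exactly the gut-check the heatmap supports.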
3.2 Visual Attention Metrics (VAI, Speed, Reach, Hold)
The Attention Predictor provides four key metrics for each Area of Interest. These metrics are grounded in eye-tracking research concepts and give a detailed picture of different aspects of visual attention. Table 1 below summarizes each metric:
Table 1: Key Attention Metrics and Their Interpretation
Metric | What It Measures | High Value Meaning | Low Value Meaning | Design Implications |
---|---|---|---|---|
VAI (Visual Attention Index) (0.0–1.0) | Composite attention score combining Speed, Reach, and Hold. It reflects the overall attention-grabbing and -holding power of the AOI. | The element is very attention-effective overall (draws attention quickly, widely, and for long). A higher VAI suggests the area strongly captures viewer attention. | The element is underperforming in attention – it is not noticed by many, not noticed quickly, or not engaging. Low VAI indicates the area lacks visual impact compared to others. | Use VAI as a quick gauge of which design elements are working best. Low-VAI elements may need redesign or de-emphasis; high-VAI elements are your strong points. |
Speed (Time to First Fixation) | How fast the area is noticed once the viewer starts looking. Often measured as Time to First Fixation (TTFF), normalized to 0–1 (higher = faster). Essentially: does it grab attention immediately or after a delay? | High Speed = noticed quickly. The element draws the eye almost immediately (e.g., a large title or striking image). A high Speed score (toward 1.0) means the viewer’s gaze goes there early on. | Low Speed = noticed later. Viewers’ eyes wander to this element only after looking elsewhere first. In the worst cases, they might not see it within the first several seconds at all. | Visual hierarchy check: if something intended to be seen first has low Speed, increase its prominence. E.g., on a package, the brand or key image should be fast; aim for high Speed on crucial elements (e.g., logo Speed > 0.7 for packaging). Low Speed indicates the element might be too subtle or too low in the hierarchy – consider making it bigger, bolder, or repositioning it to be seen earlier. |
Reach (Visibility %) | The percentage of viewers predicted to notice that area – in other words, coverage of audience attention. If 100 people viewed this design, how many would likely look at this area at least briefly? | High Reach = most viewers see it. For example, 90% Reach means almost everyone notices it, implying the element is very noticeable or centrally placed. | Low Reach = many viewers miss it. E.g., 50% Reach means half the audience might never notice that element; it is peripheral or not salient enough. | Must-see content: key info should have high Reach (ideally > 80% for important elements). If an important element (like a call-to-action or brand logo) has Reach < 70%, that’s a serious issue – it’s effectively “invisible” to a large portion of users. Consider relocating it to a more central spot or increasing contrast. For less critical decorative elements, low Reach may be acceptable. |
Hold (Attention Duration) | The average time viewers spend looking at the area once they notice it. It reflects sustained interest – how engaging the element is. Often derived from predicted fixation durations (normalized 0–1 or given in milliseconds). | High Hold = long dwell time. Viewers likely linger on this element, indicating deeper engagement (e.g., reading text, scrutinizing detail). For instance, a detailed product description or an enticing image might hold gaze longer. | Low Hold = quick glances. Viewers look only briefly and then move on. Low Hold might mean the content is not interesting enough, or simply very simple (which can be fine for a quick message). | Content engagement: high Hold on crucial messaging (e.g., product info, tagline) is usually good – it means people read or examine it – though very long dwell can also signal difficulty in processing. If Hold is unexpectedly low on something that should be engaging (say, an infographic people are expected to study), the content may not be catching attention even when seen. Conversely, if an unimportant graphic has high Hold, it may be distracting viewers from more important info – consider simplifying it. |
How the metrics work together: These metrics often trade off. For example, large bold text might have high Speed and high Reach (everyone sees it quickly) but maybe lower Hold (people see it, get the message at a glance, and move on). Conversely, a block of fine print might have low Speed (not noticed immediately, especially if it’s at the bottom), low Reach (many ignore it), but those who do read it might spend a longer time (higher Hold). Visual Attention Index (VAI) rolls up the three into one score for convenience 19. But in analysis, it’s useful to dig into each component to understand why an element’s VAI is high or low.
For instance:
- If an element’s VAI is low, check the breakdown: is it because Reach is low (few see it at all)? Or maybe Reach is fine but Speed is low (they eventually see it but not early)? Or perhaps people see it (high Reach, decent Speed) but don’t look for long (Hold is low). Each scenario suggests a different fix.
- If an element’s VAI is high, it generally means it is well noticed and engaged. But ensure it’s intentionally so — e.g., if a high VAI corresponds to a decorative image stealing the show, that might actually be a problem (it’s hogging attention).
The tool might provide the actual numeric values (like “Reach: 85%” or “Hold: 1200ms average”) and often normalizes these into 0-1 scales internally for computing scores. The exact scoring formula for VAI is typically the average of its component scores 20, but the main thing is understanding what each part means for your design.
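Under that averaging convention, the composite score can be sketched in a few lines. This is a simplification for intuition; the tool’s exact formula and normalization may differ:

```python
def vai(speed: float, reach: float, hold: float) -> float:
    """Composite Visual Attention Index as the simple average of the three
    component scores, each normalized to 0-1. An approximation of the
    'average of its component scores' convention, not the tool's exact formula.
    """
    return round((speed + reach + hold) / 3, 2)

# Reach reported as 85% -> normalize to 0.85 before averaging
score = vai(speed=0.6, reach=0.85, hold=0.4)
```

Note how one weak component (here Hold at 0.4) drags the composite down, which is why the breakdown above is worth inspecting.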
3.3 Gaze Sequence (Scan Path)
The scan path visualization complements the metrics by showing the likely order in which a viewer will look at your defined AOIs 21. It is often depicted by numbers or arrows connecting the AOIs:
- “1” marks the first fixation, “2” the second, and so on.
- Arrows might show the path connecting these points in sequence.
Using the scan path:
- Validate your layout’s intended sequence. Good designs often guide the user’s eyes in a logical order (for example: first a headline, then an image, then a descriptive text, then a CTA button). Check if the predicted order matches your intended information flow. If not – e.g., if the eyes go to a less important element first – you may need to adjust the design emphasis.
- Check for any elements that are completely skipped in the early scan. Sometimes a scan path might number only a few elements, implying some AOIs weren’t viewed in the typical timeframe (e.g., within the first few seconds no one got to AOI X). Those elements likely have very low Reach/Speed and may need drastic changes if they are important.
The scan path is influenced heavily by the Speed metric of each AOI (and to a degree the spatial layout). The element with the highest Speed score is likely “1”, then next highest might be “2”, etc., assuming they are spatially distinct.
3.4 Example Output Interpretation
Let’s illustrate with an example output (as a hypothetical):
Imagine you uploaded a cereal box design and marked these AOIs: Logo, Product Image, Tagline Text, Nutrition Info, Promotion Badge. After running Attention Predictor, you get:
Heatmap: Bright red over the Product Image and the big cartoon mascot on the box; the Logo area is yellowish; Tagline is greenish; Nutrition info (small text) is mostly blue; the Promo Badge (a starburst saying “High Fiber!”) is orange.
Scan path: 1 on the Mascot Image, 2 on the Promo Badge, 3 on the Logo, 4 on the Tagline, (nutrition info not numbered, meaning not in top 4 fixations).
Metrics:
- Logo: Speed 0.60, Reach 75%, Hold 0.4, VAI = 0.58.
- Product Image/Mascot: Speed 0.85, Reach 95%, Hold 0.5, VAI = 0.77.
- Tagline: Speed 0.50, Reach 60%, Hold 0.3, VAI = 0.47.
- Nutrition Text: Speed 0.30, Reach 40%, Hold 0.2, VAI = 0.30.
- Promo Badge: Speed 0.80, Reach 90%, Hold 0.35, VAI = 0.68.
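Under the average-of-components convention described in Section 3, the hypothetical scores above are internally consistent. A short sketch recomputes the VAI values and derives the Speed-based scan order; the data and rounding come from the example, not from actual tool output:

```python
# Example AOI scores from the hypothetical cereal-box run.
# Reach is stored as a 0-1 fraction (75% -> 0.75).
aoi_scores = {
    "Logo":           {"speed": 0.60, "reach": 0.75, "hold": 0.40},
    "Product Image":  {"speed": 0.85, "reach": 0.95, "hold": 0.50},
    "Tagline":        {"speed": 0.50, "reach": 0.60, "hold": 0.30},
    "Nutrition Text": {"speed": 0.30, "reach": 0.40, "hold": 0.20},
    "Promo Badge":    {"speed": 0.80, "reach": 0.90, "hold": 0.35},
}

# VAI as the rounded average of the three component scores
vai = {name: round(sum(m.values()) / 3, 2) for name, m in aoi_scores.items()}

# Predicted fixation order: AOIs ranked by Speed, highest first
scan_order = sorted(aoi_scores, key=lambda n: aoi_scores[n]["speed"], reverse=True)
```

The ranking reproduces the example scan path (Image, then Promo Badge, then Logo, then Tagline), with Nutrition Text last.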
Interpreting this example:
The Product Image and Promo Badge are attracting attention very well (high Speed, high Reach). The Mascot is basically the first thing everyone looks at, and nearly everyone also sees the flashy promo. These drive the highest VAI. If that was the intention (perhaps the mascot is central to the design), that’s good. But if the brand Logo was supposed to be noticed first, we have an issue because it’s coming third.
The Logo’s scores are moderate; 75% Reach means 1 in 4 viewers might overlook it, and Speed 0.60 indicates many don’t notice it immediately. This is suboptimal for branding – ideally a logo on packaging should be seen quickly and near universally 22. The heatmap already showed the logo wasn’t among the hottest spots, which suggests it may be too small or tucked in a corner. A design change (a larger logo or better contrast) could be warranted to boost its prominence.
The Tagline (“healthy and delicious!” for instance) is performing poorly (Reach 60%). Many won’t read it at all. Perhaps its font is small or low contrast. If the tagline is an important selling point, it needs to stand out more. If it’s not crucial, maybe that’s acceptable – but generally, such low reach on text could mean it’s essentially not communicating to 40% of the audience.
Nutrition info (small text) being low is expected; not everyone reads the fine print on a quick glance. If this is fine (it’s there for those interested, but not critical for everyone), we might accept low metrics on it. It’s common that detailed info isn’t universally seen in the first few seconds of viewing. However, if that info was legally or strategically important to be noticed (e.g., an allergy warning), then its design should be reconsidered.
The scan order (Image > Promo > Logo > Tagline) shows the visual hierarchy. If the design goal was to have brand recognition first, then clearly the current hierarchy isn’t meeting that goal; the mascot and promo are overpowering the brand. On the flip side, if the promo was vital (maybe a new “20% extra free” burst you intentionally wanted to catch the eye), then it’s working as intended by being second in attention and very high reach.
We will next discuss how to act on such insights – essentially, how to go from these observations to concrete design improvements.
4. Interpreting Results for Design Decisions
This is the most important section: using the Attention Predictor’s output to improve your designs. We will go through typical findings and what they mean for design tweaks, supported by research and examples.
When reviewing your results, ask two key questions for each element:
- “Is this level of attention what I want for this element?” (If an element is crucial but has low attention, that’s a problem; if it’s minor but drawing high attention, that could also be a problem.)
- “If the attention is not ideal, how can I change the design to fix it?” (Often by altering size, color, contrast, position, etc.)
Below are common scenarios and recommended actions:
Element is barely noticed (Low Reach) – Problem: A vital element (e.g., brand logo, CTA button, headline) shows a Reach well below your target (say less than 70% of viewers see it). Implication: A significant portion of users will overlook it entirely, meaning your message or branding might not be received by everyone. Solution: Make it more salient and central. For example, research in packaging design underscores that prominently displayed brand names draw consistent attention 23. If your brand logo is being missed, consider enlarging it, placing it near the area where eyes initially land (often the center or top-left for Western readers), or giving it a color that contrasts with the background. In web design, if a CTA button has low reach, try using a standout color or moving it above the fold. The goal is to ensure key elements are in the natural viewing path. As a rule of thumb, aim for Reach ~85-90%+ for critical elements so nearly everyone notices them 24 25.
Case example: A software homepage’s “Sign Up Free” button had only 60% Reach; many users scanned the page and missed the button. The design team made the button larger and orange (instead of blending in as a gray link). A re-test showed Reach improved to 95% and Speed went from 0.4 to 0.8 – now almost all users see it quickly. This aligns with known UX principles that call-to-action buttons should be prominent and visually distinct 26.
Element is noticed too late (Low Speed) – Problem: You expected an element to catch attention early, but its Speed score is low (or scan-path order is far down the line). Implication: Viewers’ eyes are drawn elsewhere first, possibly diluting the impact of what they see later. For time-sensitive messaging (e.g., a flash sale banner, or a warning message), a delay can mean it’s ignored. Solution: Re-evaluate visual hierarchy. Increase the element’s contrast, size, or isolation. Simplify surrounding clutter that might be stealing attention. For packaging, if a logo or new product stamp is seen late, try placing it where the eye naturally enters the design (often images or top sections get first look). Academic studies note that colorful illustrations can grab initial attention 27 – thus, if an illustration is not the main message but is pulling eyes first, perhaps tone it down relative to what should be first. Conversely, if something important is too subtle, you might add a pop of color. For example, adding a bold outline or background behind a tagline can increase its initial visibility.
Case example: In a print advertisement, the tagline was a clever slogan at the bottom, but eye-tracking prediction showed viewers saw the model’s image and the logo first, and only later (if at all) read the tagline (Speed ~0.3, meaning many took over 3 seconds to reach it). To fix this, the designers moved the tagline closer to the image and increased font size. The revised design had the tagline’s Speed go up to 0.7, indicating most viewers read it almost right after the main image. The ad thus communicated its message quicker. If an element absolutely must be seen first, consider techniques like gaze cueing (e.g., a person in an image looking or pointing towards the element, which can direct the viewer’s gaze) – studies show gaze cues can guide attention in advertisements 28 29.
Element has brief engagement (Low Hold) – Problem: People look at an element but don’t spend much time on it. Possibly they glance and move on very fast. Implication: If this is a text or detailed graphic meant to convey info, people might not be absorbing it. If it’s something meant only to be seen quickly (like a logo), low hold is not necessarily bad (it could mean they recognized it instantly without needing to stare). Solution: If low hold on an important explanatory element (say a product description or an infographic), consider if the content is uninteresting or too hard to read. You might need to simplify the content so it’s digestible or make it more compelling so users choose to spend time. On the other hand, don’t force hold time artificially – the goal is clarity, not just eyeball time. For example, an e-commerce product image might not hold long if it’s just a quick photo, which could be fine if the goal is just to be noticed; but the product description text should hold attention long enough to be read fully. Ensure font size is readable and the text is in a viewer’s direct line of sight to encourage longer focus. Sometimes adding a small intrigue can increase dwell time (e.g., a short tagline that prompts thought, or an infographic that invites closer look).
Case example: A promotional flyer had a paragraph of details that the predictor showed got low Hold (most people scanned the headline and image but spent almost no time on the paragraph). This suggested the paragraph wasn’t being read. The team shortened the text into bullet points and added an image icon next to each point. On re-test, hold time on that area increased significantly as the bullet layout was easier to read – indicating more engagement with the content. In design, concise content often yields better engagement; if attention data shows low hold, try breaking information into digestible chunks or more visual forms (charts, icons, etc.). Another insight from banner ad research: beyond a certain point, extra attention doesn’t improve recall much 30 – meaning if an ad or element conveyed its message in a brief glance, that might be sufficient. So the context of the element matters when judging Hold.
Element draws disproportionate attention (Very high VAI on a secondary element) – Problem: An element that isn’t a key message is nonetheless drawing a lot of attention (e.g., an irrelevant image or decorative graphic overshadowing the main content). Implication: It can distract users from what truly matters. Design resources are being “wasted” on something that doesn’t drive your goal (conversion, brand message, etc.). Solution: Consider toning it down or reallocating that attention to something more important. This might mean reducing its size, muting its colors, or even removing it if it’s not serving a purpose. Alternatively, if it’s an element that can be repurposed to carry a message, do so. For instance, if a background image is very eye-catching, maybe you can overlay a key message on it so at least when people look there, they see content you want them to see.
Case example: A webpage had a decorative animation in the sidebar that turned out to be one of the top attention-grabbers (high VAI), while the actual product-features list was relatively ignored. The design team minimized the animation (making it static and lighter in color) and added an icon next to each feature in the list to make that section more visually engaging. The subsequent analysis showed attention balanced out more toward the features list. This follows the principle of visual hierarchy: design elements should be sized and styled according to their importance. Use the attention data to ensure your visual hierarchy truly reflects your content priority.
General layout issues (Scan path anomalies): Sometimes the scan path might show a strange order, like jumping around in a way that suggests the design layout is confusing. For example, if gaze goes: middle -> top -> bottom -> then back to top, it might indicate the design’s flow isn’t linear or clear. Or perhaps two elements are competing (users split focus). To improve flow, align elements in a more predictable reading order (for western audiences, roughly top-to-bottom, left-to-right Z-pattern). Make sure the most important element is also the most visually prominent, second most important is second in prominence, and so on, to naturally guide the eye in a logical sequence 31. Consistency in style can also help (e.g., if all section headers look the same, users know what to look at next).
Tip: You can use the Attention Predictor for A/B testing layout variations. If you suspect an alternate arrangement might create a better flow, test both and compare scan paths and metrics. The tool’s Comparative Analysis features (if available) can highlight differences side by side 32.
Attention vs. Preference Caveat: Keep in mind, grabbing attention is the first step, not the end goal. An element being seen does not automatically mean the user likes it or will act on it. One research study on packaging design noted “seeing is not necessarily liking,” meaning a package can catch the eye but still not appeal to the consumer or fit their preferences (Husić-Mehmedović et al., 2017). Use attention insights in tandem with other feedback. For instance, if two designs both ensure the main message is seen, you might then choose the one that fits brand aesthetics or tested better in surveys. Attention data should be combined with quality of content: an ad that is attention-grabbing but confusing will not perform as well as one that is attention-grabbing and clear in message. So while optimizing designs, always revisit if the content itself is effective after ensuring it gets seen.
To summarize this section, Table 2 provides a quick mapping from common data observations to possible design actions:
Table 2: From Observation to Action – Design Recommendations
Observation (Data) | Possible Issue | Design Recommendation |
---|---|---|
Important AOI Reach < 70% (Most users miss it) | Too hidden or not salient in layout | Increase size/contrast; move to a more central or top position; highlight with color or icon; ensure it doesn’t look like an “ad” (to avoid being ignored). |
Important AOI Speed low (Not seen early) | Low in visual hierarchy, overshadowed by other elements | Emphasize it: use larger font or graphic, reduce nearby clutter. Make it the focal point by using imagery or arrows guiding toward it. |
AOI high Speed but low Hold (Seen but not read) | Possibly too superficial or not engaging | If textual, make sure the content is compelling and easy to digest (use bullet points, icons). If it’s fine that it’s quick (like a logo), no change needed. |
AOI high Hold but low Reach (Those who see it read it, but many don’t see it) | Good content, but hidden | Promotion needed: it’s engaging to those who notice, so find ways to increase its reach (e.g., make it stand out more in the design so more people notice it). |
Unimportant element high VAI/Reach | Distraction – drawing attention away from main points | Downplay it: shrink, mute colors, or remove detail. Alternatively, if possible, turn it into a vehicle for a message (e.g., overlay text on that image). |
Scan path out of order (not logical) | Confusing layout flow | Re-arrange layout to guide eye more naturally. Align content in intuitive reading pattern; use numbering or arrows in design if needed to direct attention sequence. |
Metrics all strong, but conversion still low | Attention is good, but messaging or offer might be weak (attention ≠ conversion) | Check the content quality – ensure the design not only draws eyes but the message persuades. Possibly run real user tests for qualitative feedback. Use the predictor to confirm attention, then optimize content clarity. |
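The first few rows of Table 2 can be applied mechanically as a triage pass over your AOI metrics. The sketch below uses the rules of thumb from this section; the thresholds, the `important` flag, and the function itself are illustrative, not a feature of the tool:

```python
def triage(label: str, speed: float, reach: float, hold: float,
           important: bool) -> list[str]:
    """Flag common attention problems for one AOI, following Table 2.

    speed/hold are 0-1 scores; reach is a 0-1 fraction of viewers.
    Thresholds mirror the rules of thumb in Section 4 and are assumptions,
    not part of the Attention Predictor itself.
    """
    flags = []
    if important and reach < 0.70:
        flags.append("low reach: increase size/contrast or reposition")
    if important and speed < 0.50:
        flags.append("seen late: raise it in the visual hierarchy")
    if speed >= 0.70 and hold < 0.30:
        flags.append("seen but not dwelt on: check if content should engage longer")
    if not important and reach >= 0.80:
        flags.append("possible distraction: consider toning it down")
    return flags

# The example cereal box's fine print, treated here as important
issues = triage("Nutrition Text", speed=0.30, reach=0.40, hold=0.20, important=True)
```

Running each AOI through such a checklist gives a consistent first pass before the judgment calls discussed above.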
By systematically addressing issues flagged by the attention data, you can iterate towards designs that not only look good aesthetically but are optimized to communicate effectively.
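The mapping in Table 2 can also be sketched as a simple rule checker. This is an illustrative Python sketch, not part of the tool: the metric names mirror the manual’s Reach/Speed/Hold, but the 0–1 scales and the thresholds (e.g., Reach ≥ 0.7) are assumptions you should tune to your own goals.

```python
# Minimal sketch of Table 2 as a rule checker. The thresholds and the
# 0-1 metric scales are illustrative assumptions, not RealEye guidance.

def flag_issues(aoi_metrics, important_aois):
    """Return (aoi, recommendation) pairs for AOIs that need attention."""
    issues = []
    for name, m in aoi_metrics.items():
        important = name in important_aois
        if important and m["reach"] < 0.7:
            issues.append((name, "low reach: increase size/contrast or reposition"))
        elif important and m["speed"] < 0.5:
            issues.append((name, "seen late: raise it in the visual hierarchy"))
        elif important and m["hold"] < 0.3:
            issues.append((name, "seen but not read: make content easier to digest"))
        elif not important and m["reach"] > 0.8:
            issues.append((name, "distraction: shrink, mute colors, or remove detail"))
    return issues

metrics = {
    "logo":       {"reach": 0.95, "speed": 0.9, "hold": 0.4},
    "badge":      {"reach": 0.50, "speed": 0.4, "hold": 0.2},
    "background": {"reach": 0.85, "speed": 0.3, "hold": 0.1},
}
print(flag_issues(metrics, important_aois={"logo", "badge"}))
```

Here the hypothetical badge is flagged for low reach and the background pattern as a distraction, while the logo passes all checks.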
Next, we’ll look at specific use case examples illustrating how these insights apply in different domains like packaging, web/UI design, and advertising.
5. Use Cases & Examples
In this section, we demonstrate how the Attention Predictor can guide design improvements in various scenarios. The primary example is product packaging design (a core use case for this tool), followed by brief notes on other applications (web design/UI, advertising, etc.). Each example shows how interpreting the heatmaps and metrics led to concrete design changes and better outcomes.
5.1 Product Packaging Design – Case Study
Scenario: A beverage company is designing a new energy drink can. They want the can’s design to stand out on a shelf and communicate the brand and key benefits (e.g., “Zero Sugar”, “Extra Caffeine”) effectively. Two design concepts were created for testing:
- Design A: Large brand logo in the upper half, a catchy tagline in the middle, and a small “Zero Sugar” badge at the bottom. The background is a bold pattern.
- Design B: Brand logo slightly smaller and lower, a prominent image of a lightning bolt in the center, and the “Zero Sugar” badge at the top in a starburst shape.
They run both designs through Attention Predictor to gather insights:
Design A results: The heatmap shows the top half (logo area) very hot, the middle tagline area also warm, and the bottom badge relatively cool. The scan path is 1→Logo, 2→Tagline, 3→Background pattern area, with the badge not getting a number (implying low priority). Metrics: Logo Reach 95% (almost everyone sees it) and Speed 0.9 (seen almost instantly) – good. Tagline Reach 80%, Speed 0.6, Hold moderate. Badge Reach only 50%, Speed 0.4 (many miss the “Zero Sugar” message). The background pattern oddly draws some attention (it may be too visually intense). VAI overall is high for the logo and tagline, low for the badge.
Design B results: Heatmap is more spread: the lightning bolt image in center is hottest, the starburst “Zero Sugar” at top almost as hot, the brand logo (lower) is yellowish. Scan path: 1→Starburst badge, 2→Lightning image, 3→Logo, 4→Tagline (tagline was at bottom in this design). Metrics: Starburst badge Reach 90%, Speed 0.85 (the eye catches that starburst quickly, as hoped). Lightning image Reach 95%, Speed 0.8 (very eye-catching visual). Logo Reach 70%, Speed 0.5 (brand is seen by many but later than other elements), Tagline Reach 40% (at bottom, many don’t get to it).
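One way to weigh the two result sets against each other is a priority-weighted Reach score. The sketch below uses the Reach numbers quoted above; the priority weights are hypothetical (brand first, health claim second), and the Attention Predictor itself exposes no such API – you would transcribe or export the metrics yourself.

```python
# Hypothetical comparison of the Design A vs. B Reach numbers quoted above.
# The priority weights are assumptions, not part of the tool.

design_a = {"logo": 0.95, "tagline": 0.80, "badge": 0.50}
design_b = {"logo": 0.70, "tagline": 0.40, "badge": 0.90}
priority = {"logo": 3, "badge": 2, "tagline": 1}  # brand first, health claim second

def weighted_reach(design, priority):
    """Priority-weighted sum of Reach values; higher = goals better met."""
    return sum(priority[aoi] * reach for aoi, reach in design.items())

print(f"A: {weighted_reach(design_a, priority):.2f}")
print(f"B: {weighted_reach(design_b, priority):.2f}")
```

A single score flattens nuance – it ignores Speed and scan order entirely – which is why the team still compares the designs element by element in the interpretation that follows.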
Interpretation & Decisions: The team compares A and B:
- Design A clearly ensures brand-logo dominance (which aligns with research that brand visibility is crucial on packaging 33). Everyone sees the brand first in A, whereas in B the brand is a bit lost (noticed only after other elements, and by fewer people). Since brand recognition is the top priority, A has an advantage there.
- Design B did a great job with the “Zero Sugar” message – the starburst at top got noticed by 90% quickly, vs only 50% noticing the badge at bottom in A. For a health-conscious selling point, B communicates it more effectively. Perhaps A’s badge was too subtle.
- The tagline text (optional marketing slogan) wasn’t seen by many in B (only 40% reach when at bottom). In A it was better (80% reach, being in the middle). So A is better if that tagline is important.
- B’s central image (lightning bolt) attracted attention, which could be good for grabbing eyes from afar, but it might have stolen some attention from the brand name itself.
- Also, B’s multiple “attention grabbers” (starburst and big image) meant the brand logo was third in line. The team feels this undermines brand identity.
- The background pattern in A drew some attention (perhaps its contrast was too high); it could be toned down to keep focus on the text.
Decision: They decide to create a hybrid C design, combining the strengths:
- Keep Design A’s layout (logo prominent at top for immediate brand visibility).
- Incorporate Design B’s starburst badge at the top as well, near the logo, so that “Zero Sugar” is seen by almost everyone. This way, when eyes go to the top area, they take in both the brand and the “Zero Sugar” message together.
- Simplify the background pattern so it doesn’t compete for attention (maybe a solid color or gradient instead of busy pattern).
- Place the tagline in the middle in a slightly smaller font (since it’s less critical than brand and the health message). If it’s seen by ~70-80%, that’s acceptable as long as the main points are covered by nearly 100%.
- Possibly adjust the lightning image to be smaller or integrated behind the logo so it still adds visual flair but doesn’t completely redirect gaze away from the brand.
Re-test (Design C): Results show the brand logo + starburst region at top is now the singular hot spot (eyes go there first and both brand and “Zero Sugar” are within that region – effectively one combined area of interest). Reach for brand and the badge are both ~95%. The lightning image got a bit less intense (which is fine) and tagline retains ~75% reach. The team is satisfied.
This example demonstrates how using the tool’s data led to an informed design combination, rather than guessing. It aligns with the idea of an optimization loop: test variations and mix the winning elements. The outcome is a package design where the brand is highly visible and the key selling point is noticed by most consumers quickly, increasing the likelihood the product stands out on a shelf (which is crucial in the competitive consumer goods space).
Notably, these findings echo known packaging research: packages that attract quick, involuntary attention also tend to be remembered better by shoppers 34, and making the brand salient is important for engagement and loyalty 35. By ensuring both brand and value proposition are salient, the company maximizes the chance that a shopper not only notices their product but also recalls its key benefit later.
5.2 Web/UI Design – Brief Example
Use case: Designing a landing page for a mobile app. The page has a headline, some bullet-point benefits, a screenshot graphic, and a “Download App” button. After running the Attention Predictor:
- The heatmap shows the large phone screenshot image dominating attention (bright red), while the “Download” button is only green.
- Scan path is: 1→Headline, 2→Image, 3→(somewhere in text), and the button was 4th.
- Metrics: Button Reach only 65%, Speed 0.4 (not great – many might miss it at first glance, seeing it late if at all).
Design change: The designers realize the button was in a color that blended with the background (stylish but low contrast, hence low attention). They recolor it to a contrasting color and slightly enlarge it. They also move it a bit higher on the page (so it appears without scrolling on smaller screens). Re-testing shows Button Reach 95%, Speed 0.8 – a huge improvement. This means on the revised design, nearly everyone notices the call-to-action quickly, likely increasing conversion. This scenario highlights the importance of CTA visibility, and indeed guidelines suggest CTA buttons should be one of the most salient elements on a landing page 36.
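A before/after comparison like this can be encoded as a simple visibility gate. The thresholds below (Reach ≥ 90%, Speed ≥ 0.7) are illustrative assumptions, not RealEye guidance – set targets that match your own conversion goals.

```python
# Sketch of a CTA visibility gate with assumed thresholds; tune the
# cutoffs per project.

def cta_visible_enough(reach, speed, min_reach=0.90, min_speed=0.7):
    """True if the call-to-action meets both attention targets."""
    return reach >= min_reach and speed >= min_speed

print(cta_visible_enough(reach=0.65, speed=0.4))  # original button -> False
print(cta_visible_enough(reach=0.95, speed=0.8))  # recolored/enlarged -> True
```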
They also noticed the image was perhaps too dominant. They reduced its size by 15%, which made the headline and bullet text relatively more noticeable (the balance of attention improved in the heatmap). In web UI, an image that’s too eye-catching can cause a “banner blindness” effect for the text around it – by balancing it, users can pay attention to both the image and the accompanying info. Studies on web banners indicate that when users are very goal-oriented, they may ignore big promotional graphics 37, so making sure essential information doesn’t look like irrelevant content is crucial. Here the big phone image might have been perceived as decorative; slightly reducing it helped users see the text benefits sooner.
5.3 Advertising & Marketing – Brief Example
Use case: A digital banner ad to be used on a news website. The ad has a product image, a headline, some small print, and a company logo. Attention analysis reveals:
- The headline isn’t grabbing much attention (small text, only 50% Reach).
- The product image and background are very attention-grabbing (they are colorful).
- The logo is in the corner with 30% Reach (most people ignore it – typical for banner ads when the logo is placed in a corner).
Design change: To make the headline more visible, they enlarged its font and placed it more centrally, overlapping slightly with the image. Headline Reach jumped to ~75% in the prediction. They also moved the logo next to the headline at the top (instead of the bottom corner) and made it a bit larger: logo Reach improved to 60% (still not everyone, but better). For ads, it’s known that people often ignore logos in banners unless they are well integrated, but you still want a decent fraction of viewers to see the logo for branding effect 38. Research suggests that even if banners are not consciously clicked, viewers who glance at them show better brand recall later 39. Therefore, improving those glance metrics is valuable.
They also trimmed the small fine print (which was rarely read – low Hold and Reach) to just a short tagline, accepting that an ad should be simple. The streamlined ad now has a clearer visual hierarchy: headline (draw attention, deliver the message) → product image (show the product) → logo (brand). After the changes, the predictor’s scan path aligned with that order.
5.4 Other Use Cases
Beyond these, Attention Predictor can be applied to any visual layout:
- In-store Shelf Displays: If you simulate a supermarket shelf with your product among others, the tool can show whether your package stands out. Compare the Reach of your product vs. its neighbors. Designers might adjust packaging colors to ensure their product breaks through the clutter (e.g., if all competitors are dark, a bright color may gain an advantage).
- Email Newsletter Layouts: See which part of an email grabs attention first. Ensure that important announcements or call-to-action links are positioned where the eye goes early.
- Presentation Slides or Infographics: Identify if the title, graph, or key takeaway text is drawing attention. If a decorative element (like a logo watermark or background graphic) is too strong, you’d know to soften it.
- UX Design (in-app screens): While the tool’s focus has been mostly on static visuals, its principles apply to interface screens. For instance, in a complex dashboard, what is grabbing user attention? If unimportant widget visuals are distracting from a critical alert banner, you might redesign their emphasis.
Each use case follows the same interpretative approach:
- Identify goals: What do you want users to see, and in what order?
- Check the data: Do the attention heatmap and metrics show those goals being met?
- If not, apply design adjustments as discussed (size, color, layout, etc., guided by both the tool’s suggestions and design best practices).
- Optionally, test variants: Use the predictor to compare versions (A/B) to confirm improvements quantitatively.
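The variant-comparison step can be sketched as picking the version whose goal AOI reaches the most viewers. The `results` dict below is hypothetical, standing in for metrics you transcribe from the predictor for each variant; the tool itself does not expose this API.

```python
# Illustrative A/B comparison; `results` stands in for metrics transcribed
# from the Attention Predictor for each design variant.

def pick_best(results, goal_aoi):
    """Return the variant name whose goal AOI has the highest Reach."""
    return max(results, key=lambda v: results[v][goal_aoi]["reach"])

results = {
    "A": {"cta": {"reach": 0.65, "speed": 0.4}},
    "B": {"cta": {"reach": 0.95, "speed": 0.8}},
}
print(pick_best(results, "cta"))  # prints "B"
```

In practice you would also eyeball the heatmaps and scan paths of the top candidates rather than relying on a single metric.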
The steps above outline a recommended workflow for using Attention Predictor in any design optimization process, ensuring that you iteratively refine your design backed by data at each step 40.
6. Conclusion and Further Reading
By leveraging the Attention Predictor Web App, designers and marketers can make informed, quick decisions to optimize visuals for maximum impact. This manual has shown how to interpret the tool’s features – from heatmaps to detailed metrics – and translate them into concrete design improvements. The key is to align the attention patterns with your communication goals: make sure the right things are seen by the right number of people at the right time.
Remember these parting tips:
- Integrate with Design Process: Use the predictor on early drafts to avoid costly mistakes later. It’s easier to adjust a layout in Photoshop than to change it after thousands of packages are printed or a webpage is coded.
- Balance Data with Creativity: The tool tells you where attention goes, but it doesn’t tell you what your content should be. Combine its insights with creative judgment. A design must not only capture attention but also deliver a clear message and resonate emotionally with the audience.
- Stay User-Centric: High attention on an element is generally good, but always ask – is that element helping the user? For example, drawing attention to a navigation menu is useful only if users need to use the menu. If not, maybe that attention should be directed to content instead. Use attention data as one facet of user experience optimization.
- Leverage Research: Many of the principles we applied (visual hierarchy, color contrast, gaze guidance) are backed by research. If you’re interested, here are a few references that underpin the suggestions:
Impact of Visual Design on Attention: Research in the dairy packaging context demonstrated how design elements like color and branding affect what people look at and their purchase decisions 41 42.
Involuntary Attention and Memory: A study showed that when packaging grabs attention automatically, people remember it better later on, even without conscious effort 43.
Web UX and Banner Blindness: Studies on web users find that obvious “banner-like” elements are often ignored due to selective attention 44. Integrating important info into content (as opposed to looking like an ad) can improve visibility 45.
Eye-Tracking vs. AI Prediction: The Attention Predictor’s AI is built on a huge eye-tracking data set, achieving about 91% accuracy compared to real human eye-tracking studies 46. For deeper technical info on how it works and validation, see RealEye’s model details.
By following the guidance in this manual, you should be able to not only use the Attention Predictor tool effectively but also interpret its data to make smarter design choices. Whether you’re designing a product package that needs to pop on a shelf, a landing page that should funnel users to sign up, or an ad that needs to grab a split-second of attention, understanding visual attention patterns is a powerful advantage.
Happy designing, and may your key content always be seen!
Footnotes
1. RealEye Attention Predictor (PREVIEW)
2. RealEye Attention Predictor (PREVIEW)
3. Understanding Your Product Package Design Report - RealEye Attention ...
4. RealEye Attention Predictor (PREVIEW)
5. RealEye Attention Predictor (PREVIEW)
6. RealEye Attention Predictor (PREVIEW)
7. Understanding Your Product Package Design Report - RealEye Attention ...
8. Understanding Your Product Package Design Report - RealEye Attention ...
9. Understanding Your Product Package Design Report - RealEye Attention ...
10. Understanding Your Product Package Design Report - RealEye Attention ...
11. Understanding Your Product Package Design Report - RealEye Attention ...
12. Understanding Your Product Package Design Report - RealEye Attention ...
13. Understanding Your Product Package Design Report - RealEye Attention ...
14. Understanding Your Product Package Design Report - RealEye Attention ...
15. Understanding Your Product Package Design Report - RealEye Attention ...
16. Understanding Your Product Package Design Report - RealEye Attention ...
17. Understanding Your Product Package Design Report - RealEye Attention ...
18. Understanding Your Product Package Design Report - RealEye Attention ...
19. Don’t bury traditional banner ads just yet. Here’s what eye-tracking ...
20. Don’t bury traditional banner ads just yet. Here’s what eye-tracking ...
21. Understanding Your Product Package Design Report - RealEye Attention ...
22. Don’t bury traditional banner ads just yet. Here’s what eye-tracking ...
23. Understanding Your Product Package Design Report - RealEye Attention ...
24. Don’t bury traditional banner ads just yet. Here’s what eye-tracking ...
25. Don’t bury traditional banner ads just yet. Here’s what eye-tracking ...
26. Don’t bury traditional banner ads just yet. Here’s what eye-tracking ...
27. Understanding Your Product Package Design Report - RealEye Attention ...
28. (Gaze cueing study reference)
29. (Gaze cueing advertisement reference)
30. (Banner ad dwell time vs recall reference)
31. (Visual hierarchy / reading pattern reference)
32. (Comparative analysis feature reference)
33. Understanding Your Product Package Design Report - RealEye Attention ...
34. (Involuntary attention packaging study)
35. (Brand salience and engagement study)
36. Don’t bury traditional banner ads just yet. Here’s what eye-tracking ...
37. (Banner blindness / goal-directed browsing study)
38. (Banner ad logo visibility reference)
39. (Brand recall through incidental exposure)
40. (Workflow optimization reference)
41. (Dairy/packaging design attention study)
42. (Color & branding purchase decision study)
43. (Involuntary attention & memory study)
44. (Banner blindness classic study)
45. (Integrating info into content improves visibility study)
46. RealEye Attention Predictor (PREVIEW)