Abstract
When it comes to website engagement, the metric time on task is often perceived as negatively correlated with success factors like likelihood of purchase and satisfaction because it is seen as a proxy for the amount of effort a user exerts to complete a given task. However, through the analysis of 128 unmoderated remote usability tests, I found that users often spend longer completing a task on a website when they find its content and format personally meaningful, engaging, and surprising. In this scenario, rather than representing effort, time on task better captures engagement and discovery. The inverse scenario, when users spend a short amount of time on a website, likewise reveals that short times sometimes stem from frustration or a perceived lack of brand mission and purpose rather than from ease of use. Based on these findings, I argue for a more expansive view of time on task that positions it as a catalyst for further analysis based on observed deviations from expected duration. The practical implication is that websites should be designed to foster both clarity and engagement, optimizing time spent on task rather than simply aiming to reduce it, because of the myriad ways in which time spent on a website can be emotionally experienced by users.
Keywords
time on task, ToT, user experience, user interface, UI, website
Introduction
In the realm of UX, time on task is a widely used metric for gauging the efficiency of a user interface (Coursaris & Kim, 2011; Hornbæk, 2006; Molich et al., 2010; Rummel, 2017). A lower time on task is typically associated with more efficient user interfaces because it is assumed that the user spent less time completing the task they intended. As such, achieving a low level of this metric is a key user interface design principle held by many industry professionals and practitioners (Teodorescu, 2016).
However, not all digital properties are intended to skew toward low times. Websites that primarily derive revenue from advertisements gauge their pages’ success with metrics that essentially measure time on task, but which are conceptualized across more wide-ranging types of activities that often span multiple pages and types of content. For example, time on page measures the total amount of time a user spent on a specific page, and session duration measures the total amount of time a user spent on the website within a single visit or session (Baker, 2017; Juviler, 2022). Because online advertisements generate revenue through user views or engagement, longer time on page or session is considered positive because it produces higher amounts of revenue. The advertising technology field refers to longer times on pages or sessions as dwell time (Forrester, 2019). As such, websites that generate revenue in this manner tend to be designed for higher session durations, leading individuals to meaningfully interact with content rather than race through content to some other efficiency-oriented end. Good design from this perspective is that which optimizes time spent on task to enable users to complete whichever tasks initially brought them to the website; ideally, good design also captivates and inspires users to pursue further, albeit unexpected, actions (Sherman & Deighton, 2001).
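To make these definitions concrete, here is a minimal sketch of how time on page and session duration could be computed from a log of timestamped page views; the event format, page names, and timestamps are hypothetical, not drawn from any particular analytics platform.

```python
from datetime import datetime

# Hypothetical page-view log for a single session: (page, timestamp) pairs.
# The final entry marks the end of the session.
page_views = [
    ("home",    datetime(2023, 1, 5, 9, 0, 0)),
    ("article", datetime(2023, 1, 5, 9, 0, 45)),
    ("about",   datetime(2023, 1, 5, 9, 3, 10)),
    ("exit",    datetime(2023, 1, 5, 9, 4, 0)),
]

# Time on page: the gap between one page view and the next.
time_on_page = {
    page: (page_views[i + 1][1] - ts).total_seconds()
    for i, (page, ts) in enumerate(page_views[:-1])
}

# Session duration: first event to last event within the visit.
session_duration = (page_views[-1][1] - page_views[0][1]).total_seconds()

print(time_on_page)      # {'home': 45.0, 'article': 145.0, 'about': 50.0}
print(session_duration)  # 240.0
```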
These two competing perspectives on time spent on task lead to the following question: To what extent can time on task be conceptualized as having positive or negative effects on user experience? To answer this question, I analyzed 128 unmoderated, video- and audio-recorded user tests in which participants were asked to visit a specific website using a remote desktop. Participants were new to the websites. They stated their likelihood of purchasing from, or returning to, the website. They searched for a competitor based on what they saw. Then, participants stated which of the two websites was their favorite. Participants were given a variety of question types allowing verbal responses, written responses, rating scales, and card sorting. Last, they were asked to sketch out what they liked and disliked most about the initial websites that they visited. For the purpose of narrative analysis, the combination of these different question types was designed to tap into participants’ underlying sentiments by enabling them to verbally and even tactilely give voice to their impressions (Hollway & Jefferson, 1997; Williams, 2019).
The metrics for time on task were supplemented with video and audio self-interview data from which positive and negative sentiment could be gleaned from facial expressions, body language, verbal utterances, and intonation. Self-interviews are unmoderated tests in which participants are asked questions similar to moderated interviews (Mace, 2022). Because no moderator was present, the self-interviews allowed for a high degree of candor among participants; they often described in great detail why a website did or did not meet or exceed their expectations. Analytically, this method enables the testing of time on task on three levels: the raw, time-based task level (derived from time measures captured by the software); the reflective, individual-level data (derived from verbal and written responses); and the observational data (derived from watching the video recordings and listening to the audio feedback that participants gave as they completed the study).
The study revealed that time on task is not a one-dimensional variable but is better conceived of as having negative and positive dimensions based on users’ context of engagement. This emerged first thematically: participants who spent more time evaluating the first website tended to choose it over the competitor that they searched for. The picture was complicated by the finding that those who spent more time searching for a competitor were less likely to prefer the initial website. Both the quantitative sentiment data and the video and audio recordings explained this complication: Time spent on the first website tended to represent engagement and positive sentiment, whereas time spent searching for a competitor tended to represent confusion and negative sentiment toward the first website. When the first website was difficult for participants to understand, or they perceived it as too dense or cluttered, they tended to spend more time searching for alternatives. When the initial website was clear, their search time fell.
Taken together, these findings show that time on task has positive and negative dimensions, because the same time-based metric led to very different preferences depending on sentiment and context of use.
A discussion and analysis of the collected data follows, along with an argument for an approach to website design that takes users’ broader goals and motivations into account.
Time on Task and Session Duration: Different Perspectives on Digital Content Consumption
The extent to which the time a user spends on a piece of digital content is conceptualized as positive or negative depends on the frame of reference being used. When practitioners use the term time on task, the user experience is generally considered secondary to the overall goal being achieved, be that purchasing shoes, signing up for a newsletter, or finding a restaurant. As such, digital content is broadly construed as a series of tasks that users engage with, and optimizing these tasks means reducing task time, or time on task.
However, when viewed from the perspective of internet connectedness, the time a user spends on a website is part of a larger process of engagement and discovery, which is stratified by factors like age, ethnicity, and peer networks (Lee, 2009). Sometimes users have a clear goal in mind (like opening a bank account), and other times they don’t (like browsing through an e-commerce store that they discovered via an ad). Users also frequent websites to follow current events, chat with friends and colleagues, and learn (Jung, 2008). Internet use is frequently found to be associated with many positive outcomes, ranging from financial to mental and physical health (Cheshmehzangi et al., 2022). From this perspective, a prolonged amount of time on task for a given user could indicate experiences such as satisfaction, engagement, and a desire for discovery, rather than a lack of efficiency.
Indeed, many websites are intentionally designed to serve these broader educational and leisure purposes; the numerous news and social media outlets across the web are obvious examples. The popularity of this type of website has fueled the rise of a large digital advertising industry that capitalizes on dwelling behavior while making it financially possible for website owners and publishers to continue to create content. For example, leading weather, news media, and social media websites all feature banner ads. Banner ads are placed where views are most dense and concentrated to maximize the amount of time they are viewed alongside popular content (Lohtia et al., 2003; Sherman & Deighton, 2001).
Because these websites cater to individuals’ desire for exploration and content discovery, they optimize for longer time on page and session duration. A good user experience on these sites is one that keeps users engaged and consuming content, rather than helping them complete a task in the most efficient way possible. This fits the framework of internet connectedness in that time spent on a website is not something to be curtailed, but rather something to be optimized. Increasing time on website, or dwell time, benefits both website owners (through advertising revenue) and individual internet users (through discovery of and engagement with digital content).
Methods
To test the extent to which time spent on task accurately serves as a proxy for efficiency in websites, I analyzed 128 unmoderated, video- and audio-recorded user tests in which participants were asked to visit a specific site using a remote desktop. Participants were new to the websites. They stated their likelihood of purchasing from, or returning to, the website. They searched for a competitor based on what they saw. Then, participants stated which website was their favorite of the two. The metrics for time on task were supplemented with video and audio self-interview data from which positive and negative sentiment could be gleaned from facial expressions, body language, verbal utterances, and intonation. The remote usability software UserTesting® automatically detected positive and negative sentiment based on verbal utterances, so the qualitative component of the analysis was triangulated with quantitative sentiment data.
Participants were sampled from across North America and represented a wide range of socio-economic and ethnic backgrounds. Participants were split roughly evenly by gender (n = 68 female, n = 60 male). The average age was 30, and the average household income was roughly $60,000 USD. Across 34 independent tests with a total of 128 participants (three or four participants per test), participants were directed to websites in the following eight categories: retail, financial services, business-to-business (B2B), telecommunications, entertainment, food, travel, and gaming. The wide variety of websites allowed discovery and examination of different contexts of engagement and website design, so the analysis covered both websites designed for longer intended time on task (such as B2B solutions selling complex offerings) and those designed for lower intended time on task (such as relatively low-price retail websites). Testing covered both aforementioned perspectives toward internet use: namely, more goal-directed and transactional use versus more discovery-oriented and interconnected use.
All participants agreed via the software’s terms and conditions to be video- and audio-recorded, and they understood that the data obtained from their interviews may be used for research purposes. I have opted not to include any of their direct video or audio recordings and have used pseudonyms throughout to maintain their anonymity.
The intent in these studies was to determine the amount of time users spent on each of the three stages of the test: 1) exploring the initial website; 2) searching for a competitor; and 3) exploring the second, competitor website, so that results could be meaningfully connected to positive or negative sentiments about the initial website.
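As a simple illustration of this phase-level measurement, the sketch below derives the three durations from stage-boundary timestamps; the boundary names and values are assumptions for demonstration, not output from the testing software.

```python
# Hypothetical stage-boundary timestamps, in seconds from session start,
# for a single participant; the keys mirror the three test stages.
boundaries = {
    "start": 0,                # test begins on the first website
    "begin_search": 195,       # participant starts searching for a competitor
    "begin_second_site": 250,  # participant lands on the competitor site
    "end": 370,                # test ends
}

phases = [
    ("first website", "start", "begin_search"),
    ("competitor search", "begin_search", "begin_second_site"),
    ("second website", "begin_second_site", "end"),
]

for name, start_key, end_key in phases:
    seconds = boundaries[end_key] - boundaries[start_key]
    print(f"{name}: {seconds // 60} min {seconds % 60} s")
```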
Results
On average, users in this study spent 3 min and 4 s evaluating the first websites, 1 min and 1 s searching for a competitor, and 1 min and 54 s evaluating the second, competitor website. Time on task (whether on the first website, the competitor search, or the second, competitor site) did not have a significant relationship with choice of website (Table 1). The reason is that time spent at each phase of the test reflected two opposing sentiments, engagement and frustration, each leading to different amounts of time spent. Users expressing either sentiment tended to spend more or less time than average on each phase, because either sentiment could lead to further discovery or to closure.
Table 1. Correlations Between Time on Task and Website Preference
| Relationship | Pearson’s r | p value (N = 128) |
|---|---|---|
| Time on first website and first website preferred | 0.16 | 0.07 |
| Time on first website and odds of returning | 0.08 | 0.37 |
| Time on search and first website preferred | -0.004 | 0.96 |
| Time on second website and first website preferred | 0.05 | 0.58 |
Although there is a significant relationship in the first correlation (time on the first website and first website preferred, at the 0.10 significance level), this tended to be the case only for users who experienced a great deal of delight or surprise on the website; it was not an artifact of time spent itself. This is reinforced by the finding that participants’ stated odds of returning were not significantly correlated with time spent on the first website. The qualitative analysis supports this finding as well: Participants may have preferred the first website to the second without actually feeling compelled to return to it based on the time spent.
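For readers who want to run this kind of check on their own data, a correlation of the sort reported in Table 1 can be computed as in the following sketch. The arrays are synthetic stand-ins for per-participant times and preferences, not the study’s actual measurements.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Synthetic stand-ins: seconds spent on the first website per participant
# (centered near the study's 3 min 4 s average) and a 0/1 preference flag.
time_on_first_site = rng.normal(loc=184, scale=60, size=128)
prefers_first_site = rng.integers(0, 2, size=128)

r, p = pearsonr(time_on_first_site, prefers_first_site)
print(f"Pearson's r = {r:.2f}, p = {p:.2f}")
```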
As observed in the video and audio feedback, when individuals could easily determine the overall purpose of a website, they tended either to spend a greater amount of time than average engaging with that website or to spend a reduced amount of time because they saw it as highly intuitive and worth returning to in the future. By contrast, when a website’s overall purpose was unclear, users either spent a great deal of time trying to understand the website or rapidly bounced; that is, they left the website hastily while remarking that they were not likely to return (Kamerer, 2020). These highly emotionally charged states, the feelings of engagement and frustration that participants verbalized during their recorded sessions, essentially canceled one another out in the aggregate: There was no significant correlation between time on task and choice of website in the total sample, but the raw video data revealed interaction effects that could be determined qualitatively.
Similarly, participants tended to spend more time searching when the first site was unclear or when they could not easily think of a competitor based on what they saw there. This matches the finding that time spent on the first site increased the odds of choosing the first site but also increased the time spent searching. Qualitatively, there were two reasons users spent more time on the first site: either they were positively engaged with the site’s offering, or they were confused by it and sought additional information. This is further reinforced by an association between time spent on the first site and a negative net promoter score, a metric used to capture users’ likelihood of recommending the brand in question to friends, colleagues, and family (Reichheld, 2003; Sharp, 2008). Users who spent a longer time searching for competitors were less likely to endorse the first website, owing to an unclear understanding of its core offerings.
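Net promoter score is derived from 0–10 likelihood-to-recommend ratings: the percentage of promoters (ratings of 9 or 10) minus the percentage of detractors (ratings of 0 through 6) (Reichheld, 2003). A minimal sketch with illustrative ratings:

```python
# Net promoter score from 0-10 ratings: % promoters minus % detractors.
# The ratings below are illustrative, not data from this study.
ratings = [10, 9, 8, 7, 6, 3, 9, 10, 2, 5]

promoters = sum(1 for r in ratings if r >= 9)   # ratings of 9 or 10
detractors = sum(1 for r in ratings if r <= 6)  # ratings of 0 through 6

nps = 100 * (promoters - detractors) / len(ratings)
print(nps)  # 4 promoters and 4 detractors out of 10 -> 0.0
```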
Rather than being squarely indicative of either efficiency or value and purpose, time on task captured two different axes that websites cued for users. Results were mixed: several participants associated time on task with engagement, inspiration, and discovery, while for others it signaled frustration and even futility. Taken together, time on task represents both efficiency and engagement, supporting a nuanced view of website time use based on users’ expectations, goals, and objectives.
Engaging and Inspiring: Delivering on a Mission Statement and Exceeding Expectations
When users found the content of the first website to be clear, they tended to spend an above average amount of time browsing and engaging with its content, or they spent a less than average amount of time because they found its content resonant and inspiring enough to merit a return visit. Moreover, clear content tended to reduce both the amount of time spent searching for a competitor and participants’ overall impressions of the competitor’s site.
For example, Mindy, a 38-year-old living in the American Midwest, wrote the following to summarize why she preferred the high-end, niche retail website that she was taken to over a competitor that she searched for:
The first website does everything better. It sells the brand better, it gives me the brand’s mission statement, and I’m able to browse everything the brand offers. I cannot get that on [the second/competitor] website. I just got a sample of the brand.
Although the first website gave Mindy a clear conception of what the brand was “all about,” the second was “spread too thin.” Because it was a department store, the latter website focused on demonstrating the wide variety of products it had available rather than making its primary focus clear to users. Unfortunately, this emphasis on variety came at the cost of minimal focus on “brand mission,” as Mindy explained further when summarizing why she preferred the first website:
Like, I get that they have a ton of different brands, and many of them are really great. But it’s just… it’s just spread too thin, I guess. It’s selling a bunch of different items without a story behind it. The first website is the opposite: It doesn’t seem to have that many items, but that’s okay since what it does have is on point.
This notion of being “on point” was similarly important for Fred, a 39-year-old Canadian. Fred was screened for a test evaluating a financial services website due to his self-declared interest in economics and finance. When evaluating the first website that he was directed to, a challenger fintech company that provided medical professionals financial planning and retirement tools, he felt that there seemed to be the “potential” of a brand promise, but “it just wasn’t there”:
I mean, I can tell that this is something designed to get medical professionals to save and plan ahead for their financial futures, but something’s missing. If I look at other major Canadian banks, even if those aren’t as focused on medical professionals as this site is, they seem more… well… dedicated to financial wellness. I think this company’s a bit confused between convincing doctors that they should save money and plan better financially and actually letting people know what services they have to support this. Right now, it seems more like a blog or something rather than a business.
Even though this financial services website had a fairly clear niche and focus, for Fred it was missing a clear throughline: a way of translating its stated mission of improving medical professionals’ financial well-being into actionable steps.
Mindy’s preference for a more tailored website helps explain the need and desire for this sort of end-to-end experience. In both cases, these users found that websites with a clear vision and a user interface that demonstrated how the particular website delivered on its mission were preferable to those that were missing either of these components.
Brand mission and deliverability proved just as important in evaluations of the second website. As noted by Brian, a 45-year-old tech professional who was asked to evaluate a business-to-business ecommerce website:
This second website effectively leveraged the power of its brand. It may not have offered a treasure trove of info like the first page, although it flexed its muscle to communicate superiority in this field… its layout and content are a lot more engaging, making it overall look a lot more modern and professional. I think I’ll actually come back to check it out later!
Despite the first website’s content being seen as a “treasure trove,” the competitor’s website, which focused on legitimizing its status in its particular e-commerce niche, did a much better job of compelling Brian to engage and return. As with the examples of Mindy and Fred, less was more.
However, sometimes a richer variety of content was desired, as made clear by Bridget, a 29-year-old Californian, and her experience with an American luxury retail website:
The first website has a lot more varieties in terms of style and quality. And the first website sells bags and some apparel which the second website doesn’t seem to sell at all. There are just so many amazing things here! I didn’t realize how much there was, and I definitely am going to come back.
Bridget spent nearly 5 min exploring the first website that she referenced and only about 2 min on the competitor site because it was “too focused.” For users like Bridget, efficiency is not as important as having a breadth of offerings. Even more to the point, what really fascinated Bridget about the first website was that she did not initially expect many of the items she found there to be available.
While seemingly at odds with Brian’s preference for a more pointed website over one offering a “treasure trove” of information, Bridget’s choice was consistent when her experience on both websites is viewed holistically. While Brian found that the more minimal of two websites had the clearer brand mission, or “muscle” as he put it, Bridget found that both websites she visited had similar brand appeal. For her, the availability of “amazing things” that she did not know were offered on her chosen site added to its overall appeal. Rather than trying to quickly evaluate the site, she felt compelled to search for items that she did not expect to see, and she stated that she would likely come back to explore even further.
Cluttered and Confusing: No End in Sight
In contrast to situations of engagement and inspiration, when users could not quickly determine the overall purpose of the initial website that they were directed to, their sentiments tended to be those of frustration and confusion. When a significant amount of time was spent on these sites, it tended to be marked by frustrated efforts to determine the website’s purpose. When time on these sites was short, it was because effort seemed “futile” without a clear goal to accomplish; users felt they were meandering around the website and struggled to determine its purpose, and as such they were likely to bounce relatively quickly.
An example of a user who spent an above average amount of time on a website that she felt was confusing is Claudia, a 53-year-old Torontonian who evaluated a Canadian retail site aimed at average income earners. When first taken to the website, Claudia looked visibly confused and exclaimed, “What is this?” She navigated toward the bottom of the page, stating as she did so, “This just doesn’t end! Look, I can keep going and going and going… What’s the point?” Determined to figure out just what “the point” of the website was, she spent close to 7 min navigating different product pages until she matter-of-factly stated, “Well, it’s designed for families. They should’ve made that more obvious up front.”
This sentiment of not knowing what the “point” of a website was, and the accompanying sense of “scroll fatigue” (Streit, 2021), was echoed by several other users, particularly when visiting retail websites on their mobile devices. Often determined to pin down the overall aims of the website, these users would stay on task for above average durations, but they also expressed lower than average rates of return and overall satisfaction. What kept these websites from producing immediate churns or bounces was that they tended to belong to reputable brands; users felt an element of shock at the page’s lack of organization and clear “agenda,” as one user put it. The brand appeal inspired users to “struggle through” the websites in search of a logical end point or pattern, often leading them to look for items they “knew” the brand must have.
In contrast to the scroll fatigue induced by a dull discovery process, Eddy, a 27-year-old product manager who was looking for a new cell phone provider at the time of the study, encountered a website that was similarly “too busy” and quickly bounced from it. “I don’t even know where to look!” he exclaimed while directing his cursor toward the bottom of the page. “It just looks like ad after ad.”
Without the strong reputation, front-end design, and infrastructure that typically coincide with well-known and established websites, including what users tended to describe as “refined” and “expensive” palettes, challenger brands (De Chernatony & Cottam, 2009) tended to be perceived as more overwhelming and less worthy of the effort required to determine their purpose. A lack of clear organization coupled with an unknown brand often led users to label the experience “chaotic” or “a mess,” whereas more established brands’ websites tended to be called “broad” or “boring.”
However, both types of sites induced scroll fatigue and were subject to bouncing because of participants’ shared sense of futility in understanding the website. These sites also made for a frustrating, protracted experience when users searched for a competitor. For example, when asked to search for a competitor to the auto insurance website that she was first taken to, Cindy, a 46-year-old South African woman, stated, “A competitor? I don’t even know what the first website did! Was it a blog? Let me look again…” In her initial exploration of the first auto insurance website, she was unable to find a clear call to action, such as a sign-up button, or even a clear menu or banner indicating that the website sold auto insurance, so she was unsure whether she had been directed to an “educational blog or something.” Not only did this cause her to exert a great deal of effort navigating different landing pages, but the effort carried through to her search attempt because she had no clear baseline from which to compare the experience.
Conclusion
Through an analysis of remote usability tests in which users were asked to compare their initial impressions of a target website with those of a competitor that they searched for, I found that time on task could not accurately be used as a proxy for efficiency. Time spent on a task represented both positive and negative sentiments, in the form of engagement and frustration. Time on task also colored users’ imagined future states, including their ability to imagine returning to purchase from or discover more about the website. In short, time on task as a metric does not, on its own, capture the intentions or emotional states of users.
The primary implication of these findings is that time spent on task may be favorable or unfavorable when it comes to users’ engagement with websites. More time may be spent seeking clarity from a confusing experience or actively engaging with content, while less time may be spent when one feels abjectly confused by a site or is so inspired by its content that they almost immediately state that they want to return for further exploration or purchase. This does not mean that time on task is not meaningful as a metric, but rather that it should be treated as a signal when its observed duration does not match expectations. For example, for the B2B websites analyzed in this study, a low time on task was unexpected and often the result of a lack of engagement, while a high time on task was often sparked by a perception of vague or unclear value propositions on the website. In this sense, time spent on task should serve as a catalyst for further analysis, rather than as a conclusive ease-of-use metric.
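One simple way to operationalize this catalyst role is to flag sessions whose task times deviate strongly from the observed baseline and route their recordings to qualitative review. The sketch below uses a z-score threshold; the durations, threshold, and function name are illustrative assumptions, not part of the study’s tooling.

```python
import statistics

def flag_unusual_sessions(durations, z_threshold=2.0):
    """Return (index, duration, z-score) for task times that deviate
    strongly from the sample mean; these sessions merit a closer
    qualitative look rather than an efficiency verdict."""
    mean = statistics.mean(durations)
    sd = statistics.stdev(durations)
    return [
        (i, d, (d - mean) / sd)
        for i, d in enumerate(durations)
        if abs(d - mean) / sd >= z_threshold
    ]

# Hypothetical first-website times in seconds; the 610 s session stands out.
times = [150, 170, 185, 200, 160, 175, 610, 190, 180, 165]
for i, d, z in flag_unusual_sessions(times):
    print(f"session {i}: {d} s (z = {z:.1f}) -> review the recording")
```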
Rather than attempt to categorize users’ time on the web in terms of discrete tasks, it may be more fruitful to conceptualize time spent as part of a larger series of interactions that do not necessarily have clearly defined end goals. Although users can and do sometimes have clear intentions for particular tasks, such as signing up for an account, given how much sentiments can vary around clarity, brand mission, and imagined future states, this metric should be analyzed within users’ broader context of use or for potential missteps in their journey on the website.
Rather than being strictly associated with efficiency, time on task can be optimized to encourage engagement and discovery and to ward off frustration, especially for emotionally charged activities. Doing so can lead brands to earn longer dwell times and reduced bounce rates, conditions that represent the poles of high and low times that cancel each other out when time on task is viewed as a one-dimensional rather than a context-dependent measure of efficiency. This more expansive view of time on task can therefore leverage the metric as a tool for health checks, rather than as a sole diagnostic, of website performance.
Tips for Usability Practitioners
- When designing a website, even one intended to be largely transactional like a checkout page, start with the goals of delight and surprise rather than efficiency.
- Do not take time on task measures at face value. They can often be counterintuitive. Instead, use time spent on task as a signal for deeper analysis into why users are trending lower or higher than expected.
- When tasked to improve the efficiency of a website, find ways to optimize time on task while maintaining features of the website that users delight in beyond just the task at hand.
- Start with the broader customer journey in mind when optimizing for efficiency on a website. In addition to the obvious goal of the website or landing page, like purchasing items, consider why else users might visit. Including features that might surprise or delight could increase the amount of time spent in their journey and lead them to further engage with your design.
References
Baker, J. (2017, August 28). Brafton 2017 content marketing benchmark report. Brafton. https://www.brafton.com/blog/strategy/brafton-2017-content-marketing-benchmark-report/
Cheshmehzangi, A., Zou, T., & Su, Z. (2022). The digital divide impacts on mental health during the COVID-19 pandemic. Brain, Behavior, and Immunity, 101, 211–213.
Coursaris, C., & Kim, D. (2011). A meta-analytical review of empirical mobile usability studies. Journal of Usability Studies, 6(3), 117–171.
De Chernatony, L., & Cottam, S. L. (2009). Creating and launching a challenger brand: A case study. The Service Industries Journal, 29, 75–89.
Forrester, D. (2019, February 21). What is dwell time and why it matters for SEO. Search Engine Journal. https://www.searchenginejournal.com/dwell-time-seo/294471/#close
Hollway, W., & Jefferson, T. (1997). Eliciting narrative through the in-depth interview. Qualitative Inquiry, 3, 53–70.
Hornbæk, K. (2006). Current practice in measuring usability: Challenges to usability studies and research. International Journal of Human-Computer Studies, 64, 79–102.
Juviler, J. (2022, May 31). 16 website metrics to track for growth in 2023 and beyond. Hubspot. https://blog.hubspot.com/website/engagement-metrics
Jung, J. (2008). Internet connectedness and its social origins: An ecological approach to postaccess digital divides. Communication Studies, 59(4), 322–339.
Kamerer, D. (2020). Reconsidering bounce rate in web analytics. Journal of Digital & Social Media Marketing, 8, 58–67.
Lee, C. J. (2009). The role of internet engagement in the health-knowledge gap. Journal of Broadcasting & Electronic Media, 53, 365–382.
Lohtia, R., Donthu, N., & Hershberger, E. K. (2003). The impact of content and design elements on banner advertising click-through rates. Journal of Advertising Research, 43, 410–418.
Mace, M. (2022, October 9). Discovery phase made easier: The joy of self-interviews. Center for Human Insight. https://centerforhumaninsight.com/discovery-phase-made-easier-the-joy-of-self-interviews/
Molich, R., Chattratichart, J., Hinkle, V., Jensen, J. J., Kirakowski, J., Sauro, J., Sharon, T., & Traynor, B. (2010). Rent a car in just 0, 60, 240 or 1,217 seconds? – Comparative usability measurement, CUE-8. Journal of Usability Studies, 6(1), 8–24.
Reichheld, F. F. (2003). The one number you need to grow. Harvard Business Review, 81, 46–54.
Rummel, B. (2017). Beyond average: Weibull analysis of task completion times. Journal of Usability Studies, 12(2), 56–72.
Sharp, B. (2008). Net promoter score fails the test. Marketing Research, 20(4), 28–30.
Sherman, L., & Deighton, J. (2001). Banner advertising: Measuring effectiveness and optimizing placement. Journal of Interactive Marketing, 15, 60–64.
Streit, J. (2021). How to use scrolling to enhance the user experience. Blue Frog. https://www.bluefrogdm.com/blog/how-to-use-scrolling-to-enhance-the-user-experience
Teodorescu, D. (2016, April 26). A UX designer’s guide to improving speed of use. Medium. https://medium.com/@davidteodorescu/a-ux-designer-s-guide-to-improving-speed-of-use-8f4b2b7263f3
Williams, L. (2019). Thinking through death and employment: The automatic yet temporary use of schemata in everyday reasoning. European Journal of Cultural Studies, 22, 110–127.