What 96 Designers Taught Us About Harm: The Behaviors Around Considering Harm in Digital Products

Peer-reviewed Article

pp. 125–142

Abstract

Potential harm within digital product design has long been underexplored, despite the growing influence that consumer-facing digital products exert on individuals’ daily lives. This paper presents the methodology and findings from an online survey conducted with 96 US-based UX and product designers working on customer-facing digital products. The survey focused on the attitudes, behaviors, challenges, and needs that designers encounter while considering harm in their daily work. Our findings resulted in several recommendations for future research to develop practice-based design solutions that enable designers to more effectively identify, discuss, and mitigate potential harm stemming from their work.

Keywords

ethical awareness, harm-aware design, harm reduction, user experience design, digital product design, survey

Introduction

The role of harm has been historically missing from design conversations, classrooms, and team discussions. Despite the field’s human-centered approach to problem-solving, there are no established or widely known design patterns or definitions related to harm reduction within digital design (Brignull, 2011; Gray et al., 2018; Nelissen & Funk, 2022; Sohail et al., 2017). In today’s interconnected world, in which digital products and services wield significant influence over individuals and societies, designers have a responsibility to mitigate harm and promote well-being. Without an intentional emphasis on harm assessment, designers risk introducing harm through their designs. This paper uses harm-aware design as a broad term describing the various facets of identifying and reducing harm throughout the design process. We recognize that some harm arising from design is inevitable and that the focus must therefore be on striving to mitigate risk.

Real-world examples have underscored the urgency of addressing these ethical gaps. In healthcare, Magrabi et al. (2011) found 12 instances in which poor user interfaces in digital software led to patient overdoses; a software application requiring users to select medication from a 225-option dropdown, listed in a non-intuitive order, resulted in harm to the patients. In gaming, the design of a popular AI-driven augmented reality game caused users emotional distress. Through the game’s reliance on consumer-generated content, the interface inadvertently led players to the United States Holocaust Memorial Museum, where a fictional character was featured emitting poisonous gas in an auditorium displaying testimonials from gas chamber survivors (Peterson, 2016). In law enforcement, multiple people were wrongfully arrested due to flawed facial recognition software that falsely matched them to criminals (Hill, 2024). Given how rapidly technology has advanced with little to no oversight, it is unsurprising that there are countless examples of digital products leading to harm that users and related stakeholders are forced to absorb (PenzeyMoog, 2021). It has become easy for organizations to get caught up in the excitement of innovation, that is, “chasing the next idea, the next dollar, the next trend, without asking if what we are building should even exist” (Shariat & Saucier, 2017, p. 7). By hyperfocusing on scale and growth, organizations have ignored unintended consequences (Botsman, 2022), leading to a proliferation of digital products plagued with privacy, surveillance, safety, addiction, and equity concerns (Carroll, 2020; Falbe et al., 2020).

Despite a recent emergence of scholarship and discussion on design ethics, a significant gap has remained between ethical theory and ethics in practice. Previous research has focused on ethical tensions and awareness, deceptive design patterns, and ethical design methods, providing a rich foundation for discussing the ethical complexities that designers face (Chivukula et al., 2021; Chivukula et al., 2020; Gray & Chivukula, 2019; Gray et al., 2018). However, this work also suggested that there is considerable confusion about precisely what it means to design ethically (Gossett, 2021). Researchers and practitioners have proposed a variety of ethical design principles, values, and codes (Buwert, 2018; Falbe et al., 2020; Zhao, 2018), but there has been no agreement on how designers can structurally integrate ethics into their daily practices (Overkamp, 2022). User, practitioner, and academic communities have separate ethical vocabularies, so describing what constitutes good design is impossible due to the subjectivity and malleability of those vocabularies (Buwert, 2016). Although the industry has used ethical design as “a catch-all term for the multitude of horrible problems in tech right now” (PenzeyMoog, 2021), focusing on ethical design does not address any specific problems or solutions because it is too vague to be practical.

The disconnect we studied highlighted the need for more practical insights to help designers navigate daily challenges related to harm and ethics; this became the rationale for our research focus. Previous work has relied on design methods, which help generate and apply knowledge to support the improvement of the design process (Chivukula et al., 2021a). In response to the numerous ways that design can lead to harm, the industry and research communities have developed and published a wide variety of ethical design and harm reduction methods (Danish Design Center, 2021; IDEO, 2019; PenzeyMoog, 2021; Rafit, 2021; Sohail et al., 2017; Spotify Design, 2020; Zhao, 2018). Yet, there has been no centralized repository describing how and when they should be used. Seeing a need for the synthesis of these methods, Chivukula et al. (2021) identified and analyzed a collection of 63 ethical design methods and found that, although all were created for practitioner use, nearly half were published in scholarly journals often inaccessible due to paywalls. The rigor of peer review can be critical to ensure high quality, but in this case, it created a barrier between the methods and the very people they were designed for.

Building on Chivukula et al.’s work, Namer (2023) analyzed and filtered 87 ethical design methods into a collection of “six grab-and-go tools that could be quickly used while designing” (para. 11). Reviewing the utility of the tools available, Namer assessed each against a set of criteria to ensure that the selected tools were open source, helped design teams surface unintended consequences, and included low-complexity templates. The six tools ranged in usage from speculating about harmful scenarios (Black Mirror Brainstorm and Provocatype) to predicting harmful behaviors (Inverted Behavioral Model and Spotify® Ethics Assessment) to mapping positive and negative outcomes (Dichotomy Mapping and Consequences). Except for the Spotify Ethics Assessment, the tools were very broad and included generic templates which, although beneficial for diverse use cases, do not help teams determine the relevance or probability of the types of harm that might manifest.

The Spotify Assessment helps categorize and rank a list of potential negative impacts, but it is the only tool of the six to do so. Similarly, the Nielsen Norman™ Risk Management Process is one of the few comprehensive risk-management tools within design, providing a six-step process for identifying and ranking potential harms (Fessenden, 2023). Although both provide more structure and guidance than many of the other tools, they have limitations. The Spotify Assessment does not provide guidance for how to identify potential sources of harm, and the Nielsen Norman process tends to suggest time-intensive research requiring access to analytics, which assumes that design teams have significant time and resources to dedicate toward harm reduction. The authors of this paper do not want to undermine the important work that has been done in this space or discredit the valuable impact of these existing tools and methods. However, we identified a need for additional work to understand which resources, and which characteristics of those resources, will best serve designers so that they might be adopted by the industry.

Description of This Study

This study built on existing research to address the following research question: How do digital product designers consider harm within their daily work? The study consisted of an online survey of 96 US-based designers and explored the behaviors and barriers they encountered in becoming more harm-aware in their daily work, as well as the resources they needed to overcome these challenges. We scoped the study population to designers working on consumer-facing digital products because of the profound impact these products have on individuals’ lives. We used the National Institute of Standards and Technology’s (n.d.) definition of harm: “any adverse effects that would be experienced by an individual that may be socially, physically, or financially damaging.” The study was designed to be exploratory, aimed at gathering insights from a large population of designers to inform future research into harm-aware design. It highlights designers’ needs and sheds light on pain points and areas for further investigation.

Our findings provide focus and scope for subsequent research while validating the importance and necessity of continued efforts to develop practice-based solutions that help design teams mitigate harm in their designs and organizations.

Positionality Statement

To protect the privacy and confidentiality of the participants, no identifying information was collected or stored during the survey, and any shared personal data was kept confidential. This study and paper were designed by two researchers; the first author took the role of the principal researcher, and the second author acted as an advisor. The first author has over a decade of experience working as a digital product designer, design manager, and design educator within the technology industry. The second author is a researcher, professor, and associate dean specializing in ergonomics and human-centered design research. Our aim with this study and paper was to help connect the research and practitioner communities. 

Methods

Research Questions

The study was designed to address three research questions:

  • RQ1: What are designers’ attitudes and behaviors around considering harm that could occur because of their work?
  • RQ2: What are the primary barriers that designers face preventing them from being more harm-aware within their work? 
  • RQ3: What resources do designers believe would help them mitigate those barriers and be more harm-aware within their work?

Methodology

This study received exempt status from the Institutional Review Board (IRB) of North Carolina State University (study number 25865). It consisted of an online survey conducted across the United States over the course of 6 weeks from May 22 to June 30, 2023. The survey was distributed via Qualtrics®, a cloud-based survey platform. We used an online survey to facilitate anonymous responses from diverse participants with minimal risk. Although surveys as a methodology have limitations, the survey provided a consistent approach to quickly collect insights, trends, and patterns. Additionally, it helped facilitate easier data analysis by enabling conclusions to be drawn from a wide range of perspectives.

Population and Recruitment

The target demographic of this study was US-based UX and product designers with 3 or more years of experience currently working in-house on customer-facing digital products. We selected this demographic to gain perspectives from those who had substantial professional experience designing for real users on product teams across differing experience levels, industries, and organizational sizes. Participants were recruited through a variety of channels, including design-related Slack™ communities, industry listservs and newsletters, and the social media websites LinkedIn™ and Reddit™. To gain a more comprehensive understanding of the topics, a broad net was cast across a diverse pool of potential participants. Using purposive, criterion sampling (Palinkas et al., 2015), participants self-selected into the study by meeting the demographic requirements.

Survey Design

Participants received no compensation for completing the survey. We designed the survey to address the three research questions (RQ1–3) and ensured it could be completed in approximately 10 min. The survey was divided into two parts. The first part focused on consent and participant inclusion through five categorical questions using nominal scales. The second part collected demographic information and responses related to the three research questions. Three demographic questions gathered categorical data using nominal scales. Of the 10 questions addressing the research questions (Table 1), six related to RQ1 (attitudes and behaviors), two to RQ2 (barriers), and two to RQ3 (resources). Each research question was explored through a mix of multiple-choice and open-ended questions; the multiple-choice options were informed by literature findings or common knowledge of the design process.

The multiple-choice questions were intentionally structured to minimize participants’ cognitive effort, allowing for faster and clearer responses. This format also enabled direct comparisons across demographic groups for systematic data analysis. The open-ended questions followed the multiple-choice questions to offer richer and more nuanced insights for thematic analysis. While the question order potentially introduced bias by framing responses based on previous multiple-choice answers, it helped participants reflect more deeply on the topics. It reduced confusion and unnecessary complexity and encouraged them to elaborate on their thoughts and experiences. This structured combination of quantitative and qualitative data collection was a practical response to the inherent limitations of surveys.

Questions were carefully designed to align with their purposes, target the desired information, and minimize cognitive complexity. Some questions allowed multiple selections (select all that apply), whereas others required a single response (select one). For example, one question (Table 1, Question 4) asked participants to select a single type of harm most relevant to their work. Although participants recognized the interconnectedness of various harm types, this single-select format highlighted their primary concerns and encouraged focused prioritization. Certain questions also included additional context or examples to ensure clear and objective responses, particularly on nuanced topics. In the question about harm types, definitions and examples were provided for each response option. Although these definitions and examples might have conflicted with participants’ preconceptions, we provided them to foster a shared understanding and common language, which is consistent with learnings from previous design workshops (Girard & Namer, 2022a; Girard & Namer, 2022b).

The survey underwent several internal iterations focusing on question ordering, style, and response scales before we piloted it with five designers who met the target demographic requirements. The decision to use 10 questions was made to maintain a balance between gathering sufficient information and ensuring participants could complete the survey without feeling overwhelmed. Krosnick (1991) suggested that participants become fatigued and distracted if they are presented with seemingly endless questions. As participants become more cognitively overloaded, they “are likely to compromise their standards and expend less energy instead” (p. 215). During the pilot, reported completion times ranged from 8 to 12 min, which indicated that we asked an appropriate number and type of questions. Following this, minor changes were made to the language of the questions and selectable responses to provide additional clarity and reduce cognitive burden for participants.

Table 1. Survey Questions

Survey Question | Response Options | Question Type | Related RQ
1. What is your highest level of training focused on UX or product design? (Select one) | Bachelor’s degree, Master’s degree, PhD or other doctoral degree, Certificate or bootcamp program, Self-taught / No formalized training, I prefer not to answer | Multiple Choice | NA
2. What types of organizations are most closely related to where you work? (Select all that apply) | Technology company, Government agency, Nonprofit organization, Healthcare organization, Educational institution, Financial institution, Other (please specify), I prefer not to answer | Multiple Choice | NA
3. What is the size of your current organization? (Select one) | Less than 50 employees, 51–250 employees, 251–1,000 employees, 1,001–10,000 employees, More than 10,000 employees, I prefer not to answer | Multiple Choice | NA
4. What do you feel is the top type of harm that is most important to consider within your work? (Select one) | Physical harm (e.g., stalking, medical malfunction), Emotional harm (e.g., cyberbullying, triggering content), Financial harm (e.g., identity theft, shopping scams), Privacy harm (e.g., data breach, surveillance), Societal harm (e.g., fake news, propaganda), Other (please specify), I prefer not to answer | Multiple Choice | RQ1
5. How confident do you feel in identifying potential harm that could occur as a result of your work? (Select one) | Extremely confident, Very confident, Moderately confident, Slightly confident, Not at all confident, I prefer not to answer | Multiple Choice | RQ1
6. Please elaborate on your above answer. Can you describe the factors that contribute to your confidence or lack thereof? Please click next if you do not wish to respond. | NA | Open Ended | RQ1
7. How easy do you feel it is for designers to consider the harm that could occur as a result of their work? (Select one) | Extremely easy, Very easy, Moderately easy, Slightly easy, Not at all easy, I prefer not to answer | Multiple Choice | RQ1
8. How often do the topics of ethics and harm come up in team discussions? (Select one) | Daily, Weekly, Monthly, Occasionally, Never, I prefer not to answer | Multiple Choice | RQ1
9. When do you feel that it is most important for teams to identify and discuss potential harm? (Select all that apply) | Research and discovery, Definition and conceptualization, Design and prototyping, Testing and QA, I prefer not to answer | Multiple Choice | RQ1
10. What, in your view, are the biggest barriers to considering the potential harm that could occur in the products you are designing? (Select all that apply) | Time or resource constraints, Conflicting priorities or goals, Lack of knowledge or training, Resistance from stakeholders, Company or team culture, Other (please specify), I prefer not to answer | Multiple Choice | RQ2
11. Please elaborate on your answer above. How have you seen these barriers manifest in yourself, your team, and/or your organization? Please click next if you do not wish to respond. | NA | Open Ended | RQ2
12. Which of the following might help improve your confidence and ability to identify and discuss harm within your work? (Select all that apply) | Formal ethical knowledge and theory, Tools or checklists for identifying harm in your work, Ethical codes or guidelines for your team/organization, Training on safer design practices, Other (please specify), I prefer not to answer | Multiple Choice | RQ3
13. Please elaborate on your answer above. Can you describe how these resources or training might help you within your work? Please click next if you do not wish to respond (this will complete the survey). | NA | Open Ended | RQ3

Data Analysis

We used Qualtrics to remove incomplete responses and check for consistency within the data. We checked the responses throughout the 6 weeks of data collection to ensure quality responses were being received. As the survey consisted of qualitative and quantitative questions, we used a two-pronged approach to analyze the responses. The tools used for analyzing the data were Qualtrics, Google Sheets™, and Condens.io, an online tool popular with UX designers and design researchers that aids in coding and analyzing qualitative research data. All data was cross-checked between Qualtrics and Google Sheets to ensure the accuracy and reliability of our findings.

Quantitative data was collected and analyzed using descriptive statistics to calculate the frequency and percentages of the responses. Descriptive statistics provided a concise summary of the data to help understand patterns and themes within it. Qualtrics’ Data IQ® presented the breakdown for each of the questions. The raw data was then exported into Google Sheets and cross-checked for accuracy. Google Sheets’ filtering capabilities isolated and calculated the frequencies of certain responses to identify common trends across questions and among specific participant demographics.
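
To illustrate this descriptive step, the following is a minimal sketch of tallying frequencies and percentages for one multiple-choice question from an exported response file. It is not the authors’ actual analysis (which was performed in Qualtrics and Google Sheets); the file name and column label are hypothetical placeholders.

```python
# Minimal sketch: frequency and percentage breakdown for one
# multiple-choice survey question from a hypothetical CSV export.
import csv
from collections import Counter

with open("survey_export.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

# "top_harm_type" is a placeholder column name for the harm-type question.
counts = Counter(row["top_harm_type"] for row in rows if row["top_harm_type"])
total = sum(counts.values())

for option, n in counts.most_common():
    print(f"{option}: n = {n} ({n / total:.0%})")
```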

We collected and analyzed qualitative data using thematic analysis to provide context and depth. Braun and Clarke’s (2006) framework was loosely applied to discover concepts and themes within the data. We reviewed the data to generate an initial code set for each of the questions, and we calculated the frequency of response for each of the codes (Table 2). Common themes were then identified within each of the code sets and defined across all questions. This approach allowed for analysis of the individual questions as well as across all questions. We used Condens.io to code and categorize the data in an aggregate format to protect the privacy of the participants. Pull quotes were cross-checked in Google Sheets to ensure data accuracy and anonymity and to assign participant numbers.
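
As a hedged illustration of this coding step (again, not the authors’ actual workflow, which used Condens.io and Google Sheets), code frequencies like those in Table 2 could be tallied from a hypothetical mapping of open-ended responses to their assigned codes:

```python
# Minimal sketch: tally how often each qualitative code was applied
# across open-ended responses. The mapping below is hypothetical.
from collections import Counter

coded_responses = {
    "P57": ["Awareness", "Processes"],
    "P31": ["Lack of awareness"],
    "P29": ["Challenges", "Resources"],
}

# Count code occurrences across all responses, analogous to the
# per-code frequencies reported in Table 2.
code_frequencies = Counter(
    code for codes in coded_responses.values() for code in codes
)
print(code_frequencies.most_common())
```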

Table 2. Codes and Themes for the Qualitative Survey Questions

Survey Question | Codes for Each Question | Themes Across Questions
Can you describe the factors that contribute to your confidence or lack thereof [in identifying potential harm that could occur as a result of your work]? | Awareness (28), Challenges (12), Lack of awareness (18), Processes (8), Resources (11) | Role of education, Designerly responsibility, Business/industry tensions, Integrated resource needs, Multi-faceted barriers (themes apply across all three questions)
How have you seen barriers [to considering the potential harm that could occur in the products you are designing] manifest in yourself, your team, and/or your organization? | Culture (6), Priorities (19), Profit/Business Interests (23), Tools/training (5), Lack of awareness (6) |
Can you describe how these resources or training might help you within your work [in improving your confidence and ability to identify and discuss harm]? | Checklists (10), Tools (8), Education/training (17), Standards (14), Organizational support (10) |

Results

The following results from the 96 completed survey responses reveal that accounting for harm within the product design process is difficult to accomplish. Participants have faced many barriers when trying to be more harm-aware within their work, and they could benefit from having resources. We organized results based on our research questions while detailing the demographics, attitudes, behaviors, barriers, and needs of the participants.

Participant Demographics

The survey received 186 responses, of which 118 passed the screener (an incidence rate of 63%). Qualtrics filtered out respondents who did not fully complete the survey, leaving 96 completed responses. As shown in Table 3, there was an even distribution across years of experience, with over three quarters of participants holding a bachelor’s degree or higher. Although most participants selected multiple organizational types, nearly one fourth (n = 23) of participants worked in healthcare, suggesting a focus for future research into harm-aware design. Although there was a relatively even distribution across the number of employees at the participants’ companies, enterprises (employing 10,000+ employees) were the top size represented (n = 26), followed by startups/small organizations (n = 22).

Table 3. Demographics of Survey Participants

Years of Experience | Highest Level of Training | Organization Types (select all) | Number of Employees
3–5 years (34) | Bachelor’s Degree (41) | Technology (75) | < 50 (22)
6–10 years (29) | Master’s Degree (30) | Healthcare (23) | 50–250 (13)
10+ years (33) | PhD or Doctorate (4) | Nonprofit (13) | 251–1,000 (19)
No response (0) | Bootcamp or Certificate (9) | Education (13) | 1,001–10,000 (15)
 | Self-Taught (12) | Government (9) | 10,000+ (26)
 | No response (0) | Financial (9) | No response (1)
 | | Other (12) |
 | | No response (0) |

Attitudes and Behaviors of Designers in Considering Harm

Participants answered questions about their attitudes and behaviors regarding harm-awareness in their design processes, teams, and workflows. Over one third of participants (n = 35) identified privacy as the most relevant harm to their work, a finding consistent across industries. This supports Parrilli’s (2021) argument that the design industry needs a privacy-centered ethical framework, as digital privacy laws have not kept up with technological advancements.

Participant 87 expressed a lack of confidence in the backend technology they design, which raises concerns about privacy risks. Societal, financial, physical, and emotional harm were closely ranked following privacy harm, as shown in Figure 1. Harm types were unsurprisingly industry-specific. Participants in finance prioritized financial harm, followed by privacy and emotional harm. Participants in education and nonprofits focused on privacy harm, followed by emotional harm. And those in technology and government ranked privacy harm first, followed by societal harm. Additionally, participants noted that these harm types are interconnected. Five participants discussed this overlap. For example, Participant 70 discussed an account management product that touched on both privacy and financial harm.

Figure 1. Types of harm that are most important for designers to consider. (Bar graph; privacy harm ranks highest.)

Fewer than one quarter of participants (n = 20) reported feeling unconfident in their ability to identify harm, and nearly half (n = 39) expressed confidence (Figure 2). Participants with the least experience made up the highest percentage of those expressing extreme confidence, which may indicate a Dunning-Kruger effect, in which individuals with limited experience overestimate their abilities.

Eleven participants attributed their confidence to safeguards and processes in place within their organizations. Participant 57 noted that “risk mitigation [is] a part of the UX/Experience Designer’s responsibility.” This aligned with the higher confidence levels of participants working in organizations with over 1,000 employees, likely due to the greater resources and support available in larger companies. However, despite possessing confidence, nearly half of participants (n = 45) expressed a lack of ease in doing harm-focused work (Figure 3). Although PhD holders displayed the highest levels of confidence, no one with a PhD claimed that the work was extremely easy. We attribute this to many factors including a lack of awareness of the types of harm that could occur. As Participant 31 explained, while they understood the general harm that could occur, they were not knowledgeable about the industry-specific and nuanced ways that harm could manifest within their specific work.

Figure 2. Designers’ levels of confidence in identifying potential harm. (Bar graph; very or extremely confident ranks highest.)

Figure 3. Designers’ levels of ease in being able to identify potential harm. (Bar graph; not at all or slightly easy ranks highest.)

Nearly one third (n = 28) of participants discussed bringing their own awareness, training, and self-learning on being harm-aware to their daily practices. This aligns with Gray and Chivukula’s (2019) findings that current ethical awareness often falls to the individual designer.

Building on their previous response about designer responsibility, Participant 57 said they “work to identify scenarios that may arise as someone uses our products and solutions and devise ways to eliminate the potential issues. This includes learning about various scenarios from subject matter experts.” As the following results show, how rarely this responsibility is actually discussed among design teams substantiates the tension between participants’ high levels of confidence and the expressed difficulty of doing this work.

Over half (n = 50) of participants said the topics of harm are never or only occasionally discussed, and only four participants said that they discussed topics of harm daily (Figure 4). All four of those participants held a bachelor’s or master’s degree and worked at a technology company. While they all expressed high confidence in their ability to identify harm, they said that doing this work was only moderately easy, highlighting the complexities of considering harm throughout the design process. These four respondents pointed to the in-depth training and technical backgrounds that enabled them to be more successful at this work. Participant 96 explained that harm must be “a cultural thing that senior leadership need to make a priority everywhere.”

Figure 4. Frequency of discussion among design teams about harm-related topics. (Bar graph; occasionally ranks highest.)

Barriers Preventing Designers from Considering Harm

Participants were asked about the barriers preventing them from being more harm-aware within their design practices. Over three quarters (n = 81) of participants selected two or more barriers they frequently encountered, and nearly half (n = 46) provided rich qualitative insights and context. No one barrier was deemed overwhelmingly more or less important than the others (Figure 5), and the results indicate that these barriers are interconnected and not mutually exclusive. The most frequently identified barriers were conflicting goals and priorities (n = 65), a lack of time or resources (n = 61), and a lack of knowledge or training (n = 57). The two least frequently selected barriers were resistance from stakeholders (n = 52) and team or organizational culture (n = 42), which is interesting as both were implicitly discussed throughout the qualitative responses. We cannot explain this phenomenon beyond the participants’ allusions to the interconnectedness of all the barriers. These results remained consistent across organizations of different sizes, indicating that these barriers are universal across various types of design companies in the industry. This suggests that regardless of a company’s scale or structure, the challenges faced are similar, pointing to systemic issues that affect design practices industry-wide.

Figure 5. Barriers designers face preventing them from being more harm-aware (select all). (Bar graph; conflicting goals or priorities ranks highest.)

Three participants discussed how a lack of diversity contributes to a deficient workplace culture. Participant 29 highlighted that “the burden of identifying harm falls disproportionately on women and underrepresented groups in tech.” They emphasized that culture starts at the top, further noting the critical role of psychological safety in the workplace: “Whenever I notice something, I have to weigh the cost of letting it go versus the personal cost of speaking up.” This insight aligns with findings from Wong (2021) and Pillai et al. (2022), which suggested that advocating for ethical and human-centered practices in companies is often exhausting and is compounded by the fear of retaliation and uncertainty about whether one’s work is contributing to the safety of the products being designed. This points to the need for a combined bottom-up and top-down approach. Designers need the ability to convince leadership of the value of identifying and mitigating risks, but this can only happen within a safe and supportive culture where employees can voice concerns without fear of reprisal. As Pillai et al. (2022) emphasized, “this highlights the urgency of creating safe spaces for discussing ethical tensions” (p. 7). This approach not only creates a more inclusive and ethical work environment but also ensures that the perspectives of those most impacted by design decisions are considered, ultimately leading to more responsible and meaningful outcomes.

Nearly one quarter of participants (n = 23) highlighted the tension and struggle of being harm-aware in a capitalist environment in which designers often have limited time and resources to consider potential harm. This sentiment was expressed across organizations of various types and sizes. Participant 43 emphasized that “potential harm is not always considered in the list of priorities. Usually revenue is the first priority, and timelines don’t account for time spent to consider harm.” Participant 28 pointed out that there is an organizational cost to being harm-aware, which companies are often reluctant to invest in. Many participants voiced frustration that stakeholders—who drive the priorities, timelines, and resource allocation—are typically less concerned with safety and “much more concerned with the bottom line” (Participant 78). The responses suggest that designers often lack clear direction in how to operationalize harm-awareness in their daily design practices; Participant 26 noted that designers “know this is something that should be done, but they do not necessarily possess a framework to approach this.” This gap between awareness and actionable steps underscores the need for better support and resources to help designers navigate the complexities of harm-aware design within their existing constraints.

Perceived Resource Needs to Enable Designers in Considering Harm

The resources identified by the participants to help them consider harm throughout the design process closely mirrored the barriers previously mentioned. Over three quarters (n = 77) of participants selected two or more resources, as shown in Figure 6. The most frequently identified resource was tools and checklists (n = 70), followed closely by actionable guidelines and codes of conduct (n = 65). Participants discussed the need for standards that could be socialized around the organization, which Participant 36 explained could “help create a standard of UX safety in our projects.” Training on safer design practices was the next most frequently selected resource (n = 53). Given the lack of standardized educational pathways or certifications in design for technology, Participant 63 emphasized the need for team-specific training, as each designer brings a different understanding of how harm can manifest. This represents an opportunity for the development of targeted courses or workshops, particularly for teams, to help designers build a shared vocabulary and understanding of harm identification and mitigation.

Metrics or KPIs for teams and organizations closely followed the need for training (n = 49), with participants noting that they must be paired with other resources to be effective. Although formal ethical theory was ranked lowest (n = 42), it can still serve as an important foundational framework and a basis for unification around guidelines and systems; however, theories must be approachable and relatable for working practitioners in a rapidly evolving industry.

Figure 6. Perceived resources that would enable designers to be more harm-aware (select all). (Bar graph; tools and checklists ranks highest.)

The request for tools and checklists is notable given the wide range of ethical design methods that already exist, such as Zhao’s (2018) widely recognized toolkit, Design Ethically. Despite the availability of these tools created by both the practitioner and academic communities (Chivukula et al., 2021a), our data suggests they have not sufficiently influenced practice, reinforcing the literature that argues ethical design methods often fail to shape real-world outcomes. As Wong (2021) noted, this could be due to the limited power that designers typically have in shaping product outcomes, which mirrors our survey findings that there is tension and conflict between stakeholders focused on profit and designers focused on users. Participant 47 explained that “a big barrier is getting stakeholders to support those sorts of activities and actually do them.” Nevertheless, the strong demand for tools and checklists signals that designers are seeking practical, structured approaches to integrating harm assessment and mitigation into their workflows. Notably, 62% (n = 24) of participants who felt confident in doing harm-aware work, and 77% (n = 17) of those who discussed harm at least monthly, expressed a desire for tools and checklists, indicating a need for actionable resources even among design teams that are already discussing harm in their practices. This represents a clear opportunity to develop and disseminate tools to connect ethical frameworks to everyday design practice.

Because navigating harm and ethics requires nuanced thinking and considerable effort, some participants expressed concerns about the potential risks of relying solely on checklists. Three participants specifically highlighted this issue, suggesting that such resources might oversimplify complex ethical considerations. However, nine participants recognized the value of checklists as useful starting points, particularly for teams with limited resources. An overarching theme in the qualitative responses was the desire for a resource that could be quickly and easily referenced throughout the design process to help teams surface and discuss harm. Participant 32 summarized by suggesting that something to “quick[ly] reference would be ideal as a ‘first pass’ to consider, even if it’s just a list of questions to answer as a product team.” This suggests a strong preference for resources that are both practical and accessible, enabling designers to make harm-aware decisions without being overwhelmed by complexity. Additionally, the future development of tools should account for the limited power designers hold within a company and focus on ways to influence leadership, which would support both individual decision-making and broader organizational change.

Recommendations

Account for the Business Case

There is a business case for accounting for risk at the beginning of the design process, before a product goes to market, yet participants said that organizations were not prioritizing harm reduction and often deprioritized it in favor of meeting aggressive deadlines and profit goals. While designers often do not have the responsibility or power to influence organizational decisions and priorities, participants expressed a need for resources to help them persuade stakeholders of the incentives for investing in harm reduction. A common theme among participants who were successful at being harm-aware within their design process was being able to gain buy-in and resource allocation from organizational leadership. Designers should translate and reframe concerns about harm and safety in ways that will resonate with stakeholders, such as reputational, legal, or regulatory issues.

Start Small and Leverage Existing Resources

Participants overwhelmingly reported that a lack of time is a major barrier to considering harm in their design processes. With tight deadlines and business pressures, designers often feel they cannot prioritize harm-aware practices. To overcome this, designers should take small, incremental steps to integrate harm-aware practices into their existing workflows. By focusing on one or two key areas—such as initiating cross-functional discussions about harm or conducting basic assessments early in the process—designers can begin addressing harm without requiring significant time or resources. Additionally, designers can explore existing ethical design tools to identify those that best suit their needs. One helpful starting point is reviewing the six ethical design tools vetted by Namer (2023), which can be used with minimal setup or prior knowledge.

Look to the Accessibility Movement

Several participants alluded to the lack of operationalization of ethical principles, emphasizing the need for specific, actionable guidelines to integrate these values into products. To accomplish this, participants suggested that harm-aware design could benefit from a similar approach to the accessibility movement. Participant 29 mentioned that aligning this process with how accessibility is communicated would make it easier to adopt. The Web Content Accessibility Guidelines (WCAG), established by the W3C in 1999, provide a structured model consisting of principles, guidelines, success criteria, and checklists; the model has become the industry standard for integrating accessibility into tech processes (WCAG, 2023). Future research should examine these standards to develop practical harm-aware strategies that can become naturally embedded into designers’ existing workflows. 

Conclusion

Potential harm within digital product design has long been underexplored, despite the growing influence that consumer-facing digital products exert on individuals’ daily lives. Harm from design can manifest in various ways, from privacy breaches to psychological impacts, yet it remains largely absent from design discourse, education, and practice. This exploratory study builds on existing research to examine how digital product designers consider harm in their daily work by surveying 96 US-based designers. It explored not only the behaviors and attitudes of designers towards harm, but also the barriers they encountered and the resources they believed were necessary to better address these issues. The findings reveal complex issues across varying levels of practice. Designers generally understand their responsibility and often feel confident in their abilities to identify potential harm, yet they face challenges that preclude them from being more harm-aware in practice. These challenges include a lack of formalized tools, resources, and support systems both within organizations and the broader industry.

There are several avenues for future research to help address these gaps and build on this study. One key area is to further explore harm-aware design through qualitative studies, which can provide deeper context into designers’ understanding of harm, particularly across different industries and levels of seniority. Second, the accessibility movement provides a useful precedent for operationalizing complex issues in design, and future research could look to it as a model for creating flexible, adaptable tools and checklists that designers can easily integrate into their existing workflows. Finally, to gain wider buy-in from stakeholders and executive leadership, future efforts should emphasize the business case for harm-aware design, demonstrating the long-term value of prioritizing user safety and well-being. Continued research and actionable outcomes in these areas are critical to ensuring that the design community is better equipped to integrate harm-awareness into their practices, ultimately leading to digital products that are not only functional and efficient but ethical and responsible.

Limitations

There are limitations to this study. First, due to the sampling strategy and privacy considerations, little demographic data was collected, leaving no way to determine race, gender, age, location, or other demographic representation. Second, the small sample size of 96 participants means these results cannot be generalized to all designers or organizations. Third, due to the exclusion criteria, insights may be missing from other populations, such as designers with less than 3 years of experience or those who do not see the value of ethical or harm-aware design. Finally, due to the study design and anonymity, there is no context beyond what the participants chose to provide and no opportunity to follow up with participants. This left much to our interpretation and analysis, which inevitably introduces bias despite our reflexivity.

Tips for Usability Practitioners

  • Research the different forms of harm relevant to your industry, such as physical, psychological, or social impacts. Focus on understanding how these harms manifest in your specific products or services to anticipate potential risks.
  • Recognize that harm cannot be fully eliminated but can be minimized through intentional and proactive decisions. Encourage open discussions with your team to identify and mitigate potential harms early in the development process.
  • Create practical resources, such as checklists or question sets, to help your team stay mindful of potential harms throughout their work. These tools should be designed to prompt critical thinking and ensure consistent evaluation of risks at every stage.
  • Establish routine checkpoints and feedback mechanisms to continuously assess and address potential harms. Encourage team members to regularly revisit and refine their approach to harm reduction, ensuring that it remains a dynamic and integral part of the workflow.
  • If you are in a position of power or management, emphasize building diverse teams and cultures that promote open dialog and foster psychological safety, so that designers feel empowered and safe to voice their concerns about any potential harm.

References

Botsman, R. (2022, May 24). Tech leaders can do more to avoid unintended consequences. Wired. https://www.wired.com/story/technology-unintended-consequences/

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.

Brignull, H. (2011, November 1). Dark patterns: Deception vs. honesty in UI design. A List Apart. https://alistapart.com/article/dark-patterns-deception-vs-honesty-in-ui-design/

Buwert, P. (2016). Ethical design: A foundation for visual communication [Doctoral Thesis, Robert Gordon University]. OpenAIR@RGU. https://rgu-repository.worktribe.com/output/248849

Buwert, P. (2018). Examining the professional codes of design organisations. DRS Biennial Conference Series. https://doi.org/10.21606/drs.2018.493

Carroll, A. (2020). Design no harm: Why humility is essential in the journey toward equity. Adobe Max. https://www.adobe.com/max/2020/sessions/design-no-harm-why-humility-is-essential-in-the-jo-od6302.html

Center for Humane Technology. (2021, June). Ledger of harms. https://ledger.humanetech.com/

Chivukula, S. S., Hasib, A., Li, Z., Chen, J., & Gray, C. M. (2021). Identity claims that underlie ethical awareness and action. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Article 295.

Chivukula, S. S., Li, Z., Pivonka, A. C., Chen, J., & Gray, C. M. (2021). Surveying the landscape of ethics-focused design methods. arXiv [cs.HC]. http://arxiv.org/abs/2102.08909

Chivukula, S. S., Watkins, C. R., Manocha, R., Chen, J., & Gray, C. M. (2020). Dimensions of UX practice that shape ethical awareness. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–13.

Dance, L. (2020). Categories of harm [PDF]. Service Ease. https://serviceease.net/categories-of-harm

DDC – Danish Design Center. (2021, October 5). Toolkit: The digital ethics compass. https://ddc.dk/tools/toolkit-the-digital-ethics-compass/

Falbe, T., Andersen, K., & Frederiksen, M. M. (2020). The ethical design handbook. Smashing Media AG.

Fessenden, T. (2023). Design risks: How to assess, mitigate, and manage them. Nielsen Norman Group. https://www.nngroup.com/articles/design-risk-management/

Gossett, S. (2021, October 5). What exactly is ethical design? Built In. https://builtin.com/design-ux/ethical-design

Gray, C. M., Chivukula, S. S., & Lee, A. (2020). What kind of work do “asshole designers” create? Describing properties of ethical concern on Reddit. Proceedings of the 2020 ACM Designing Interactive Systems Conference, 61–73.

Gray, C. M., & Chivukula, S. S. (2019). Ethical mediation in UX practice. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–11.

Gray, C. M., Kou, Y., Battles, B., Hoggatt, J., & Toombs, A. L. (2018). The dark (patterns) side of UX design. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Paper 534, 1–14.

Hill, K. (2024, June 29). Facial recognition led to wrongful arrests. So Detroit is making changes. The New York Times. https://www.nytimes.com/2024/06/29/technology/detroit-facial-recognition-false-arrests.html

IDEO. (2019). A new tool for testing your design concepts ethically. https://www.ideo.com/blog/a-new-framework-for-testing-your-design-concepts-ethically

Krosnick, J. A. (1991). Response strategies for coping with the cognitive demands of attitude measures in surveys. Applied Cognitive Psychology, 5(3), 213–236.

Magrabi, F., Ong, M.-S., Runciman, W., & Coiera, E. (2011). Patient safety problems associated with healthcare information technology: An analysis of adverse events reported to the US Food and Drug Administration. AMIA Annual Symposium Proceedings, 2011, 853–857.

National Institute of Standards and Technology. (n.d.). Computer security resource center glossary. Retrieved August 23, 2023, from https://csrc.nist.gov/glossary/term/harm

Girard, E., & Namer, L. (2022a, June 20). Black Mirror Brainstorming: Centering ethics and identifying harm in design [Workshop]. Design Research Society 2022, online.

Girard, E., & Namer, L. (2022b, August 25). Black Mirror Brainstorming: Centering ethics and identifying harm in design [Workshop]. Participatory Design Conference 2022, online.

Namer, L. (2023, May 3). To prevent harm, look to ethical design tools. Dscout. https://dscout.com/people-nerds/ethical-design-tools

Nelissen, L., & Funk, M. (2022). Rationalizing dark patterns: Examining the process of designing privacy UX through speculative enactments. International Journal of Design; Taipei, 16(1), 77–94.

Overkamp, L. (2022, May 12). Designers, (re)define success first. A List Apart. https://alistapart.com/article/redefine-success-first/

Palinkas, L. A., Horwitz, S. M., Green, C. A., Wisdom, J. P., Duan, N., & Hoagwood, K. (2015). Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Administration and Policy in Mental Health and Mental Health Services Research, 42(5), 533–544.

Pillai, A., Sachathep, T., & Ahmadpour, N. (2022). Exploring the experience of ethical tensions and the role of community in UX practice. Nordic Human-Computer Interaction Conference, 1–13.

Parrilli, D. M. (2021). Why digital design needs a privacy-centered ethical framework. Advances in Design and Digital Communication, 216–221.

Peterson, A. (2016, July 12). Holocaust Museum to visitors: Please stop catching Pokémon here. The Washington Post. https://www.washingtonpost.com/news/the-switch/wp/2016/07/12/holocaust-museum-to-visitors-please-stop-catching-pokemon-here/

PenzeyMoog, E. (2021). Design for safety. A Book Apart.

Rafit, Z. (2021). Design ethicquette: Toolkit (2021). Ziqq Rafit. https://www.ziqq-rafit.com/design/designethicquette-toolkit/2021

Shariat, J., & Saucier, C. S. (2017). Tragic design: The impact of bad product design and how to fix it. O’Reilly Media.

Sohail, M., Honeywell, A., & Poh, A. (2017, April 14). The state of ethics in design. Muzli – Design Inspiration. https://medium.muz.li/the-state-of-ethics-in-design-60a1088f8358

Spotify Design. (2020). Investigating consequences with our ethics assessment. https://spotify.design/article/investigating-consequences-with-our-ethics-assessment

Wong, R. Y. (2021). Tactics of soft resistance in User Experience professionals’ values work. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2), 1–28.

Zhao, K. (2018). Toolkit. Design Ethically. https://www.designethically.com/toolkit