Volume 16, Issue 1, November 2020

Introduction to Volume 16, Issue 1

Our November 2020 invited essay is “No Interface? No Problem. Applying Human-Centered Design and HCDAgile to Data Projects,” by Carol Righi. The essay describes the insights of a group of human-centered designers, agilists, data scientists, and other technology enablement practitioners who tackled the question of how to apply the principles and practices of human-centered design (HCD), Agile development, and the overarching process of HCDAgile to products that have no obvious user interface.

The issue also includes a short essay by Ann Chadwick-Dias memorializing the life and contributions of Tom Tullis. 

In addition to the essays, this issue includes three research papers, all on different aspects of standardized UX measurement. […] [Read More]

Invited Essay:
No Interface? No Problem. Applying Human-Centered Design and HCDAgile to Data Projects

In October 2019, a group of human-centered designers, agilists, data scientists, and other technology enablement practitioners joined to share their thoughts about a topic of common interest: How should the principles and practices of human-centered design, Agile development, and the overarching process of HCDAgile be applied to products that have no obvious user interface?

The group’s objective was to develop guidance based upon shared knowledge across disciplines and industries for leveraging HCDAgile in data projects. In this paper we share our initial observations from the meeting. […] [Read More]

Special Essay:
In Memoriam—Dr. Thomas (Tom) S. Tullis

Dr. Thomas S. Tullis was born on April 16, 1952, in Memphis, Tennessee, and died from complications of COVID-19 on April 29, 2020. He leaves behind his wife, Susan (Richardson) Tullis, and two adult children, Cheryl Tullis Sirois (along with her husband, Craig Ernest Sirois, Jr.) and Virginia Tullis. He is also remembered fondly by his brother, Kenneth Frank Tullis Sr., MD, his sister-in-law Madge (Wood) Tullis, his nephew Kenneth Frank Tullis, Jr., his nieces Meg (Tullis) Morris and Mary (Tullis) Barker, and his sister Kay (Tullis) Ledbetter.

Tom contributed more than 30 years of experience to the field of human-computer interaction. He published more than 50 papers in technical journals and was an invited speaker at national and international conferences. He also held eight U.S. patents and was an Adjunct Professor at Bentley University. He was the 2011 recipient of the Lifetime Achievement Award from the Usability Professionals Association (renamed UXPA in 2012), and he was inducted into the CHI Academy of ACM SIGCHI. […] [Read More]

Validity of Three Discount Methods for Measuring Perceived Usability

Within the domain of subjective usability assessment, several potential discount methods exist. However, there is little prior research investigating how these methods compare in their impact on subjective usability ratings. This study compared four methods of collecting subjective usability data with the System Usability Scale (SUS). Users were asked to use and rate three products with the SUS: a library website, an electric can opener, and a digital timer. Lab-based assessment (measurement within the context of a usability test) was used as a reference group against which to compare the performance of the other three methods. A delayed retrospective usability assessment proved the most promising of those three methods, as it generated mean SUS scores that were not statistically distinguishable from those of the lab-based assessment. Both the video-based assessment (rating products based on video footage) and prospective inspection (judging products before use) groups generated mean SUS scores higher than those of the lab-based group. The delayed retrospective usability assessment thus has the most support as an alternative to lab-based usability assessment for collecting subjective usability scores. More research is needed to determine whether video-based assessment and prospective inspection can be used effectively. […] [Read More]

TrustDiff: Development and Validation of a Semantic Differential for User Trust on the Web

Trust is an essential factor in many social interactions involving uncertainty. In the context of online services and websites, anonymity and lack of control make trust a vital element for successful e-commerce. Although trust has received sustained research attention, there is a need for validated questionnaires that can be readily applied in different contexts and to various products. We therefore report the development and validation of the TrustDiff scale, a semantic differential that measures user trust on three dimensions. Compared to Likert-type scales, semantic differentials have advantages for measuring multidimensional constructs across different contexts. Using 10 items, the TrustDiff semantic differential measures user perceptions of the Benevolence, Integrity, and Competence of an online vendor. The scale was investigated in three independent studies with over 1,000 participants; it shows good structural validity and high reliability, and it correlates as expected with related scales. As a test of criterion validity, the TrustDiff scale showed significant differences on all subscales in a study involving a manipulated website. […] [Read More]

Validation of the GUESS-18: A Short Version of the Game User Experience Satisfaction Scale (GUESS)

The Game User Experience Satisfaction Scale (GUESS) is a 55-item tool assessing nine constructs that describe video game satisfaction. While the development of the GUESS followed best practices and resulted in a versatile, comprehensive tool for assessing video game user experience, responding to 55 items can be cumbersome in situations where repeated assessments are necessary. The aim of this research was to develop a shorter version of the scale for use in iterative game design, testing, and research. Two studies were conducted: the first created a configural model of the GUESS, which was then truncated to an 18-item short scale to establish an initial level of validity; the second, using a new sample, demonstrated the cross-sample validity of the 18-item scale. Results from a confirmatory factor analysis of the 18-item scale demonstrated excellent fit and construct validity relative to the original nine-construct instrument. The GUESS-18 is recommended as a brief, practical, yet comprehensive measure of video game satisfaction for practitioners and researchers. […] [Read More]