Welcome to the first issue of volume 5 of JUS! This issue marks the beginning of the fifth year since JUS was founded.
Many in the usability discipline agree that we need to evolve into a more scientific discipline rather than an anecdote-based one. Susan Dray addresses this issue in her invited editorial, "Engaged Scholars, Thoughtful Practitioners: The Interdependence of Academics and Practitioners in User-Centered Design and Usability." While the tension and gap between practice and academic research can be explained and understood, Susan's main message is that "…bridging the gulf is crucial to the future of our field…" The approach Susan suggests for achieving this goal is more practice-oriented, scientific, academic research into usability, along with more critical thinking in usability practice.
The web is dynamic, and this is what most of us expect it to be. However, this dynamism can mean that familiar content or links change, introducing human memory overhead. Our first peer-reviewed article, titled "When Links Change: How Additions and Deletions of Single Navigation Links Affect User Performance," focuses on this problem. The authors, Lauren Scharff and Philip Kortum, conducted a controlled experiment and demonstrated the detrimental impact of changes in website links on users' search performance. The bottom-line message: you now have empirical evidence to think twice before making drastic changes to your website.
Determining usability metrics, particularly which metrics are "truly" valid in reflecting usability, is an ever-present challenge. Triangulation of metrics is the route many take to address it. Timo Partala and Riitta Kangaskorte, the authors of our second peer-reviewed article, titled "The Combined Walkthrough: Measuring Behavioral, Affective, and Cognitive Information in Usability Testing," propose a multifaceted view of users' interactions using several concurrent metrics. Their combination of behavioral, affective, and cognitive metrics resulted in a more effective approach to uncovering usability problems.
The issue of sample size for usability studies is another daily challenge in the life of the busy usability practitioner. Moreover, it often involves getting into the somewhat intimidating statistical aspects of doing research. Ritch Macefield, in our third peer-reviewed article, titled "How To Specify the Participant Group Size for Usability Studies: A Practitioner's Guide," takes on the challenge of summarizing some of the considerations in determining sample sizes in our discipline. Ritch's key conclusion is that sample size depends on the objectives and nature of the study, with smaller samples sufficing for problem-discovery studies and larger samples needed for comparative studies.