When Perishing Isn’t a Problem: Publication Tips for Practitioners

Invited Essay

I took my first course in experimental psychology in the mid-1970s. As part of our lab assignments, we were required to write reports in APA style for each experiment we conducted. The instructor told us that as experimental psychologists our publications would be the only permanent record of our work. Furthermore, if we pursued a career in academia, we would face a “publish or perish” environment in which promotions, including the achievement of tenure, would depend greatly on our publication records.

Some of my classmates did follow the academic path, but many (including me) moved into industrial work instead. Industrial usability practitioners generally experience little pressure to publish because their primary artifacts are products rather than reports. The goals of this editorial are (a) to argue that despite this lack of pressure there is value in publication by practitioners and (b) to provide some tips to practitioners about how to publish when working in a non-academic environment.

Why Should Practitioners Publish?

“So what happens to these students when they take jobs in industry? The colleagues I talk to within industry almost unanimously agree that their companies don’t encourage or reward them for publishing now” (J. Dumas, personal communication, June 28, 2015).

In the larger companies that have research divisions, publication incentives can approach the “publish or perish” pressures of academia, although with equal or greater weight given to patenting. For practitioners working in product development (traditional research and development, or R&D), there may still be motivations, both extrinsic and intrinsic, for publishing.

Extrinsic Motivators

Regarding extrinsic motivation, it is important to know your company’s policies (either corporate or local) related to external publication. Even if there is no specific corporate policy promoting publication, your manager might have the discretion to expense a trip to a conference, and one way to justify that expense is if you have submitted a paper that has been accepted for publication at the conference. In other words, getting a paper published in the proceedings of a conference could lead to an all-expenses-paid trip to the conference (with potential for career-enhancing relationships).

When it’s time for annual appraisals, managers look for ways to differentiate their employees to justify their ratings. Publication will rarely be a key differentiator, but it might provide the edge needed to get a slightly higher rating. Typically, the higher the rating, the greater the benefit to the employee in terms of bonuses, raises, and consideration for promotion.

It is important to understand the criteria for promotion at your company. Low-level technical promotions will usually not require a history of producing intellectual capital, but at higher levels a record of publication in the promotion package can be a real benefit. It can take time for a beginner to get his or her first publication, so it’s better to start sooner rather than later.

Although it is rare, some companies have provided cash incentives promoting publication through author recognition programs. It has been my experience that these types of programs are rarely permanent, but rather come and go as corporate executives experiment with ways to increase the production of intellectual capital. If such a program is available to you, why not take advantage of it?

Once you have published, you will be regarded as having a certain level of expertise in that topic, both inside and outside your company. Despite the shortcomings of the peer review method, passing peer review gives a publication and its authors a certain degree of credibility. This enhances your reputation, putting your name out into the professional community.

Having peer-reviewed publications in your portfolio can be of value when seeking a position with a new company. For many non-research positions it may not be a key differentiator, but it could be a tie-breaker, given the credibility that comes from a journal or proceedings editor having judged your work to be a valuable contribution.

Another aspect of recognition as an expert practitioner with a body of published work is that you can use your work to create short courses for presentation at conferences. The publications give you credibility and make you a known quantity to the people who make decisions about what short courses to offer at their conferences. Payment for teaching a four-hour short course ranges from about $500 to $1,250. This helps cover the cost of attending the conference if your company does not reimburse you.

Finally, publication in refereed conferences and journals can be the launching point for publishing a book. It is rare for books published in the field of usability or user experience to make enough money to change the author’s life, but there will usually be several thousand dollars in royalties, and you’ll have copies to give to friends and family!

Just a quick anecdote… I published my second book in 2012 (Quantifying the User Experience: Practical Statistics for User Research, co-authored with Jeff Sauro). Several usability practitioners at a large insurance company had purchased copies. An IBMer working there noticed this and asked if they’d be interested in having me join the consulting team. This led to a consulting assignment that has lasted over three years at full utilization (if you consult, you know this is a good thing), with corresponding benefits to my annual appraisals, in addition to the modest royalties.

Intrinsic Motivators

Extrinsic motivators for publication are fine but, as noted previously, they are typically not strongly compelling for industrial practitioners. The following are some of the intrinsic motivators for publication.

There is a certain level of prestige and honor associated with refereed publication. Even without the previously mentioned monetary or career considerations, there is an internal satisfaction and pride in seeing one’s name in print and knowing the effort it took to achieve that goal.

Another aspect of satisfaction with publishing is the knowledge that you have contributed to the body of knowledge in your field. It is true that one might also contribute via other types of communications (blogs, tweets, etc.), but there is no other method of communication with one’s community that has the status, permanence, or procedural quality checking of peer-reviewed publication.

Furthermore, writing for a peer-reviewed publication forces one to express one’s thoughts logically and completely. It is rare that a paper will be accepted for publication with no request for revision. Responding to the comments of reviewers invariably improves the quality of a paper, and sometimes leads to insights that would not have occurred without review.

What Should Practitioners Publish?

Early in my career, I realized that the results of standard usability studies, although of great interest to me and my stakeholders, were usually of little to no interest to the usability community—and those who would be most interested would be competitors, so it would be counterproductive to give them an advantage by providing information about our product’s weaknesses. Indeed, most companies consider the specific results of a usability study to be proprietary, and thus would not allow publication.

Furthermore, over the past ten years, the opportunities to take advantage of standard studies have diminished. With Agile development and methods such as Rapid Iterative Test and Evaluation (RITE; Medlock, Wixon, McGee, & Welsh, 2005), the emphasis is on gathering minimal data in the shortest possible time and fixing issues while continuing to evaluate a system or product. Although this way of working has made it more challenging to contribute to the literature, it is still possible. If you’re running faster studies with smaller sample sizes, it just might take a little longer to accumulate enough data.

If you’re interested in satisfying your company’s needs (which is the main reason why they pay you) but are also interested in participating in the evolution of usability science, then you’ve got to look at standard usability studies as opportunities to also collect information (a) that will be of interest to the broader UX community and (b) that your company won’t object to publishing.

How Might Practitioners Publish?

If you decide that you want to contribute to the growth of knowledge in our profession, here are some strategies that might help:

  • Talk with your colleagues about what they are doing. It is often stimulating to work with other practitioners, and you might have writing opportunities that are closer than you know.
  • Keep on top of what people are publishing in conference proceedings and key journals so you know the hot topics, especially hot topics in methodology, because companies are less likely to object to methodological publications than to substantive ones. The ACM Digital Library is a good source to watch. Think about sharing the work of monitoring the literature with a colleague.
  • Figure out ways to collect additional data that will be of interest and value to other practitioners. For example, if you have a few years’ worth of System Usability Scale (SUS) data, you could write up a conference paper or JUS submission about the correlations between the SUS and other metrics you routinely collect and compare/contrast that with similar findings reported in the literature, or provide your findings on the factor structure of the SUS (see the sketch after this list). The trick is doing this without increasing the cost or time required to complete the work for which you’re getting paid, but it often isn’t that hard to do.
  • Think about data you might have already collected that could be put to a second use in a methodological publication; you don’t always have to collect new data. You might collect only a small amount of data in any given usability study, but if you’ve collected some data consistently over time, you might get enough to conduct, for example, reasonably powerful correlation or factor analyses (see Lewis, 2002). You might also be able to combine your data with those of colleagues to achieve the necessary sample sizes (see Sauro & Lewis, 2009).
  • Establish relationships with co-authors who provide complementary skills. For example, if you’re in a position to collect data but aren’t comfortable with statistical analysis, find someone who is, either inside or outside of your company.
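To make the data-reuse ideas above concrete, here is a minimal sketch, assuming you have accumulated per-participant records in a table containing a SUS score and two routinely collected performance metrics. The file name (accumulated_usability_data.csv) and column names (sus, completion_rate, mean_task_time) are hypothetical, not from any particular study; the sketch simply computes the pairwise correlations you might report in a methodological paper.

```python
# Minimal sketch: correlating accumulated SUS scores with other routinely
# collected metrics. The CSV file and column names are hypothetical.
import pandas as pd
from scipy import stats

# One row per participant, accumulated across studies over time.
df = pd.read_csv("accumulated_usability_data.csv")

metrics = ["completion_rate", "mean_task_time"]  # assumed column names
for metric in metrics:
    # Use only rows that have values for both measures in the pair.
    pair = df[["sus", metric]].dropna()
    r, p = stats.pearsonr(pair["sus"], pair[metric])
    print(f"SUS vs. {metric}: r = {r:.2f}, p = {p:.4f}, n = {len(pair)}")
```

Nothing in the sketch is specific to the SUS; the same pattern works for any standardized score you collect alongside performance metrics, and once the accumulated sample is large enough, the same table can feed a factor analysis.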

Some Recent Personal Examples

I occasionally get contacted by people who are just starting out in the field and are trying to figure out how to publish data they’ve already collected. Recently, I worked with a Slovenian graduate student who had conducted a standard usability study and wanted to publish the results. Before contacting me, she had focused on the specific usability findings from a study of a digital rights management sharing application and had been rejected by the journals to which she had submitted. After looking at her data, I saw that although her sample size was relatively small for published work (n = 18), her participant groups and metrics would enable her to report on a variety of topics of more general interest: specifically, the correlations among prototypical usability metrics, the application of emerging industrial norms for the SUS, and the comparison of novice and experienced users. One of the reviewers’ objections we had to overcome was an early decision to use the SUS but to drop two of its items. If you’re going to use a standardized instrument, you should use the standard version unless you’re specifically studying what happens if you don’t (e.g., if you’re trying to find a way to develop a more concise questionnaire). Fortunately, I have thousands of completed SUS questionnaires and was able to compute the correspondence between scores from the standard version and scores from her variation, demonstrating that the effect of excluding those two items was negligible. The paper was recently accepted for publication in IEEE Software (Lah & Lewis, in press).
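For readers curious how such a correspondence between standard and reduced-item SUS scores might be computed, here is a minimal sketch using the standard SUS scoring rules (odd items contribute the rating minus 1, even items contribute 5 minus the rating, with the sum rescaled to 0–100). This essay does not identify which two items were dropped, so the dropped set, the respondent data, and the helper function sus_score below are purely illustrative.

```python
# Minimal sketch: comparing standard 10-item SUS scores with a reduced-item
# variant. The two dropped items, the respondents, and sus_score are
# illustrative; the essay does not identify which items were dropped.
import numpy as np

def sus_score(responses, items=range(1, 11)):
    """Score SUS responses (dict: item number -> rating 1..5) on 0..100,
    rescaling when only a subset of the ten items is used."""
    items = list(items)
    contributions = []
    for i in items:
        r = responses[i]
        # Odd-numbered items are positively worded, even-numbered negatively.
        contributions.append(r - 1 if i % 2 == 1 else 5 - r)
    # Each item contributes 0..4, so rescale the sum to a 0..100 range.
    return sum(contributions) * (100.0 / (4 * len(items)))

# Hypothetical respondents: item number -> rating (1..5).
respondents = [
    {1: 4, 2: 2, 3: 4, 4: 1, 5: 5, 6: 2, 7: 4, 8: 2, 9: 4, 10: 1},
    {1: 3, 2: 3, 3: 3, 4: 2, 5: 4, 6: 3, 7: 3, 8: 3, 9: 3, 10: 2},
    {1: 5, 2: 1, 3: 5, 4: 1, 5: 5, 6: 1, 7: 5, 8: 1, 9: 5, 10: 1},
]

dropped = {2, 9}  # illustrative choice only
reduced_items = [i for i in range(1, 11) if i not in dropped]

standard = [sus_score(r) for r in respondents]
reduced = [sus_score(r, reduced_items) for r in respondents]
print("Standard:", standard)
print("Reduced: ", reduced)
print("Correlation:", np.corrcoef(standard, reduced)[0, 1])
```

With real data, you would score each of the thousands of completed questionnaires both ways and examine the correlation and the mean difference between the two sets of scores.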

I’ve twice had the opportunity to work with international practitioners who had translated standardized questionnaires into other languages, specifically, translation of the Computer System Usability Questionnaire (CSUQ) into Turkish (Erdinç & Lewis, 2013) and the SUS into Slovenian (Blažica & Lewis, 2015). This is not particularly glamorous work, but it is important to produce and validate translations of standardized questionnaires that have proved useful to English-speaking UX practitioners.

A couple of years ago, I was working with Mary Hardzinski at State Farm. It turned out she had several hundred complete cases (collected during an unmoderated usability study) of a questionnaire that had been developed to assess the service quality of interactive voice response applications. I knew that particular questionnaire had been developed by having participants listen to rather than participate in the interactions they were rating, so this set of data could be used to further investigate the psychometric properties of the questionnaire in the context of actual use as opposed to vicarious ratings. I felt like I’d stumbled upon a gold mine. The paper we produced was recently published in the International Journal of Speech Technology (Lewis & Hardzinski, 2015).
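As an illustration of the kind of basic psychometric check such a data set supports, here is a minimal sketch that estimates internal-consistency reliability with coefficient alpha, assuming the responses sit in a respondents-by-items table. The file name, the q-prefixed column convention, and the helper cronbach_alpha are hypothetical; a full psychometric evaluation would typically also include factor-analytic and validity evidence.

```python
# Minimal sketch: coefficient alpha (internal consistency) for a set of
# questionnaire items. File name, column convention, and cronbach_alpha
# are hypothetical.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: one row per respondent, one column per questionnaire item."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

df = pd.read_csv("ivr_service_quality_responses.csv")        # hypothetical file
item_columns = [c for c in df.columns if c.startswith("q")]  # assumed naming
print(f"Coefficient alpha: {cronbach_alpha(df[item_columns]):.2f}")
```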

Probably my most fruitful partnership has been with Jeff Sauro. In 2004 he contacted me to discuss some of the research I’d published, and it turned out that we had many common interests. Conversations following that initial contact led to a number of conference papers (Lewis & Sauro, 2009; Sauro & Lewis, 2005, 2009, 2010, 2011), a journal article (Lewis & Sauro, 2006), and ultimately a book (Sauro & Lewis, 2012). We are continuing to collaborate… stay tuned.

Some Closing Thoughts

If you haven’t published before, getting started can seem overwhelmingly complicated. You will have to weigh the risk of making the effort and not getting published against the rewards of publication. I hope this editorial will help practitioners consider the various potential rewards of publication and will provide some guidance on how to publish successfully.

Before you write, prime the pump. Be sure to communicate with your management about your publication goals and strategies. If you have a development plan, get your manager to include publication in the plan. Also think about what might motivate your company to support publication. For example, remind management that successful publication can enhance a company’s brand by increasing the external visibility and reputation of its employees.

There is an interesting historical example of a missed branding opportunity that was directly due to the publication policies of Guinness, the famous brewery. One of the most commonly used statistical tests, the t-test, was developed by William Gosset, a statistician who had joined Guinness in 1899 (Cowles, 1989). Because Guinness had a strict no-publication policy (Salsburg, 2001), Gosset published his work anonymously under the name “Student,” which is why the test is often called Student’s t-test, especially in statistics textbooks. Imagine if Guinness had encouraged publication and had asked Gosset to name the new (at that time) method the Guinness t-test. What a wasted opportunity!

Finally, make sure you understand the policies and procedures for publication, which will usually involve management and legal review before submission. Then start looking for opportunities to collect data that will be of interest to the UX community, assemble your writing team (if necessary), and start writing. Consider submitting your work to the JUS—after all, it was established to provide a venue specifically for usability and user experience practitioners to publish their research.

Acknowledgements

Many thanks to Joe Dumas for inviting this editorial and for his guidance in developing its content. Also, I want to express my appreciation to all the people with whom I have co-written articles and papers. Thank you very much.

References

Blažica, B., & Lewis, J. R. (2015). Slovene translation of the System Usability Scale: The SUS-SI. International Journal of Human-Computer Interaction, 31(2), 112–117.

Cowles, M. (1989). Statistics in psychology: An historical perspective. Hillsdale, NJ: Lawrence Erlbaum.

Erdinç, O., & Lewis, J. R. (2013). Psychometric evaluation of the T-CSUQ: The Turkish version of the Computer System Usability Questionnaire. International Journal of Human-Computer Interaction, 29(5), 319–323.

Lah, U., & Lewis, J. R. (in press). The effect of expertise on the usability of a digital rights management sharing application. IEEE Software.

Lewis, J. R. (2002). Psychometric evaluation of the PSSUQ using data from five years of usability studies. International Journal of Human-Computer Interaction, 14(3), 463–488.

Lewis, J. R., & Hardzinski, M. L. (2015). Investigating the psychometric properties of the Speech User Interface Service Quality questionnaire. International Journal of Speech Technology, 18(3), 479–487.

Lewis, J. R., & Sauro, J. (2006). When 100% really isn’t 100%: Improving the accuracy of small-sample estimates of completion rates. Journal of Usability Studies, 1(3), 136–150.

Lewis, J. R., & Sauro, J. (2009). The factor structure of the System Usability Scale. In M. Kurosu (Ed.), Human Centered Design: Proceedings of HCII 2009 (pp. 94–103). Berlin, Germany: Springer-Verlag.

Medlock, M. C., Wixon, D., McGee, M., & Welsh, D. (2005). The rapid iterative test and evaluation method: Better products in less time. In R. G. Bias & D. J. Mayhew (Eds.), Cost-justifying usability: An update for the Internet age (pp. 489–517). Amsterdam, Netherlands: Elsevier.

Salsburg, D. (2001). The lady tasting tea: How statistics revolutionized science in the twentieth century. New York, NY: W. H. Freeman.

Sauro, J., & Lewis, J. R. (2005). Estimating completion rates from small samples using binomial confidence intervals: Comparisons and recommendations. In Proceedings of the Human Factors and Ergonomics Society 49th Annual Meeting (pp. 2100–2104). Santa Monica, CA: Human Factors and Ergonomics Society.

Sauro, J., & Lewis, J. R. (2009). Correlations among prototypical usability metrics: Evidence for the construct of usability. In Proceedings of CHI 2009 (pp. 1609–1618). Boston, MA: Association for Computing Machinery.

Sauro, J., & Lewis, J. R. (2010). Average task times in usability tests: What to report? In Proceedings of CHI 2010 (pp. 2347–2350). Atlanta, GA: Association for Computing Machinery.

Sauro, J., & Lewis, J. R. (2011). When designing usability questionnaires, does it hurt to be positive? In Proceedings of CHI 2011 (pp. 2215–2223). Vancouver, Canada: Association for Computing Machinery.

Sauro, J., & Lewis, J. R. (2012). Quantifying the user experience: Practical statistics for user research. Burlington, MA: Morgan Kaufmann.