A Methodology for Measuring Usability Evaluation Skills Using the Constructivist Theory and the Second Life Virtual World

Peer-reviewed Article

pp. 178-188

Abstract

The skills of usability analysts are crucial to software success, so mastery of these skills is essential. This study presents a methodology for teaching and measuring the usability evaluation skills of graduate students using constructivist theory, diaries, checklists, and final reports. As part of the study, students spent 4 months as active participants in Second Life, an online virtual world. In the end, most students demonstrated a measurable baseline of usability evaluation skills in that they could identify a number of heuristic problems with the Second Life software. A smaller number of students demonstrated greater skill: they could identify a heuristic problem with the software and explain why it was problematic.

Practitioner’s Take Away

Both the constructivist approach to teaching and the methodology discussed in this study for measuring usability evaluation skills contributed to the following lessons learned:

  • Teach usability evaluation skills in one semester.
  • Utilize students as evaluators from the unique perspective of users.
  • Allow students to develop a deep understanding not only of usability evaluation skills but of their own learning process.
  • Provide some structure for the diaries, even if it is no more than day and date headings; otherwise, students will invent their own, which may or may not be useful for your purposes.

Introduction

At a minimum, online interactive systems or software should ensure that users are kept informed of the consequences of their actions, can select and sequence tasks, do not have to memorize a great deal, feel privacy is maintained, are not encumbered with irrelevant elements, can easily find information, have aesthetically pleasing and highly functional experiences that enhance the quality of their work, can customize the system to their needs, and feel that the system conforms to their expectations. Software is usable if it adheres to these established usability principles. Heuristic evaluation is an effective way to determine adherence.

Because expert heuristic evaluators help software developers avoid costly problems, mastery of usability skills is not only highly desired for software development, but essential. Relying on constructivist theory, this study presents (a) a means to teach and measure usability evaluation skills of graduate students using structured and unstructured assessment tools and (b) the use of diaries to facilitate reflection and deep learning of usability principles. The Second Life virtual world was the platform for this study. The study questions were as follows: Can student usability evaluation skill levels be measured using structured and unstructured assessment tools? Do diaries facilitate deep learning of usability evaluation skills?

Background

The following sections discuss heuristic evaluation, the constructivist theory, and reflection and learning diaries.

Heuristic Evaluation

Heuristic evaluation, one of the most commonly used methods for determining the usability of a software system, detects a high number of problems using a minimal number of evaluators. Employing established usability principles (or heuristics), evaluators study an interface and identify problems. The literature supports heuristic evaluation primarily because of its affordability and convenience (Desurvire et al., 1992; Doubleday et al., 1997; Gerhardt-Powals, 1996; Liljegren & Osvalder, 2004; Nielsen & Mack, 1994). Yet, heuristic evaluations have limitations. Compared to user testing, for instance, heuristic evaluations fall short of discovering many severe usability problems (Jeffries, Miller, Wharton, & Uyeda, 1991). When inexperienced evaluators are used, there is a greater potential for applying the wrong heuristics and, in turn, the wrong solutions to problems (Nielsen & Landauer, 1993). Heuristic evaluations also produce a high number of false positives (Bailey, 2002; De Angeli et al., 2003; Kantner & Rosenbaum, 1997; Molich, 1998). Typically, experts conduct heuristic evaluations; however, studies have found that system users can conduct successful heuristic evaluations after receiving training (Lowry & Roberts, 2003; Nielsen, 1993).

Constructivist Theory

Studies of online learning report a paradigm shift from information transmission to active participation models. One such model is the constructivist approach whereby students construct their own understanding of concepts by becoming active participants, rather than passive listeners, in the learning process (Biggs, 2003; Bruner, 1973; Papert, 1990).

Though constructivists differ in how they define knowledge and learning (Gredler, 2001), most support the notion that knowledge is user constructed, emphasizing individual learning whereby students integrate new experiences into their knowledge base over time. “[Learning] requires self-regulation and the building of conceptual structures through reflection and abstraction” (von Glasersfeld, 1995, p. 14).

Mayes and Fowler (1999) conceptualized the learning process as follows: (a) learning evolves from individualized understanding through performance of actual tasks, (b) frequent feedback facilitates learning, (c) learning progresses in a series of stages, and (d) learning takes place through both personal and social constructs. This learning process is described in three levels: conceptualization, whereby students are exposed to information, ideas, and concepts; construction, in which students apply what they know to a task; and dialogue, where students test their conceptualizations through personal reflection and engagement with others.

Reflection and Learning Diaries

Constructivist learners use experience, observation, and reflection to understand concepts and generate new ideas (Kolb, 1984). Of these, personal reflection, an active process of self-awareness and review of one’s experiences, allows for the acquisition of personal knowledge and development of personal views on the learning process or a specific issue (Gibbs, 1991; Kolb, 1984; Stewart, Keegan & Stevens, 2008). Diaries, or learning journals, facilitate reflection in a self-directed environment (Crème, 2008; Honey & Mumford, 1989; Mansourian, 2008; Pavlovich, 2007). Diaries allow for regular and deep reflection of experiences, aid active processing of knowledge, help students determine their progress, and assist in identifying gaps in knowledge. Barclay (1996) successfully used learning logs in the form of diaries. Pink, Cadbury, and Stanton (2008) used e-portfolios with similar results.

Using qualitative and quantitative measures, Gilar, Ruiz, and Costa (2007) determined that more knowledge was acquired using diary-based assessment than an inventory-based learning strategy assessment. Using structured diaries, participants recorded temporal elements as well as everyday activities, activities related to the task of learning, and activities aimed specifically at learning the task. The inventory-based assessment evaluated learning strategies using a questionnaire.

In an online environment, reflection and action are crucial to learning, allowing students to become proactive in the learning process. Hence, critical thinking, introspection, interaction, and synthesis of knowledge are sought. This requires resources beyond lectures.

Methodology

The following sections describe the study overview, participants, the software application, assessment tools, parameters, scope, knowledge, standardized time, benchmarks, and skills assessment.

Overview

The methods used in the study assessed the heuristic evaluation skills and deep knowledge of usability issues held by graduate students in a Human-Computer Interaction (HCI) distance learning course. Table 1 presents an overview of the methodology. The approach is similar to Kirmani and Rajasekaran’s (2007) method of quantifying heuristic evaluation skills.

Table 1. Overview of Assessment Methodology
Participants: 9 graduate students

Application: Second Life virtual world

Assessment tools: Diaries, checklists, reports, and online discussions

Parameters: Twelve usability heuristics from the Xerox usability checklist form, the first ten derived from Nielsen and Molich (1990)

Scope: The user interface designed by Second Life developers

Knowledge: Online lectures, readings, assignments, and online discussions

Standardized time: August 28 through December 8, 2007, for all evaluators

Benchmark: Issues identified by students were collated, duplicate issues eliminated, and explanations of violations counted to arrive at a benchmark

Assessment: Individual ratings were calculated as follows: naming of instructor-identified issues = 10 pts each; identification of other valid issues = 5 pts each; explanation of identified issues = 15 pts each

Participants

Graduate students in an HCI course at the University of South Florida, School of Library and Information Science spent a semester (fall 2007) becoming active participants in the virtual community Second Life. Of the 14 students in the course, 9 consented to participate in the study, completing their diaries and other required tasks. Of these, 7 were new to Second Life, while the remaining 2 had used it sporadically. With the exception of 2 isolated instances of use from work, most interaction with Second Life took place from home. Students were reassured that their identities (including those of their avatars, or online personas) would be kept confidential.

The Software Application

Second Life is a virtual community that attempts to recreate an idealized version of the real world (Jones, 2006; Ondrejka, 2004; Rheingold, 2000). In virtual communities, people “become authors not only of text but of themselves, constructing new selves through social interaction” (Turkle, 1995, p. 12). For the current study, Second Life provided a platform for teaching the heuristic evaluation method. Second Life was selected because it is well established and its usability problems were not expected to be so great as to overwhelm students; yet there was still room for improvement in the software.

Assessment Tools

The assessment tools consisted of diaries, online discussions, usability checklists, and usability reports. As part of an assignment, students were asked to keep semester-long diaries of their experiences, feelings, usability problems, progress, and opinions related to Second Life. To encourage the development of the diaries, students were advised to write often, to develop a comfortable writing style, and to visit Second Life on a regular basis. Rather than impose too much structure, the instructor asked students to include simple day and date headings for their diary entries. The purpose of this informal approach was to gather information that the instructor might not have anticipated and to solicit more free-flowing, reflective narrative than might result from a form with checkboxes.

An online discussion forum in Blackboard helped students hone their abilities to identify usability issues. In the forum, students sometimes discovered that issues they had thought were usability problems were actually matters of personal preference or system concerns unrelated to Second Life. Students presented and discussed numerous problems, refining them based on feedback from the instructor and other students.

At the end of the semester, students conducted heuristic evaluations of Second Life using a usability checklist of established HCI principles. By this time, they had a greater understanding of the Second Life environment and could evaluate specific elements from a user point of view. The checklist allowed students to record usability problems and insights in the framework of a set of 12 usability heuristics.

Students reported the results of their individual evaluations in independent usability reports at the end of the semester. Though a report is typically a joint effort by a set of evaluators, the independent reports served the purpose of identifying what each student knew. As part of the report, students identified and attempted to explain specific heuristic violations. The reports included supporting literature where necessary.

The assessment tools ensured a balance between the unstructured diaries and discussions and the focus provided by the checklists and reports. Combined, the methods supported active involvement of students in the learning process and a deep understanding of topics. The instructor reviewed the diaries, usability checklists, and reports to assess student evaluative strengths and weaknesses.

Parameters

The parameters of the usability test covered a detailed set of 12 heuristics from a modified version of the Xerox (1995) usability checklist form. The first ten principles were derived from Nielsen and Molich (1990). This standardized format for rating and evaluating the system was made available to all students via Blackboard on August 28, 2007.

The checklist contains a set of questions under the following headings: (a) visibility of system status, (b) match between the system and the real world, (c) user control and freedom, (d) consistency and standards, (e) error handling, (f) recognition rather than recall, (g) flexibility, (h) privacy, (i) minimalist design, (j) help and documentation, (k) skills, and (l) pleasurable and respectful interaction with the user. Each numbered heuristic included a related set of questions designed to expand on that particular problem.
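To make that structure concrete, the following minimal sketch (in Python; the data shapes and function name are illustrative assumptions, not part of the study’s materials) shows one way a checklist of this shape could be represented and a finding recorded. The actual Xerox (1995) form pairs each heuristic with detailed sub-questions that are not reproduced here.

    # Minimal sketch of a checklist record: one list of findings per
    # heuristic heading, using the 12 headings named above.
    HEURISTICS = [
        "Visibility of system status",
        "Match between the system and the real world",
        "User control and freedom",
        "Consistency and standards",
        "Error handling",
        "Recognition rather than recall",
        "Flexibility",
        "Privacy",
        "Minimalist design",
        "Help and documentation",
        "Skills",
        "Pleasurable and respectful interaction with the user",
    ]

    def new_checklist():
        """Return an empty checklist mapping each heuristic to its findings."""
        return {h: [] for h in HEURISTICS}

    checklist = new_checklist()
    checklist["User control and freedom"].append(
        "No real undo: users must remember previous selections to recover."
    )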

Scope

Students evaluated the Second Life interface. However, they did not evaluate personal or business sites or other landscapes not designed by Second Life developers.

Knowledge

Through online lectures, readings, assignments, and discussions, students gained interface knowledge and an understanding of heuristic evaluation principles. The first two weeks of readings consisted of background information about Second Life. During this time, students were instructed to take a Second Life tutorial and explore the help menus. Additionally, students brought problems they did not understand about Second Life to a discussion board called “Second Life Q&A.” Through these methods, students learned about the philosophy of Second Life (e.g., avatars, Linden dollars, purchasing, building communities, and selling), avatar customization, networking, avatar maneuverability, communicating, and using help screens. They were also given 4 months to become familiar with Second Life before being asked to evaluate it. This immersion in the interface is unusual in heuristic evaluation methodology, but it supports the constructivist approach to learning. Though they could not gain expertise in heuristic evaluation in one semester, students learned the basics of the process through assigned reading materials and online discussions. This way, they could discuss usability problems before identifying them as such.

Standardized Time

All student evaluators were given an equal amount of time to learn Second Life and complete the checklist. They could begin on August 28, 2007, but needed to have the checklist and report completed by December 8, 2007.

Benchmark

The benchmark used to compare student evaluators is a collation of all the unique and valid heuristic issues identified by the instructor or students. Skills were assessed by counting the frequency of issues each student identified and by determining whether the student could explain why the heuristic violation could be harmful. For instance, under the heuristic, “User Control and Freedom,” one student commented on the fact that after going back to a previous scene, users must remember what happened in the old scene because there is no real undo. He stated, “There is no undo function as such, but mistakes can be recovered from if their previous selections are remembered. The problem with this is that people should not be required to hold too much in short-term memory.” This manifested a clear understanding of the HCI issue involved.
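As an illustration of this collation step, the sketch below (Python; the input shape is an assumption, not part of the study’s materials) merges per-student findings into a benchmark of unique issues and counts, per issue, how many students identified and explained it:

    # Sketch of benchmark collation, assuming each student's report has
    # been reduced to a list of (issue, explained) pairs after the
    # instructor judged which findings were valid.
    from collections import defaultdict

    def build_benchmark(reports):
        """reports: {student: [(issue_name, explained_bool), ...]}
        Returns {issue: {"identified": n, "explained": m}} over unique issues."""
        benchmark = defaultdict(lambda: {"identified": 0, "explained": 0})
        for findings in reports.values():
            for issue, explained in findings:
                benchmark[issue]["identified"] += 1
                if explained:
                    benchmark[issue]["explained"] += 1
        return dict(benchmark)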

Skills Assessment

After students completed the checklist, they reported their findings in the usability reports. Valid problems identified by students or by the instructor were listed as usability issues. Though the reports were the primary sources used to assess student skills, the instructor also reviewed the diaries and checklists and interacted with Second Life to understand the students’ knowledge. The checklist provided organization for student reports and was the source document for the issues each student found with the Second Life interface. The diaries allowed for insights that informed the write-up.

The instructor identified five issues that an evaluator might easily detect as usability problems: lag time, unhelpful sounds, confusing messages, avatar maneuverability, and reversal of actions. It was believed that identification and understanding of these would establish a minimum level of usability knowledge. Ten points were allotted for each of these problems identified by a student. Five points were given for each other valid usability problem stated in the reports; the remaining issues identified were lack of white space, misnomer of world menu items, inappropriate metaphors, and distracting elements. If students could also explain which heuristic was violated and the potential effects of the violation, they were given 15 additional points.
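This scoring rule can be summarized in a short sketch (Python; issue names follow Table 2, and the validity of each finding is assumed to have been judged by the instructor beforehand):

    # Scoring sketch: 10 pts for each instructor-identified issue named,
    # 5 pts for each other valid issue, and 15 additional pts when the
    # student also explains the violated heuristic and its effects.
    INSTRUCTOR_ISSUES = {
        "lag time", "unhelpful sounds", "confusing messages",
        "avatar maneuverability", "reversal of actions",
    }

    def score_student(findings):
        """findings: [(issue_name, explained_bool), ...] -- valid issues only."""
        total = 0
        for issue, explained in findings:
            total += 10 if issue in INSTRUCTOR_ISSUES else 5  # identification points
            if explained:
                total += 15  # explanation of the violated heuristic
        return total

    # Example: a student who names all five instructor-identified issues
    # plus one other valid issue, and explains three of them, earns
    # 5*10 + 1*5 + 3*15 = 100 points.
    print(score_student([
        ("lag time", True), ("unhelpful sounds", True),
        ("confusing messages", True), ("avatar maneuverability", False),
        ("reversal of actions", False), ("lack of white space", False),
    ]))  # -> 100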

Results and Discussion

Comments about Second Life were overwhelmingly positive. In fact, the reports, diaries, and checklists contained three times as many complimentary or descriptive comments as problems. In short, students could easily identify what worked. Yet, they also identified 9 valid usability issues, including those identified by the instructor. Beyond this, 46 issues were presented and discussed in the Blackboard forum, but only 16 appeared in the final reports and diaries. Seven issues were determined to be false positives, defined here as issues that students identified as usability problems but that were not. Table 2 lists the 9 valid usability-related problems identified by students.

Table 2. Problems and Potential Problems Identified by Students
*Lag time. There was general agreement that lag time would not allow complex tasks to be completed within 12 seconds. Even for simple movements, lag time made response times inappropriate to the tasks.

*Avatar maneuverability. According to one student, avatar movement is “like working a joystick. It takes practice.” New users ended up in trees and were given no prompts for getting out. One avatar became locked into a backward movement. User skills did not increase considerably between the first and second visits.

*Unhelpful sounds. About the sounds, one student stated, “There are sounds for almost everything…Sounds, like constant beeping, are not always helpful.” Sound is often used to differentiate between error messages and background conversation or gestures. A user performing an action correctly may mistake the beeping for something she is doing wrong.

*Confusing messages. Though selections within the drop-down menu are brief and familiar, some were confusing or inappropriate, such as the “Wear on” option. “Where,” one student asked, “does one ‘wear’ a unicorn?”

*Reversal of actions. When users go back to a previous scene, they cannot change their earlier choices; the choices are irreversible. According to one student, “There is no undo function as such, but mistakes can be recovered from if their previous selections are remembered.”

Lack of white space. Meaningful groups of items are not separated by white space. The text is often small and tightly spaced, making it difficult to read.

Misnomer of world menu items. Choices are not always logical. The Edit menu, for example, displays options to open friends and groups lists, allowing for direct communication with them. While a user can edit these lists, editing is not the main function of these selections. The World menu features selections for chat, navigation, and account information; none of these obviously evokes “world” in the mind of the user.

Inappropriate metaphors. Metaphors for objects that indicate a script (e.g., floating dance machines) are sometimes not clearly understood.

Distracting elements. Nonessential elements sometimes distract. Users can control menus by minimizing windows with nonessential items. Novices, however, may find themselves searching for essential menus. At the same time, dialog boxes with background noise may prompt them for a selection related to some other task.

*Instructor-identified usability issues

Table 3 shows results based on the valid issues students identified and those they discussed or explained in their final reports. This table shows that most students could identify all the problems named by the instructor and additional problems as well. Further, for several issues, a number of students could explain the heuristic violated and why it is important.

Table 3. Breakdown of Issues Identified and Explained
Identified issues                        # identifying    # explaining
*Lag time                                9 of 9           8
*Avatar maneuverability                  9 of 9           9
*Unhelpful sounds                        9 of 9           8
*Confusing or unhelpful messages         7 of 9           6
*Reversal of actions                     7 of 9           4
Lack of white space                      5 of 9           4
Misnomer of world menu items             2 of 9           1
Metaphors as cues                        3 of 9           3
Distracting elements                     6 of 9           2

*Instructor-identified usability issues

Table 4 shows the point breakdown of skills. Students garnered more than two-thirds of the possible points for the identification tasks (490 of 630), suggesting that identifying problems was a manageable task for students to master. Explaining the problems, however, was more difficult: just over half of the available points (675 of 1215) were earned for this task.

Table 4. Skills Assessment Point Breakdown

                             Possible    Actual
Points for identification    630         490
Points for explanation       1215        675
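The totals in Table 4 follow directly from the scoring rule and the counts in Table 3:

    Possible identification points: 9 students × (5 issues × 10 pts + 4 issues × 5 pts) = 9 × 70 = 630
    Actual identification points:   10 × (9 + 9 + 9 + 7 + 7) + 5 × (5 + 2 + 3 + 6) = 410 + 80 = 490
    Possible explanation points:    9 students × 9 issues × 15 pts = 1215
    Actual explanation points:      15 × (8 + 9 + 8 + 6 + 4 + 4 + 1 + 3 + 2) = 15 × 45 = 675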

There were, however, some heuristics that a majority of students did not identify in their reports, and fewer still could explain them. The diaries provide clues as to why. In her diary, one student referenced the problem with the misnomer of the World menu, writing, “That [world menu] doesn’t make any sense, the menu, but I guess I’m the only one that cares about that. Picky me.” The observation was not repeated in the student’s report. Table 5 shows that for several issues, more students noted the problem in their diaries than in their reports. These findings suggest that students found the issues bothersome but either could not explain how they might affect usability or thought them not worth mentioning in their reports.

Table 5. Identified Issues Listed in Reports and Diaries
Identified issues                        # identifying    # explaining    Identified in diaries
*Lag time                                9                8               9
*Avatar maneuverability                  9                9               9
*Unhelpful sounds                        9                8               8
*Confusing or unhelpful messages         7                6               3
*Reversal of actions                     7                4               5
Lack of white space                      5                4               3
Misnomer of world menu items             2                1               3
Metaphors as cues                        3                3               6
Distracting elements                     6                2               8

*Instructor-identified usability issues

Students also identified 7 problems that were determined to be false positives, 44 percent of the 16 issues they reported; Kantner and Rosenbaum (1997) found a comparable rate of 43 percent in their study. These false positives, produced by inexperienced evaluators, are consistent with the greater likelihood of novices applying the wrong heuristics to a particular problem (Nielsen & Landauer, 1993). Table 6 identifies the issues determined to be false positives.

Table 6. List of False Positives
“[It’s] like a game, so no one can really get any work done.” (1 student)

“Walking backwards does not always reverse walking forward.” (1 student)

The system performs “…actions too quick to cancel.” (1 student)

“There are too many unavailable items in the tool bar because a membership is required to use them.” (1 student)

Expressed sentiments of fear (or the “willies”) or danger from “predators” on Second Life. (4 students)

Character customization of avatars is “ugly.” (1 student)

“There is a condescending attitude of many of them [avatars] that make you feel uncomfortable, and the identity of the avatar is uncertain.” (1 student)

Usability issues identified: 9
False positives: 7
Total issues identified: 16
False positives as a percentage of all issues identified: 44%

De Angeli et al. (2003) identified several sources of false positives, including observations that reflected the personal preferences of the evaluators rather than usability problems, misjudgments by the evaluator, system defects due to hardware configuration, and unclear or confusing statements. As Table 6 shows, most false positives in this study were attributable to the personal feelings or preferences of individual students.

Revisiting the research questions, this study suggests that heuristic evaluation skills can be measured using diaries along with a systematic process that includes established usability guidelines, standard instruments to ensure soundness, and benchmarks to measure progress. Most students had a measurable baseline of usability evaluation skills in that they could identify heuristic problems. Fewer had deeper knowledge: they could explain the problems and why they were harmful.

The results of this study suggest that diaries provided a means for students to express their opinions and feelings about the usability process, Second Life, and heuristics. The detail recorded in the diaries beyond what appeared in the usability checklists and reports suggests that the diaries served as instruments for unstructured and deep reflection.

Recommendations

The methodology discussed in this article was applied in a distance learning context with graduate-level students. Applications beyond this setting and demographic should be undertaken with caution. In particular, the interpretation, synthesis, and research required for the reports may not work as well in an undergraduate classroom. Further, the method is most appropriate where the goal is to generate deep understanding of topics or to build skills in a specific area like usability. An essential component of the distance learning experience is a home space for discussions in a setting such as Blackboard.

Conclusion

In this study, students were exposed to information, ideas, and concepts that they applied to the task of evaluating Second Life. A usability checklist and final report provided structured means of completing the task. Students were then allowed to test their conceptualizations through personal reflection in diaries and by engaging with other students and the instructor via Blackboard. The results of the constructivist approach to teaching and the methodical approach to evaluating usability skills confirmed that heuristic evaluation skills can be measured and that diaries facilitate deep learning of these skills.

References

Bailey, B. (2002). Do’s and don’ts of effective web design: A summary of the UIU-2002 Research. Retrieved February 19, 2009 from http://webusability.com/article_dos_and_donts_UIU_01_summary_1_2002.htm

Barclay, J. (1996). Learning from experience with learning logs, Journal of Management Development, 15(6), 28–43.

Biggs, J. (2003). Teaching for quality learning at university. Buckingham: Society for Research into Higher Education/Open University Press.

Bruner, J. (1973). Going beyond the information given. Norton: New York.

Crème, P. (2008). A space for academic play: Student learning journals as transitional writing, Arts and Humanities in Higher Education: An International Journal of Theory, Research and Practice, 7(1), 49–64.

De Angeli, A., Matera, M., Costabile, M. F., Garzotto, F., & Paolini, P. (2003). On the advantages of a systematic inspection for evaluating hypermedia usability, International Journal of Human–Computer Interaction, 15(3), 315–335.

Desurvire, H. W., Kondziela, J. M., & Atwood, M. E. (1992). What is gained and lost when using evaluation methods other than empirical testing. Proceedings of the Conference on People and Computers VII (pp. 89–102). York, United Kingdom.

Doubleday, A., Ryan, M., Springett, M., & Sutcliffe, A. (1997, August 18-20). A comparison of usability techniques for evaluating design. Proceedings of the Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques (pp. 101–110). Amsterdam, The Netherlands.

Gerhardt-Powals J. (1996). Cognitive engineering principles for enhancing human-computer performance, International Journal of Human Computer Interaction, 8(2), 189–211.

Gibbs, G. (1991, November). Improving student learning – the CNAA project. In Brown, S. (Ed.) Students at the Centre of Learning. Standing Conference on Educational Development, Paper No. 66.

Gilar, R., Martínez Ruiz, M. A., & Castejón Costa, J. L. (2007). Diary-based strategy assessment and its relationship to performance in a group of trainee teachers, Teaching and Teacher Education, 23(8), 1334–1344.

Gredler, M. E. (2001). Learning and instruction: Theory into practice (4th ed.). Upper Saddle River, NJ: Merrill Prentice Hall.   

Honey, P., & Mumford, A. (1989). The manual of learning opportunities. Berkshire: Ardingly House.

Jeffries, R., Miller, J. R., Wharton, C., & Uyeda, K. M. (1991). User interface evaluation in the real world. Proceedings of the ACM CHI ’91 Conference on Human Factors in Computing Systems (pp. 119–124). New Orleans, LA.

Jones, D. E. (2006). I, avatar: Constructions of self and place in Second Life and the technological imagination, Gnovis, 6, 1–32.

Kantner, L., & Rosenbaum, S. (1997, October 19-22). Usability studies of WWW sites: Heuristic evaluation vs. laboratory testing. Proceedings of the 15th Annual International Conference on Computer Documentation, Crossroads in Communication (pp. 153–160). Salt Lake City, Utah.

Kirmani, S., & Rajasekaran, S. (2007). Heuristic evaluation quality score (HEQS): A measure of heuristic evaluation skills, Journal of Usability Studies, 2(2), 61–75.

Kolb, D. A. (1984). Experiential Learning. Englewood Cliffs, NJ: Prentice Hall.

Liljegren, E., & Osvalder, A.L. (2004). Cognitive engineering methods as usability evaluation tools for medical equipment, International Journal of Industrial Ergonomics 34(1), 49–62.

Lowry, P., & Roberts, T. (2003). Improving the usability evaluation technique, heuristic evaluation, through the use of collaborative software, Proceedings of the 9th AMCIS, (pp. 2203–2211).

Mansourian, Y. (2008). Keeping a learning diary to enhance researchers’ understanding of and users’ skills in web searching, Library Review, 57(9), 690–699.

Mayes, J. T., & Fowler, C. J. (1999). Learning technology and usability: A framework for understanding courseware, Interacting with Computers, 11, 485–497.

Molich, R. (1998). Comparative evaluation of usability tests. Proceedings of the Usability Professionals Association Conference, Washington DC, June 25-26. Retrieved February 21, 2009 from http://www.dialogdesign.dk/tekster/cue1/cue1paper.pdf

Nielsen, J., & Molich, R. (1990). Heuristic evaluation of user interfaces. Proceedings of the ACM CHI ’90 Human Factors in Computing Systems Conference (pp. 249–256). Seattle, WA.

Nielsen, J. (1993). Usability Engineering. Boston: Academic Press.

Nielsen, J., & Landauer, T. K. (1993). A mathematical model of the finding of usability problems. Proceedings of the ACM INTERCHI ’93 Conference (pp. 206–213). Amsterdam, The Netherlands.

Nielsen, J., & Mack, R. L. (Eds.). (1994). Usability inspection methods. New York: John Wiley & Sons.

Ondrejka, C. (2004). Escaping the gilded cage: User created content and building the metaverse, New York Law School Law Review, 49(1), 81–101.

Papert, S. (1990). An introduction to the fifth anniversary collection. In I. Harel (ed.), Constructionist Learning. Cambridge, MA: MIT Media Laboratory.

Pavlovich, K. (2007). The development of reflective practice through student journals, Higher Education Research and Development, 26(3), 281–295.

Pink, J., Cadbury, N., & Stanton, N. (2008). Enhancing student reflection: The development of an e-portfolio, Medical Education, 42(11), 1132–1133.

Rheingold, H. (2000). The virtual community: Homesteading on the electronic frontier. Cambridge, Mass: MIT Press.

Stewart, J., Keegan, A., & Stevens (2008). Postgraduate education to support organisation change: A reflection on reflection, Journal of European Industrial Training, 32(5), 347–358.

Turkle, S. (1995). Life on the screen: Identity in the age of the Internet. New York: Simon & Schuster.

von Glasersfeld, E. (1995). A constructivist approach to teaching. In L. Steffe & J. Gale (Eds.), Constructivism in education (pp. 3–16). New Jersey: Lawrence Erlbaum Associates, Inc.

Xerox Corporation (1995). Usability analysis & design. Retrieved November 1, 2008 from http://www.stcsig.org/usability/resources/toolkit/he_cklst.doc