A Modified Delphi Approach to a New Card Sorting Methodology

Peer-reviewed Article

pp. 7-30

Abstract

Open card sorting is used by information architects to gather insights from users to incorporate feedback into an information architecture. In theory, it is one of the more inexpensive, user-centered design methods available to practitioners, but hidden costs make it less likely to be conducted properly and affect the quality of results produced. The following proposes a new card sorting method called the Modified-Delphi card sort to replace the Open card sort. The Modified-Delphi card sort is based on a well-known forecasting technique called the Delphi method. Instead of producing individual models that are then analyzed as a whole, participants work with a single model that is proposed and modified throughout the study. The Modified-Delphi card sorting method produces more useful results to aid in the design of an information architecture than the Open card sorting method.

A series of studies was conducted to directly compare the Modified-Delphi and Open card sorting methods. First, two parallel studies using both methods were conducted with the same dataset and number of participants. Then, two studies were conducted on the results of the parallel studies: a heuristic review and ranking with information design experts and an Inverse card sort with additional users of the proposed architecture. The Modified-Delphi card sorting method produced results that were found to be at least as good as the Open card sorting method results and in some cases, better.

Practitioner’s Take Away

The Modified-Delphi card sort is an exciting new method that promises to replace Open card sorting as a pre-design method in information architecture. The following are some of the benefits identified in this study:

  • Get better results for feedback into the design of an information architecture.
  • Save time in laboratory studies by reducing the number of participants in a study and the amount of data to analyze.
  • Possibly save money in laboratory studies by using fewer participants, fewer days of facilities costs, and fewer hours of analysis time.
  • Number your cards; it really helps when recording results.
  • A digital camera can help save time in recording data between tightly scheduled participant sessions.

Introduction

Card sorting is a participatory, user-centered design activity that information architects use to gain an understanding of how users understand and model information (Maurer et al., 2004). This method is used to draw out underlying mental models (Nielsen et al., 1995; Rosenfeld et al., 2002) that will later aid in the design or validation of an information architecture. A participant in a card sorting study is given a set of cards that each contains a piece of information. The participant sorts the cards into groups and labels each group. These results are then analyzed against a hypothesis by cluster analysis, by affinity mapping, or by simple pattern matching. Specifications on the methodology and what the researcher does with the results depend on the type of card sort conducted. Studies may be conducted in a laboratory setting with a stack of note cards and a table, on a computer in a laboratory (Toro, 2006; Classified, 2007), or over the Internet (OptimalSort, 2007; Socratic Technologies, 2007; WebCAT, 2007; WebSort, 2007).

There are a number of card sorting methods that have been used as research tools by psychologists and information designers; however, there are two types of card sorts that are used at different stages in the design of an information architecture: pre-design and post-design methods. Pre-design methods are used early in the design process to gather input for creating an information architecture. Post-design methods are used after an information architecture is developed to validate or edit an existing architecture.

The Open card sort is a pre-design method where participants sort cards into categories they create themselves. It is one of the earliest design methods information architects employ to aid in creating an information architecture. Participants have very few restrictions on how they can work with the cards; they can rename cards with better labels, add or remove cards from the final structure, or place the same card in multiple places. This freedom makes the method one of the strongest for drawing out an underlying mental model of the participants. A number of methods exist for analyzing the results of Open card sorts; one of the more common is cluster analysis, which examines the relationship of a card to a category and the relationship of one card to another (Rosenfeld et al., 2002; Toro, 2006; Tullis et al., 2004).
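A common first step in such cluster analysis is to build a card-card similarity matrix: for each pair of cards, count how often participants placed the pair in the same group. The following Python sketch illustrates the idea; the card names, category labels, and sorts are hypothetical examples, not data from this study:

```python
# Sketch of card-card similarity analysis for Open card sort results.
# Card names, category labels, and sorts below are hypothetical.
from collections import defaultdict
from itertools import combinations

# Each participant's sort: {category_label: [card, ...]}
sorts = [
    {"Admissions": ["Apply", "Tuition"], "Academics": ["Courses", "Faculty"]},
    {"Getting In": ["Apply", "Tuition", "Courses"], "People": ["Faculty"]},
    {"Admissions": ["Apply"], "Costs": ["Tuition"], "Academics": ["Courses", "Faculty"]},
]

# Count how many participants grouped each pair of cards together.
pair_counts = defaultdict(int)
for sort in sorts:
    for group in sort.values():
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Similarity = fraction of participants who grouped the pair together.
similarity = {pair: n / len(sorts) for pair, n in pair_counts.items()}
```

A matrix like this can then be fed to a hierarchical clustering routine or used directly for affinity mapping; pairs with high similarity are strong candidates to share a category in the final architecture.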

The Closed card sort is a post-design method where participants sort cards into preexisting categories. Participants do not have as much freedom with the cards as in an Open card sort and must use the categories and labels provided to them by the study administrator. This method can be used in two ways: to add new content to an existing information architecture or to test an information architecture by scoring participant results with the existing structure.

The Inverse card sort, also known as reverse card lookup, is another post-design method which is a variation of the Closed card sort. The top levels of the information architecture are provided to the participants, and they are asked to select where they would expect to find low-level content based on task- or topic-based scenarios (Classified, 2007; Scholtz et al., 1998). This is a useful method to quantitatively validate or rate an information architecture, similar to administering a quiz or test.
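Because the Inverse card sort works like a quiz, its scoring can be sketched in a few lines: each task scenario has an expected top-level category in the proposed architecture, and a participant's score is the fraction of tasks answered as expected. The tasks, categories, and answers below are hypothetical illustrations:

```python
# Scoring an Inverse card sort like a quiz. Each task has one expected
# top-level category; a participant's score is the fraction of matches.
# Tasks, categories, and answers are hypothetical.
expected = {  # task scenario -> category in the proposed architecture
    "Find tuition costs": "Admissions",
    "Look up a professor": "Faculty & Staff",
    "Register for a seminar": "Academics",
}

def score(answers):
    """Fraction of tasks for which the participant chose the expected category."""
    hits = sum(1 for task, cat in answers.items() if expected.get(task) == cat)
    return hits / len(expected)

participant = {
    "Find tuition costs": "Admissions",
    "Look up a professor": "Faculty & Staff",
    "Register for a seminar": "Admissions",  # a miss: expected "Academics"
}
```

Averaging such scores across participants and tasks gives the kind of quantitative rating of an information architecture described above.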

A card sort offers a number of benefits that make it attractive in the practice of information architecture (Maurer et al., 2004). It is widely practiced and effective for gathering user insights for an information architecture. Card sorting is also simple to conduct and relatively cheap, with the cost of materials being low compared to other user-centered design study methods. However, the method has several weaknesses, some of which negate the proposed benefits. First, the activity of organizing cards is out of the context of the user’s goals and tasks. Organizing a set of data in a laboratory session is very different from wayfinding on a live website. Second, consistency between participants may vary, especially when a dataset has multiple possible organization schemes. Lack of consistency can weaken category and card relationship results. Open card sorting tries to solve this issue with larger numbers of participants to gather more data.

The question of how many participants to include in a card sorting study is under debate, particularly with regard to Open card sorting. Some card sorting guides suggest as few as four to six participants (Gaffney, 2000; Robertson, 2002), others suggest 10 to 15 participants (Maurer et al., 2004; Nielsen, 2004), while others suggest as many as 20 or more (McGovern, 2002; Tullis et al., 2004). Tullis and Wood (2004) have noted that a minimum of 20 to 30 participants is necessary to get meaningful numbers from an Open card sorting study. More participants may help provide more consistent results, but the larger sample size also increases costs and analysis time. In practice, even 10 to 15 participants is a high number for a study. Informal polls conducted at the 2007 Information Architecture Summit and local Usability Professionals Association meetings revealed that many practitioners were conducting card sorting studies with 6 to 12 participants.

Using 6 to 12 participants is practical for practitioners for a number of reasons. A large number of participants means higher costs of additional participant stipends, moderator fees, facility costs, and analysis hours. For companies who do not have their own facilities, the cost of renting a lab is a significant part of the study costs. Limiting the number of participants in a study to a number that can be scheduled in one day could reduce some of these costs. However, without conducting an Open card sorting study with an adequate number of participants, results from the study may not be reliable or provide the quality of input necessary for designing an information architecture.

A strong need exists for a more reliable and less expensive card sorting method that information architects can use early in the design process. This method must have strengths in three areas: results, time, and cost. It must be easy to conduct and not require many participants or a long study period. It must provide results that are both useful and worth the amount of time and money necessary to collect them. It must be low in cost so it can be easily funded and justified.

As previously mentioned, there are a number of web-based card sorting tools and services available to conduct an online card sort. OptimalSort (2007) supports closed card sorting. WebCAT (2007), Socratic Online Card Sort (Socratic Technologies, 2007), and WebSort (2007) support both open and closed card sorting. Web-based card sorting has alleviated some of the expense of conducting an open card sort by turning to the web instead of a laboratory. Because participants can participate online instead of commuting to the testing facility, there are no facility rental fees and participation stipends can be lower. This cost savings can be invested in recruiting more participants to reach the 20 to 30 participant range recommended by Tullis and Wood (2004). However, the quality of the results gathered from an Open card sort is still in question.

The goal of this research was to develop a new card sorting method that provides better results and overcomes the previously discussed weaknesses of Open card sorting. I propose a new methodology similar to the Open card sort that is based on a forecasting technique called the Delphi method. The Delphi method is a moderation process for allowing multiple participants to work together towards a solution, while minimizing collaboration bias. Some of these biases include the bandwagon effect, which is believing in a position because others do (Nadeau et al., 1993); herd behavior, which is a defense mechanism that results in following the crowd (Hamilton, 1971); or the dominance of a strong personality over a group (Boy, 1997). Instead of producing a number of models created by individual participants that are then combined, averaged, and analyzed to draw a final conclusion, this new method allows participants to work individually on a single model until that model can be accepted without major additional modifications. While Open card sorting records multiple mental models and tries to draw conclusions from the results by analyzing statistical characteristics, Modified-Delphi card sorting proposes a single mental model that is then modified until a consensus is met.

Current and Related Work

In Wisdom of Crowds (2004), Surowiecki discusses how, when accessed collectively, the masses have an intelligence rivaled by none. Four elements are necessary to form a “wise crowd”: diversity of opinion, independence from other people’s opinion, decentralization of knowledge, and a method for gathering the crowd. As an example of the power of collective knowledge, he discusses several kinds of markets, including prediction markets that rely strongly on expressing a position, rather than selecting a choice. According to Wikipedia (2007), “A prediction market would ask a question such as, ‘Who do you think will win the election?’ where an opinion poll would ask more specifically, ‘Who will you vote for?'”

Similar to prediction markets, the Delphi method is a forecasting technique used to collect the opinions of a group of people in an objective way. It was developed by the RAND Corporation in the 1950s and 1960s (Helmer-Hirschberg, 1967; Linstone, 1975) as a way of gathering a knowledge base of military intelligence and experience without the influence of politics, rank, or other bias. It has since been applied to other domains such as technology, population sciences, and business.

The Delphi method is a technique that controls information gathered from participants through the study moderator. There are three key elements to the Delphi method: structure of information flow, feedback to the participants, and anonymity of the participants. Participants are often knowledgeable experts in a field and may have a personal stake in the resulting knowledge base generated from the study. This technique is similar to the Hegelian Principle of thesis, antithesis, and synthesis, where an argument and counter-argument are proposed and discussed until a consensus is reached (Stuter, 1996).

During the Delphi method, each participant is:

  • Asked to provide an answer to a problem or question
  • Given the combined work of previous participants
  • Allowed to modify their answer after review of the previous work

Figure 1. Hegelian Principle (the Delphi method)


The moderator combines the previous session results and presents the material to the participants, taking care to filter out any bias without excluding personal opinions or experiences that may be important. This direct interaction of the moderator is an important communication channel between the participants; however, it has also been noted as one of the weaknesses in the protocol (Teijlingen et al., 2005). Because the moderator has control over the collection and combination of information, they should maintain an objective view of the information and remain neutral on any presented positions. Conflict of interest may arise due to possible personal or business gains based on the results. To avoid this potential problem, unbiased, third-party moderators may be used for critical evaluations. Independent verification and validation is one such method used in the development of mission critical systems (Whitmore et al., 1997).

The Delphi method has benefits over other group communication methods. Instead of direct collaboration, participants interact with each other through the artifacts they leave at the end of their study session. An individual participant may not have the final answer, but a piece of what they have proposed may be valuable to the next participant. The opinions of others can be influential, valuable, insightful, or useless in a certain context. This anonymous collaboration alleviates peer pressure and other performance anxieties that are common to group collaboration methods and allows participants to focus on the problem.

Delphi in user-centered design techniques

There are a number of methods currently employed in the practice of user-centered design that have been influenced by or are fundamentally similar to the Delphi technique. Although not strictly limited to user-centered design, the Delphi method of interviewing (Boucher et al., 1972) is a protocol designed to gather information from multiple experts while limiting the influence or bias of any one expert. For example, when researching user groups of a product, an iterative interviewing model would draw on members of the client company who are involved in the development of the product (see Figure 2). A participant is asked to create a list of who he or she thinks are users of the product. Once the participant provides his or her answer, he or she is given the combined list of the previous participants’ answers. The participant is then allowed to modify his or her answer based on this new information. What usually results from this review is one of three things: the participant included a user group that was not previously listed, the participant did not consider a user group from the combined list that he or she thinks is valid, or the participant notices groups on the combined list that contain the same information. The participant’s results are combined with the current list by the moderator for use with the next participant. This interviewing process continues until a consensus has been reached or obvious patterns of conflict and agreement have been identified.

Figure 2. Iterative moderating model


Iterative interface design is a development strategy used in user-centered design. This strategy often includes frequent usability testing sessions with a small number of participants over the development of a product (Krug, 2000). A linear version of the Delphi moderating model can be incorporated into the testing protocol as a method for gathering feedback and insight (see Figure 3). Participants are given the prototype design to work with and are asked to provide feedback. They are then presented with alternate design ideas that have been created based on feedback from previous participants and allowed to provide additional feedback, particularly if one design is better than another. Because the goals of the testing sessions are to gather design feedback, and not to validate the prototype design, this method is valuable for trying out design ideas developed during the study.

Figure 3. Linear moderating model


Other group collaboration design methods

Focus groups are a way to gather a group of people to talk about a topic. The number of people can range from a few up to 15, and sessions can last from a single one-hour meeting to multiple days. Although good for gathering product experience and preference information (Ede, 1998), focus groups have a number of drawbacks that make them less desirable as a research method compared to other user-centered design methods. It is difficult to understand how participants may use a system without actually observing them using it (Nielsen, 1997). When asked how they would use a system, what participants say they might do is often much different from what they actually do.

The Collaborative Analysis of Requirements and Design (CARD) technique is a task-based, game-like procedure that helps direct the design of a task or work flow (Muller, 2001; Tudor et al., 1993). First, the participants are introduced to each other and examine the materials they will be working with before they begin the work session. The work session consists of three parts: the analysis session, where participants describe their work; the design session, where participants work with their own work and incorporate other participants’ ideas into their work; and the evaluation session, where participants comment on each other’s work. It is similar to card sorting in that cards are used to draw out a participant’s model, but different in that participants work with pieces of a task or work flow rather than topics or labels.

Plastic Interface for Collaborative Technology Initiatives through Video Exploration (PICTIVE) is a participatory design technique that allows users to modify a proposed interface as a method for gathering design input and feedback (Muller, 1991). It is similar to paper prototyping where it provides pieces of the interface for participants to interact with; however, in PICTIVE, the participants are engaged in designing the interface instead of interacting with it. Each participant is given a goal, similar to a job or task scenario. A group of participants interact with pieces of the interface and each other to create an interface that meets all of their goals.

The Group Elicitation Method (GEM) is a brainstorming technique that provides a decision support system for group design and evaluation (Boy, 1997). During a half day session, 6 to 10 domain experts brainstorm and collaborate to reach a consensus on an argument. The moderated collaboration model aims to reduce bias often found in other group collaboration methods. GEM is conducted in six phases: (a) formulation of the problem and selection of participants, (b) generation of points of view, (c) reformulation of points of view into concepts, (d) generation of relationships between concepts, (e) derivation of a consensus via computer scoring, and (f) analysis of results. It is suggested as a plausible replacement for interviewing, card sorts, or Twenty Questions (Tognazzini, 1992).

The Modified-Delphi card sorting method

There are a number of methods that utilize collaboration and iterative information flow in order to gather knowledge (Boucher et al., 1972; Boy, 1997; Ede, 1998; Krug, 2000; Muller, 1991; Tudor et al., 1993). Of these, the Delphi method is best suited for gathering knowledge from a group of experts. By modifying this method and applying it to card sorting, we can take advantage of its structured information flow, which suits linear studies such as card sorting and minimizes the bias found in other collaboration techniques.

When modifying the Delphi method for use in card sorting, each participant is:

  • Given the combined work of previous participants
  • Allowed to modify the work after review

The step that asks participants to first provide their own answer is omitted for several reasons. Card sorting is a cognitively intense activity. Requiring participants to complete an open card sort, review work that may be similar to or different from their own, and then modify the combined participants’ answers is a lot of work, and may discourage them from making changes to the presented structure and from becoming active in the design process. The traditional application of the Delphi method usually involves knowledge experts who have a personal stake in the information and are willing to put forth the necessary amount of work to sufficiently get their point across. Participants recruited for user research studies, including those that would be recruited for the Modified-Delphi card sort, have no personal connection with the product or company. The work they are modifying is that of their peers; therefore, there is less pressure to restrain their opinions, which frees them to agree, criticize, or make changes (see Figure 4).

The Modified-Delphi card sort can be summarized in the following four steps:

  1. The seed participant creates the initial structure from a stack of cards and proposes an information structure model.
  2. The following participants comment on the previous participant’s model and make modifications to the proposed model or propose a new model.
  3. The card structure changes throughout the study, evolving into a model that incorporates input from all of the participants.
  4. A consensus is reached when the information structure stabilizes and there are no more significant changes or obvious patterns of conflict and agreement arise.

Figure 4. Example workflow of a Modified-Delphi card sorting study

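The four steps above can be sketched as a simple loop: each session yields a revised model (here represented as a mapping of cards to categories), and the study stops when the structure stabilizes between consecutive sessions. The stability threshold, card names, and data structures below are illustrative assumptions, not part of the published protocol:

```python
# Sketch of the Modified-Delphi consensus loop. Each session produces a
# revised model (card -> category); the study reaches consensus when the
# fraction of cards moved between consecutive sessions falls below a
# threshold. All names and the threshold value are hypothetical.

def fraction_changed(prev, curr):
    """Fraction of shared cards whose category changed between two sessions."""
    shared = prev.keys() & curr.keys()
    moved = sum(1 for card in shared if prev[card] != curr[card])
    return moved / len(shared) if shared else 0.0

def run_study(sessions, threshold=0.05):
    """Return the model at which the structure stabilized (consensus)."""
    model = sessions[0]                  # seed participant's proposed structure
    for revised in sessions[1:]:
        if fraction_changed(model, revised) <= threshold:
            return revised               # no significant changes: consensus
        model = revised                  # structure keeps evolving
    return model                         # ran out of participants first

# Hypothetical three-session study: the seed model, one revision, then
# a session that makes no further changes.
seed = {"Apply": "Admissions", "Tuition": "Admissions", "Courses": "Academics"}
rev1 = {"Apply": "Admissions", "Tuition": "Costs", "Courses": "Academics"}
rev2 = dict(rev1)                        # participant agrees with the model
final = run_study([seed, rev1, rev2])
```

In practice the stopping decision is a moderator's judgment, not a numeric cutoff; the threshold here simply makes the "no more significant changes" criterion concrete.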

The seed participant can be selected in a number of ways: a single participant, a group of participants collaborating, a participant aided by an information architect, or an information architect alone. A single participant working alone is similar to a participant in an Open card sorting study. Groups of participants are also popular in these kinds of studies, and a pair of participants may be a good way to begin a study with a dataset that is difficult to classify or contains information new or unknown to participants. However, a pair of participants working together should only be used as the seed; although the use of groups of participants in card sorting studies is common, using them in applications of the Delphi method is rare. The use or assistance of an information architect may be helpful, but should be considered a last alternative, for when information in the dataset is new to participants and the first participant may have a difficult time with organization.

The goal of a participatory design method is to gain insight from users of a system, not the designers. Introducing a model influenced by the information architect transforms the method from an information gathering technique to an information validating technique. Using an information architect in the study also presents a social issue. An information architect, because of his or her professional status, knowledge of the information, relationship with the client, and relationship with the participant, may be held in higher regard than a peer, and knowledge of an information architect’s influence may intimidate participants from altering the information structure model. Additionally, participants should never be told how many previous participants have worked on the information structure, because the number of participants may be intimidating and prevent a participant from being comfortable making changes.

Recruiting for the Modified-Delphi method is similar to recruiting for any other user-centered design study. The Delphi method is traditionally a method of expert opinion, and the users of a product could be considered “experts” on the product. Approximately 8 to 10 experts is the typical number recruited in traditional Delphi studies. Depending on the goals or needs of the product design, you may recruit participants who are the target users and mixed user types, a single user group of particular interest, or the primary user group. However, if the participant types are very different and propose very different models to work with, this instability may prevent the study from reaching a consensus.

Another reason for not reaching a consensus may be the existence of conflict cards. These are topic cards that do not stabilize in a category, and participants cannot reach a consensus on them by the end of the study. There are a number of reasons why there may be conflict cards, including:

  • A topic card’s label is incorrect, misleading, or ambiguous.
  • The categories that participants are presented with do not offer a suitable location for a specific topic card, but they fit the rest of the data and so are left unchanged.
  • Multiple user groups may have different opinions on where a specific topic card should be.

Patterns of conflict may be identified during the study or during analysis. It is useful to look back at participants’ comments from the study sessions to see if there were any indications as to why a topic card may have had issues. Conflict cards are not necessarily a bad thing; they identify weak points in the information dataset so the information architect can pay special attention to them when designing the information architecture.
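One way to surface such patterns during analysis is to flag any card whose most common category falls below an agreement level across sessions. This Python sketch uses hypothetical card names and an arbitrary agreement threshold; the actual judgment in a study would rest with the information architect:

```python
# Sketch of conflict-card detection: a card is flagged when no single
# category holds it for a clear majority of sessions. The agreement
# threshold and card names are hypothetical.
from collections import Counter

def conflict_cards(sessions, agreement=0.75):
    """Return cards whose most common category falls below the agreement level."""
    placements = {}
    for model in sessions:               # model: card -> category per session
        for card, category in model.items():
            placements.setdefault(card, []).append(category)
    conflicts = []
    for card, categories in placements.items():
        top_share = Counter(categories).most_common(1)[0][1] / len(categories)
        if top_share < agreement:
            conflicts.append(card)       # card never stabilized in a category
    return conflicts
```

Cards flagged this way are exactly the weak points described above: candidates for relabeling, for a new category, or for special attention when the architecture is designed.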

Analysis can be done as with the Open card sort using affinity mapping or another pattern matching technique; however, special attention should be paid to the final participant’s work. The final participant has had the influence of all the previous participants and should have made the fewest significant changes. In an ideal study, this participant’s work will be very similar to the final results analysis and can be used as a preliminary result or a metric to compare against the final analysis.

The goal of the Delphi method is to reach a consensus in a body of knowledge. In information architecture, there is rarely a single correct answer, but usually a few suitable answers that will accommodate most of the audience. If a proposed model is not agreeable to a participant, the participant is free to propose a new model by scooping up the cards and beginning from scratch. During a Modified-Delphi card sorting study, it is possible that many models are proposed and no consensus can be reached; in that case, variables such as the homogeneity of the dataset, participant experience with the topic, and participant makeup should be considered.

There may be several logical structures for the information, and it is up to the information architect to choose which one is the most appropriate. In this case, the Modified-Delphi card sort may be too relaxed an information gathering tool, and more directed studies after a model is selected may be more useful. The Modified-Delphi card sort is meant to be a more practical pre-design activity to aid in the design of an information architecture. Improving the quality of results from each participant, reducing the time to conduct the study and analyze the results, and lowering the costs of conducting a study and possibly cognitive costs to the participants are all potential and expected benefits.

Methodology

Using the Modified-Delphi card sort may reduce the financial and cognitive costs of a study. I have investigated whether the results from this method are at least as good as results generated from an Open card sorting study. If the results cannot provide the same value to the researcher, the cost benefit is lost.

This brings me to my hypotheses:

  • There is no difference between the results generated by the Modified-Delphi card sort and Open card sort.
  • The Modified-Delphi card sort generates better results than the Open card sort for aiding in the design of an information architecture.

In the approach I have chosen to answer my research question, I directly compared the Modified-Delphi card sort and Open card sort by conducting parallel studies in a laboratory environment using the same user configuration and card collection to generate result structures. I use the term information structure, rather than information architecture, because these structures are generated from the results of the studies without any modifications based on heuristics, logic, or experience. The information structures are a representation of the results without the assistance of an expert. Two independent studies were then conducted: (a) an expert heuristic review and ranking by information design experts and (b) an Inverse card sort with the website’s target user groups. The results of these studies provided data for directly comparing the Modified-Delphi card sort against the Open card sort.

The University of Baltimore Usability Lab conducted a usability study of a new website design for the University of Baltimore School of Law in the Fall of 2006. Results from the usability study revealed issues relating to the information architecture, and consultation with an information architect was recommended (Nomiyama, Abela, & Summers, unpublished). This presented an opportunity to conduct parallel studies using the Modified-Delphi card sort and Open card sort for direct comparison and to test the new method with a real-world problem. Using a dataset such as weather or food may not have produced realistic results. These kinds of information have strong preexisting social constructs that define how they are categorized, conventions that are learned early and are hard to break.

Modified-Delphi and Open card sorting studies

Eighteen participants of the University of Baltimore School of Law website’s target user groups were recruited to participate in one of two card sorting studies. These user groups included: law students, both current law students and undergraduate students interested in law school; law school staff, which included administration support staff, law professors, and other faculty members; law professionals, both attorneys who may or may not be university alumni and support specialists such as paralegals, records managers, and so on. Students and staff are the most frequent users of the website, but it was important to support professionals seeking information about clinics and seminars, especially alumni who are interested in donating to the school. The Modified-Delphi card sort was conducted with eight participants: one undergraduate interested in law school, two current law school students, one school administrator, one faculty member, two law professionals, and one attorney. The Open card sort was conducted with 10 participants: one undergraduate interested in law school, three current law school students, one school administrator, two faculty members, two law professionals, and one attorney.

Sessions for both the Modified-Delphi and Open card sorting studies lasted no longer than 60 minutes per participant. Before the session began, all participants were asked to complete a study consent form and answer additional background information questions about their experience using law school websites.

Participants of both studies were given a large table with the cards, a pen, and extra cards to use for naming groups, renaming card titles, or adding missing content. All of the Open card sort participants and Participant 1 of the Modified-Delphi card sorting study were given a set of 90 index cards containing high level topics from the current University of Baltimore School of Law website (http://law.ubalt.edu/). Participants 2 through 8 of the Modified-Delphi study were given the previous participant’s results to work with. Participants of both studies were asked to create an organization for the content provided that made the most sense to them. They were permitted to change labels, remove cards that did not seem to fit, and add missing information. It should be noted that 90 cards were not enough to represent all of the content in the website and comments about missing content were expected.

There were two major differences in the instructions given to the participants of the Modified-Delphi card sort versus the Open card sort:

  • Modified-Delphi card sort participants were asked to review the work of the previous participant, where Open card sort participants began with an empty table and a stack of cards.
  • Each Open card sort participant began with the original set of 90 cards while the Modified-Delphi card sort participants began with a modified collection based on the changes previous participants made.

After the participants of both studies were satisfied with their work and declared themselves finished, a review of their work was conducted to clarify grouping and labeling decisions. A final questionnaire asked participants to select 10 of the most important topics from the original set of 90 information cards. The results of the questionnaire helped formulate questions for the Inverse card sort and ensured that any unfamiliar cards presented to participants of the Modified-Delphi card sorting study did not affect the final comparison of the two methods. Each participant session was recorded with a series of digital photographs of the participant’s work so the results could be reviewed and analyzed at a later time.

Generating information structures from card sorting study results

Once the two studies were completed, information structures were generated according to pre-defined guidelines that ensured objective creation and limited expert intervention. The same guidelines and analysis methods were used for the results of both the Modified-Delphi and Open card sorting studies.

The results were recorded in a spreadsheet similar to the popular spreadsheet template published by Joe Lamantia (2003). A separate spreadsheet was created for each card sorting study. Each category generated in a study was given its own column, and similar category labels were combined. For each participant, every card topic was listed under the category in which that participant placed it. Once the card topics from all of the participant sessions were entered, an agreement weight (see Formula 1) was calculated for each card in each category. The agreement weight for a card topic is calculated by dividing the number of times the card was placed in a category by the total number of times the card was placed (one placement per participant).

An agreement weight is a way to describe the strength of a card in a single category. This calculation is used instead of a correlation because a correlation finds the relationship between multiple variables. For example, a correlation would be used to find the relationship of a card between two specific categories (two variables). The agreement weight finds the single strongest category (one variable) for a card.

Formula 1. Calculating agreement weight

agreement weight = (number of placements of a card in a category) ÷ (total number of placements of that card)
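As a concrete illustration, the agreement weight calculation can be sketched in a few lines of Python; the card and category names below are hypothetical, not taken from the study data.

```python
from collections import Counter

def agreement_weights(placements):
    """Agreement weight of a card for each category: the number of
    participants who placed the card in that category divided by the
    total number of placements of the card."""
    counts = Counter(placements)
    total = len(placements)
    return {category: n / total for category, n in counts.items()}

# Hypothetical example: 8 participants place a "Tuition" card.
weights = agreement_weights(["Admissions"] * 6 + ["Student Life"] * 2)
print(weights)  # {'Admissions': 0.75, 'Student Life': 0.25}

# The strongest category is the one with the highest weight; here it
# exceeds 50%, meaning more than half of the participants agreed.
strongest = max(weights, key=weights.get)
print(strongest)  # Admissions
```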

Result structures were created by organizing cards with greater than 50% agreement weight into categories in the final information structure. An agreement weight greater than 50% means that more than half of the participants agreed on the location of the card. These high-agreement cards also helped determine the strongest categories in the information structure. Cards with an agreement weight of 50% or less were organized based on their highest agreement, and additional categories were created as needed for lower-agreement cards that did not fit into existing categories. Agreement weight ties between categories were decided based on one of the following heuristics, in order of importance:

  1. If the category to be organized in was created based on inclusion of cards with greater than 50% agreement weight, that choice was selected.
  2. If selecting a category would require creating a new category with little support in the results, that choice was omitted.
  3. If the organization of a card in a specific category was obviously illogical, that choice was omitted.

The third heuristic was rarely relied on because of its need for expert opinion, but necessary in order to prevent an anomaly that may affect the later studies. The goal of these guidelines was to create information structures that explicitly represented the results of the card sorting study and not the expertise or opinion of the analyst.
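The placement guidelines above can be sketched as a short Python routine. The data is hypothetical, and the expert tie-breaking heuristics (which require judgment) are omitted; this is a sketch of the two-pass placement logic, not the study's actual analysis code.

```python
def build_structure(card_weights):
    """card_weights maps each card title to its {category: agreement
    weight} dictionary. High-agreement cards (> 0.5) are placed first
    and define the strong categories; remaining cards fall back to
    their single highest-agreement category."""
    structure = {}
    placed = set()
    # First pass: cards more than half of the participants agreed on.
    for card, weights in card_weights.items():
        best = max(weights, key=weights.get)
        if weights[best] > 0.5:
            structure.setdefault(best, []).append(card)
            placed.add(card)
    # Second pass: remaining cards go to their highest-agreement category.
    for card, weights in card_weights.items():
        if card not in placed:
            best = max(weights, key=weights.get)
            structure.setdefault(best, []).append(card)
    return structure

# Hypothetical agreement weights for two cards:
structure = build_structure({
    "Tuition": {"Admissions": 0.75, "Student Life": 0.25},
    "Parking": {"Student Life": 0.5, "Campus Info": 0.375, "Admissions": 0.125},
})
print(structure)  # {'Admissions': ['Tuition'], 'Student Life': ['Parking']}
```

Note that "Parking" has no category above 50% agreement, so it falls through to the second pass and is placed by its highest weight alone.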

Expert review

Fifteen information design experts were recruited to participate in a heuristic evaluation and ranking of the resulting information structures from the Modified-Delphi and Open card sorting studies. On average, the participants had three or more years of professional experience in information design and spent most of their time at work on information architecture-related activities. The study was conducted online via a browser-based form that provided instructions and a method for answering a series of questions. Participants were given background information on the users and goals of the website and a downloadable copy of the results of the Modified-Delphi and Open card sorting studies (Appendix 1, Appendix 2). They were then asked to provide a score from 1 (very poor) to 5 (very good) for a series of information architecture heuristics based on industry best practices and Rosenfeld’s (2004) information architecture heuristics:

  1. Breadth and depth are balanced.
  2. Labels are clear and meaningful.
  3. Data is of similar granularity and dimension.
  4. Naming scheme is consistent and logical.
  5. Visual hierarchy is clear.
  6. Organization fits users’ needs.

In addition to the heuristics, the participants were asked for their overall impression of the information structure and asked to rank the two information structures. This was a within subject study with all the participants rating both information structures from the Modified-Delphi and Open card sorting studies. The information structures were anonymized and counterbalanced to prevent bias. For half of the expert reviewers, the Modified-Delphi information structure was labeled as Information Structure A and presented first, and the Open information structure was labeled as Information Structure B and presented second. For the other half of the expert reviewers, the Open information structure was labeled as Information Structure A and presented first, and the Modified-Delphi information structure was labeled as Information Structure B and presented second. Participants were informed that the two information structures were generated from card sorting studies, but not that the card sorting studies employed different methodologies.

After both of the information structures were reviewed, the participants were asked to rank which structure was better, or if they were the same.

Inverse card sort

Seven participants were recruited for an Inverse card sort of the resulting information structures from the Modified-Delphi and Open card sorting studies. The recruiting was based on the same criteria as the card sorting studies: four current law students, two law professionals, and one undergraduate pre-law student. The study was conducted online via a browser-based form that provided instructions and a method for answering a series of questions. Participants were asked to select the category where they would expect to find the answer to the question. The questions were derived from the results of the exit questionnaire administered during the Modified-Delphi and Open card sorting studies. The categories were the top level categories from the information structures generated from the Modified-Delphi and Open card sorting studies.

The Inverse card sort was also a within subject study, with all the participants answering the same set of questions for the anonymized Modified-Delphi and Open information structures. The order in which the questions appeared was counterbalanced to prevent a learning bias. Half of the participants were asked to answer questions for the Modified-Delphi information structure first and the Open information structure second, and the other half of the participants were asked to answer questions in the reverse order.

Results

The following sections present the generated information structures and the results of the exit questionnaire, the heuristic review, the independent and dependent information structure rankings, and the Inverse card sort.

Information structures

The information structures were generated based on the guidelines specified in the Methodology section. The resulting structures from the Modified-Delphi and Open card sorting studies can be found in Appendix 1 and Appendix 2, respectively.

In the results from the Modified-Delphi card sorting study, 66 of the 90 original cards had a greater than 50% agreement weight (see Table 1); that is, more than half of the participants agreed on the location of 73% of the cards. Nine of the final 10 categories were represented by these cards. The final participant’s work was also very similar to the combined information structure: eight of the final 10 categories appeared in this participant’s raw results, and an additional category had been merged with another, accounting for 9 of the 10 categories in the information structure. Seven cards did not match the final information structure; they were “floating” cards of special interest and were not directly organized. Fewer than 90 of the original cards were represented in the final participant’s work: there were a total of 75 cards in their structure, including several “grouped” cards (bound by paper clips) that were counted as a single card, and nine new cards added during the study.

In the results from the Open card sorting study, 19 of the 90 original cards had a greater than 50% agreement weight; more than half of the participants agreed on the location of only 21% of the cards. Eight of the final 11 categories were represented by these cards.

Although the number of categories represented by the high agreement cards in the Open card sorting study is very close to the number represented by the Modified-Delphi card sorting study, there is a much greater difference between the two studies in the number of high agreement cards in general. The Modified-Delphi card sort provided 47 more high agreement cards (with a total of 66 out of 90 cards) than the Open card sort (with a total of 19 out of 90 cards).

Table 1. Summary of Agreement Weight Comparisons
Comparison Modified-Delphi Open
Number of cards with > 50% agreement weight 66/90 (73%) 19/90 (21%)
Number of categories represented by cards with > 50% agreement weight 9/10 8/11

Exit questionnaire

An exit questionnaire was administered to the 8 Modified-Delphi card sort participants and 10 Open card sort participants, for a total of 18 completed questionnaires. Participants were asked to select the 10 cards they felt were most important to them. Their selections reflected their individual audience needs, but because of the diversity of participants involved in the two card sorting studies, the selections were deemed to provide insight into the most important topics in the dataset. Results of this questionnaire also helped determine questions for the Inverse card sort (see Table 2).

Fifty-seven of the 90 cards were selected by at least one participant, with a median of 2 votes per selected card (excluding cards with zero votes). Given that half of the participants were current law students or undergraduate students interested in law school, it follows that the top selected cards relate to topics students are interested in: course listings and schedules, costs and ways to pay for school, and finding a job.

Table 2. Top 15 Responses to the Exit Questionnaire
Card Title Number of Votes (18 Participants)
1. Catalog/Course Listings 9
2. Tuition 8
3. Career Services 7
4. Financial Aid Information 7
5. About the School 7
6. Academic Calendar 7
7. Scholarships & Loans 6
8. How to Apply 6
9. Academic Requirements 6
10. Course Descriptions 5
11. Library Services 5
12. Centers & Programs 5
13. Admissions Process 5
14. Faculty Profiles 5
15. University Facilities & Services 5

Heuristic review results

Fifteen information architects completed the heuristic review, which asked them to rate each of the information structures on a scale from 1 (very poor) through 3 (average) to 5 (very good) (see Table 3). In addition to the six heuristics, they were asked to provide an overall rating of each information structure on the same 1 to 5 scale, independent of the other information structure (see Table 4).

For each heuristic, the average score of the Modified-Delphi information structure was greater than that of the Open information structure. The Wilcoxon matched-pair, signed-rank test is appropriate when there are small numbers of matched samples (NIST, 2007) and, unlike comparable parametric tests, does not require a normally distributed sample. The paired differences are calculated and then ranked by magnitude. (One participant did not fully complete the heuristic review, so only 14 pairs were calculated.) The nominal alpha criterion (α), the limit on the probability (p) of observing the outcome under the null hypothesis, was set at 0.05.
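Such a paired comparison could be run with SciPy's implementation of the test, as sketched below. The expert ratings here are hypothetical; the study's raw per-expert scores are not reproduced.

```python
from scipy.stats import wilcoxon

# Hypothetical paired ratings (1-5 scale) from 14 experts for one
# heuristic; these are illustrative values, not the study's data.
modified_delphi = [4, 5, 4, 4, 3, 5, 4, 4, 5, 4, 3, 4, 5, 4]
open_sort       = [3, 3, 2, 3, 2, 3, 2, 3, 4, 3, 2, 3, 3, 3]

# Wilcoxon matched-pair, signed-rank test: ranks the magnitudes of
# the paired differences; no normality assumption is required.
stat, p = wilcoxon(modified_delphi, open_sort)
print(f"W = {stat}, p = {p:.4f}")
```

With all 14 hypothetical differences favoring Modified-Delphi, the resulting p-value falls well below the 0.05 criterion.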

According to the expert heuristic review responses, the Modified-Delphi information structure is better than the Open information structure in granularity and dimension, consistent and logical naming, and overall rating, each significant at the α = 0.05 level. The comparisons for the other heuristics are not statistically significant given the response distributions and sample size.

Table 3. Average of All Results from Heuristic Review
Heuristic Modified-Delphi Open
Breadth and depth are balanced 4.0 3.4
Labels are clear and meaningful 4.1 3.8
Data is of similar granularity and dimension 4.1 2.9
Naming scheme is consistent and logical 4.0 3.1
Visual hierarchy is clear 4.1 3.6
Organization fits users’ needs 4.1 3.4
Overall rating (individual question, not an average of the heuristic scores) 3.9 3.2


Table 4. Average and Significance of Paired Results from Heuristic Review
Comparison Modified-Delphi Open Probability (p of the observed difference under the null hypothesis)
Breadth and depth are balanced 4.0 3.6 0.087*
Labels are clear and meaningful 4.0 3.8 0.461*
Data is of similar granularity and dimension 4.1 2.9 0.004
Naming scheme is consistent and logical 4.0 3.1 0.016
Visual hierarchy is clear 4.1 3.6 0.188*
Organization fits users’ needs 4.1 3.4 0.074*
Overall rating
(individual question, not an average of the heuristic score)
3.9 3.2 0.023

* not statistically significant at the α = 0.05 level

Independent information structure ranking

Fourteen participants completed paired responses to the question of usefulness of an information structure. Participants were asked, independent of the other information structure, if the presented information structure would be useful in aiding in the design of an information architecture (see Table 5).

A binomial test is a one-tail test that determines the significance of the deviation from an expected distribution (an equal number of yes and no responses). The tail is the part of the bell-shaped normal distribution curve that is far from the center. Determining that there is no significant difference between the scores is the same as saying that there is a high probability that both scores came from the same distribution. In this case, the scores would fall under the bell part of the distribution and not in the low probability tail.

In a completely random sample, the expected response would be an equal number of yes and no votes for a particular information structure. If the structure is truly helpful in designing an information architecture, the responses would lean towards yes.

  • {response yes} = {response no}
    There is no difference in the helpfulness of the information structure.
  • {response yes} > {response no}
    The information structure is helpful in designing an information architecture.

The binomial test confirms that the chance of observing 14 or more yes votes for Modified-Delphi in 14 trials is significant at the α = 0.01 level, where p = 0.0001. The chance of observing 12 or more yes votes (2 or fewer no votes) for Open in 14 trials is significant at the α = 0.05 level, where p = 0.0065. According to the expert responses gathered, both the Modified-Delphi and Open information structures are helpful in designing an information architecture, significant at the α = 0.05 level.
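The reported probabilities can be reproduced with an exact one-tailed binomial calculation; a minimal sketch using only the Python standard library:

```python
from math import comb

def binom_one_tailed(successes, trials, p=0.5):
    """Exact probability of observing at least `successes` yes votes
    in `trials` independent votes under chance probability p."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# 14 of 14 experts said yes for the Modified-Delphi structure:
print(round(binom_one_tailed(14, 14), 4))  # 0.0001
# 12 of 14 experts said yes for the Open structure:
print(round(binom_one_tailed(12, 14), 4))  # 0.0065
```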

Table 5. Results from Independent Structure Ranking
Would you find this structure helpful in designing an information architecture? Yes No
Modified-Delphi information structure 14 0
Open information structure 12 2

Dependent information structure ranking

Fourteen participants completed the information structure ranking. Participants were asked to rank which information structure was better to aid in the design of an information architecture (see Table 6). They were presented with two information structures representing results from the Modified-Delphi and Open card sorting studies that were anonymized and order counterbalanced. They were asked to rank the first information structure as being better, the same, or worse than the second information structure. Ten participants responded that the Modified-Delphi information structure was better than the Open information structure. Two participants responded that the Open information structure was better than the Modified-Delphi information structure. Two participants responded that the information structures were the same.

The sign test is a two-tail test based on the assumption that two cases have equal probabilities of occurring. The sign test simply observes whether there is systematic evidence of differences in a consistent direction. This test is ideal for small samples as these differences may prove to be significant even if the magnitudes of the differences are small. The binomial test is a one-tail test based on the probability of observing each case in a series of trials where one answer or the other is selected. All equal observations (where Modified-Delphi information structure = Open information structure) are ignored.

The sign test confirms that the chance of observing 10 or more votes for the Modified-Delphi (or two or fewer votes for Open) in 12 trials is significant at the α = 0.05 level where p = 0.0386. The binomial test confirms the chance of observing 10 or more votes for the Modified-Delphi in 12 trials is significant at the α = 0.05 level where p = 0.0193. According to the information expert ranking, the Modified-Delphi information structure is not equivalent to the Open information structure; the Modified-Delphi information structure is better than the Open information structure, significant at the α = 0.05 level.
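Both p-values above follow from the same binomial tail: one-tailed for the binomial test, and doubled for the two-tailed sign test, after dropping the two tied votes. A standard-library sketch:

```python
from math import comb

def binom_tail(successes, trials, p=0.5):
    """P(X >= successes) for X ~ Binomial(trials, p)."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# 12 usable trials after dropping the 2 ties; 10 favor Modified-Delphi.
one_tailed = binom_tail(10, 12)   # binomial test
two_tailed = 2 * one_tailed       # sign test
print(round(one_tailed, 4), round(two_tailed, 4))  # 0.0193 0.0386
```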

Table 6. Results from Dependent Structure Ranking
Ranking Description Number of Votes
Open is better than Modified-Delphi 2
Modified-Delphi and Open are about the same 2
Modified-Delphi is better than Open 10


Inverse card sort results

Seven participants completed the Inverse card sort. Participants were asked to select the category where they would be able to find the information asked for in a question (see Table 7).

Both the Modified-Delphi and Open information structures did poorly in the Inverse card sort, with very close cumulative scores of 33 and 32 correct out of 70 possible answers, respectively. Although the Modified-Delphi structure had more high scores (5 or more correct out of 7) than the Open information structure, it also had more very low scores (1 or fewer correct out of 7).

Questions where both the Modified-Delphi and Open information structures did well (5 out of 7 or better) were the following: where to find the final class drop date, the Dean’s Notes Newsletter, and school computing services information. Additionally, the Modified-Delphi information structure also had a good score on where to find the Tax Law clinic application, and the Open information structure had a good score on where to find information on the Attorney Practice internship.

Questions where both the Modified-Delphi and Open information structures did very poorly (1 out of 7 or worse) were the following: where to find the Judge Solomon Liss visiting scholars program, a copy of your transcripts, and information about the Student Council. Additionally, the Modified-Delphi information structure also had a very poor score on where to find forms to register for classes at another institution.

Too few responses were collected to conduct a statistical test on the results.

Table 7. Results from Inverse Card Sort
Question Modified-Delphi (# correct/7) Open (# correct/7)
1. Where would you find information about the last day to drop or add a class for the semester? 6 4
2. Where would you read the Dean’s Notes Newsletter? 5 5
3. Where would you find the application to the Tax Law Clinic? 5 4
4. Where would you find information about the Judge Solomon Liss Visiting Scholars program? 1 0
5. Where would you request a copy of your transcripts? 1 1
6. Where would you find information about the Student Council? 0 1
7. Where would you find help with connecting your computer to the school network? 7 5
8. Where would you find the form to substitute a class at another institution? 1 4
9. Where would you find information about the Attorney Practice Internship? 4 5
10. Where would you find a copy of the Honor Code? 3 3
Total # correct out of all answered questions 33/70 32/70

Discussion

In the real world, the results of a pre-design card sort should not be converted to an information architecture or be used to verify the final information architecture without the influence or input of an information design expert. However, for the purpose of comparing two pre-design methods without the bias and experience of an information design expert, this was the best way to obtain accurate results for this study. Pre-design card sorting methods are designed to gather insights into how users model information, but they do not take into account user tasks, context of use, or client goals. They offer a useful way to gain insight to aid in the design of an information architecture and to involve users in the design process.

During both of the studies, I observed an interesting behavioral difference between participants of the Modified-Delphi card sorting study and the Open card sorting study. Most of the Modified-Delphi participants were very talkative during their sessions: they asked questions about the cards and the instructions, talked through the decisions they were making as they organized the cards, and shared experiences they had had on law school websites or details about their lives in general. Participants of the Open study were quite the opposite, with only a few asking questions about the cards or the instructions and no one thinking aloud or sharing any experiential or personal information. Although this behavior is reported anecdotally, it may be an indication of how much higher the cognitive load is for the Open card sort than for the Modified-Delphi card sort. Each Open card sort participant must review all of the cards, create a model, and then refine that model. The Modified-Delphi participants had help with model creation, as they only had to modify the previous participants’ work they reviewed. The cards were presented to them in a logical order, and participants simply had to form an opinion about the model and modify it, rather than creating a model from scratch and refining it. Besides being more participant-friendly, the possibility of a lower cognitive load may make the Modified-Delphi method appropriate for studying larger or more difficult datasets that are usually avoided in Open card sorting because of the cognitive costs to participants.

During the Modified-Delphi card sorting study, three of the participants (not including the seed participant) decided to start from scratch and scooped all the cards into a pile before beginning their work, destroying the previous participants’ work. This was expected to create a noticeable change in the evolving structure; however, during analysis, there was still great similarity between the participants who built on others’ work and the participants who started from scratch. During the study instructions, participants were asked to review the provided structure before beginning their work, and this priming (Tulving, 1990) with the previously proposed model may have influenced them to some degree. Their comments indicated that they were already familiar with the information in the dataset and felt more comfortable organizing it on their own terms.

During the analysis phase of the parallel card sorting studies, there was a noticeable difference in the difficulty of analyzing the results of the two methods: it took more than twice as long to analyze the Open card sort results as the Modified-Delphi results. The Open card sorting study did have two more participants than the Modified-Delphi study but, contrary to expectation, the extra time was not lost during card entry into the spreadsheet. The difference in analysis time may instead be attributed to the Modified-Delphi results having much higher agreement weights than the Open results. Seventy-three percent of the cards in the Modified-Delphi card sorting study had a greater than 50% agreement weight, which made the decision of where to place a card in the final information structure very quick and easy. However, only 21% of the cards in the Open card sorting study had a greater than 50% agreement weight. This meant that for the remaining 71 cards, at least one of the placement heuristics described in the Methodology section had to be employed, a time-consuming process that essentially made the analysis of the Open card sorting study more expensive.

The two studies had very similar user group combinations, which strengthens the comparison between the two methods (see Table 8). The Modified-Delphi method did have a disproportionate number of women over men, but I do not think this had an impact on the study. The Inverse card sort had a small number of participants (seven); however, this number is typical in most user-centered design studies. There were also a disproportionate number of students (four current law students, one undergraduate pre-law out of seven participants) who participated compared to the other study participant makeups. This may have had an effect on the Inverse card sort study results.

Table 8. Summary of Study Participant Groups
User Group Modified-Delphi Card Sort Open Card Sort
Undergraduate Pre-Law Students 2 1
Current Law Students 2 3
Law School Faculty 1 2
Law School Administration 1 1
Law-Related Professional (non-Attorney) 1 2
Law-Related Professionals (Attorney) 1 1
Total number of participants 8 10

Summary of results

The results from the studies presented in this thesis provide compelling evidence that as a laboratory method, the Modified-Delphi card sort provides an alternative that is better than the Open card sort for gathering input for an information architecture early in the design process (see Table 9).

The independent information structure ranking suggests that information design experts thought both the Modified-Delphi and Open information structures would be useful for aiding in the design of an information architecture. However, the overall rating and the dependent information structure ranking both offer statistically significant evidence that information design experts considered the Modified-Delphi information structure better than the Open information structure on the heuristics and more useful for aiding in the design of an information architecture. Although not every heuristic comparison was found statistically significant (3 of the 7 paired comparisons were significant at the α = 0.05 level), the Modified-Delphi information structure was rated higher than the Open information structure on all counts.

The results from the Inverse card sort were surprisingly poor. Low scores were expected because the tested information structures were not refined information architectures; however, an overall accuracy of 46% (65 of 140 answers) was well below the expected score. Although only seven participants were tested, a number too small for statistical tests, it is typical of how a usability study would recruit. Interestingly, the information structures from the Modified-Delphi and Open studies performed similarly on most of the questions, including the high- and low-scoring ones, even though the questions were selected because of the differences between the information structures. This suggests that the topics with the poorest scores may be sensitive to the types of participants (who were overwhelmingly students) and must be given special consideration when refining the final information architecture.

Table 9. Summary of Results from Method Comparison Studies
Comparison of Sorting Method Heuristic Review Overall Rating Independent Ranking Dependent Ranking Inverse Card Sort
Modified-Delphi < Open — — — — —
Modified-Delphi = Open — — X — X**
Modified-Delphi > Open X* X — X —

* not all heuristic scores were statistically significant at the α = 0.05 level
** not found to be statistically significant at the α = 0.05 level due to small sample size

Conclusion

When I proposed the Modified-Delphi card sort, there were three goals I wanted to meet in order to justify it as a replacement for the Open card sort as a pre-design method: (a) improve the quality of results from each participant, (b) reduce the time to conduct the study and analyze the results, and (c) lower the costs of conducting a study and, possibly, the cognitive costs to participants.

Results. Participants benefit from working with a single model rather than many. The evolutionary model influenced by the Delphi method helps control the randomness and outliers commonly encountered when analyzing multiple models as a single set of data. Results from the studies conducted also suggest that the Modified-Delphi card sort generates better results than the Open card sort.

Time. By reducing the recommended number of participants to complete a successful study, the amount of time required to both conduct a pre-design card sorting study and analyze the results has been reduced. The amount of time saved during analysis may be even greater due to the quality of results gathered, as discussed in the Discussion section. The weak agreement between the Open card sorting results required extra analysis time to follow heuristics for card placement, thus increasing the difficulty in analysis and the time for analysis.

Costs. Time is money (Walker et al., 2003), and reducing the overall time of a study also reduces its cost. Combined with fewer participant stipends and fewer days of facility costs, the overall return on investment of the Modified-Delphi card sort is significantly higher than that of the Open card sort. Although not monetary in nature, there is also a savings in cognitive cost to the participants. As discussed previously, participants in the Modified-Delphi card sorting study were much more talkative than participants in the Open card sorting study. It is possible that the method reduced cognitive cost and helped participants engage more fully in the problem-solving portions of the task.

There is strong, statistically significant evidence that, as a laboratory method, the Modified-Delphi card sorting method is better than the Open card sorting method. Results from the heuristic review and ranking by information experts show that the overall rating of the Modified-Delphi results was better than that of the Open card sort results. The information structure generated by the Modified-Delphi card sort was also considered more helpful for aiding the design of an information architecture than the structure generated by the Open card sort. In addition, many of the expert heuristic review scores for the Modified-Delphi card sort were significantly better than those for the Open card sort; those that were not statistically significant still provide evidence that the two methods are at least equivalent.

The Modified-Delphi method is a new card sorting method that requires fewer users per study and provides better results than the traditional Open card sort. Additionally, results from the study suggest a savings of time and costs from using the Modified-Delphi card sort over the Open card sort. A parallel study of the two methods has provided statistically significant evidence that results from the Modified-Delphi card sort are at least as good as the results from the Open card sort, and in some cases, better.

Next Steps

The Modified-Delphi card sort is a promising alternative to Open card sorting as a pre-design method. However, it is a new method and must be researched further to refine the methodology and maximize its return on investment. Questions about the method surfaced during its design, analysis, and preliminary reporting of results, including the following:

  • Does the recommendation of 8 to 10 participants hold up across all datasets and levels of participant diversity?
  • How do characteristics of the dataset affect the validity of the method?
  • How much does participant diversity matter (for example, using a single user group versus a sampling of all user groups)?
  • How much influence does the selection of the seed participant have on the study?
  • How does the method hold up with groups of participants per session, rather than single participant sessions?
  • Does the Modified-Delphi card sorting method provide results as good when executed as a web-based method as it does in the laboratory?

The answers to these questions can only be obtained by conducting additional Modified-Delphi card sorting studies. Parallel Open and Modified-Delphi studies are not cost effective and make sense only for research purposes; Modified-Delphi card sorting has already been shown to be at least as good as, and very likely better than, Open card sorting. It would be particularly interesting to run the method in a web-based setting to see whether it offers the same benefits there that it has in the laboratory. I encourage practitioners to use this method early in the design process and to share their results with the rest of the community.

Acknowledgments

This research was partially funded by the Information Architecture Institute 2006 Process Grant (Paul, 2007) and introduced at the 2007 Information Architecture Summit in Las Vegas, Nevada. I would also like to thank Kathryn Summers of the University of Baltimore and my colleagues Bill Killam and Marguerite Autry of User-Centered Design, Inc. for their guidance and support during the project.

References

Boucher, W.I. & Talbot, J. (1972). Report on a Delphi Study of Future Telephone Interconnect Systems. Report 49-26-01.

Boy, G.A. (1997). The Group Elicitation Method for Participatory Design and Usability Testing. Interactions 4(2), pp 27-33.

Classified [Computer Software]. Information & Design and UCDesign. Retrieved July 1, 2007, from http://www.infodesign.com.au/usabilityresources/classified.

Ede, M.R. (1998). Focus Groups to Study Work Practice, Usability Interface 5(2).

Gaffney, G. (2000). What is Card Sorting? Information & Design. Retrieved July 1, 2007, from http://www.infodesign.com.au/ftp/CardSort.pdf.

Hamilton, W.D. (1971). Geometry for the Selfish Herd, Journal of Theoretical Biology, 31(2), 295-311.

Helmer-Hirschberg, O. (1967). Analysis of the Future: The Delphi Method. RAND Corporation, Report P-3558.

Krug, S. (2000). Don’t Make Me Think! A Common Sense Approach to Web Usability (pp. 142-147). New Riders Press.

Lamantia, J. (2003, August). Analyzing Card Sort Results with a Spreadsheet Template. Boxes and Arrows. Retrieved July 1, 2007, from http://www.boxesandarrows.com/view/analyzing_card_sort_results_with_a_spreadsheet_template.

Linstone, H.A. (1975). The Delphi Method: Techniques and Applications. Addison-Wesley.

Maurer, D. & Warfel, T. (2004, April). Card sorting: a definitive guide. Boxes and Arrows. Retrieved July 1, 2007, from http://www.boxesandarrows.com/view/card_sorting_a_definitive_guide.

McGovern, G. (2002, September). Information architecture: using card sorting for web classification design. Retrieved July 1, 2007, from http://www.gerrymcgovern.com/nt/2002/nt_2002_09_23_card_sorting.htm.

Muller, M.J. (1991). PICTIVE: An Exploration in Participatory Design. In the proceedings of ACM Conference on Human Factors in Computing Systems (pp. 225-231) New Orleans, Louisiana, United States. ACM.

Muller, M.J. (2001). Layered Participatory Analysis: New Developments in the CARD Technique. In the proceedings of ACM Conference on Human Factors in Computing Systems (pp. 90-97) Seattle, Washington, United States. ACM.

Nadeau, R., Cloutier, E., & Guay, J.H. (1993). New Evidence About the Existence of a Bandwagon Effect in the Opinion Formation Process. International Political Science Review, 4(2), 203-213.

Nomiyama, E., Abela, C., & Summers, K. (unpublished). Usability Testing: School of Law, University of Baltimore. University of Baltimore Usability Lab, Baltimore, MD.

Nielsen, J. & Sano, D. (1995). SunWeb: User Interface Design for Sun Microsystem’s Internal Web. Computer Networks and ISDN Systems, 28, 179-188.

Nielsen, J. (1997). The Use and Misuse of Focus Groups. IEEE Software, 28(1) 94-95.

Nielsen, J. (2004, July). Card Sorting: How Many Users to Test. Jakob Nielsen’s Alertbox. Retrieved July 1, 2007, from http://www.useit.com/alertbox/20040719.html.

OptimalSort [Web-based Software]. Optimal Usability. Retrieved November 20, 2007, from http://www.optimalsort.com.

Paul, C.L. (2007). Investigation of Applying the Delphi Method to a New Card Sorting Technique. Information Architecture Institute, June 6, 2007. Retrieved July 1, 2007, from http://iainstitute.org/news/000632.php.

RAND Corporation. (2007). A collection of RAND publications on the Delphi method. Retrieved July 1, 2007, from http://www.rand.org/pardee/pubs/methodologies.html#delphi.

Robertson, J. (2002, February). Information Design Using Card Sorting. Intranet Journal, Retrieved July 1, 2007, from http://www.intranetjournal.com/articles/200202/km_02_05_02a.html.

Rosenfeld, L. & Morville, P. (2002). Information Architecture for the World Wide Web. Sebastopol, CA: O’Reilly Media.

Rosenfeld, L. (2004, August). Information Architecture Heuristics. Louis Rosenfeld’s Bloug. Retrieved July 1, 2007, from http://www.louisrosenfeld.com/home/bloug_archive/2004/08/information_architecture_heuri.html.

Scholtz, J. & Laskowski, S. (1998). Developing usability tools and techniques for designing and testing web sites. In the proceedings of Conference on Human Factors & the Web.

Socratic Online Card Sort [Web-based Software]. Socratic Technologies, Inc. Retrieved July 1, 2007, from http://www.sotech.com/main/eval.asp?pID=123.

Stuter, L.M. (1996). The Delphi Technique: What is it?. Lynn’s Educational and Research Network, March 1996. Retrieved from http://www.learn-usa.com/transformation_process/acf001.htm.

Surowiecki, J. (2004). The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations. Doubleday.

Teijlingen, E., Pitchfork, E., Bishop, C., & Russell, E. (2005). Delphi method and nominal group techniques in family planning and reproductive health research. Journal of Family Planning and Reproductive Health Care, 31(2) 132-135.

Tognazzini, B. (1992). Tog on Interface. Addison-Wesley.

Toro, J.A. (2006). CardZort [Computer Software]. Retrieved July 1, 2007, from http://www.cardzort.com/cardzort/index.htm.

Tudor, L.G., Muller, M.J., Dayton, T., & Root, R.W. (1993). A participatory design technique for high-level task analysis, critique, and redesign: The CARD method. In the proceedings of Human Factors and Ergonomics Society Annual Meetings (pp. 295-299) Human Factors and Ergonomics Society.

Tullis, T. & Wood, L. (2004). How many users are enough for a card-sorting study? In the proceedings of Usability Professionals Association Conference. Minneapolis, MN.

Tulving, E. & Schacter, D.L. (1990). Priming and human memory systems. Science, 247, 301-306.

Walker, I. & Zhu, Y. (2003). Education, Earnings and Productivity – Recent UK Evidence. Labor Market Trends.

WebCAT [Web-based Software]. National Institute of Standards and Technology. Retrieved July 1, 2007, from http://zing.ncsl.nist.gov/WebTools/WebCAT/overview.html.

WebSort [Web-based Software]. Parallax, LLC. Retrieved July 1, 2007, from http://www.websort.net.

Whitmore, M., Berman, A., & Chmielewski, C. (1997, March). Independent Verification and Validation of Complex User Interfaces: A Human Factors Approach (TP-3665). Houston, TX: Lockheed Martin Engineering & Science Services.

Wikipedia, The Free Encyclopedia. The Wisdom of Crowds (2007). Retrieved on August 22, 2007 from http://en.wikipedia.org/w/index.php?title=The_Wisdom_of_Crowds&oldid=151441531.

Appendix 1: Modified-Delphi Information Structure

Admissions/Prospective Students

  • About the School
  • Academic Requirements
  • Admissions Process
  • Application
  • Application Forms & Instructions
  • Financial Aid Information
  • How to Apply
  • Information for Applicants
  • Inter-Institutional Registration
  • International Students
  • Recruitment Schedule
  • Residency Requirements
  • Scholarships & Loans
  • Transferring to UB
  • Tuition
  • Visiting Students

Academics/Curriculum

  • Academic Calendar
  • Catalog/Course Listings
  • Concentrations
  • Course Descriptions
  • Curriculum Overview
  • Graduate Tax Program
  • Honor Code
  • Legal Skills Program
  • Request a Transcript

About UB Law School

  • About Baltimore City
  • Calendar of Events
  • Dean’s Notes Newsletter
  • History of the School
  • Location & Directions
  • News & Events
  • School Statistics

Career Services

  • Attorney Practice Internship
  • Information for Employers
  • Internships
  • Interviewing Tips
  • Job Listings
  • Job Vacancy Announcement Form
  • Judge Solomon Liss Visiting Scholar Program
  • Judicial Internship
  • Public Interest Fellowship
  • Sandy Rosenberg Scholarship
  • Summer Programs
  • Upcoming Career Services Events

Campus Services

  • Bookstore
  • Campus Maps & Locator
  • Center for Student Involvement
  • Computer Support
  • Library Hours
  • Library Policies
  • Library Services
  • Student Counseling Services
  • Student Life
  • Student Organizations
  • University Facilities & Services

Maryland Bar Exam

  • Bar Exam Courses
  • Maryland Bar Exam Application

Alumni

  • Alumni Association Committees
  • Alumni Calendar of Events
  • Alumni Career Experts
  • Alumni Events & Programs
  • Alumni Resources
  • Alumni Stories
  • Information for Alumni

Centers & Programs

  • A.M. Law Program
  • A.M. Law Seminar Series
  • Center for Families, Children & the Court
  • Center for International & Comparative Law
  • Snyder Center for Litigation Skills
  • Study Abroad Programs
  • Writing Competitions
  • Writing Programs

Clinics

  • About Clinical Programs
  • Clinical Application Forms
  • Clinical Policies
  • Clinical Programs
  • Criminal Practice Clinic
  • Disability Law Clinic
  • Family Law Clinic
  • Family Mediation Clinic
  • Tax Law Clinic

Faculty

  • Adjunct Faculty
  • Faculty News
  • Faculty Profiles
  • Graduate Tax Program Faculty
  • Legal Skills Faculty

Appendix 2: Open Information Structure

About UB Law

  • About Baltimore City
  • About the School Bookstore
  • Campus Map & Locator
  • History of the School
  • Location & Directions
  • School Statistics
  • Student Life
  • Student Organizations
  • University Facilities & Services

Centers & Programs

  • About Clinical Programs
  • Center for Families, Children & the Court
  • Center for International & Comparative Law
  • Clinical Application Forms
  • Clinical Policies
  • Clinical Programs
  • Criminal Practice Clinic
  • Curriculum Overview
  • Disability Law Clinic
  • Family Law Clinic
  • Family Mediation Clinic
  • Graduate Tax Program
  • Legal Skills Program
  • Snyder Center for Litigation Skills
  • Study Abroad Programs
  • Summer Programs
  • Tax Law Clinic
  • Writing Program

Library

  • Library Hours
  • Library Policies
  • Library Services

Information for Current Students

  • Academic Calendar
  • Center for Student Involvement
  • Class Cancellation
  • Computer Support
  • Honor Code
  • Student Counseling Services

Alumni

  • A.M. Law Program
  • A.M. Law Seminar Series
  • Alumni Association Committees
  • Alumni Calendar of Events
  • Alumni Career Experts
  • Alumni Events & Programs
  • Alumni Resources
  • Alumni Stories
  • Information for Alumni
  • Request a Transcript

Career Services

  • Attorney Practice Internship
  • Information for Employers
  • Internships
  • Interviewing Tips
  • Job Listings
  • Job Vacancy Announcement Forms
  • Judicial Internship
  • Public Interest Fellowship
  • Recruitment Schedule
  • Upcoming Career Services Events

News & Events

  • Calendar of Events
  • Dean’s Notes Newsletter

Prospective Students/Applicants

  • Application
  • Academic Requirements
  • Admissions Process
  • Application Forms & Instructions
  • Financial Aid Information
  • How to Apply
  • Information for Applicants
  • International Students
  • Judge Solomon Liss Visiting Scholar Program
  • Residency Requirements
  • Sandy Rosenberg Scholarship
  • Scholarships & Loans
  • Transferring to UB
  • Tuition
  • Visiting Students

Maryland Bar Exam

  • Bar Exam Courses
  • Maryland Bar Exam Application

Academics

  • Catalog/Course Listings
  • Concentrations
  • Course Descriptions
  • Inter-Institutional Registration
  • Writing Competitions

Faculty

  • Adjunct Faculty
  • Faculty News
  • Faculty Profiles
  • Graduate Tax Program Faculty
  • Legal Skills Faculty