Gesture Selection Study for a Maternal Healthcare Information System in Rural Assam, India

Abstract

This paper presents a case study aimed at selecting suitable body gestures to represent actions that viewers of a televised maternal health information program will recognize and understand. This program is designed for pregnant women in rural Assam in India. We observed the gestures of 24 pregnant women to determine how to present the following seven computational functions that were used in the health information program: Select, Pause, Resume, Help, Activate Menu, Next, and Previous. The participants belonged to the low socio-economic strata in rural Assam, India, and most had poor literacy levels. A participatory approach through a user-generated gesture collection method was used for this study. The study produced a total of 49 different gestures that participants performed to represent the seven computational functions. We selected seven body gestures based on the frequency of use, logical suitability to represent the functions, the decreased possibility that the gestures would be accidentally performed (false positives), and the ease of detection for the chosen technology (technical limitations).

Keywords

maternal health, gesture study, participatory design, ICT4D, pregnant women, Assam, India

 

Introduction

In developing countries, maternal health conditions that need immediate attention account for 99% of maternal deaths. India, for instance, has the highest number of maternal deaths of any country in the world. There is an undeniable need for reliable healthcare services to improve maternal healthcare in India, especially in Assam, which has the country's highest maternal mortality ratio (MMR): 390 deaths per 100,000 live births (World Health Organization, 2013).

We conducted a needs assessment study through contextual inquiry among pregnant women in rural Assam, India. The major findings of our research suggested a consistent lack of information awareness among rural pregnant women, low technology literacy, and scarce usage of and unfamiliarity with mobile phones and related communication devices. These findings pointed to an immediate need to impart pregnancy-related information to women through an accepted and familiar technology platform such as television (TV). Details of our user research can be found in our previous report (Sorathia et al., 2013).

Based on the findings, we conceptualized a gesture-based TV program to provide health information to pregnant women. The program educates women on what to do and what not to do during pregnancy and provides information on government health schemes, pregnancy tests, and appropriate food habits and their significance to a growing fetus. We named the health information program Chetna, which means awareness. To facilitate using the program, we chose seven functions: Select, Pause, Resume, Help, Activate Menu, Next, and Previous. We refer to these functions as computational functions as they are often used in computer-supported information systems. To communicate these functions to our intended audience, we developed this gesture study. Figure 1 shows the system's information architecture and the computational functions associated with its various stages. The following are the explanations of the tasks performed for each function:

  • Select: Choose a given option or information source.
  • Pause: Temporarily stop an ongoing activity (e.g., video/animation).
  • Resume: Restart the paused ongoing activity (e.g., video/animation).
  • Next: Move to the next topic.
  • Previous: Move to the previous topic.
  • Activate Menu: Move to the home screen that contains the Activate Menu.
  • Help: Seek assistance.

In this paper, we present a case study aimed at selecting suitable body gestures to portray the seven computational functions; the selected gestures will be used in the Chetna program. We start by reporting on related work, then provide a description of our methodology, and finally discuss the gestures performed by the women and the gesture selection process.

Figure 1. Information architecture for the Chetna program, including the computational functions.

Related Work

We divide the Related Work section into two categories. First, we report on existing work on gesture recognition, in which gesture recognition technology has typically been designed from developers' intuition rather than from observations of actual user preferences, as well as on studies of the gestures users actually perform to complete device tasks. Second, we report on existing methods of gesture selection through a participatory user approach.

Prior Work Done on Gesture Recognition and Observation

Since 2002, gestures for interactive systems have often been designed from designers' intuitions and developers' preferences. Previous attempts at gesture-based interaction were mostly directed towards algorithms designed to recognize human gestures for computer-related programs (Moeslund, Storring, & Granum, 2002; Schlomer, Poppinga, Henze, & Boll, 2008). While earlier experiments often focused on accurate gesture detection through new algorithms for applications, some recent research has taken a non-technology-based approach, designing gestures suited to the contexts of technology platform users. Kray, Nesbitt, Dawson, and Rohs (2010) tried to determine what human gestures could be used to control functions of a mobile phone (e.g., phone-to-phone, phone-to-tabletop, and phone-to-public-display interactions). In their study, 23 university students were asked to perform gestures using different devices (e.g., to send an application from their phone to another device and to download a contact from any device to their phone). Based on the success of the tasks, the most natural gestures were selected. In another study, Troiano, Pedersen, and Hornbæk (2014) gave participants 29 tasks, including selection, navigation, and 3D modelling, to define a gesture set, out of 493 observed gestures, that could be used to develop elastic and deformable device displays. Ruiz, Li, and Lank (2011) proposed a gesture taxonomy for issuing commands on a smartphone by requiring participants to perform a set of gestures for 19 tasks using the device. Connell, Kuo, Liu, and Piper (2013) used the Wizard of Oz technique (where a person, not a computer, is behind the scenes directing responses to participants' actions) to elicit gestures for whole-body interaction with a motion-sensing device, prompting six children (ages 3 to 8) through a series of 22 task stimuli, including object manipulation, navigation-based tasks, and spatial interaction. Kurdyukova, Redlin, and André (2012) investigated gestures that iPad users naturally perform to transfer data, through multi-touch, spatial, and direct-contact gestures, between two iPads or between an iPad and a tabletop or public display stand. The researchers emphasized three aspects when designing gestures for iPads: the flat, page-like shape of the device, flat metaphors for the device (a plate or tray as a metaphor), and privacy concerns. Mauney, Howarth, Wirtanen, and Capra (2010) asked 340 participants across nine countries to define the gestures they would use when performing 28 common tasks on handheld touchscreen user interfaces, and then compared the cultural differences and similarities of the observed gestures.

These studies are mostly limited to observing literate participants from developed nations. We found no studies that investigated suitable gestures for a television platform targeted at low-literate users in resource-scarce regions. We therefore began afresh with a study specific to our target user segment: low-literate pregnant women in rural areas of Assam, India. For gesture design studies like ours, the gesture design methodology assumes prime importance. Therefore, we next discuss the different methodologies that have been adopted in recent gesture studies in order to choose a suitable method for our study.

Overview of Gesture Design Methodology

A common approach towards gesture design seen in recent experiments is to incorporate users' participation in the design process. The importance of user-defined gestures has been well established by studies comparing user-defined gestures with those proposed by researchers (Morris, Wobbrock, & Wilson, 2010; Nacenta, Kamber, Qiang, & Kristensson, 2013). Nielsen, Moeslund, Storring, and Granum (2004) proposed a user-generated gesture approach to derive a set of usable gestures, in which the functions of the proposed system were presented to users and gestures for each function were collected from them based on a semantic representation of the associated functions. Henze, Löcken, Boll, Hesselmann, and Pielot (2010) built on Nielsen et al.'s method and proposed validating the outcome of each step to derive a gesture set. Wobbrock, Morris, and Wilson (2009) proposed a participatory design approach to derive basic gestures for surface computing. A similar approach of participatory design and observation was adopted by Akers (2006), who tried to find a set of gestures for 3D selection of neural pathways. That study explored two design methods: gesture brainstorming, a Wizard of Oz method for early prototyping of new interfaces, and gesture log analysis, a machine-learning based log analysis method for improving existing interfaces. This method focused on what people do with gestural interfaces instead of relying exclusively on what they say. Wobbrock, Aung, Rothrock, and Myers (2005) took a guessability approach, using a think-aloud protocol and video analysis to obtain qualitative data about users' mental models: The effects of certain gestures were presented to participants, who were asked to guess the gestures that might have been used to invoke them. Using custom software, the researchers recorded quantitative measures (such as gesture timing, activity, and preferences) to obtain a set of user-defined gestures.

Overall, the above methods involve users in the design process and document gestures from users’ input. While the method suggested by Wobbrock et al. (2009) is preferred for small screen and mobile interfaces, the user-generated gesture method proposed by Nielsen et al. (2004) is independent of computing platforms and screen sizes. Moreover, this method does not require an early prototype interface, unlike those suggested by Wobbrock et al. (2009) and Akers (2006). For this study, we employed the user-generated gesture design methodology proposed by Nielsen et al. (2004) to design a gesture set suitable for the users and their context.

Methods Used for the User-Generated Gesture Design

This study aimed at identifying suitable gestures for seven computational functions: Select, Pause, Resume, Help, Activate Menu, Next, and Previous. We divided the study into two stages: (a) observation of user-preferred gestures and (b) selection of suitable gestures based on how frequently the gestures were observed, how logically the gestures related to the computational functions, the decreased possibility that the gestures would be accidentally performed (false positives), and the ease of detection for the chosen technology (technical limitations).
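
To make the frequency criterion concrete, the following is a minimal Python sketch of how coded observations could be tallied and ranked per function; the gesture labels and counts here are illustrative placeholders, not the study data, and the remaining three criteria were applied by the researchers' judgment rather than in code.

```python
from collections import Counter

# Hypothetical observation log: (function, gesture label) pairs recorded
# while coding the session videos. Real labels came from manual video coding.
observations = [
    ("Select", "pointing"), ("Select", "pointing"), ("Select", "grab"),
    ("Pause", "halt"), ("Pause", "halt"), ("Pause", "halt with palm down"),
]

# Count how often each gesture label was performed for each function.
counts = {}
for function, gesture in observations:
    counts.setdefault(function, Counter())[gesture] += 1

# Rank candidates per function by frequency (criterion a); the other criteria
# (logical mapping, false positives, detectability) are then applied by hand
# to this ranked list.
for function, counter in counts.items():
    print(function, counter.most_common())
```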

The following are some commonly used gesture categories that we used to identify gestures for our study:

  • Deictic: Refers to pointing gestures for indicating objects, people, directions, etc. in the person’s physical space.
  • Symbolic (emblems): Refers to conventional forms and meanings of hand symbols, for example, a “thumbs up” indicates something good.
  • Iconic: Depicts some aspect of an object or action (these gestures are less conventional than symbolic gestures), for example, a rapid hand movement up and down may indicate the action of chopping ginger.
  • Metaphoric: Refers to a representation of abstract ideas or categories, for example, displaying an empty palm hand may indicate “presenting a problem.”

Participants

For this study, we recruited 24 pregnant women at the Bonmoja mini Primary Health Center (mPHC) in a remote region of Changsari (30 km from Guwahati in Assam, India). The mean age of the participants was 24.3 years. Participants were chosen with the help of local health workers called Accredited Social Health Activists (ASHAs), who are responsible for information outreach about government health schemes to pregnant women. Overall, participants had low literacy, including low technology literacy. Out of 24 participants, only two had completed a bachelor's degree, six had completed education up to tenth grade, 13 had completed some education prior to tenth grade, and three were completely illiterate. No participant had any prior experience with gesture-based interface systems. Their technology literacy was limited to television usage (mainly changing channels and volume) and mobile phones. Although they did not own mobile phones themselves, they used their husbands' mobile phones to talk to their relatives.

Procedure

The participants came to the Bonmoja mPHC to participate in this study. We used a one-to-one interview approach to conduct the study. Through verbal instructions and scenarios, the researchers presented the computational functions one by one in random order. We did not present a screen or any form of interface for participants to view.

The researchers described a hypothetical situation, asking the participants to “[i]magine 5 of your favorite Assamese songs listed in front of you. There is a set of tasks you need to perform to listen to them…” We asked the participants to perform each function in the form of a task. For example, we asked them, “Choose your most favorite song among the list through performing a gesture. How would you choose that?” Then, we asked them to perform at least two natural gestures for each given task. We requested that the participants not discuss the tasks with the other participants until the study was completed. Two teams, each consisting of a pair of ASHAs, moderated the sessions in order to elicit the most natural gestures for each function; this was done to leverage the existing familiarity and trust between the ASHAs and the participants. Both sessions were conducted at the same time in different rooms of the Bonmoja mPHC. The sessions were video recorded with the participants' permission. Each session was 15 to 20 minutes long. Figure 2 shows moderators conducting the activity with participants performing gestures for the given tasks in two different setups.

Figure 2. ASHAs moderating study sessions in two different rooms at the Bonmoja mPHC.

Collection of User Preferred Gestures

A total of 49 different gestures representing the seven computational functions were collected from the 24 participants. Table 1 presents the gestures gathered during the study. In Table 1, each gesture category is listed directly under its function with its total frequency in parentheses, and the indented entries below it are the variations performed within that category, each with its own frequency.

Table 1. Collection of User-Generated Gestures Gathered During the Study

Select
    Deictic (22 + 1*)
        Pointing with stretched arms (13)
        Pointing with arm close to the body (9 + 1*)
    Iconic (2)
        Grab (1)
        Showing number using fingers (e.g., index finger for no. 1) to select information (1)

Pause
    Symbolic (24 + 3*)
        Halt once with right arm stretched out (11 + 2*)
        Halt twice with right arm stretched out (9 + 1*)
        Halt with palm down (4)

Resume
    Iconic, Come Back I gesture (6 + 2*)
        Come Back I: palm facing upward and moving towards the face, performed once (5 + 2*)
        Come Back I: movement performed twice together (1)
    Iconic, Come Back II gesture (8 + 2*)
        Come Back II: palm facing down while moving towards the face (6 + 2*)
        Come Back II: arm stretched straight with palm facing down, moving towards the right shoulder (2)
    Deictic (2)
        Pointing a finger and making a tapping movement (1)
        Pointing a finger (1)
    Iconic, other (5)
        Pressing a TV remote control button (1)
        Horizontal arm movement, right to left (2)
        Turning a knob (2)

Next
    Deictic, vertical arm movement (6)
        Arm movement from up to down (5)
        Arm movement from up to down including pointing (1)
    Deictic, horizontal arm movement (10)
        Arm movement right to left (7)
        Arm movement left to right (3)
    Deictic, pointing twice (1)
    Iconic (7)
        Come Back I (4)
        Come Back II (3)

Previous
    Deictic, vertical arm movement (6)
        Arm movement from down to up with arms close to body (4)
        Arms stretched and palm movement from down to up (2)
    Deictic, horizontal arm movement (11)
        Arm movement left to right (7)
        Arm movement right to left (4)
    Deictic, other (2)
        Palm rotation (1)
        Moving arm front to back (1)
    Symbolic (5)
        Right arm with palm pushing out (4)
        Arm goes back a little and palm pushes out (1)
    Iconic, lift gesture from right to left (1)

Activate Menu
    Deictic, half circle palm movement from right to left (3)
    Symbolic (3 + 4*)
        Waving goodbye (2)
        Performing Namaste (4*)
        Making a halt movement (1)
    Iconic (2)
        Making the Go gesture (1)
        Turning hand as if it was a piece of paper, three times (1)
    Metaphoric (12)
        Bringing two arms closer (11)
        Gesturing a full circle (1)

Help
    Deictic, raising right arm (17)
        Waving once with one hand (8)
        Waving twice with one hand (2)
        Raising right arm and not waving (7)
    Deictic (1 + 1*)
        Two arms stretched out, palms down, both hands waving back (1*)
        Pointing (1)
    Iconic (4)
        Begging (2)
        Come Back I (1)
        Come Back I twice (1)
    Symbolic (2)
        Halt (1)
        Two arms wide moving up and down (1)

Note: The frequency in parentheses beside each gesture category is the total number of gestures performed in that category; an asterisk (*) marks a gesture performed as a participant's second choice.

Gesture Selection

As described in Table 1, participants performed several gestures to represent each of the given computational functions. To identify and select suitable gestures out of the pool of all the gestures performed, the following acted as determinants: (a) frequency of the gesture performed, (b) logical mapping of the gesture performed to the function, (c) decreased possibility that the gestures would be accidentally performed, and (d) ease of detection for the chosen technology.

Pointing was an obvious choice to represent the Select function based on the frequency of the gesture performed—23 out of 24 participants used this gesture. We considered two possibilities for how to perform the gesture: (a) pointing with a stretched arm or (b) pointing with an arm close to the body. We decided that participants could perform the Select function by pointing with or without an arm stretched out, as per their preference (Figure 3a).

Similar to the Select function, all participants (24/24) performed a halt gesture to represent the Pause function. There were some variations, such as gesturing to halt twice or halting with the palm facing down, but we found the gestures to be semantically representing the same function. Considering the frequency of the performed gestures, halt with or without the arm stretched out was chosen to represent the Pause function (Figure 3b).

The participants performed a variety of gestures to represent the Resume function, such as the iconic come back gesture, pointing, and turning a doorknob. When participants performed the come back gesture, they either moved their arm towards their face with the palm facing upwards (performed 11 times) or made the same movement with the palm facing down at first (performed eight times). We call these gestures Come Back I and Come Back II, respectively. Given that Pause and Resume are opposite, correlated functions, we found the come back gesture to be appropriate for the Resume function. Moreover, the come back gesture semantically represents calling the activity back to the state it was in before being temporarily stopped by the Pause function. Because the Come Back I gesture was performed more often by the participants and given its correlation with the Pause function, we chose the Come Back I gesture to represent the Resume function (Figure 3c).

For the Next function, participants preferred a variety of gestures, such as vertical arm movement (up to down) and horizontal arm movement (left to right and right to left). These gestures represented progressive navigation of information; however, we hypothesized that some of them could contradict users' mental models if the presented information did not move in the same direction as the gesture (e.g., a vertical arm movement used to move an image from right to left). Because the system presents a right-to-left transition of information, we selected the right-to-left horizontal arm movement to represent the Next function (Figure 3d).

For the Previous function, participants performed a variety of gestures, such as moving the arm horizontally or vertically in different directions, or halt and push gestures. Considering the opposite nature of the Next and Previous functions and the frequency of the performed gestures, we chose the left-to-right horizontal arm movement to represent the Previous function (Figure 3e). It is also important that the chosen gestures do not occur unwittingly; to prevent accidental activation of the Next and Previous functions, we restricted these gestures to the area above the stomach (Figures 3d and 3e).
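
A minimal sketch of how such a constraint might be enforced is shown below, assuming a skeleton-tracking sensor that reports hand and hip positions; the function names, coordinate convention, and thresholds are illustrative assumptions rather than the deployed implementation.

```python
# Hypothetical joint positions in metres, with y increasing upward and
# x increasing to the viewer's right (assumed coordinate convention).

def is_next_swipe(hand_track, hip_y, min_travel=0.30):
    """hand_track: list of (x, y) hand positions sampled over the gesture window.

    A right-to-left horizontal movement is accepted as Next only if every
    sample stays above the hips/stomach, which filters out incidental arm
    movements made while carrying belongings at waist level.
    """
    if any(y <= hip_y for _, y in hand_track):
        return False                      # gesture dropped below the waist
    travel = hand_track[0][0] - hand_track[-1][0]
    return travel >= min_travel           # sufficient right-to-left travel

def is_previous_swipe(hand_track, hip_y, min_travel=0.30):
    # Previous is the mirror image: left-to-right travel above the waist.
    if any(y <= hip_y for _, y in hand_track):
        return False
    travel = hand_track[-1][0] - hand_track[0][0]
    return travel >= min_travel
```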

We chose the metaphoric gesture of bringing two arms together for the Activate Menu function due to the participants' high preference for this gesture. Moreover, this gesture is a metaphoric representation of bringing things together, which is the purpose of the Activate Menu, that is, to bring all information to one place. We judged that other gestures participants performed, such as turning a hand three times, would have been tiring when performed multiple times.

Out of 24 participants, 17 performed variations of raising their right arm to represent the Help function. These gestures represented a call for help, such as raising one's hand to get someone's attention. We decided that iconic gestures such as begging might not be culturally acceptable for our target user group. We chose a raised arm with the palm facing out to represent the Help function. To eliminate false positives from routine arm movements, the right arm must be raised above the head for two seconds.
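
The two-second hold could be implemented as a simple per-frame check; the sketch below assumes wrist and head heights from a motion-sensing camera at roughly 30 frames per second, and all names and thresholds are illustrative assumptions rather than the deployed implementation.

```python
def detect_help(frames, fps=30, hold_seconds=2.0):
    """frames: iterable of (right_wrist_y, head_y) samples, y increasing upward.

    The Help gesture fires only after the right wrist has stayed above the
    head continuously for hold_seconds, which suppresses false positives
    from brief, routine arm movements.
    """
    needed = int(hold_seconds * fps)
    run = 0
    for wrist_y, head_y in frames:
        run = run + 1 if wrist_y > head_y else 0
        if run >= needed:
            return True
    return False
```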

Results

We finalized the use of seven gestures to represent the seven functions as explained in Table 2 and shown in Figure 3.

Table 2. List of Finalized Gestures

Function          Extracted gesture
Select            Point
Pause             Push hand out from shoulder (halt)
Resume            Move palm towards shoulder
Next              Swipe hand horizontally (above waist) from right to left
Previous          Swipe hand horizontally (above waist) from left to right
Activate Menu     Bring two hands (above waist) together
Help              Raise right hand above head for 2 seconds
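
As an illustration of how the finalized set could be wired into the Chetna program, the following sketch maps recognized gesture labels (assumed to come from a separate gesture-recognition front end) to the seven functions; the labels and the dispatch helper are hypothetical and not part of the study.

```python
from enum import Enum

class Function(Enum):
    # Each function is tagged with the finalized gesture from Table 2.
    SELECT = "point"
    PAUSE = "halt"
    RESUME = "palm towards shoulder"
    NEXT = "swipe right-to-left above waist"
    PREVIOUS = "swipe left-to-right above waist"
    ACTIVATE_MENU = "bring hands together above waist"
    HELP = "raise right hand above head for 2 s"

# Hypothetical recognizer output -> function lookup (Table 2 in code form).
GESTURE_TO_FUNCTION = {f.value: f for f in Function}

def dispatch(gesture_label):
    """Return the computational function for a recognized gesture label,
    or None if the label is not part of the finalized set."""
    return GESTURE_TO_FUNCTION.get(gesture_label)
```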

Figure 3. Visual representation of functions: (a) select, (b) pause, (c) resume, (d) next, (e) previous, (f) activate menu, (g) help.

Discussion

The following sections discuss the different parameters that influenced the participants’ gesture performance and the researchers’ final selection of gestures.

Familiar and Non-Familiar Functions

All participants easily understood familiar functions such as Select, Pause, Next, Previous, and Help. Because these functions were associated with day-to-day interactions with TV and mobile phone technology, the moderators did not need to give detailed descriptions, and participants performed quick, non-hesitant gestures. For the Activate Menu function, however, participants' lack of computer experience made the function difficult and confusing to understand. Because they did not understand the function, participants initially performed random gestures, which is why so many varied gestures (e.g., bye, Namaste, halt, and paper-turning gestures) were associated with this function.

Vocabulary Driven Mental Models

The moderators' vocabulary used to describe each function influenced the participants' gestures. For instance, one moderator explained Resume as, “If you want to call back this paused song, how will you do it?” The participant then performed the call back gesture. Similarly, a moderator explained the Activate Menu as, “If you want to bring all information together, how will you do it?” This resulted in participants bringing both hands together. In another example, the word for wait was used to explain the Pause function, which resulted in participants using a halt gesture.

Reversible Gestures for Opposite Correlational Functions

Pause–Resume and Next–Previous are pairs of opposite, correlated computational functions. Participants mostly employed reversible gestures for these functions, even though the moderators did not present them together. This is similar to the findings reported by Wobbrock et al. (2009); however, in our study the gestures are 3D body gestures rather than gestures on a 2D surface computing platform as in the Wobbrock study.

Limitations and Challenges

We observed that some, though not all, moderators influenced participants' gesture performance for some functions. For instance, when one participant showed a number 1 with her fingers to represent the Select function, the moderators instructed her to perform a pointing action instead. Although moderators were immediately asked to refrain from influencing the participants' thinking, such situations may have biased the results.

Participants carried their belongings (e.g., plastic bags, purses, medical cards) with them to the study and often performed gestures while holding them. This reduced the expressiveness of their gestures by limiting their range of motion. Although the final list of gestures accommodates a higher range of motion for the selected gestures, the acceptability of these gestures for system usage has yet to be observed and investigated.

Conclusion and Future Work

This paper presented a study of user-generated gestures through a participatory method conducted with 24 pregnant women in rural Assam, India. We observed 49 gestures representing seven computational functions: Select, Pause, Resume, Next, Previous, Activate Menu, and Help. The selected gestures will be used in a health information program, Chetna, to educate pregnant women in the Assam area of India. The researchers selected the seven gestures based on the frequency with which participants performed them, their logical relation to the functions, the decreased likelihood that the gestures would be accidentally performed, and the ease of detection for the selected technology. As mentioned in the Discussion section, this study also presents factors, such as participants' familiarity with computational functions, the influence of the moderators' vocabulary used to describe tasks, and reversible functions, that influenced participants' gesture performance. Future work aims at validating these results by performing studies that gauge the gestures' adoptability, acceptance, learnability, and memorability.

Tips for Usability Practitioners

The following are a few usability tips learned from this study:

  • Users in resource-scarce regions have less exposure to and understanding of computational terminology. Therefore, explaining the purpose and intent of each function can be advantageous when investigating which gestures are most natural to them.
  • Asking participants to imagine how a platform is used can go a long way in evoking the most natural gestures. Participants' perceptions of a technology platform are helpful when the system is implemented on a platform the user is already familiar with. However, if the technology platform differs from the platforms the user knows, this may lead to complex gestures that are not suitable for system interaction.
  • In studies of this nature, where participants are low-literate, belong to resource-scarce regions, and have little exposure to advanced technologies and research studies, moderators should be chosen based on familiarity and social acceptance. This is even more critical for healthcare-related studies, where the information is often sensitive and personal. The moderators, in turn, need to be trained to interact with the participants objectively, without influencing their responses and without making any offensive comments. Mock sessions should be conducted with moderators to ensure that their communication with participants is unbiased and acceptable to participants.
  • Practitioners need to take into account the use of their system in real-world settings, and the study should be conducted in the actual context of use. In our case, participants performed gestures while holding additional items such as purses and medical cards. The system and its gestures will therefore be defined with this learning in mind, which was possible only because the study was conducted in a real-world setting.

References

Akers, D. (2006). Wizard of Oz for participatory design: Inventing a gestural interface for 3D selection of neural pathway estimates. In Extended Abstracts Proceedings of the 2006 Conference on Human Factors in Computing Systems, CHI. Montréal, Québec, Canada.

Connell, S., Kuo, P. Y., Liu, L., & Piper, A. M. (2013). A Wizard-of-Oz elicitation study examining child-defined gestures with a whole-body interface. In Proceedings of the 12th International Conference on Interaction Design and Children (pp. 277–280). New York, NY: ACM.

Henze, N., Löcken, A., Boll, S., Hesselmann, T., & Pielot, M. (2010). Free-hand gestures for music playback: Deriving gestures with a user-centred process. In Proceedings of the 9th International Conference on Mobile and Ubiquitous Multimedia. New York, NY: ACM.

Kray, C., Nesbitt, D., Dawson, J., & Rohs, M. (2010). User-defined gestures for connecting mobile phones, public displays, and tabletops. Proceedings of the 12th international conference on Human computer interaction with mobile devices and services (pp. 239–248). New York, NY: ACM.

Kurdyukova, E., Redlin, M., & André, E. (2012). Studying user-defined iPad gestures for interaction in multi-display environment. In Proceedings of the 2012 ACM international conference on Intelligent User Interfaces 2012 (pp. 93–96). New York, NY: ACM.

Mauney, D., Howarth, J., Wirtanen, A., & Capra, M. (2010). Cultural similarities and differences in user-defined gestures for touchscreen user interfaces. In CHI ’10 Extended Abstracts on Human Factors in Computing Systems (pp. 4015–4020). New York, NY: ACM.

Moeslund, T. B., Storring, M., & Granum, E. (2002). A natural interface to a virtual environment through computer vision-estimated pointing gestures. Gesture and Sign Language in Human-Computer Interaction (2298), 59–63

Morris, M., Wobbrock, J., & Wilson, A. (2010). Understanding users’ preferences for surface gestures. In Proceedings of Graphics Interface 2010 (pp.261–268). Toronto, Ontario, Canada: Canadian Information Processing Society.

Nacenta, M. A., Kamber, Y., Qiang, Y., & Kristensson, P. O. (2013). Memorability of pre-designed and user-defined gesture sets. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems 2013 (pp. 1099–1108). New York, NY: ACM.

Nielsen, M., Moeslund, T., Storring, M., & Granum, E. (2004). A procedure for developing intuitive and ergonomic gesture interfaces for HCI. In A. Camurri and G. Volpe (Eds.), Gesture-Based Communication in Human-Computer Interaction: 5th International Gesture Workshop, GW 2003, LNCS 2915 (pp. 409–420). Berlin: Springer.

Ruiz J., Li, Y., & Lank, E. (2011). User-defined motion gestures for mobile interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems 2011. (pp. 197–206). New York, NY: ACM

Schlomer, T., Poppinga, B., Henze, N., & Boll, S. (2008). Gesture recognition with a Wii controller. Proceedings of the 2nd international conference on Tangible and embedded interaction (pp. 11–14). New York, NY: ACM.

Sorathia, K., Amrit, M., Jain, M., George, D., Ranjan, A., & Kumar, J. (2013). Research findings, analysis and ICT interventions for empowerment of maternal health in Assam. Workshop on Intelligent User interfaces for Developing Regions (IUIDR), International Conference on Intelligent User Interfaces. Santa Monica, CA.

Troiano, G. M., Pedersen, E. W., & Hornbæk, K. (2014). User-defined gestures for elastic, deformable displays. In Proceedings of the 2014 International Working Conference on Advanced Visual Interfaces (pp. 1–8). New York, NY: ACM.

Wobbrock, J. O., Aung, H. H., Rothrock, B., & Myers B. A. (2005). Maximizing the guessability of symbolic input. In CHI ’05 Extended Abstracts on Human Factors in Computing Systems (CHI EA ’05; pp. 1869–1872). New York, NY: ACM.

Wobbrock, J. O., Morris, M. R., & Wilson, A. D. (2009). User-defined gestures for surface computing. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1083–1092) New York, NY: ACM.

World Health Organization (WHO). (2013). A presentation on maternal mortality levels (2010-12). Retrieved August 2014 from http://www.censusindia.gov.in/vital_statistics/SRS_Bulletins/MMR_2010-12-Report_Pres_19.12.2013.ppt