Information practices and user interfaces: Student use of an iOS application in special education
A framework connecting concepts from user interface design with those from information studies is applied in a study that integrated a location-aware mobile application into two special education classes at different schools; the application had two support modes (one general and one location specific). The five-month study revealed several information practices that emerged from student attempts to overcome barriers within the application and the curriculum. Students engaged in atypical and unintended practices when using the application. These practices appear to be consequences of the user interface and of the information processing challenges faced by students. Abandoning activities emerged as a strategic, unanticipated information practice associated with the application’s integration into lessons. From an information processing perspective, it is likely that students reinterpreted the location mode as housing application content rather than being location specific, and the information practice of taking photos emerged as an expressive use of the device when an instrumental task was absent. Based on these and other emergent practices, we recommend functionality that should be considered when developing or integrating these types of applications into special education settings, and we seek to expand the traditional definition of information practice by including human-computer interaction principles.
Keywords: Mobile learning · Human information processing · Mobile applications · User interfaces · Special education · Cognitive support tools · Usability
The mainstream accessibility of touch-input mobile devices has created a substantial market for mobile applications. In 2013, there were over two million applications available for iOS or Android devices (Ingraham 2013; AppBrain 2013). Moreover, the use of mobile applications in school settings is gaining popularity among teachers and administrators (Ally 2009) who are balancing keeping the curricula current with parent expectations and student interest in new media (Du et al. 2004; Windschitl and Sahl 2002). This is especially true in special education, where some of this interest may be due to the general popularity of iOS devices and a desire to use mainstream devices rather than specialized devices that have limited functionality and are costlier (Goggin and Newell 2003). It may also be attributable to the general population’s use of such devices and the desire of special education students to fit in with those around them (Kim-Rupnow and Burgstahler 2004; Ludlow 2001). Interest in using technology in special education has also been growing, especially as a means to compensate for communication deficits (Ludlow 2001; Tentori and Hayes 2010; Turnbull 1995). As these studies have shown, focusing on the experiences of special education students can highlight device-user interaction issues that are hard to identify when observing typically developing students: typically developing users may accommodate challenging human-computer interfaces, whereas those with intellectual, physical, or cognitive impairments exhibit alternative responses that flag interaction obstacles (Campigotto et al. 2013).
While studies of the use of touch-input mobile devices in special education classrooms are only just emerging, the focus has largely been on how adaptive interfaces - that is, technology interfaces that can offer alternative input/output or otherwise learn from users with atypical learning abilities - can improve learning outcomes for students in special education classes. The United Nations Educational, Scientific, and Cultural Organization (UNESCO) published a highly cited report in 2000, ‘Information and Communication Technology in Special Education’ (Carey et al. 2000). This report represented the first comprehensive survey of education technology applications used in special education. It highlighted the role that graphical user interfaces can play in improving the communication and accessibility of text or writing for users with learning disabilities. Following this report, several research articles and collections have described the development of applications for special education (Hirotomi 2007; Fernández-López et al. 2013; Starcic et al. 2013; Mateu et al. 2014; Miesenberger et al. 2014), mainly from computer science, educational research, human-computer interaction, or learning science perspectives. In almost all of the existing literature there is an objective to create applications that will improve accessibility and aid the learning outcomes for students in special education.
We are interested in expanding the research trajectory of touch-input mobile device use in special education from an accessibility lens to include a more critical examination of the experiences of students as these technologies are deployed – we are interested in the user experiences and practices that emerge following the introduction of these technologies. In particular, we bring a few novel aspects to the discussion: a) an information practice and user interface conceptual framework; b) a focus on location-aware applications since they employ spatial information, such as maps, and can provide orientation data that assists users across contexts; and c) a detailed analysis of the challenges that arise, with recommendations for future development and deployment of these technologies in special education.
This paper presents findings from a study of students who used the application MyVoice1 on iOS devices while attending special education classes in schools in Toronto, Canada. While we use this particular application for our analysis, we believe that the findings and recommendations from the study are applicable to many other applications currently being used in classrooms worldwide. In the study we asked two research questions: 1) how did students use the location-aware support application and 2) in what ways did specific aspects of the user interface influence student information practices? For question 1, we were interested in student choices, indicated preferences, and emergent or unanticipated application uses. For question 2, we explored the relationship between the application’s user interface and its consequences for user actions; specific actions that could be attributed to the application design were identified and the interface’s role in subsequent actions was considered.
1.1 Conceptual framework
Information practice is a concept in information studies that suggests user actions are not driven solely by internal cognitive processes (Meyers et al. 2009; Wilson 2000), but that user actions should be analysed within a broader context that includes factors external to the user’s immediate perception (Caidi and Allard 2005; McEwen and Scheaffer 2013). Therefore, when someone is observed using information, we look at the consequences for user action that derive from other persons or objects in the immediate environment, even if the user cannot articulate the effect of external entities on his or her actions. Savolainen (2009) applied this lens to user actions when positing that observed actions can be understood as processes that users undertake when interacting with information.
Human-computer interaction (HCI) explores the attempted communication between two powerful information processors, human and computer, over a narrow-bandwidth, highly constrained interface (Tufte 1989). HCI thus considers the user interface a major contributor to system success, and a fundamental goal of HCI is to increase the useful bandwidth across that interface (Jacob 1994). However, this does not always occur, and user access to information via the interface is often diminished. Users often cope with interface barriers by consciously or unconsciously employing strategies to counteract information loss. Through the process of appropriation, users adopt and adapt technologies to fit within their existing practices and transform those practices to fit the technology (Dourish 2003). This allows atypical user actions to be viewed as information practices composed of strategies invoked to overcome user-interface obstacles while managing information. One example is the linguistic form of textese that emerged when users exchanged information through the character-limited interface of SMS messages on mobile phones (Mose 2013). This example suggests a conceptual framework that relates information (i.e., the content resulting from communication between the user and system), user interfaces (i.e., media), and user information practices (i.e., strategies). We apply this framework to observations of special education students using a touch-input mobile application. In so doing, we seek to expand the traditional definition of information practice by including HCI principles.
This five-month study was conducted in 2011 using iOS devices in two Toronto area public schools with students in grades 7 through 12. Data were gathered from demographic information profiles, interviews, and application usage logs; this constituted a mixed-methods approach described in more detail below. Working with the support of the schools’ principals and ethics boards, we engaged with the teachers from one classroom in each school. This allowed for the study of a total of 23 students aged 12 to 21. Both classrooms were identified as Special Education classes by the Ontario Ministry of Education, and students were identified as having intellectual and/or cognitive exceptionalities that require additional support and differentiation within the curriculum to support their success. Both classrooms fall under the jurisdiction of the Toronto District School Board, and there are several types of special education programs running within the board. For our study we investigated classrooms running Intensive Support Programs (ISPs), where there is a lead teacher, educational assistant(s), and middle-school students in a 1:8 ratio of teachers/assistants to students. This low ratio is typical of ISPs in the board and has had positive outcomes for classroom management (TDSB 2013). iOS devices were introduced to the students in both schools as classroom tools for the first time during this study.
2.1 Demographic information profiles
Teachers completed anonymized demographic profiles for participating students. This included each student’s ethnicity, sex, and official diagnosis. Information about prior student experience with other support tools and iOS devices was collected, and teachers conducted a brief assessment of the student’s social and communication skills. This profile of the student’s communication needs and abilities included information on student speech and language difficulties as well as any behavioral and attention-based challenges that students may have had. The information corresponds to that found in students’ individualized education plans (IEP) – which describes the accommodations and services needed by each student.2 An additional checklist that had been informed by characteristics of students with learning difficulties was completed by teachers and included students’ social abilities, verbal skills, language development, and reading comprehension.
2.2 Interviews
Individual, semi-structured interviews were conducted with the teachers at both schools. Each teacher was interviewed twice: a third of the way into the study period and at the end of the study. The interview script contained 15 questions that included questions toward the development of a baseline of the class (e.g., questions on the level of social interaction typically observed in the class; the types of augmentative communication devices and strategies employed by the teacher and assistants; their general expectations of using a mobile device and application with the class), followed by several open-ended questions to collect data on the teacher’s observations after the introduction of the iOS application (e.g., questions on observed differences in social interaction; student engagement and/or motivation with the application’s vocabulary based activities; and questions on planned and unplanned activities involving the iOS application, and any consequences for lesson planning). Teacher 2 independently chose to conduct a group interview with his students. He asked what they enjoyed and what they had difficulties with when using MyVoice: all student responses were anonymously recorded on a single device and transcribed.
2.3 Application usage logging
Every interaction that users had with the application was logged. This logging captured each user action within the system and included navigational actions (e.g., switching modes, viewing a category, or viewing words) as well as those intended to support cognition or communication (e.g., selecting a word to be spoken).
2.4 The MyVoice application
The MyVoice application has been iteratively evaluated and refined (Demmans Epp et al. 2011). Early development was user-centered and involved the continued use of the application by an adult with aphasia—a language disorder that can disrupt reading, writing, and speaking (Aphasia Institute 2003). This target user employed the application to support his communication as he went about his daily routines, and he provided the development team with regular feedback about the application’s design and functionality. The developers then adjusted the application and gave their tester the updated software. Feedback was also solicited from those who worked with populations who face communicative challenges and, where appropriate, their suggestions were incorporated.
Beyond the above user testing, discount evaluation methods were used, where specialists in human-computer interaction stepped through the application and identified potential usability challenges by applying the Gestalt principles (Mullet and Sano 1995) and Nielsen’s heuristics (1994). Even though tools like MyVoice are commonly used by individual special education students, the MyVoice interface had not been evaluated for its ease of use with this population prior to this study. Moreover, MyVoice had not been designed for use in a classroom setting. We, therefore, set out to explore its repurposing to support a special education population since they could benefit from its use in classroom settings and teachers are interested in trying new methods that might support student needs.
2.5 User interface
The MyVoice application is a dual-interface (web and mobile) tool that was designed to support communication. The web interface is intended to allow for the creation, editing, and organization of support materials whereas the mobile (iOS) interface is intended to enable the delivery of those materials. The use of both interfaces enables users to take advantage of the strengths of different form factors: the larger screens and input capabilities of laptops and desktops can ease content creation, while the portability of a small mobile device is helpful for content access. Moreover, this separation of functionality enables caregivers and helpers to assist with the setup and administration of the application even when the user is not present.
The web-based interface allows students or teachers to enter and organize support materials (i.e., words or phrases) into collections of vocabulary items. The entered data is then synchronized with the application that is installed on the student’s device. This ensures that the same collection of vocabulary items is available via both interfaces. The ability to create collections of support materials and distribute them from a remote location allows teachers to provide students with new vocabulary items even when they do not have physical access to student devices.
The iOS device interface allows users to interact with previously entered vocabulary by navigating through a hierarchical or location-based organization. Once the user has found a vocabulary entry, it can be selected by touching the device’s screen and the vocabulary entry will be verbalized using text to speech. The mobile application runs on iOS devices that enable user mobility and flexible support. However, the physical dimensions of iPhones (11.5 × 6.2 × 1.2 cm) and iPod Touches (11.1 × 5.9 × 0.7 cm) and their delicate nature can present challenges for some users.
Both interfaces provide the ability to associate an image with a vocabulary entry. However, the iOS device interface only allows the user to add an image to a previously existing word or phrase. The user does this by selecting the desired vocabulary item and then photographing something within his or her environment. In contrast, the web-based interface allows users to create and organize vocabulary; this allows them to add new words for any images to which they have access by browsing through image files that are on their computer and uploading those images so that they can be associated with a specified vocabulary entry. Users can identify locations and associate words with those locations. This location-aware functionality exploits the Global Positioning System (GPS) information that is provided by the devices that we used as well as many other modern mobile devices.
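The location-aware behavior described above can be approximated as a nearest-stored-location lookup against the device’s GPS fix. The following Python sketch is our own illustration of that idea, not MyVoice’s actual implementation; the names (`Location`, `nearest_location`) and the 100 m threshold are assumptions.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Location:
    name: str
    lat: float                      # latitude in degrees
    lon: float                      # longitude in degrees
    words: list = field(default_factory=list)  # vocabulary tied to this place

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_location(lat, lon, locations, threshold_m=100.0):
    """Return the stored location closest to the current GPS fix, or None
    if nothing lies within the threshold (the user would then have to
    create a new location)."""
    best = min(locations, key=lambda s: haversine_m(lat, lon, s.lat, s.lon),
               default=None)
    if best and haversine_m(lat, lon, best.lat, best.lon) <= threshold_m:
        return best
    return None
```

A lookup of this kind would run whenever the user opens the place view, so that the vocabulary shown matches where the student currently is.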
2.6 Information organization
Word view supports navigation through hierarchically organized sets of vocabulary; users can tap on categories to navigate deeper within the hierarchy or tap on individual items to have them verbalized. Place view supports navigation through a location-based organization of vocabulary. It is intended to provide fast access to the vocabulary that is relevant to a particular location and is the key functionality that distinguishes MyVoice from other commercially available communication support tools. Once a location has been selected, all of the words that have been associated with that location are visible; there is no hierarchy. The verbalization of words and phrases is performed using the same actions as those required in word view; the user taps on the item that he or she wishes to have verbalized.
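The contrast between the two views amounts to two different organizations of the same vocabulary: a tree of categories versus a flat per-location list. The structures below are an illustrative sketch only; MyVoice’s internal data model is not published, and the sample categories and helper function are our invention.

```python
# Word view: a hierarchy of categories; leaves are lists of speakable items.
word_view = {
    "Math": {
        "Mean": ["average"],
        "Median": ["middle number"],
        "Mode": ["most frequent number"],
    },
}

# Place view: a flat mapping from a location name to its items; no hierarchy.
place_view = {
    "library": ["fiction", "biography", "children's literature"],
}

def items_in_category(tree, path):
    """Walk the word-view hierarchy along `path` and return whatever sits
    at that node: a subtree of categories or a list of vocabulary items."""
    node = tree
    for category in path:
        node = node[category]
    return node
```

For example, `items_in_category(word_view, ["Math", "Median"])` requires descending two levels, whereas `place_view["library"]` returns every item for that location in a single step, which is what makes place view faster when its contents match the user’s situation.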
3 Research sites and participants
A two-period, non-credit course for girls aged 17 to 21 was selected from School 1. This class had 1 teacher, 2 educational assistants (EAs), and 15 students. Ten of these students were included in the study and had difficulty understanding and interpreting commonly used language. The curriculum for this course, entitled “The World of Work”, has the goal of exposing students to different aspects of work through job shadowing, co-operative placements (co-op),3 and guest speakers. Each student had an IEP that identified her as having abilities that deviated from typical development. This included diagnoses of Down’s syndrome, various learning disabilities (LD), Autism Spectrum Disorder (ASD), and Mild Intellectual Disability (MID). Some stayed in class all semester while others participated in co-op placements. A combined seventh and eighth grade class that had male and female students was selected from School 2. This was an intensive support class that targets students who have learning disabilities and who demonstrate a significant discrepancy between average or better intellectual ability and lower academic achievement. The class had 13 students, 1 teacher, and 1 EA; the EA was present during the classes for various subjects including geography, writing, reading, and math. The teacher described his students as needing additional support communicating, specifically with respect to their ability to produce language (i.e., write, speak, or articulate their message). School 2 was recruited to compare experiences, gain the perspective of a new teacher, and evaluate the application’s integration into a class by enabling a new feature that allowed the teacher to create and share collections of vocabulary items.
4 Study implementation
The devices and application were introduced to both research sites using a similar approach. After receiving research ethics board approval for the study, the research team met with school principals and teachers to describe the study at a high-level. Following this initial consultation process, consent was obtained from three parties: principals, teachers, and students. Students opted-in to the study by completing and returning an informed consent form that had also been approved by their parents. Students were not excluded from learning opportunities if they did not consent; only their data was excluded. Voluntary consent was also requested from any EA that was present during the study. All invited students consented to participate; however, one student, S1_H, was unable to fully participate because of fine-motor problems that made using the device difficult. This student remained with the class during exercises and was provided with teacher-designed alternative learning materials, rather than the iOS device and application.
Both classrooms were provided with iPhone and iPod Touch devices on which the MyVoice application had been installed. The schools supplied a computer in the participating classrooms so that participants could access the web interface. School 1 was provided with four iPhones and six iPod Touch devices. School 2 received six iPhones and seven iPod Touch devices. A random subset of students at both schools were assigned the iPhones, which had vibration output (haptic feedback) enabled. All other students received an iPod Touch, which did not support this feature. The random distribution of the application on haptic-feedback-enabled devices was meant to help answer research questions relating to how the use of vibration affects patterns of application use. We were, specifically, interested in exploring whether the availability of haptic feedback helped increase the amount of information that these students received and understood. Even though we randomly assigned participants to a haptic feedback condition, we did allow students who had the haptic-feedback feature enabled to turn it on or off via a configuration tool since it is known that this type of information can reinforce behaviors, which has the potential to be disruptive. We wanted to allow the teachers to change a student’s feedback type if the teacher felt that it may have been harming learning. We, therefore, tracked the status of this feedback feature.
Each student had an individual account within the application and was identified by his/her username. Usernames were pre-created and anonymous to protect student privacy. Measures were taken to minimize distractions for students and, at the request of the participating teachers, to restrict access to internet browsers, games, music, and other applications unnecessary for the study. The intent was to align the use of the devices with the existing curriculum in so far as it was possible; teachers were given sample lesson plans and training on both the device and application but were afforded the freedom to use the device as frequently as they felt was appropriate and in any manner that met their needs. Furthermore, teachers were encouraged to develop curriculum-based lesson plans that integrated application use. Additional researcher support was given to School 1 since, at that time, adding vocabulary to each student’s account was a tedious task beyond what could be added to a teacher’s daily workload. The ability to batch upload vocabulary to all devices at once was added before School 2’s participation began. This reduced teacher workload to a reasonable level.
Following the completion of each school’s participation, the devices were collected and all identifying information was deleted. Students were given a copy of any photographs that they had taken before deleting our records of those images. Student and teacher behaviors were identified within the data with a focus on anything that contributed to or hindered the integration of the mobile application or device into a special needs classroom. Considering the application’s focus of supporting communication, particular attention was paid to information, social, and communication practices as well as teacher and student motivation and behaviors when using the technology.
4.1 Classroom integration of MyVoice
To better situate the interpretation of the results, we first detail how each teacher integrated MyVoice into his or her lessons by describing a lesson. School 1 typically used the devices to document the fieldtrip activities of students. In one lesson, students visited a local park and took pictures of each other, man-made structures, flora, and fauna. This type of activity was not intended to support particular learning goals, but the teacher found that the students enjoyed taking photos, which helped keep them engaged. In another lesson, students visited the library, where they used MyVoice to categorize different information about the librarian, such as his name and phone number. Teacher 1 also used the hierarchical organization of vocabulary to support student understanding of the different types of literature available: fiction, children’s literature, and biographies were among the selected categories. Students were expected to take pictures of books best suited to each category by navigating into that category, finding one of the listed books in the library, and photographing it.
The teacher at School 2 more actively integrated MyVoice into his courses, where lessons were confined to the school. Aside from allowing students to explore the device, take pictures, and practice using words along with the device, the teacher planned a math lesson where MyVoice was meant to support student learning about measures of central tendency. He created categories for ‘Mean’, ‘Median’ and ‘Mode’ and added pictures to convey the meaning of the associated concept since he felt this would help students remember the new terms and trigger word meanings when students did not have access to the device. Once students chose the category, they were provided with the definition. The application would read the definition to aid students whose reading comprehension was lacking. Students were also allowed to use MyVoice to help them complete later classroom activities and worksheets.
5 Results and discussion
5.1 Data coding
The application usage log data were a central data source in this study. While the user demographic profiles and interviews provided rich context and allowed before and after comparisons to take place, the application usage log data were a detailed account of each participant’s activities and were independent of the human bias that often confounds mobile media research reliant on self-report data (Boase and Ling 2013). The log data were collected through the MyVoice application. Each action that could be performed was assigned a code, which was logged whenever the user performed that action. This was not done using the more traditional approach to user activity logging that tracks every single click, including the sequence in which keys were pressed. Rather, logging was done at the semantic level whenever a user performed an action or a step in a more complex action. For example, a user must follow a series of steps when creating a location: indicating that they are interested in creating a location, searching for that location on a map, identifying the correct location on the map, giving that location a meaningful name, and saving the location. The application, therefore, logged each of these interactions. For simpler interactions, such as having a word ‘read aloud’, only one event would be logged. These logs were then transferred to the server, where they were checked for inconsistencies before being analyzed.
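The semantic-level logging approach can be sketched as follows. The event codes and record fields below are our own invention for illustration; the actual codes used by MyVoice were not published.

```python
import time

# Each loggable user action has a short code. Multi-step actions, such as
# location creation, emit one event per step rather than raw taps.
EVENTS = {
    "LOC_CREATE_START": "user initiated location creation",
    "LOC_SEARCH": "user searched for a location on the map",
    "LOC_SELECT": "user identified the correct location on the map",
    "LOC_NAME": "user gave the location a meaningful name",
    "LOC_SAVE": "user saved the location",
    "WORD_SPEAK": "user had a vocabulary item read aloud",
}

def log_event(log, user, code, detail=None):
    """Append one semantic event; individual key presses are never recorded."""
    assert code in EVENTS, f"unknown event code: {code}"
    log.append({"user": user, "code": code, "detail": detail,
                "time": time.time()})

log = []
# A simple interaction produces a single event...
log_event(log, "S2_K", "WORD_SPEAK", detail="median")
# ...while a complex interaction produces one event per step.
for step in ("LOC_CREATE_START", "LOC_SEARCH", "LOC_SELECT",
             "LOC_NAME", "LOC_SAVE"):
    log_event(log, "S2_K", step)
```

Logging at this level keeps each record meaningful on its own, which is what made the later per-action counts and correlations straightforward to compute.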
Interviews were transcribed and codes were manually developed from the identification of patterns that corresponded to our research questions. For example, teachers’ accounts of students’ preferred activities on the devices led us to code for photo taking and saving and resulted in the emergence of the information practice theme ‘image capture’ relevant to the second research question. Again, from the interviews, we developed codes describing the manner in which students navigated the application and this coding resulted in the identification of the information practice themes ‘mode switching’ and ‘searching’ relevant for the second research question.
The demographic information profiles for the participants were transferred to SPSS and were available for descriptive statistics used in the reporting of the results.
The results are organized by research question, with the analysis of the application’s use being subdivided into general application use, the logging of learning activities, and the influence that haptic feedback had on student actions. After presenting our analysis of the data with respect to application use (i.e., research question 1), we explore the implications that the user interface had for student information practices (i.e., research question 2).
5.2 Research question 1 – general application use
The following are results for the first research question: how did students use the location-aware communication application? Most results are presented in aggregate form. However, there are places where highlighting individual differences is important. In these cases, a code that starts with the school (i.e., S1 or S2) and ends with a letter that identifies the student is used (e.g., the student who had device D at School 1 would be identified as S1_D).
[Table 1: Correlated user actions within the application]
[Table: Student actions within the application and their focus]
[Figure: Log file excerpt of student usage of place view, showing 6 session starts and 6 session ends]
[Figure: The definitions of mean, median, and mode that were entered into MyVoice for the math lesson]
[Figure: Log file excerpt of view navigation behaviors that were typical of the group]
The correlation (see Table 1) between searching for a location and content navigation actions may also indicate that students attempted to use the place view to search through vocabulary or that they failed to find a desired vocabulary item and decided to change to the place view in order to try and find it. In this case, the application would initiate a location search if the user had not previously specified his or her location. However, the number of failed or abandoned attempts at creating a location (263) and the correlation between searching for a location and creating one (0.732) may indicate that the user interface in the place view presented students with some challenges. It may also indicate that students liked to look at the map but did not necessarily want to save their location or view the learning materials that were associated with a location, which is partially supported by a School 2 student comment: “It is good to find the GPS location and for me it was good showing how I can use the device in general educational way”. Furthermore, it appears that participants stayed within a location (see Table 3) once they had selected one. Subsequent mode changes were between the limited categories that had been associated with a location and the vocabulary hierarchy present in the word view.
[Table: Location-creation descriptive statistics, including failed location creations]
Locations as categories
Five School 2 students (S2_K, S2_P, S2_Q, S2_R, and S2_S) repurposed the locations that they had created as categories by flattening the hierarchy that was present in portions of the word view and assigning it to a location. For example, students might take all of the words from each subcategory (i.e., mean, median, and mode) and associate them with one location called school. In many cases, students flattened multiple hierarchies from the word view and assigned them to the same location.
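The repurposing these students performed amounts to flattening a category subtree from the word view and attaching every leaf item to a single location. The sketch below is our own illustration of that transformation; the structures and names are assumptions.

```python
def flatten(tree):
    """Collect every vocabulary item from a word-view subtree,
    discarding the category structure along the way."""
    items = []
    for value in tree.values():
        if isinstance(value, dict):   # a nested category: recurse into it
            items.extend(flatten(value))
        else:                         # a list of vocabulary items
            items.extend(value)
    return items

# Hierarchical word-view categories, as the teacher set them up.
word_view = {
    "Mean": ["average"],
    "Median": ["middle number"],
    "Mode": ["most frequent number"],
}

# What students such as S2_K effectively did: one location ("school")
# holds all items from several categories, with no hierarchy left.
place_view = {"school": flatten(word_view)}
```

The result is a place-view entry that behaves like a single large category, which is consistent with reading the locations as categories rather than as physical places.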
5.3 Research question 1 – application use: Feedback configuration
Approximately half of the students in each school were provided with a version of the application that incorporated haptic feedback. An analysis of the students’ actions showed no difference in application usage between students who received haptic feedback and those who did not.
Student changes to haptic feedback status for School 2
5.4 Research question 1 – application use: Media creation
Participants used two approaches to taking photos during their course activities: the structured creation of learning materials and the unplanned recording of learning activities. The first tended to occur when students were engaged in prescriptive, teacher-led activities where teachers had prepared vocabulary for an activity in which students were required to photograph examples of the vocabulary. The second occurred during more exploratory classroom activities. In many cases, the unplanned recording of learning activities occurred alongside activities where students were expected to take pictures of vocabulary.
Student photo taking practices within the application
Photos taken per vocabulary entry
Vocabulary entries for which students took photos
Total photos taken
5.5 Research question 2 – interface consequences for user information practices
In this section we discuss the second research question: in what ways did specific aspects of the user interface influence the information practices of the student users? Viewed from the perspective of our conceptual framework, the findings that can be derived from the above results are grouped into five sections: i) location creation and use, ii) mode switching, iii) searching, iv) image capture, and v) haptic feedback preferences.
5.5.1 Location creation and use
In a typical user scenario for the ‘create a new location’ activity, the user must complete several steps: initiating the creation of a new location by selecting the command to do so, entering text for the name of the location, planning the types of images and text that would be useful in the new location, selecting or taking photos to be associated with it, and saving the new location. The correlation between initiating the creation of a new location and the termination of the session (r = 0.667) indicates that students did not complete the intermediate steps, suggesting that they either turned off their devices or stopped interacting with the application long enough to trigger the device’s sleep mode. This is an atypical interaction, since all of the steps involved in location creation should take less than a minute to complete. It indicates that the location creation process was difficult for this population to understand or perform to completion while attending to both classroom activities and the application.
Students may have abandoned the location creation activity because they experienced difficulty completing the steps. This could be the result of an interaction design where the number or sequence of steps was too complex for these users and interfered with their pursuit of the goal. Since task complexity moderates the effects of goals on performance (Locke and Latham 2002), it is probable that the user interface had a moderating effect on students’ ability or willingness to complete the activity. From an information processing perspective, we can also apply cognitive load theory (Sweller et al. 2011) and conjecture that the complexity of the actions and decisions required to complete this activity increased students’ extraneous load to the point where they employed the information practice of abandoning the activity to reduce their cognitive load. In this case, abandoning the activity is a strategic choice for some students and is an emergent information practice associated with this activity.
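The correlations reported in this section (e.g., r = 0.667 between initiating location creation and session termination) are standard Pearson coefficients computed over per-student action counts extracted from the logs. A minimal sketch with made-up counts (the data values here are illustrative, not the study’s actual figures):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-student counts derived from the interaction logs.
creation_initiations = [5, 2, 9, 1, 7]
session_terminations = [4, 3, 8, 1, 6]
print(round(pearson(creation_initiations, session_terminations), 3))
# → 0.979
```

A strongly positive coefficient like the one above indicates that sessions in which students started creating a location tended to be sessions that ended shortly afterward, which is the abandonment pattern discussed here.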
5.5.2 Mode switching
As previously described, the application has two modes through which the user can access information: word view and place view. While it is possible to switch between modes when using the application, a classroom user would more likely employ only one mode to accomplish the activities of a specific interaction. Students were expected to stay in place view if they were in a location with which they had associated vocabulary, and to remain in word view during classroom activities. However, some School 1 students chose to use place view to help them navigate to their field trip destination and then switched to word view to access the speech functionality that would enable them to perform general communication tasks and their prescribed learning activities. When in place view, students’ mode-switching activities also exceeded the interactions dedicated to creating locations or verbalizing support materials, suggesting that the students in this study were experiencing difficulty in managing or understanding the different modes.
There are visual cues, both textual and graphical, to facilitate user understanding of which mode they are in. However, the data suggests that these interface design elements were insufficient to remove ambiguity for students. As a result, switching modes did not serve the intended transitional role of allowing users to move between modes, but switching modes could be viewed as an activity in and of itself. Students may have employed mode switching as a strategy for navigating the content, an unanticipated search practice. From an information processing perspective, it is likely that students reinterpreted locations as categories housing content. We, therefore, surmise that students were building mental models that considered the modes as navigational categories instead of different functional views of the support materials.
5.5.3 Searching
Within information practice, searching is a core and analytically instructive activity that is observed in a variety of everyday environments and especially within digital spaces. In this application, users search by moving through hierarchies of nested categories with varying degrees of classification abstraction. For example, the category labeled ‘Math’ may have a lower-level sub-category labeled ‘Mean, Median, Mode’ with a subcategory labeled ‘The mean is usually known as’, and items such as ‘average’ or ‘arithmetic mean’ with a photo accompanying the text label at each level. In a typical search scenario users would start at a broader or more abstract level and then navigate or drill down into narrower related sub-categories. After arriving at the required or expected photo and completing the activity, users could then navigate or drill back up to broader levels and continue with further activities within the application. Students were expected to remain within a subset of categories for an extended period of time given the nature of classroom activities.
Before we discuss the challenges that students faced when navigating through the vocabulary hierarchies, it is worth noting that Teacher 2 even took a little while to become comfortable with the hierarchical organization of the vocabulary that came preloaded on the application: “It took me a while to figure that out, but after I was like ‘oh I like this’ because it’s like you go general and you go kind of more specific”.
Students performed asymmetrical searches: they drilled down into the hierarchy from broader categories to more discrete items but tended not to navigate back up through the hierarchy levels sequentially. After arriving at a lower level, students typically ended the application session or moved up several categories at a time. The lack of symmetry between navigating into and out of categories indicates that students used the home button to return to the top of the hierarchy or skipped levels when navigating out of a sub-category. In the arithmetic mean example, a student might jump directly up to the ‘Math’ category rather than first navigating up to the ‘Mean, Median, Mode’ category.
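The navigation asymmetry described above can be made concrete by comparing level changes in a logged session. The depth sequence below is a hypothetical illustration of one such session, not actual study data:

```python
# Hypothetical sequence of hierarchy depths visited in one session,
# reconstructed from navigation log events (0 = home / top level).
depths = [0, 1, 2, 3, 0, 1, 2, 3]  # student drills down, then jumps home

def count_transitions(depths):
    """Count single-step moves versus multi-level jumps up the hierarchy."""
    step_down = step_up = jump_up = 0
    for prev, cur in zip(depths, depths[1:]):
        if cur == prev + 1:
            step_down += 1       # drilling into a sub-category
        elif cur == prev - 1:
            step_up += 1         # navigating back up one level
        elif cur < prev:
            jump_up += 1         # e.g., home button from deep in the hierarchy
    return step_down, step_up, jump_up

print(count_transitions(depths))
# → (6, 0, 1): six drill-down steps, no single-level returns, one jump home
```

A session profile with many drill-down steps, few single-level returns, and occasional multi-level jumps matches the home-button and level-skipping behavior described in this section.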
It is also possible that students were experiencing difficulty recalling the path to return to the top of the hierarchy even though there were text-based indicators. This may indicate working memory issues: working memory is an information processing and cognitive psychology concept which holds that short-term memory capacity is limited for all persons (Baddeley and Hitch 1974). One of the components of working memory, the visuo-spatial sketchpad, is assumed to be responsible for manipulating visual information. According to Baddeley, the visuo-spatial sketchpad plays a key role in tracking the spatial relationships between objects as we move through an environment (McLeod 2008), and it may be taxed by MyVoice since the application relies on visual and spatial interactions. In the case of students navigating hierarchies, working memory is at the heart of the information processing tasks, and the lack of easily interpretable visual information (i.e., the image that represented the category) may have put pressure on the visuo-spatial sketchpad and prevented student recognition of the categories and hierarchy structure. The students’ choice to return to the home screen rather than navigate back up through the previous levels is an information practice strategy that was employed to relieve this pressure. Ending the session is another information practice that serves to reduce demands on working memory. However, further investigation into the relationship between user navigation practices and working memory would better illuminate the information practices of users and allow system designers to accommodate the variable capacities of users’ working memory.
5.5.4 Image capture
Taking photos was a popular student activity and an integral part of interacting with the application, but atypical uses were also observed. When students used the application while performing unstructured tasks, they took and saved many photos but did not associate them with learning materials. They used the device camera to take photos of classmates and themselves, and they captured images of the locations that they were in at the time.
When teachers provided an application-specific task that involved taking photos, fewer photos were taken and saved. This may indicate that students interpreted image capture differently in the different usage contexts. Additionally, students could not associate photos and words on the device itself; this function was performed online via a desktop computer to which students had limited access, which may have reduced the spontaneity of photo-taking activities. Students would see something they liked and take a photo, but then had to remember it later, when they had computer access, and type in a word to go along with the photo. This process may have involved too many steps to hold their interest, or it may simply have been too complicated. It may also suggest that the user interface of the application discouraged or restricted general image capture and channeled students into using the camera function within the application in a goal-directed manner. Moreover, students took and saved more photos outside the application when visiting a variety of locations.
This may also suggest that students had prior experience with image capture using mobile devices and that they already had mental models to facilitate this practice. In addition, it appears that taking photos was an information practice that emerged as an expressive use of the device when an instrumental task was absent.
5.5.5 Haptic feedback preferences
To determine the extent to which haptic feedback (which vibrates the device when objects are selected on the screen) influenced student practices, we gave 43.5 % of students (n = 10) devices with haptic feedback enabled and the remaining 56.5 % (n = 13) devices with it disabled. Students were not told whether or not they had a device with haptic feedback enabled. Two-thirds of the School 2 students who began the study with haptic feedback enabled (n = 4, see Table 6) figured out how to turn it off, so that 21 of the 23 students had haptic feedback turned off at the end of the study. It appears that students preferred to interact with the device and application without haptic feedback.
Applying the framework, we can conjecture that students may have found the haptic feedback distracting when other sensory information (visual, audio, and tactile) was already being provided. Haptic feedback may have taxed student information processing by increasing the extraneous load on the cognitive system to the point where students decided to remove this sensory information. By turning haptic feedback off to reduce cognitive load, students regained control over the user interface and relieved pressure on their information processing. This does not suggest that haptic feedback will overburden information processing in all application contexts, but it indicates that haptic feedback has the potential to provide as much sensory information to the user as the visual and auditory modes do, or more.
5.5.6 Opportunities revealed through system usage
An examination of the results related to student information practices offers insights into student experiences and engagement with the application and provides a basis for a discussion on the extent to which these findings may be related to our conceptual framework connecting information practices, user interfaces, and information processing.
Even though many students struggled with the location creation process, creating and using locations was still of benefit to some. Students liked being able to see the map because “you can find where you are”. Several students created locations and assigned vocabulary to those locations. In addition to liking the location view and being able to repurpose it, students did not comment on managing the modes or changing between them during the teacher-led interview, which lends weight to the interpretation that they did not see a difference between the views. Moreover, the repurposing of the location view to support vocabulary navigation shows how students were able to personalize the application in a way that met their information seeking needs and reduced the cognitive load inherent in the word view’s hierarchical organization. However, these types of features require more scaffolding if they are to be used effectively by all of the students in a class.
The challenges that students faced when navigating through vocabulary may be partly due to the lack of training they were given in the vocabulary hierarchy’s organization. Teacher 2 thought that allowing the students to enter and organize the vocabulary themselves may have helped with this problem, much as the student repurposing of locations helped their information seeking practices. It may be that increased agency supported student information practices rather than the reduction of multiple levels of hierarchical information into a single level. While both views at times acted as information gatekeepers for some students, their use by others shows that the interface design can enable, hinder, or challenge users depending on their cognitive abilities. Based on this, the design of interfaces for neuro-atypical users should allow for high levels of customization based on user abilities and preferences.
The log files also revealed behaviors that were inconsistent with those demonstrated by users of other support tools. In some cases, the observed behaviors may have appeared because the hardware on which the application was running supports behaviors that other support tools do not (e.g., taking pictures). Student behaviors indicate that the application did not support desired functions, such as the noticing and recording of learning activities, a function that can benefit students and is supported by other mobile tools (Kukulska-Hulme and Bull 2009). We, therefore, recommend that designers of educational systems take full advantage of the platform’s ability to log learning activities through any combination of media, including audio, visual (i.e., video or pictures), and textual methods. This can also further support student cognition and recall.
We would further recommend that learners be allowed to organize information in a structure that meets their information seeking needs. This may mean that users can organize support tools using a flat, graph, or hierarchical tree-like structure. It may even require that learners can access the information via different organizational structures based on their current preferences and the other demands that are being placed on their information systems.
Application feedback, the features that are available to students, and the extent to which students can record learning activities via the application should also be configurable since this would allow both the student and the teacher to ensure that the features which are available to a particular student are appropriate to his or her abilities and the activities being performed.
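A per-student configuration along these lines could be as simple as a settings record that teachers and students can edit. The field names below are illustrative assumptions, not the application’s actual settings model:

```python
from dataclasses import dataclass

@dataclass
class StudentSettings:
    """Illustrative per-student feature configuration for a mobile support tool."""
    haptic_feedback: bool = False        # vibration on selection (off by default)
    speech_rate: float = 1.0             # 1.0 = normal; lower values slow speech
    camera_enabled: bool = True          # allow in-app image capture
    allow_activity_log: bool = True      # record impromptu learning activities
    navigation_style: str = "hierarchy"  # or "flat", matching student preference

# A teacher could slow speech and keep haptics off for a particular student.
student = StudentSettings(haptic_feedback=False, speech_rate=0.75)
print(student.speech_rate)
# → 0.75
```

Exposing such settings to both teachers and students would support the student-initiated adjustments observed in this study (e.g., disabling haptic feedback) without requiring workarounds.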
6 Conclusion and recommendations
The integration of a mobile support tool into the existing curricula of two special education contexts revealed student information practices. An action research approach was employed where logs of user actions, student interviews, and teacher interviews were used to track and explain application usage. This showed the potential limitations of integrating mobile support tools into different types of special education programs. Students at both schools demonstrated agency by developing information practice strategies despite the information processing and user interface obstacles they faced. These strategies included the repurposing of locations as categories and mode switching to support information seeking. These strategies also included the logging of unplanned learning activities via other device functions since the application did not permit the impromptu recording of learning materials. These practices were identified and explained by applying a new conceptual framework that considers information use, information processing, and user interfaces in tandem.
Following from these practices and other observed behaviors, several improvements in the design and integration of these types of tools can be made. Among them are: the ability to easily find learning resources, record learning materials and activities, and control different feedback features. It is also important to facilitate the creation of new material within the support tool. The information seeking challenges that students faced and the practices that emerged as a result of these barriers indicate that students should be given multiple paths to finding the same information. This need is demonstrated by students’ flattening vocabulary hierarchies and saving them as a category that was associated with a location. By providing students with different paths, system designers enable student exploitation of the information seeking practices that best suit them.
The ability for users to control different aspects of the tool is essential to its continued use and classroom integration, as shown by student-initiated changes to haptic feedback settings and by reports of students verbalizing the same word multiple times because the rate of speech was too fast for some to understand (Campigotto et al. 2013). Students faced many challenges and employed various information practices to overcome the barriers they encountered while interacting with the tool. Their ability to develop and employ strategies to overcome barriers arising from the user interface design, instructional design, and information organization shows that these types of tools can be repurposed for supporting students in special education settings. Studying user interactions and information practices in combination, through the deployment of a support tool in special education settings, revealed how resourceful members of this population can be in overcoming the barriers that the integration of technology can present.
We used the original version – features have been and continue to be added.
See www.edu.gov.on.ca/eng/general/elemsec/speced/iep/iep.html for more information.
Co-operative (co-op) placements are experiential learning opportunities in the form of credit courses that allow secondary school students in the Toronto District School Board to ‘use what is learned in the classroom and apply it in the workplace. Co-op is an opportunity to “try out” a career and can help with making decisions about your future’. The objective is for students to ‘develop work habits, attitudes and job skills necessary for a successful transition to post-secondary education or the workplace’. See http://www.tdsb.on.ca/HighSchool/YourSchoolDay/Curriculum/ExperientialLearning.aspx
- Ally, M. (2009). Mobile Learning: Transforming the Delivery of Education and Training. Edmonton: AU Press.
- Aphasia Institute (2003). What Is Aphasia? Aphasia Institute. http://www.aphasia.ca/aboutaphasia.html.
- AppBrain (2013). Number of Available Android Applications. AppBrain. http://www.appbrain.com/stats/number-of-android-apps.
- Baddeley, A. D., & Hitch, G. (1974). Working Memory. In G. H. Bower (Ed.), The Psychology of Learning and Motivation: Advances in Research and Theory (Vol. 8, pp. 47–89). New York: Academic Press.
- Carey, K., Evreinov, G., Hammarstrom, K., & Raskind, M. (2000). Information and Communication Technology in Special Education. Analytical survey. UNESCO. http://www.iite.unesco.org/pics/publications/en/files/3214585.doc, last viewed March 2015.
- Demmans Epp, C., Campigotto, R., Alexander, L., & Baecker, R. (2011). MarcoPolo: Context-Sensitive Mobile Communication Support. In FICCDAT: RESNA/ICTA (pp. 4). Toronto, Canada. http://web.resna.org/conference/proceedings/2011/RESNA_ICTA/demmans%20epp-69532.pdf.
- Du, J., Sansing, W., & Yu, C. (2004). The Impact of Technology Use on Low-Income and Minority Students’ Academic Achievements: Educational Longitudinal Study of 2002.
- Goggin, G., & Newell, C. (2003). Digital Disability: The Social Construction of Disability in New Media. Rowman & Littlefield.
- Hirotomi, T. (2007). Multifaceted User Interface to Support People with Special Needs. In Proceedings of the Second IASTED International Conference on Human Computer Interaction (pp. 87–92). Anaheim: ACTA Press.
- Ingraham, N. (2013). Apple Announces 1 Million Apps in the App Store, More than 1 Billion Songs Played on iTunes Radio. The Verge. http://www.theverge.com/2013/10/22/4866302/apple-announces-1-million-apps-in-the-app-store.
- Jacob, R. J. K. (1994). New Human-Computer Interaction Techniques. In Human-Machine Communication for Educational Systems Design.
- Kukulska-Hulme, A., & Bull, S. (2009). Theory-Based Support for Mobile Language Learning: Noticing and Recording. International Journal of Interactive Mobile Technologies (iJIM), 3(2). doi:10.3991/ijim.v3i2.740.
- McLeod, S. A. (2008). Working Memory. Simply Psychology. http://www.simplypsychology.org/working%20memory.html.
- Miesenberger, K., Fels, D., Archambault, D., Penaz, P., & Zagler, W. (Eds.). (2014). Computers Helping People with Special Needs: 14th International Conference Proceedings. Paris: ICCHP.
- Mose, N. (2013). SMS Linguistic Creativity in Small Screen Technology. Research on Humanities and Social Sciences, 3(22), 114–121. http://www.iiste.org/Journals/index.php/RHSS/article/view/9564.
- Nielsen, J. (1994). Heuristic Evaluation. In J. Nielsen & R. L. Mack (Eds.), Usability Inspection Methods (pp. 25–62). New York: Wiley.
- TDSB (2013). Special Education Report: Toronto District School Board. Special Education and Sections Programs, Toronto, ON. http://www.tdsb.on.ca/Portals/0/Elementary/docs/SpecED/SpecED_EducationReport.pdf, last viewed March 2015.
- Tufte, E. R. (1989). Visual Design of the User Interface: Information Resolution, Interaction of Design Elements, Color for the User Interface, Typography and Icons, Design Quality. Armonk: IBM.
- Turnbull, A. P. (1995). Exceptional Lives: Special Education in Today’s Schools. Upper Saddle River: Merrill/Prentice Hall.
- Wilson, T. D. (2000). Human Information Behavior. Informing Science, 3(2), 49–56.