
1 Introduction

The act of dressing up often involves visual choices and references, for example, when a person attends a specific occasion, such as a costume party, or follows a dress code at work. Such situations can pose problems for visually impaired people (VIP), particularly when they need to select an outfit. Besides, the textile industry usually presents useful instructions on printed labels, which are not accessible to the VIP community; as a consequence, VIP miss features such as the color, texture, and other graphical elements of their apparel.

In recent years, the popularization of the Internet of Things (IoT) and the growth of electronic prototyping tools, such as Arduino boards and a wide range of sensors and actuators, have driven a growing presence of connected devices in our lives. The IoT gives us the possibility to enhance everyday activities and even our environment. With these capabilities, it is possible to improve ordinary objects, such as clothing, helping people to deal with disabilities such as blindness. Smart objects can provide information that would otherwise go unnoticed in our routine, expanding our senses. Hence, the use of these interactions in an assistive context can help people to overcome their impairments.

This research introduces a system to help the VIP community deal with the visual information of clothing. Our concern is to provide data that the visually impaired user (VIU) could not otherwise access by touching or feeling the apparel's fabric.

Our concept uses connectivity to provide data on the visual elements of each garment piece. In this way, VIP can recognize the parts of an outfit and also combine those pieces; for example, the system can confirm to the user that a selected pair of suit pants matches a given coat. To make this possible, we equipped clothing items with QR Codes and NFC tags.

A QR Code is a two-dimensional barcode, readable by anyone with a smartphone; it can either store data directly or redirect the user to a specific web address. NFC is a form of short-range wireless communication between devices: it works by bringing two devices close together, one emitting a short-range radio-frequency field and the other carrying an antenna capable of picking up this signal. NFC technology is becoming popular in everyday objects, thanks to its application in services such as electronic tickets and smartphone virtual wallets.

This study's primary objective is to explore the possibilities of QR Codes and NFC tags as interactive tools that give VIU the autonomy to dress with visual awareness of their look. This makes “How can connected objects help VIP gain autonomy in the act of dressing up?” our research question. Our contributions are the design process that enabled the creation of our concept, and our findings with the VIP community. We hope to encourage other studies, and the community, to consider users with visual impairments in the future.

In this paper, we discuss related work on approaches that help VIP perceive the visual aspects of clothes. In the methodology section, we describe our design process and methods, explaining the project choices, the development of our prototype, and how we tested it. The paper then presents our proposal, describing our solutions, and the usability test section, where we describe our pilot test. Finally, the study closes with the discussion and conclusion sections.

2 Related Work

Marco Conti et al. defined the Internet of People (IoP) [1] as a human-centric paradigm for the Internet, arising from the relation between Human-Computer Interaction (HCI) and IoT. The IoP paradigm argues that understanding humans is essential to the structure of an Internet with multiple connections between people-operated gadgets and devices.

Blind and visually impaired people rely on senses other than sight to comprehend the world around them. According to Lefeuvre et al. [2], it is vital that designers understand how these people discover and interact with new products.

In their article, Lefeuvre et al. introduced the “Loaded Dice,” a tool designed to present the possibilities of connected devices, composed of two cubes filled with sensors and actuators. Their goal was to evaluate how VIP could contribute ideas and solutions for IoT products, acting as co-designers after experiencing the Loaded Dice. The authors claimed that their tool and methodology help VIP better understand how devices can work, and help designers comprehend how to develop better solutions for VIP. Although we did not follow their co-design and workshop models, we agree that user-centered design is a compelling practice when generating concepts and alternatives with VIP. In our case, contact with the community was also fundamental to the project.

In the paper “Current and future mobile and wearable device use by people with visual impairments” [3], the authors note that visually impaired people cannot rely on visual cues in screen-guided interactions. For this reason, connected wearables are an exciting way for VIP to interact with devices such as tablets, smartphones, or computers. The researchers developed a wristband using e-textile materials; their wearable emerged from a co-design process with VIP. One concern of their focus group was the appearance of the prototype: by their standards, the ideal design is discreet, to avoid the social awkwardness of signaling the need for a special device, and it has to meet fashion requirements, becoming attractive to use. It is thus interesting to observe that user comfort is a requirement for the creation and adoption of assistive technologies. We adopted this concept in our project.

Regarding visual impairment and clothing information, the study “Recognizing Clothes Patterns for Blind People by Confidence Margin based Feature Combination” [4] developed a method of clothing texture-pattern analysis through computer vision. It classifies clothing textures into four major categories, “stripe,” “lattice,” “special,” and “patternless,” and translates this information into audio for VIP. The article also describes the technical details of the image-processing algorithm. One of the co-authors, Shuai Yuan, also published the short paper “A System of Clothes Matching for Visually Impaired Persons” [5], presenting a clothing-matching system for VIP. Yuan's work uses a camera and image processing to compare two pieces of fabric and indicate whether their texture and color match. The system captures the clothing with a camera, processes the images on an external computer, and returns feedback through audio.

Regarding Yuan's matching system, its results do not cross-reference information about clothing utility. Additionally, it only demonstrates that certain aspects of the fabrics are equal, whereas there are situations in which fabrics differ but the garments still combine; for example, a white cotton shirt and blue jeans are a classic fashion combination of distinct fabrics and colors. To address this gap, we propose a system that turns simple aspects of two garment pieces, e.g., color, texture, size, and kind, into data. The data of both garments are analyzed, and the system then crosses these characteristics to return information about the combination to the user.

Another project with clothing-combination suggestions is the Smart Wardrobe System [6]. According to the authors, the project targets two kinds of people: busy entrepreneurs and color-blind people. It uses Radio Frequency Identification (RFID) tags to register and track apparel in a wardrobe. The user attaches an RFID reader to the wardrobe and a tag to each garment piece, after registering it with the system's software. The Smart Wardrobe's main idea is to manage apparel from this software, which indicates the best choices based on the user's mood, color, or style of the day. This solution has interface issues: since it relies on visual elements, it is not friendly to VIP users. Besides, the software's combinations of clothing, color, texture, and materials are very subjective. Regarding its focus on blind people, the Smart Wardrobe System has the flaw of depending too much on subjective factors and on screen interaction to deal with clothes. We propose a more VIP-friendly interaction, with feedback based on clothing data.

Another aspect that distinguished the various solutions was their feedback techniques: while some researchers designed solutions with audio feedback, others preferred tactile feedback. “Touch and #Tag” [7], for example, is a system for organizing and storing garments by style, size, and function. The researchers developed a tactile tag system that allows VIP to classify and store their clothing, e.g., a dress shirt would receive one tag and a casual shirt another. Their approach has the advantage of not relying on external digital gadgets to provide feedback, but it has some issues. First, in our opinion, the authors designed a tactile icon system that could demand a steep learning curve from the user. Second, the tag is not attached to the clothing; it works more like a wardrobe sign and map. These problems can cause cognitive overload for the user: Lefeuvre et al. [2], for example, observed misinterpretations of tactile icons by VIP during the development of the Loaded Dice, and Touch and #Tag is also vulnerable to such misinterpretations.

Apart from these considerations, Touch and #Tag seems to deliver value after an adaptation period, since its tag system carries information about clothing appearance, which is usually not accessible to VIP. Tags commonly appear at fixed spots on garments and could transmit more information about the apparel itself, becoming an assistive resource for VIP with the application of suitable technologies and techniques.

The researcher Ringland [8] listed possibilities for making clothing tags a primary source of information for VIP. Her article included four solutions to improve the tags. The first was an alternative using ordinary cloth buttons of different shapes and sizes to pair clothing pieces and act as indicators of color and texture. The second was the use of braille tags with textual information about the piece.

Ringland also contemplated the use of QR Codes as a third solution and RFID/NFC tags as the last suggestion. The author argues that these technologies can benefit VIP because they can store information and descriptions of the clothing pieces. She also notes that VIP have some difficulty with QR Codes, due to the need to aim a camera at the code, while washable RFID/NFC tags carried an elevated price. Given the current popularization of NFC technology in smartphones, and the improvements in the accessibility software of their operating systems, we believe those adverse conditions for using either NFC or QR Codes are now being mitigated.

3 Methodology

During the development of this research, we analyzed studies in the IoT and visual impairment fields to better understand the human factors behind VIP interaction with connected devices. We searched the scientific literature for studies that investigated IoT as assistive technology, mapping related design solutions. After this analysis, we conducted a brainstorming session, which defined the general theme of VIP and clothing. We then returned to the digital libraries to search for articles specific to this subject.

In our development stage, we then listed our design principles, defined our product concept, designed the prototypes, and prepared the user test scenario.

3.1 User-Centered Design Based Method

To reach our proposal, we followed a method based on User-Centered Design practices, considering the final user the most crucial factor of our project. We consulted the VIP community at two local institutions, the Pernambuco State Blind People Association (APEC) and the Blind People Institute “Antônio Pessoa de Queiroz” (BPI), both in Recife, Brazil.

During the visit to the Blind People Institute, we had the opportunity to conduct informal interviews with four VIP. The interviews revolved around questions about habits involving clothes, from how they organize clothing at home to how they choose outfits to go out.

In our design process, we did not merely make assumptions about how VIP would like a system to help them dress: the interviews gave us important data before the brainstorming sessions, and we were able to evaluate our prototype with VIP participation. Unlike some participatory design practices reported in related work, such as the Loaded Dice [2] or the wearables article [3], the VIP did not assume the role of co-designers in our project; instead, we delivered user-driven solutions while staying in touch with the local VIP community.

Our prototypes had to be fully accessible to the VIP public, with regular feedback provided by our focus group. We also consulted the accessibility guide published by the Samsung Research Institute for Informatics Development (SIDI) [9] when designing the prototypes.

3.2 Brainstorming Sessions

We conducted two classic brainstorming sessions, in which we briefed the participants on the research subject, with a synthesis of the interviews at the Blind People Institute, and collected their ideas. No visually impaired people took part directly in these sessions.

The first brainstorming session occurred after the initial literature search on VIP and IoT, and it resulted in our research focus: helping VIP deal with visual information on clothes. The second session happened after a further literature search and the interviews with VIP at the Blind People Institute, and it produced the insights behind our two proposals for the VIP community: the audio description application and a clothing combination system.

3.3 Proposals

The first solution selected from the brainstorming round was the idea of translating clothing descriptions into audio. To make this possible, we attached QR Codes to clothing tags and created a smartphone application that reads QR Codes, allowing a VIP user to scan a piece of clothing and obtain an audio description.

Our second solution was a clothing-matching system, in which the user compares two garment pieces and obtains positive or negative feedback about the outfit combination. Since the user must scan one piece of clothing at a time for such a task, we developed this idea with an NFC module.

3.4 Usability Test

To evaluate our solutions, we performed an exploratory usability test with two volunteers: a thirty-year-old woman with low vision (V1) and a thirty-eight-year-old blind man (V2), both recruited at the APEC institution, which was also the usability test location.

We evaluated our proposals with a usability test setup consisting of a garment rack with six pieces of clothing, all fitted with NFC tags and QR Codes, as displayed in Fig. 1. Our script consisted of an interview, two tasks, and a post-activity interview.

Fig. 1. Test scenario and a t-shirt with an NFC tag and a QR Code.

We conducted a semi-structured interview with the participants before starting the testing sessions. This interview aimed to collect details about the users' behavior when using smartphones and about how they manage garments in their daily lives.

After the interview, each volunteer performed the pilot test activities. The first task was to assemble an outfit by reading clothing tags with the NFC module. To complete the task, the user had to pick a piece of clothing from the garment rack, scan its tag with the NFC reader, then pick a second piece of clothing and repeat the scanning process to hear the feedback about the combination.

The second task was to pick a proper outfit for a social event. To complete the task, the volunteer had to pick a piece of clothing from the garment rack, use our Android application to scan the QR Code attached to it, and listen to its description before deciding whether to pick it.

Before the activities, we instructed both volunteers about the concepts and functions of the technologies involved in the prototypes. We also allowed them to make comments or ask questions about the procedures during the execution of the tasks.

At the end of the activities, we conducted a second interview to evaluate the users' satisfaction and gather qualitative feedback. To measure the users' contentment with the experience, we asked whether they would introduce our prototypes into their routines and whether they would recommend our devices to others.

Three members of the research group managed the pilot test. The interviews and activities with the prototypes were recorded and filmed by one of the researchers, while a second researcher was responsible for taking field notes, recording comments and observations. The third researcher conducted the sessions, interviewing the participants, explaining the procedures, and answering questions about the activities.

4 Results

Our informal talks at the Blind People Institute and the brainstorming sessions with our advisers led us to two alternatives, which generated the prototypes evaluated in the pilot test with VIP users.

4.1 Prototype 1: “Eu Visto” Application

The first solution from the brainstorming rounds was the idea of translating clothing descriptions into audio, with a brief account of each garment's aspects. To realize this task, we designed the concept of an accessible smartphone application with a QR Code reading feature. With our app, the VIU scans the QR Code attached to the clothing tag using the Android phone camera, obtaining a brief description of the garment's appearance and its details. The application gathers its information from an online database that holds the description content of all tags.

Our application adopted a very clean aesthetic, with black-and-white assets to guarantee high contrast, considering VIU with low vision. For this first version, we designed our prototype around the QR Code scanning function, as displayed in Fig. 2. All screens are compatible with TalkBack, the screen reader of Google's Android operating system. The first version of the app is called “Eu Visto,” which means “I Wear” in Portuguese.

Fig. 2. “Eu Visto” application final screens.

In our concept, the printed QR Code encodes an address in an online database, and every address contains four underlying parameters: type, size, texture, and color. When the user scans the QR Code, the smartphone downloads the information and translates the parameters into audio through the TalkBack function. All parameters refer to universal aspects of clothes: type refers to the cut and seam according to function; size refers to the measuring unit present on the clothing tag; texture refers to the pattern displayed; and color describes the predominant colors of the garment. With these four characteristics, it is possible to describe a piece of clothing in a very distinct way; we can say that a basic t-shirt, for example, is casual, small, patternless, and white.
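
As an illustration, below is a minimal Java sketch of such a four-parameter record and of the sentence it yields for a screen reader; the class and field names are hypothetical, since we do not detail the production schema here.

```java
// Hypothetical sketch of the four-parameter garment record described above.
// Class and field names are illustrative, not the actual database schema.
public class GarmentRecord {
    public final String type;    // e.g., "casual t-shirt"
    public final String size;    // e.g., "small"
    public final String texture; // e.g., "patternless"
    public final String color;   // e.g., "white"

    public GarmentRecord(String type, String size, String texture, String color) {
        this.type = type;
        this.size = size;
        this.texture = texture;
        this.color = color;
    }

    // Builds the sentence that a screen reader such as TalkBack can speak.
    public String toAudioDescription() {
        return String.format("%s, size %s, %s, %s.", type, size, texture, color);
    }
}
```

For the basic t-shirt above, toAudioDescription() would produce “casual t-shirt, size small, patternless, white.”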

The “Eu Visto” application uses the “zxing” library and two activity screens, one for camera activation and the other for the audio description of the result. TalkBack works by reading everything set in a view's content description; therefore, everything presented in the application also has to be set in the content description so that TalkBack produces the screen-reading audio. Thus the smartphone reads the QR Code and returns the clothing characteristics, which TalkBack converts into an audio description.
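
A minimal sketch of this scan-then-describe flow follows, assuming the widely used zxing-android-embedded wrapper (IntentIntegrator); since we only name the zxing library, the exact wiring shown here should be read as illustrative.

```java
// Sketch of the two-step flow: camera scan, then a described result screen.
// Assumes the zxing-android-embedded wrapper; the real app's wiring may differ.
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.widget.TextView;
import com.google.zxing.integration.android.IntentIntegrator;
import com.google.zxing.integration.android.IntentResult;

public class ScanActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        new IntentIntegrator(this).initiateScan(); // opens the camera scanner
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        IntentResult result =
                IntentIntegrator.parseActivityResult(requestCode, resultCode, data);
        if (result != null && result.getContents() != null) {
            // In the real app the QR content is an address in the online
            // database; here we pretend it already carries the description.
            String description = result.getContents();
            TextView view = new TextView(this);
            view.setText(description);
            // TalkBack reads whatever is set as the content description.
            view.setContentDescription(description);
            setContentView(view);
        }
        super.onActivityResult(requestCode, resultCode, data);
    }
}
```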

4.2 Prototype 2: Cloth Matching System with NFC

We designed an additional module exploring NFC communication. This module helps the VIU check whether the selected pieces match: the user chooses a piece of clothing for the top and another for the bottom, i.e., a shirt and pants, then reads the NFC tags of both pieces with the application, and the system provides a recommendation on the combination.

This module is viable because data can be written to NFC tags. In our concept, each label stores significant characteristics of the clothing, its kind, color, texture, and size, the same aspects read from the QR Code. Thus, when the user reads the labels of two distinct pieces with the NFC module, it sends the data over a Wi-Fi connection to the system application, and the comparison of the two tags is processed on an external computer or smartphone.
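
We do not detail the matching rules here, so the sketch below only illustrates the shape of such a comparison, reusing the hypothetical GarmentRecord from the earlier sketch and two invented rules (avoid pairing two patterned pieces; neutral colors combine with anything).

```java
// Hypothetical matching step on the host side (computer or smartphone).
// The two rules below are illustrative assumptions, not the system's rules.
public class OutfitMatcher {
    public static boolean matches(GarmentRecord top, GarmentRecord bottom) {
        // Assumed rule 1: at least one of the two pieces should be patternless.
        boolean texturesOk = top.texture.equals("patternless")
                || bottom.texture.equals("patternless");
        // Assumed rule 2: neutral colors combine with anything; otherwise
        // fall back to requiring equal colors.
        boolean colorsOk = isNeutral(top.color) || isNeutral(bottom.color)
                || top.color.equals(bottom.color);
        return texturesOk && colorsOk;
    }

    private static boolean isNeutral(String color) {
        return color.equals("white") || color.equals("black")
                || color.equals("gray") || color.equals("jeans blue");
    }

    public static void main(String[] args) {
        GarmentRecord shirt = new GarmentRecord("casual t-shirt", "small",
                "patternless", "white");
        GarmentRecord pants = new GarmentRecord("jeans pants", "medium",
                "patternless", "jeans blue");
        // A match triggers three short beeps on the buzzer; a mismatch,
        // one long beep (see below).
        System.out.println(matches(shirt, pants) ? "match" : "no match");
    }
}
```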

We mounted our prototype on an Arduino Mega board equipped with a buzzer and an NFC/RFID PN532 module, emulating an accessory that people would keep near their wardrobes, as displayed in Fig. 3. The buzzer's audio was the feedback component of our system: for our prototype, we set the buzzer to emit three short beeps when the combination was right and one long beep for wrong combinations.

Fig. 3. Clothing combination prototype with the buzzer and the NFC/RFID PN532 module.

4.3 Usability Test Results

The first step of our pilot test was recruitment. We conducted the usability test with two subjects recruited in loco at the APEC institution: V1, with severe low vision since birth, and V2, with total loss of vision, also from birth.

The first stage of our procedure was the semi-structured interviews, which revealed that our volunteers were familiar with smartphones; both had used Android devices and the TalkBack function for years.

We asked both users about their habits with garments, with general questions about how they choose an outfit and organize their wardrobes. V1 declared that her low vision allows some autonomy in organizing clothes, but she has problems at stores: while shopping, she struggles to consult clothing labels and to verify visual elements on pieces, such as words or phrases. V2 stated that he buys and organizes garments by fabric style, such as polo shirts, because each style usually has a specific kind of fabric, texture, and design. He revealed that he consults his wife about visual elements on clothing and uses cues, such as embroidered symbols, to memorize which piece is which. He also consults his wife before leaving home to know whether his outfit is acceptable. Curiously, in the interview, V2 admitted not knowing whether the shirt he was wearing had a graphic pattern.

After the entry interviews, each participant executed the usability tasks in less than 5 min. They did not experience any problem completing the desired tasks; however, they had some issues with the audio feedback of the buzzer. The volunteers had trouble understanding at first whether the garment combination was right or wrong, misinterpreting the buzzer's beeps.

When using the application and QR Codes, both users enjoyed the audio description feature. Still, V2 (Fig. 4), who is blind, had issues; for example, on the first try he held the device in an incorrect position, requiring assistance. After instructions, however, he could read the labels without assistance.

Fig. 4. Volunteer 2 testing our Android application.

After the activities, both users declared that they would use our solutions in their routines and would recommend them to others. V1 also stated that clothing tags with audio descriptions could make a big difference in VIP routines, and observed how this approach could help her while shopping. V2 gave more feedback about the clothing combination system; he said it is an interesting concept, since it can prevent VIU from going out with the wrong clothing combination. V2 also said that his nine-year-old daughter, who is also blind, would benefit from both solutions since, according to him, she takes pride in her appearance and likes doing errands independently.

5 Discussion

During the research, we gathered relevant information about how VIP deal with the visual aspects of clothing. Our methodology allowed us to analyze the state of the art and understand how researchers design IoT solutions for VIP.

The research process allowed us to consider the needs of VIP and to comprehend that the lack of vision of our target audience is not a problem for the requirements of our system, but an element of our users' specificity. Blindness does not define them; it is a personal characteristic. Therefore, blind users' access to our proposals must be the same as that of fully sighted users. This section discusses our research process and the experience of evaluating prototypes with VIU.

5.1 A System to Help VIP Deal with Visual Information on Clothes

During the research stages, our user-centered design-based method led us to data that enabled the brainstorming, the prototypes, and the evaluation with VIP. Our interviews at the Blind People Institute revealed intriguing factors about their routines: we learned that it is essential for blind people to map their clothes, from acquisition through storage to everyday use. Besides, they sometimes have to ask a sighted person for help to know how their outfits look.

We understand that VIP have a different perception of their surroundings, which is reflected in how they interact with applications and products. For this reason, we defined design concepts and devised a solution that enables people with visual disabilities to be aware of their clothing and their look. These concepts gave us the opportunity to create a clothing-matching alternative different from Yuan's system [5] and from the Smart Wardrobe [6]. We could also advance Ringland's [8] study of the possibilities of using QR Codes and NFC tags to aid VIP.

After the brainstorming sessions, we arrived at two objectives: to provide audio descriptions of clothing's visual elements, and to create a clothing-matching system. We therefore designed two prototypes: an Android application that uses the smartphone camera to read QR Codes, and an NFC module that provides feedback about clothing combinations. We separated the QR Code and NFC modules to evaluate VIP's ability to interact with each technology, as in an evaluation of different products. Furthermore, we wanted to compare how the local public, used to smartphone-based solutions, would fare interacting with an application versus a screenless device.

We developed the application for Android because, at APEC and the Blind People Institute, most of the blind people use Android phones. The primary objective of our application was to evaluate the interaction between the VIU and the QR Code reading and audio description feedback, so we selected these features as the essential requirements for our first prototype. During the software development stage, we considered other functions for the application's first version, such as creating, storing, and editing QR Codes. However, these functionalities started to accumulate on screen, making the features costly and burdensome for the user. In the end, we cut them to keep the design as clear as possible, as displayed in Fig. 5.

Fig. 5. Prototype (left) and final version (right) of the application.

From our point of view, the clothing-matching device should be an independent gadget, so we mounted an external device with the Arduino board, the NFC/RFID module, and the buzzer. The reason we picked NFC instead of plain RFID was NFC's very short operating range: in our design, the clothing pieces must be read in order, and this short range helps prevent simultaneous readings, forcing the user to perform one reading at a time. We created the feedback pattern with the buzzer because it is compatible with the Arduino board setup and can emit different tones in a programmable order, which allowed us to create distinct patterns for right and wrong combinations.

Regarding the clothing data, we proposed a system with four common aspects of clothes because this was a simple way to summarize a clothing description. The conversion of these characteristics into data inputs is therefore not a hard task, and we can compare these inputs with the data of other pieces. Thus, the four-topic structure works for both the audio description and the clothing-matching system, making it possible for both solutions to share the same database.

5.2 Evaluation with Visually Impaired Users

The usability test procedure revealed some interesting aspects of the context of VIP and clothing. Our two volunteers, with distinct degrees of visual impairment, helped us understand how dealing with visual information in fashion can be a burdensome task for VIP. The first round of interviews demonstrated that our volunteers experience different issues when picking and buying clothes. V1, with low vision, pointed out that even with a degree of sight that allows her to organize and combine apparel, it is hard to deal with garments at stores, since she cannot read labels and has to get very close to garments to understand details. V2, totally blind, relies on a garment's shape, cut, and tactile fabric texture to maintain a clothing management system, supported by his wife. This behavior reminded us of the studies on wardrobe organization.

During the task execution, the users demonstrated the ability to operate the systems. V1 did not have any issue manipulating the devices, while V2 had trouble pointing correctly at the QR Codes; however, after some training, he handled the smartphone camera independently, without further trouble, and found the application interesting. Regarding the clothing-matching system, the feedback from the buzzer was an issue for both users. Even with our previous explanation of the buzzer's audio feedback, both users misunderstood it the first time: after scanning both tags, they listened to the three beeps confirming a right combination and declared their choices wrong. They only grasped the buzzer's behavior after trying another combination and listening to it a second time. Perhaps the buzzer's sharp, mechanical sound confused the volunteers.

The users' response to the audio description was better than their response to the beeps from the buzzer. This is interesting to observe, because they did not have any trouble operating the clothing-matching system, only comprehending its feedback. With the QR Code, the scenario is almost the reverse: the volunteers comprehended the feedback from the application, but V2 had issues with its operation.

Regarding the volunteers' feedback, in our opinion the enthusiasm of both users during the post-task interviews biased their answers: they were less comfortable voicing negative criticism of some aspects of the evaluation, such as the buzzer feedback or the difficulties related to the smartphone camera. Even so, their feedback was essential to our conclusions about the prototypes' performance.

We found that the audio description feature fascinated both volunteers. V1 was happy to perceive the differences between a casual and a dress shirt, and V2 pointed out that discovering the color of a suit jacket is a prominent feature. Their feedback demonstrated to us that awareness of the outfit one is wearing is indeed a significant form of self-expression.

Regarding the clothing-matching system, V1 was less impressed by its feedback and functionality; we think her degree of sight is enough to make combining clothing possible on her own. Meanwhile, V2 was very impressed with the possibilities of the solution, since many fabrics are very similar in tactile texture but have completely different visual patterns. These diverse reactions suggest that the clothing-matching system may be more relevant to blind users than to people with low vision.

The usability test served to analyze the volunteers' interactions with our solutions. We consider our results interesting to the community and recognize the potential impact that these features can have on the lives of VIP. The systems were efficient in giving VIP awareness of their pieces of clothing, although we recognize the need for some adjustments. The test with QR Codes showed that many VIP might have trouble using this technology independently. A future solution merging both approaches, integrating the audio description feature into NFC tags, may be a better path, since the number of devices with this technology is increasing, and NFC tags can perform both the audio description and the clothing-matching tasks.

6 Conclusion

During the development of this research, we arrived at two different approaches to help VIP deal with the visual aspects of clothing. We built two proof-of-concept solutions on user-centered design principles: an Android application that translates QR Codes into apparel audio descriptions, and a garment-matching system based on NFC reading and data comparison. With our solutions, VIU can not only have complete awareness of their looks but also combine and compare different pieces of clothing.

We designed prototypes for each concept and tested them at APEC, a local blind people's institution, with two volunteers, one with severe low vision and the other blind from birth. Both users completed the usability test tasks without problems; however, they had issues with some aspects of our solution.

We found that our prototype is ready for use by blind people, given some training to assimilate all its functions. Based on the outcomes, we believe that an NFC-only solution, with the same audio descriptions as the QR Codes, would be better for VIP. Our next steps are to integrate the NFC module into the mobile application and perform further evaluations with VIP.