5.1 The Beginning: Mainly a Facilitator (2000–2005)

My ASSISTECH journey started by accident. Mr. Dipendra Manocha (see the adjoining profile) and I had a common acquaintance whom he used to meet to exchange audio cassettes in the late 90s and early 2000s. Note that Dipendra lost his eyesight when he was around 12 years of age. On this acquaintance's reference, Dipendra one day came to meet me with a request: could I help make the “emacs” editor in Linux accessible through screen reader software? At that time Dipendra was managing the computer center at NAB (National Association for the Blind), while I was deeply into research and tool development in the broad area of VLSI/EDA tools, with a special focus on system-level design that included high-level synthesis as well as hardware-software co-design. For such application development I had neither the skills nor any special interest. On the other hand, I was completely overwhelmed by Dipendra's sincerity and focus, and so I decided to involve some students through a mini-project. They did manage to develop a basic solution; though it did not get widely deployed, it formed the beginning of a very long and fruitful relationship. In the initial years, most of our engagements were similar—he would propose a project (mainly software based) and I would identify a set of project students whom we would jointly mentor to work on the solution. We became closer, and I was always impressed with the ease with which he understood technological capabilities as well as limitations without any formal training in Science or Engineering. Being blind did not seem to matter at all! Note that Dipendra's formal college education consists of an under-graduate and a post-graduate degree in music. He had an immense capability to explain the needs of the visually impaired in a language that could be easily understood by my CSE students.
I started inviting him regularly to engage my CSE classes with talks about the challenges faced by the VI; these talks were always a big hit and inspired many students.

5.2 Early Phase: Focus on Embedded Systems (2005–2010)

5.2.1 ASSISTECH and COP315

COP315 is a project-based embedded systems course that I developed in the late 90s. The course has an interesting German connection. I spent a year as a visiting Professor (Konrad Zuse Fellow) in the Computer Science Department at the University of Dortmund in 1994–1995. There I came across a semester-long group project course in which a group of students (up to 10) did a project that typically resulted in the development of a small system, which they demonstrated at the end of the semester. The course had no formal lectures; though the students were mentored by research assistants, they were primarily expected to rely on material available in the open domain for building their backgrounds and solving the problems they encountered. I was impressed by the complexity as well as the quality of the projects they could build and demonstrate, including their use of FPGAs. This was in spite of the fact that students at Dortmund had much less exposure to formal instruction in hardware design than IIT Delhi students.

On my return to IIT Delhi, I decided to revamp “Microprocessor based System Design”, a course I used to teach to third year students. The course initially had lectures, regular practical assignments, and a small project. After the change, we made groups of 4–6 students and offered them a list of projects, while remaining open to project suggestions from them as well. Since in the initial years almost all projects were built around microcontrollers and peripheral devices, I gave the students a set of lectures on microcontroller-based design. COP315 played a major role in the ASSISTECH journey, as it incubated many of the ideas that later became successful products, not only in the Assistive Technology (AT) space but also in other domains.

5.2.2 SmartCane

The SmartCane journey began as a COP315 project in 2005. It was taken up by a group of 4 students—Rohan Paul, Dheeraj Mehra, Vaibhav Jain, and Ankush Garg—who were at that time in the first semester of their third year. The first three were dual degree students and were thus to stay in the Department for another 3 years. Before the start of the semester, Dipendra mentioned a key problem in the independent mobility of the visually impaired in Indian infrastructure—the white cane's inability to detect knee-above, overhanging obstacles that leave no footprint on the walking path itself. Such obstacles range from low overhanging tree branches on roads and footpaths to window air-conditioners and room coolers jutting out into corridors. They often cause upper body injuries and a resulting loss of confidence in independent mobility.

I am sure that the problem specification meetings the students had with Mr. Manocha inspired them a lot. The unusually intense engagement of this student group, led by Mr. Rohan Paul, with the objectives of the project was evident from the very beginning. As they worked on the initial prototypes, they made it known to me that, given the opportunity, they would like to work on the project beyond the semester. The group continued to work on it for the next 3 years (except Ankush, who graduated in 2007), developed a series of prototypes, and conducted multiple rounds of user testing at NAB with the help of Mr. Dipendra Manocha. Many of the initial technical challenges were addressed in this phase, which lasted from 2005–2008. Except at the very end of this phase we worked without any external funding, but the phase built some key collaborations that have lasted more than a decade. The first 3-D printing facility at IIT Delhi had just been established and was being managed by Prof. P.V.M. Rao of the Mechanical Engineering Department. To prototype the casing, which took the form of a handle, we established contact with Prof. Rao. He not only helped the students fabricate it but soon got involved in various aspects of the design. This was the beginning of a relationship that has played a critical role in the success of ASSISTECH. The key technical achievements were also very inter-disciplinary in nature. Apart from the innovative use of an ultrasonic sensor to obtain obstacle distance information and then convey it through a vibrator, a key technical challenge revolved around reducing power consumption—not only in the electronics but also in the coupling between the vibrator and the handle for efficient power transfer [2, 18, 19].
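
The sensing-to-feedback chain can be sketched in a few lines. This is only an illustrative sketch, not the actual SmartCane firmware: the function names, the 3 m range, and the discrete vibration levels are all assumptions for illustration, whereas the real device's detection range and feedback encoding were refined over many prototypes.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def echo_to_distance_m(echo_time_s):
    """Ultrasonic ranging: the pulse travels to the obstacle and back,
    so the distance is half the round-trip time times the speed of sound."""
    return echo_time_s * SPEED_OF_SOUND / 2.0

def vibration_level(distance_m, max_range_m=3.0):
    """Map obstacle distance to a vibration intensity (0 = off, 10 = max);
    closer obstacles vibrate harder, obstacles beyond range give no feedback."""
    if distance_m >= max_range_m:
        return 0
    return round(10 * (1.0 - distance_m / max_range_m))

# An echo received after 10 ms corresponds to an obstacle ~1.7 m away
d = echo_to_distance_m(0.010)
print(round(d, 3), vibration_level(d))  # → 1.715 4
```

The interesting engineering, as noted above, was not in this arithmetic but in doing it at very low power and coupling the vibrator to the handle efficiently.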

At the end of the prototype development (summer of 2007), by a chance coincidence all three students could get support to present their work at TRANSED 2007 in Montreal. The paper got noticed at the conference and received very positive feedback, which helped motivate the students further. Soon after that we were able to sign an agreement with Phoenix Medical Systems (PMS), Chennai for technology transfer and licensing. The period from 2008–2011 was very frustrating, as we wrote several proposals to various Government agencies for translational research support. The challenge was that significant support was also required at the manufacturer's end, and the agencies were willing to fund only the IIT Delhi component (Fig. 5.1).

Fig. 5.1

SmartCane through various stages of development: Laboratory (2005) to Product (2014)

A chance meeting with Dr. Shirshendu Mukherjee, Country Manager of WT (Wellcome Trust, UK) in India, resulted in a major success: a grant from WT in their first call for “Affordable Health Care in India”. Our application succeeded because the committee was impressed by the team composition—an academic entity with product know-how, a credible industrial partner, and an NGO in the space as dissemination partner. This clearly was a turning point for the lab. The funding not only catered to the requirements of all three partners but also included many provisions for creating a quality product, e.g. plastic injection mold costs, large-scale multi-city validation trials, and the costs of obtaining CE marking. It allowed us to do many things systematically and with professional support, which is not typical of an academic project. Some of the key features of the development process are listed below.

  • A 30-member user group was involved in the early stages of the product design and helped arrive at the requirements

  • The product was validated by 150 users in 6 cities before its eventual launch on 31st March 2014. We have not come across any disability product anywhere in the world that has been tested by so many users before its launch

We discuss some of the interesting user-centric design decisions related to ergonomics and aesthetics. The initial designs were difficult for women to grip due to their smaller hand sizes. This came to notice late, as the focus group had only men, and it required a major redesign including completely novel packaging. Initially the color of the device was given no consideration, as it was thought to be a product for blind persons anyway. A simple question from a blind woman settled the matter—do you choose the color of the clothes you wear only for yourself, or also for others to see and appreciate?

The success in launching the product within the project duration was highly appreciated by the sponsor (WT) and resulted in significant funding for national and international dissemination. While the industry partner focused on scaling manufacturing, over the next 3 years a team from IIT Delhi and Saksham focused on training users and mobility instructors and on building partnerships with 40+ agencies across the country. This has played a significant role in the widespread acceptance of the device, with sales crossing 70,000 units in 5 years (Figs. 5.2 and 5.3).

Fig. 5.2

SmartCane as a product comes with a range of support material for trainers as well as for users (self-learning), including manuals in Braille and audio manuals in different languages

Fig. 5.3

SmartCane assembly line facility at Phoenix Medical Systems, Chennai and a batch of devices packed in a box

5.2.3 OnBoard

OnBoard is a globally unique solution for assisting visually impaired users to board public buses independently. It addresses two challenges that are faced by the VI in this process of independent boarding (Figs. 5.4 and 5.5).

Fig. 5.4

This is a typical bus stop in Delhi. Very often a large number of routes use the same bus stop, and a visually impaired person requires the help of another passenger waiting at the stop to identify the route number

Fig. 5.5

Buses, for various reasons, do not always come and stop in the bay, necessitating waiting passengers to walk up to 25 m to board the bus. For a visually impaired person it becomes very difficult as well as unsafe to locate the entry door, especially as the time available is short and many passengers may be rushing towards the bus

  1.

    Typically, many routes use the same bus stop. In Indian metros the number of routes being serviced at a bus stop may go up to 50, but 15+ is quite common and 5+ is almost the norm except in the suburbs. A VI commuter thus always requires the help of another commuter at the bus stop, and that poses two challenges.

    • During off-peak times, when travel is convenient because buses are less crowded, it may happen that no other passenger is waiting at the bus stop

    • Even if many other passengers are waiting, the inability of the VI person to choose the right person to ask may mean that help is sought from someone busy (maybe on his/her phone) or from a visitor, often resulting in unpleasant situations

  2.

    The second challenge arises from bus stop infrastructure and the practices buses follow in picking up passengers. Buses do not always come and stop in the bay, for many reasons including the presence of slow moving vehicles, a large number of waiting commuters spilling onto the bus lane, or other buses already in the bay. Locating the entry door can thus be an even bigger challenge for the VI person, and our video recordings show that the person may have to walk up to 25 m.

Technically, the work on OnBoard started more or less concurrently with SmartCane, but after the development of prototypes and some testing, further development was shelved as the focus shifted to translational research on SmartCane. In hindsight it was the right decision: since the solution involves changes to infrastructure provided by a third party (the bus operators), the struggle to get it implemented has been much greater. The key technical features include a simple user interface, a slotted network protocol to simultaneously handle up to 8 buses at a bus stop, and a dual frequency protocol between the user device and the bus device for meeting both requirements—querying and getting the route number, and locating the entry door using an audio cue (Figs. 5.6 and 5.7).
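
A slotted response scheme of the kind mentioned above can be sketched as follows. This is a hypothetical illustration, not the deployed OnBoard protocol: the slot width, the slot-assignment rule, and the function names are all assumptions. The idea is simply that each bus replies in its own time slot, so that up to 8 replies share the radio channel without colliding.

```python
NUM_SLOTS = 8    # the protocol handles up to 8 buses at a stop
SLOT_MS = 50     # hypothetical slot width in milliseconds

def response_schedule(bus_routes):
    """Assign each responding bus a distinct time slot after the user's
    query, so that route-number replies never collide on the shared channel."""
    if len(bus_routes) > NUM_SLOTS:
        raise ValueError("more buses than available slots")
    # Deterministic ordering stands in for whatever slot-claiming
    # mechanism the real protocol uses.
    return {route: slot * SLOT_MS for slot, route in enumerate(sorted(bus_routes))}

# Three buses at the stop: each replies at its own offset after the query
print(response_schedule([134, 505, 121]))  # → {121: 0, 134: 50, 505: 100}
```

The user module then reads out the collected route numbers one by one, as described in Fig. 5.6.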

Fig. 5.6

User presses the query button (user module): (1) RF query is sent to all buses in the vicinity and buses respond with their route numbers. (2) User module reads out all the route numbers one by one

Fig. 5.7

In case a route number is of interest to the user: (1) the user presses the select button (user module); this triggers a voice output from the speaker (bus module fixed near the door). (2) The voice output acts as an auditory cue for locating the door

Starting in 2014, with the help of the TIDE scheme of DST, Govt of India, we restarted this work and did some preliminary testing on DIMTS buses in Delhi. Subsequently, with the help of a very active organization in Mumbai (XRCVC) we were able to reach out to BEST, where between January and April 2015 we installed these devices on 25 buses (all buses on route numbers 121 and 134 from Back Bay depot). We identified 21 blind bus users, trained them on the device with 5 to 6 supervised boardings each, and then asked them to do a total of 350 unsupervised boardings. As the boardings were unsupervised, we needed to evolve a mechanism for assessing the effectiveness of the solution. We asked the users to record the time they reached the bus stop and the time they boarded the bus. Comparing the waiting time at the bus stop with the frequency of the service, we could determine whether the user had boarded the first bus on the route or not. We achieved a 92%+ success rate in users independently boarding the first bus on the route, implying the device was very effective. We believe the actual failure rate was lower than 8%, as the depot sometimes changes the buses deployed on a specific route for operational reasons, and it is possible that some of the buses deployed on these routes did not carry the OnBoard bus device [8, 13, 16, 21].
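
The assessment logic can be expressed compactly. The sketch below uses illustrative names and numbers, not the trial data: if a user's recorded wait at the stop does not exceed one service headway, the user must have boarded the first bus on the route that arrived.

```python
def boarded_first_bus(wait_min, headway_min):
    """A wait no longer than one service headway implies the user caught
    the first bus on the route that arrived after reaching the stop."""
    return wait_min <= headway_min

def success_rate(waits_min, headway_min):
    """Percentage of boardings where the user caught the first bus."""
    hits = sum(boarded_first_bus(w, headway_min) for w in waits_min)
    return 100.0 * hits / len(waits_min)

# Illustrative only: a 10-minute headway and three recorded waits
print(success_rate([4.0, 9.5, 12.0], 10.0))  # → 66.66666666666667
```

The attraction of this criterion is that it needs only two timestamps per boarding, which the users could record themselves during unsupervised trips.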

The last 2 years have been spent in reducing the size of the user device and making it aesthetically pleasing. The bus device has also been significantly miniaturized and can now be operated from the power source of the bus instead of a separate battery, as was the case during the Mumbai trials. Further, a simple protocol by which the device can be retro-fitted in a few minutes has been developed and tested. At present we are awaiting funding for much larger trials in which users would use the device over a period for their regular commute requirements. We feel such trials are required before we pursue making the device a regulatory requirement so that bus systems become inclusive, which is part of the Government policy (Figs. 5.8 and 5.9).

Fig. 5.8

Mumbai trials on BEST Buses (Jan–April 2015): Bus device was mounted on the window and the unit operated with its own battery. The battery unit is seen below the front seat reserved for the disabled

Fig. 5.9

Miniaturized bus device and user device (not to scale) used in the second Delhi trials. The bus device is mounted on a DIMTS orange cluster bus in Delhi. Validation trials of these new devices were conducted successfully during May to July 2018

5.3 Collaborations and Research: Formation of ASSISTECH (2010–2013)

5.3.1 Student Projects to Research

This phase also saw a consolidation of our activities through the formation of the ASSISTECH group, with a clear objective of working in the space of mobility and education of the visually impaired. The allocation of laboratory space in the newly constructed School of Information Technology building significantly facilitated this process. Apart from the involvement of students through under-graduate projects, the registration of the first PhD student (Mr. Piyush Chanana) changed the nature of activities in the laboratory. It also made possible a more structured approach to involving users in evolving product/project specifications. Saksham, our collaborator, posted two of its blind employees to the laboratory, and that had an impact at multiple levels. By this time it was clear that the involvement of users at all stages of design and product development—a process that is today referred to as co-creation—is essential in this space. It is not sufficient for designers to understand the limitations of VI users; they also have to understand their strengths in equal measure.

Inter-disciplinary research has gained traction globally, but laboratories still very often draw students from a single disciplinary background. ASSISTECH, with its user focus, has been able to break this barrier and today has computer science, electrical engineering, mechanical engineering, and design expertise, apart from user community members, under one roof. This has created a unique ecosystem for user-centric approaches to problem solving.

5.3.2 NVDA Activities

NVDA (NonVisual Desktop Access) is a screen reader that makes the Microsoft Windows desktop and Office products accessible to the visually impaired. It is an open source project and has been successful in creating a large user base globally. ASSISTECH participated in improving the NVDA tools, essentially to better support the tables found in documents. Navigating across the cells of a table in an intuitive and non-verbose manner is critical for the VI to comprehend tabular information. ASSISTECH helped augment NVDA in multiple ways and worked closely with the user groups [3, 4].
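
The non-verbose navigation idea can be illustrated with a small model. This is not NVDA's actual code or API—just a minimal sketch of the behavior: on each move to an adjacent cell, speak the cell content, prefixed by the row or column header only when that coordinate has changed, so the user hears context without repetition.

```python
class TableNavigator:
    """Minimal model of non-verbose table navigation: announce a cell,
    prefixed by the row/column header only when that coordinate changed."""

    def __init__(self, table):
        self.table = table      # row 0 holds column headers, column 0 row headers
        self.r, self.c = 1, 1   # start at the first data cell

    def move(self, dr, dc):
        nr, nc = self.r + dr, self.c + dc
        if not (1 <= nr < len(self.table) and 1 <= nc < len(self.table[0])):
            return "edge of table"
        parts = []
        if nc != self.c:
            parts.append(self.table[0][nc])   # column changed: say its header
        if nr != self.r:
            parts.append(self.table[nr][0])   # row changed: say its header
        self.r, self.c = nr, nc
        parts.append(self.table[nr][nc])
        return ", ".join(parts)

nav = TableNavigator([["", "Q1", "Q2"],
                      ["Sales", "10", "12"],
                      ["Costs", "7", "8"]])
print(nav.move(0, 1))  # → Q2, 12
print(nav.move(1, 0))  # → Costs, 8
```

Even this toy version shows why the design matters: speaking the full coordinates on every move would bury the data in verbosity.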

5.3.3 TacRead and DotBook

Refreshable Braille displays for accessing digital text are a technology that is more than three decades old. These are line display devices containing an array of cells (typically ranging from 8 to 40), each with 6 or 8 pins that form a 6-dot or 8-dot Braille character, respectively. The pins are driven by piezo actuators whose up and down movement forms the Braille characters. Though the devices have been around for a long time, their costs have been so high that there was hardly any penetration in low-income countries like India. A combination of patented technologies and low volume production with high margins from monopoly cell manufacturers meant the devices continued to cost USD 50 to USD 100 per cell. The rapid growth of screen readers, which had started becoming available on all platforms, also meant that even in high-income countries the market for Braille displays was dwindling. On the other hand, many studies have now shown that if visually impaired persons do not learn to read Braille, their ability to write remains minimal. This resulted in the Transforming Braille project, with worldwide interest in producing low-cost Braille reading and writing devices.
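
The mapping from characters to pin states in such a cell can be sketched using the Unicode Braille patterns block (U+2800–U+28FF), in which dots 1–8 correspond to bits 0–7 of the code point offset. The helper names and the 40-cell padding below are illustrative assumptions; real display firmware must also sequence and drive the actuators, which this sketch ignores entirely.

```python
def pins_for_char(braille_char):
    """Return the raised (1) / lowered (0) state of the 8 pins of a cell
    for a Unicode Braille pattern character: dots 1-8 are bits 0-7."""
    mask = ord(braille_char) - 0x2800
    if not 0 <= mask <= 0xFF:
        raise ValueError("not a Braille pattern character")
    return [(mask >> dot) & 1 for dot in range(8)]

def render_line(braille_text, cells=40):
    """A 40-cell line display shows at most 40 characters; pad with
    blank cells (U+2800, no dots raised) or truncate to the width."""
    padded = braille_text[:cells].ljust(cells, "\u2800")
    return [pins_for_char(c) for c in padded]

# '⠇' (U+2807, dots 1, 2 and 3) raises the first three pins of its cell
print(pins_for_char("\u2807"))  # → [1, 1, 1, 0, 0, 0, 0, 0]
```

With 8 pins per cell and 40 cells, a full line refresh involves 320 independently actuated pins, which hints at the mechanical complexity discussed below.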

At the same time, at ASSISTECH we were engaged in developing refreshable Braille cells using shape memory alloys (SMAs). The development process involved the efforts of multiple batches of mechanical engineering students working with our research staff. These students not only worked on it in their course projects but often stayed back after graduation to refine the designs to improve performance or reduce size/weight/cost. It has taken more than 5 years of design, testing, and instrumentation to produce reliable Braille cell modules. Based on the success of the SmartCane, WT funded us again for the refreshable Braille displays. This time we had two industry partners (PMS and KSPL—KritiKal Solutions Private Limited), and Saksham was again our dissemination partner. The modules produced by Phoenix are called TacRead, whereas the devices produced by KSPL using these modules are named DotBook. We launched 20-cell and 40-cell devices in February 2019. Small volume production is on, but some key changes and the tooling required for volume production are being set up (Fig. 5.10).

Fig. 5.10

Initial single Braille cell based on SMA (Shape memory alloys) designed and fabricated during 2014–2017

The device is highly complex and represents innovation and design in software, electronics, mechanical design, as well as ergonomics. This also resulted in a number of research papers as well as an award-winning Master's thesis by Suman Muralikrishna [1, 7, 17, 20]. DotBook provides all the key functionality of a laptop, including document editing, web browsing, and email. All the standard software tools are today built around graphical user interfaces; they all needed to be rethought for an output that is not only a character line display but one with a length of only 40 characters. Power delivery as well as consumption was a huge challenge, as the SMA wires require heating to actuate and need to be cooled to return a pin to its original position. Overheating meant not only the possibility of wire breakage over time but also a slower refresh rate due to the higher latency required for deactivation. The mechanical complexity was evident from the 320 moving parts (8 pins per cell and 40 cells), which need to be controlled such that the heights of actuated pins are within ±0.1 mm over a 1 mm movement. Many user studies were conducted, both to identify the functions associated with the limited set of available keys and their suitable positioning on the device. Volume production is expected to begin by early 2020, as the major reliability issues in the cell module have been sorted out (Fig. 5.11).

Fig. 5.11

DotBook: 20-cell and 40-cell devices that were launched on 28 Feb 2019. Volume production is expected to start in first quarter of 2020

5.4 Change of Focus: Technology to Users (2013–2016)

5.4.1 Tactile Graphics Project

During the dissemination phase of SmartCane, it was decided to create a self-learning manual for the visually impaired. It was easy to print the Braille text in English and Hindi, but the manual also contained a set of simple diagrams to explain ultrasonic ranging. In this process we realized that there was no structured way of producing tactile diagrams in volume in India. This was in sharp contrast to what we saw in the UK and USA, where almost all text material including diagrams was available to visually impaired students as Braille books. Clearly, the unavailability of tactile diagrams created a significant barrier for visually impaired students wanting to pursue STEM subjects—typically they were forced to study only subjects like history and literature that did not require access to diagrams. This prompted us to develop a low-cost technology for the production of tactile graphics. Under a project sponsored by MEITY (Ministry of Electronics and Information Technology), know-how was created for the production of low-cost tactile diagrams. This involved some software adaptations, standardization of 3-D printing for preparing low-cost molds, and the setting up of facilities for production of tactile material using thermoforming. As the tactile route to learning is very different from the visual route, guidelines have been developed over many decades in the USA and Europe (e.g. BANA). A set of tactile designers was trained using these guidelines, and then, in collaboration with an apex agency in India responsible for school curriculum and education, existing Science and Mathematics books for school grades 9 and 10 were produced with tactile diagrams. These were extensively tested with children studying both in blind schools and in inclusive schools [15]. Once the project was completed, a non-profit company named Raised Lines Foundation (RLF) was incubated in August 2018. RLF is at present engaged in the design and production of tactile diagrams and other tactile material for education.
Initial feedback suggests that it is helping many visually impaired students across India to pursue STEM subjects. In the next few years we intend not only to scale this venture but also to create partnerships across the country for the creation of tactile material in different Indian languages. It is also planned to create a design service for organizations outside India (Figs. 5.12 and 5.13).

Fig. 5.12

Tactile diagrams developed as part of the Tactile Graphics project sponsored by MEITY, Govt of India, which made innovative use of 3-D printing for the production of low-cost tactile diagrams

Fig. 5.13

A non-profit company has been formed using the know-how developed under the project to produce school text books and other reading material for the visually impaired. The company has been in operation since August 2018

5.4.2 More Research Projects and International Collaboration

This period also saw the enrollment of a number of PhD and Masters students in this space. As Design students started enrolling with us, it became clear that many open questions had to be answered to make tactile diagrams effective. Also, while modern tactile production processes have created more flexibility in production, there has been little study of their effectiveness. Ms. Richa Gupta, a design graduate, started studying the effectiveness of various forms of representation in tactile diagrams and is now close to finishing her work [10, 14].

This research on tactile diagrams also created our first significant collaboration, with IUPUI in Indianapolis. Prof. Steven Mannheimer, whom we met at an Indo-US workshop in India, was also interested in pursuing this area. He had already done some work with the Indianapolis School for the Blind (ISB). This collaboration enabled Richa to conduct her studies and establish the effectiveness of representation techniques in two distinct geographies—NAB in New Delhi and ISB in Indianapolis [11].

5.5 Consolidation and Growth (2016–)

The period after 2016 has seen a lot of growth—in research projects, students, as well as activities. This period has also seen many national and international recognitions.

5.5.1 RAVI

Reading Assistant for Visually Impaired (RAVI) is a project aimed at making pdf documents accessible. The work is in progress and we expect some initial results next year [6, 12]. The challenge is manifold:

  • The legacy documents available in the digital library in Indian languages use fonts that are not recognized by screen reader software. There is a need to convert these into formats like ePUB that are accessible.

  • Mathematics still poses a major challenge, as the equations in pdf documents are very often available only as images. Even otherwise, delivering a complex equation in an audio format that is linear yet comprehensible is a challenge. One of the visually impaired students in the group (Mr. Akashdeep Bansal) has taken this up as his PhD research topic. We are also collaborating with Prof. Volker Sorge at the University of Birmingham (UK), who has extensive experience in this field.

  • Navigating through tables efficiently needs some research as well as tooling.

  • Diagrams require associated descriptions for delivery. Recent AI techniques are making great progress in automatically describing images, and we would like to adapt these techniques for the automatic generation of diagram descriptions
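
To illustrate the linearization problem mentioned in the mathematics bullet, here is a deliberately naive sketch that turns a tiny subset of LaTeX into spoken words. All names are illustrative and only non-nested fractions and a couple of operators are handled; real math-to-speech systems of the kind Prof. Sorge works on must deal with nesting, semantics, and prosody far more carefully.

```python
import re

def speak_math(latex):
    """Naively linearize a tiny subset of LaTeX into spoken English.
    Only non-nested \\frac, '^2', '+' and '=' are handled."""
    s = re.sub(r"\\frac\{([^{}]*)\}\{([^{}]*)\}", r"\1 over \2", latex)
    s = s.replace("^2", " squared")
    s = s.replace("+", " plus ").replace("=", " equals ")
    return re.sub(r"\s+", " ", s).strip()

print(speak_math(r"\frac{a+b}{c} = d^2"))  # → a plus b over c equals d squared
```

Even this toy output exposes the core difficulty: “a plus b over c” is ambiguous between (a+b)/c and a+(b/c) without explicit grouping or prosody, which is exactly what the ongoing research addresses.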

5.5.2 MAVI

Mobility Assistant for Visually Impaired (MAVI) is a project that uses modern AI-based image classification techniques for the safe and efficient mobility of the visually impaired. The focus of this work is object detection in the context of the street infrastructure typical of Indian cities. Initial prototypes have been built, but a usable solution is likely only by 2021. The video stream from the camera is processed by multiple detection pipelines to detect:

  • Stray animals on the street, like dogs and cattle, for safety

  • Potholes at a distance, again for safety

  • Multi-lingual street signage, to assist in navigation

  • Faces, for social inclusion

5.5.3 NAVI

Navigation Assistant for Visually Impaired (NAVI) is a mobile app based solution that can help in outdoor as well as indoor mobility. Mr. Piyush Chanana, a senior scientist in ASSISTECH, came to understand the challenges VI persons face in independent mobility by interacting with the hundreds of blind users he has trained in the use of SmartCane. He has captured this understanding in the specification and design of an app that can help the visually impaired in outdoor mobility. Among other things, it involves the annotation of tactile landmarks that are “visible” to a blind user [5, 9].

Currently another PhD student, Mr. Vikas Upadhyay, has started working on indoor navigation. The work involves effective mapping of internal spaces, localization with limited additional infrastructure, and an appropriate user interface for the visually impaired. The intent is not only to propose novel algorithms and techniques but also to install the system in a couple of public buildings and validate it.

5.5.4 Outreach Through Conferences

This period also saw our efforts to reach out and create forums where all Assistive Technology stakeholders could come together. Initially we organized two Indo-US workshops, each with the participation of 50+ persons in this space. This was followed by two major assistive technology conferences, in 2018 and 2019. EMPOWER 2018 and EMPOWER 2019 brought 250+ AT researchers, users, innovators, entrepreneurs, educators, and exhibitors onto one platform and have been a major success.

5.5.5 Major Recognitions

This period also saw numerous awards being conferred for ASSISTECH activities. A spotlight talk at the Grand Challenges meeting organized by the Gates Foundation and Wellcome Trust in London on 24th Oct 2016 was the first major recognition of ASSISTECH activities. Major Indian awards included the NCPEDP-Mphasis Universal Design award and three national awards. ACM recognized our contribution through the ACM Eugene L. Lawler Award for Humanitarian Contributions within Computer Science and Informatics at their annual awards function in San Francisco on 15th June 2019.

5.6 Conclusion

Clearly, my venturing into Assistive Technology from the Embedded Systems/EDA space was an accident, and thus I have titled this paper an accidental journey. The work has been hugely satisfying, primarily because of the impact it potentially has on the lives of visually impaired people. The feedback we frequently get from our numerous users on how our devices and solutions have positively affected their lives is sufficient to drive and inspire us. Over time, the ASSISTECH design philosophy has become completely user-centric. In the initial years we used to choose problems based on our own experience and expertise. Now, when we learn of a major challenge in the visually impaired community, we scout around and try to put together a team by collaborating with people with the required expertise. Our motto is to touch a million people by 2022.