Abstract
This paper presents a brainstorming tool combined with pointing gesture recognition to improve the brainstorming meeting experience for blind and visually impaired people (BVIP). In brainstorming meetings, BVIP cannot participate in the conversation as well as sighted users because of the lack of supporting tools for understanding the explicit and implicit meaning of non-verbal communication (NVC). Therefore, the proposed system assists BVIP in interpreting pointing gestures, which play an important role in non-verbal communication. Our system helps BVIP access the contents of the Metaplan card that a team member in the brainstorming meeting refers to by pointing. A prototype of our system shows that the targets on the screen a user is pointing at can be detected with 80% accuracy.
Keywords
- Brainstorming tool
- Web application
- Android application
- Pointing gesture
- Robot Operating System
- Kinect sensor
- OpenPTrack
- Localization
- Recognition
- Non-verbal communication
1 Introduction
Non-verbal communication plays an important role in team meetings, in which we use gestures along with speech to convey the full meaning of our ideas. Usually, those gestures depend on our cultural background, the language we speak, and similar factors. However, this non-verbal communication (NVC) is not accessible to blind and visually impaired people (BVIP) without additional aid; thus, they are unable to participate in meetings to the full extent. To better integrate BVIP in such meetings, we need to provide them with external aids that capture and transfer the spatial information of artifacts as well as referring gestures and other non-verbal communication elements performed by sighted users.
Brainstorming meetings are used in many areas of business and academia, such as medical diagnostics, scientific research, spin-offs, military operations, etc. Considering the wide use of brainstorming meetings, there is a need to build an autonomous system to help BVIP work independently in those meetings. Otherwise, it is very difficult for them to understand the full meaning of the conversation, mainly due to the non-verbal communication.
NVC in brainstorming meetings includes several kinds of gestures performed by the participants, such as nodding, shaking the head, head orientation, pointing gestures, sign language, eye contact, blinking, pointing with the eyes, etc. Thus, the information flow in a team meeting is not simply based on generated artifacts and spoken explanations; it is in particular a manifold of NVC elements that can carry up to 55% of the overall information [13]. These gestures refer to the 3D information space they are performed in.
Spatial aspects of brainstorming meetings also play a vital role in understanding and determining pointing gestures performed by the participants of a meeting. Most people tend to describe the positions of objects in the meeting room relative to their own egocentric viewpoint when referring to them. Spatial artifacts to be considered include whiteboards, the items on the whiteboards, etc. For this paper, we developed a Metaplan brainstorming tool which provides the basis for our spatial artifacts.
Thus, the goal is to transfer NVC elements to BVIP, in particular pointing gestures that refer to artifacts in the 3D information space. For this, we use OpenPTrack along with the Robot Operating System (ROS) [18] to detect the pointing direction of a user with regard to artifacts in a common work space. We have also developed a brainstorming tool with a web interface (the "moderator" interface) and an Android application for the digital interaction between the members of the brainstorming meeting. The content of the corresponding artifact could then be output on a blind user interface such as braille.
This paper is structured as follows: Related work is discussed in Sect. 2, while the methodology is described in Sect. 3. The brainstorming tool is presented in Sect. 3.1, the pointing gesture recognition system in Sect. 3.2, and the combination of both systems in Sect. 3.3. Finally, Sect. 4 concludes our work.
2 State of the Art
Researchers have worked on technology to improve the experience of brainstorming meetings, in particular for sighted people. Pictorial stimuli are used to support group conversations [22]. Graph-based web services have been built to address various problems in meetings [6]. An automatic system to categorize and process the language used in meetings is described in [4]. Mobile phones have been used for brainstorming sessions, acting like a virtual mind map table [11]. There is also commercial as well as free tool support for brainstorming meetings: approaches range from card applications [5, 10] and mind map applications [12, 14] over dedicated brainstorming and decision support software [3, 21] to virtual design spaces and visual management tools [15, 20]. These various kinds of software allow for an improved workflow and help people to collaborate.
There is only little research on improving the integration of BVIP in brainstorming meetings. In [8], mind-map-based brainstorming sessions are described to push the integration of BVIP in meetings. In [19], a mind map along with a LEAP sensor is used for tracking pointing gestures over an interactive horizontal surface. A prototypical system simulated gestures by sighted users and made them accessible to BVIP [16]. A system using a LEAP sensor and speech recognition was developed in [9] to improve tabletop interaction for BVIP by better detecting deictic gestures, which are typically accompanied by specific words that hint at a geometric position. Another approach to detecting pointing gestures in brainstorming meetings used a Kinect and a PixelSense table; it helped BVIP to understand the basic meaning of such gestures [7]. For this, an information infrastructure was developed in [17] to translate the natural behavior of sighted team members and thus reduce the information gap for the BVIP.
3 Methodology
Our approach includes the development of a brainstorming tool and an autonomous system for recognizing pointing gestures. The two systems are then combined to resolve pointing gestures made towards the digital screen showing the brainstorming tool. This combined system helps BVIP to access the content of the brainstorming tool, i.e. the card a sighted user is pointing at.
3.1 Concept of the Brainstorming Tool
The brainstorming tool is a software application that supports brainstorming meetings based on the Metaplan method. It mainly supports two different roles: a moderator and the other participants of the group. These participants can be sighted people as well as BVIP. The moderator organizes the input of the participants, leads the discussion, and asks participants to clarify and resolve input, but neither provides content nor makes decisions. The participants, on the other hand, provide input by editing cards, and in a second step contribute to discussions and participate in the decision-making process. Consequently, the brainstorming tool has two different modes of operation, which are used consecutively following the two phases of Metaplan:
- Participants add cards via a smartphone Android application.
- The moderator operates a web-based user interface, called the whiteboard view, to organize the cards of the participants.
Android App for the Participants. The Android app for the participants intentionally has a relatively small feature set, since a more detailed user interface would distract users from their main task. The functionalities of the Android app are as follows:
- Participants can create cards and edit them.
- Each participant gets an overview of all cards they have created.
- Participants can submit cards to the whiteboard. Once a card is submitted, the participant can no longer delete it from the whiteboard.
Web-Based User Interface for Moderators. The web-based user interface for moderators includes the following functionalities for organizing and facilitating a meeting:
- Organization: Moderators are provided with an overview of meetings. They can create new meetings, invite participants to a meeting from the list of users registered to the system, and modify or delete existing meetings. Moderators can open a meeting multiple times, which allows for multi-screen setups where each screen shows a certain segment of the whole work space.
- Facilitation: In the whiteboard view, moderators can rearrange the cards created by the participants with the Android app. New cards pop up in real time on a stack in a corner of the virtual whiteboard. Moderators can create groups of cards and relations between cards; however, they do not decide on these two types of entities themselves, as they are the output of the group discussion. Likewise, moderators can delete cards, groups, and relations as the result of a group discussion among participants coordinated by the moderator.
Architecture and Technology. The brainstorming tool is based on a client-server architecture (see Fig. 1). The server is based on Laravel (Note 1), which stores data in an SQL database. Laravel also provides the web-based user interface for the moderator. For the dynamic parts of the whiteboard view, which are supposed to change without page reloads, such as real-time modifications of the size, orientation and position of user interface elements or the repositioning and grouping of cards, the JavaScript framework Konva (Note 2) is used to display cards, groups of cards and their relations to each other. Konva allows the moderator to manipulate these items in a user-friendly manner using a mouse or touchscreen.
The server offers two kinds of APIs. Firstly, a RESTful API (Note 3) allows data such as user data, cards, groups and relations to be created, read, updated and deleted. Secondly, a WebSocket (Note 4) service broadcasts changes of such data following the publish-subscribe pattern (Note 5). Clients can subscribe to channels, which correspond to sets of data. If a set of data changes, the server publishes this fact to the corresponding channels, and clients can react to these changes, for instance by updating their cached data.
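As a minimal illustration of how a client could consume these two APIs, the following Python sketch creates a card via REST and subscribes to a meeting channel for updates. The endpoint paths, channel URL, and message format are assumptions for illustration, not the tool's actual API.

```python
import json
import requests                  # pip install requests
import websocket                 # pip install websocket-client

BASE = "http://localhost:8000"   # assumed server address

# Create a card via the RESTful API (hypothetical endpoint and fields).
resp = requests.post(f"{BASE}/api/cards",
                     json={"meeting_id": 1, "text": "Reduce setup time"})
card = resp.json()

# Subscribe to a channel for this meeting and react to published changes
# (channel URL and payload structure are assumptions).
def on_message(ws, message):
    event = json.loads(message)
    if event.get("type") == "card.updated":
        print("Card changed:", event["data"])  # e.g. refresh cached data

ws = websocket.WebSocketApp("ws://localhost:8000/ws/meetings/1",
                            on_message=on_message)
ws.run_forever()
```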
3.2 Pointing Gesture Recognition System
The pointing gesture recognition system [2] uses a Kinect v2 sensor. The sensor data is fed into ROS 1 (Robot Operating System) and analyzed by OpenPTrack [1] to obtain the joint coordinates of the pointing arm. These joint coordinates are then used for assessing the pointing gesture performed by the user. Each joint has a unique ID, and its x, y, z coordinates are published under that ID. The sensor's reference frame is transformed to the world reference frame using the ROS tf package, which applies the necessary rigid transformations, i.e. rotations and translations, to obtain coordinates in the world reference frame.
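A minimal rospy sketch of this step, assuming a placeholder topic and message type for the published joint coordinates (OpenPTrack's actual topic names and message layout differ) and using tf to express points in the world frame:

```python
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers transform support for PointStamped
from geometry_msgs.msg import PointStamped

rospy.init_node("pointing_gesture_listener")
tf_buffer = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(tf_buffer)

def joint_callback(msg):
    # Transform the joint position from the sensor's frame into the
    # world frame (rotation + translation handled by tf).
    world_pt = tf_buffer.transform(msg, "world", rospy.Duration(0.1))
    rospy.loginfo("joint in world frame: %s", world_pt.point)

# Placeholder topic name and message type for illustration only.
rospy.Subscriber("/tracker/joint_position", PointStamped, joint_callback)
rospy.spin()
```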
The pointing gesture consists of an arm movement towards the referred-to object, with the hand pointing at it. The hand gesture is usually accompanied by speech referring to the same direction. We calculated the pointing direction from the elbow and hand position coordinates, which yield the forearm vector used as the pointing vector. We used the mathematical transformation shown in Eq. 1. For this, we used the normal \(\textit{\textbf{N}}_f\) to the plane, a predefined point \(P_f\) on the ground plane, the position \(H\) of the hand, and the position \(E\) of the elbow joint, respectively.
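These definitions determine the pointing target by a standard ray-plane intersection: the forearm ray from \(E\) through \(H\) is extended until it meets the plane. A sketch consistent with these definitions, with \(T\) denoting the target point on the plane (a symbol introduced here for illustration):

\[
T \;=\; H \;+\; \frac{(P_f - H)\cdot \textit{\textbf{N}}_f}{(H - E)\cdot \textit{\textbf{N}}_f}\,(H - E)
\]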
The plane coordinate frame is the plane in which the output screen (the common work space for the Metaplan) is placed. Coordinate positions in the world reference frame are transformed to the plane coordinate frame of the output screen using a rotation matrix; the output values from OpenPTrack are thus converted to the whiteboard plane coordinate frame. The tf package in ROS is used for this coordinate transformation. The transformed position values are then analyzed with respect to the positions of the brainstorming tool's cards displayed on the screen. After obtaining the position of the card being pointed at, the card's content could be converted to speech and made available to the BVIP (Fig. 2).
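The mapping from a world-frame hit point to a card can be sketched as follows. The rotation matrix, plane origin, and card rectangles are placeholder values for illustration, not calibration data from our setup:

```python
import numpy as np

# Assumed extrinsics of the screen plane: rotation R and origin O of the
# plane frame, expressed in the world frame (placeholder values).
R = np.eye(3)                      # world -> plane rotation (assumed)
O = np.array([0.0, 0.0, 1.0])      # plane origin in world coordinates

def world_to_plane(p_world):
    """Express a world-frame point in the plane (screen) frame."""
    return R.T @ (np.asarray(p_world) - O)

# Card layout on the whiteboard: id -> (x_min, y_min, x_max, y_max)
# in plane coordinates (illustrative values).
cards = {"card_17": (0.0, 0.0, 0.3, 0.2),
         "card_23": (0.4, 0.0, 0.7, 0.2)}

def card_at(p_world):
    """Return the id of the card whose rectangle contains the hit point."""
    x, y, _ = world_to_plane(p_world)
    for card_id, (x0, y0, x1, y1) in cards.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return card_id
    return None

print(card_at([0.5, 0.1, 1.0]))   # -> "card_23" with these placeholders
```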
3.3 Combination of Brainstorming Tool and Pointing Gesture Recognition System
After developing the brainstorming tool and the pointing gesture recognition system, the two systems are combined to better integrate BVIP in brainstorming meetings, as shown in Fig. 3. The pointing gesture recognition system assesses the position of the card which is being pointed at by the moderator. This card carries the information that has to be conveyed to the BVIP. The system helps a BVIP to be better integrated and to access the complete meaning of the conversation by knowing which contents the participants are talking about. It is a two-fold process: (1) A user points at the digital whiteboard on which the web application of the brainstorming tool is displayed; the pointing gesture recognition system identifies the gesture and the target position of the pointing gesture. (2) The identified position is correlated to the content being displayed on the screen at that time to retrieve the contents of the corresponding artifact. In preliminary user studies on a screen divided into six equally distributed areas, this combined setup detected the target position of a pointing gesture with 80% accuracy.
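For this evaluation setting, the quantization of a plane-frame hit point into one of six equal screen areas can be sketched as below. A 3x2 grid is an assumption; the paper only states that six equally distributed areas were used:

```python
def target_area(x, y, width, height, cols=3, rows=2):
    """Map a plane-frame hit point to one of cols*rows equal screen areas.

    Returns an area index in [0, cols*rows) or None if off-screen.
    The 3x2 layout is assumed for illustration.
    """
    if not (0 <= x < width and 0 <= y < height):
        return None
    col = int(x / (width / cols))
    row = int(y / (height / rows))
    return row * cols + col

print(target_area(1.2, 0.3, width=1.8, height=1.0))  # -> area 2
```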
4 Conclusion
We built a brainstorming tool and an automatic pointing gesture recognition system, which work together in a synchronous manner to help BVIP access the full meaning of NVC. The output of our system could be delivered to the BVIP via audio/speech or a braille display.
The pointing gesture recognition system is based on the pre-existing software OpenPTrack and ROS. The system outputs the position that a pointing gesture targets on the digital screen showing the web application of the brainstorming tool. Future work will also address the output medium for the BVIP: we plan to use a magnetically driven 2D actuation system along with a braille display and audio output.
Notes
1. Laravel - PHP web framework: https://laravel.com.
2. Konva - JavaScript 2D canvas library: https://konvajs.org.
3.
4. WebSocket: https://en.wikipedia.org/wiki/WebSocket.
5.
References
Carraro, M., Munaro, M., Burke, J., Menegatti, E.: Real-time marker-less multi-person 3D pose estimation in RGB-depth camera networks. In: Strand, M., Dillmann, R., Menegatti, E., Ghidoni, S. (eds.) IAS 2018. AISC, vol. 867, pp. 534–545. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-01370-7_42
Dhingra, N., Valli, E., Kunz, A.: Recognition and localisation of pointing gestures using a RGB-D camera. arXiv preprint arXiv:2001.03687 (2020)
Groupmap - collaborative brainstorming & group decision-making. https://www.groupmap.com/. Accessed 14 Apr 2020
Huber, B., Shieber, S., Gajos, K.Z.: Automatically analyzing brainstorming language behavior with Meeter. Proc. ACM Hum.-Comput. Interact. 3(CSCW), 1–17 (2019)
Ideaflip - realtime brainstorming and collaboration. https://ideaflip.com/. Accessed 14 Apr 2020
Ivanov, A., Cyr, D.: The concept plot: a concept mapping visualization tool for asynchronous web-based brainstorming sessions. Inf. Vis. 5(3), 185–191 (2006)
Kunz, A., Alavi, A., Sinn, P.: Integrating pointing gesture detection for enhancing brainstorming meetings using Kinect and PixelSense. In: Disruptive Innovation in Manufacturing Engineering towards the 4th Industrial Revolution, 25-28 March 2014, Stuttgart, Germany, p. 28 (2014)
Kunz, A., et al.: Accessibility of brainstorming sessions for blind people. In: Miesenberger, K., Fels, D., Archambault, D., Peňáz, P., Zagler, W. (eds.) ICCHP 2014. LNCS, vol. 8547, pp. 237–244. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-08596-8_38
Kunz, A., Schnelle-Walka, D., Alavi, A., Pölzer, S., Mühlhäuser, M., Miesenberger, K.: Making tabletop interaction accessible for blind users. In: Proceedings of the Ninth ACM International Conference on Interactive Tabletops and Surfaces, pp. 327–332 (2014)
Lino - sticky and photo sharing for you. http://en.linoit.com/. Accessed 14 Apr 2020
Lucero, A., Keränen, J., Korhonen, H.: Collaborative use of mobile phones for brainstorming. In: Proceedings of the 12th International Conference on Human Computer Interaction with Mobile Devices and Services, pp. 337–340 (2010)
Lucidchart - online mind map maker. https://www.lucidchart.com. Accessed 14 Apr 2020
Mehrabian, A., Ferris, S.: Inference of attitudes from nonverbal communication in two channels. J. Consult. Clin. Psychol. 3, 248–252 (1967)
Miro - mind map software built with teams in mind. https://miro.com/. Accessed 14 Apr 2020
Mural - online brainstorming, synthesis and collaboration. https://mural.co/. Accessed 14 Apr 2020
Pölzer, S., Miesenberger, K.: Presenting non-verbal communication to blind users in brainstorming sessions. In: Miesenberger, K., Fels, D., Archambault, D., Peňáz, P., Zagler, W. (eds.) ICCHP 2014. LNCS, vol. 8547, pp. 220–225. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-08596-8_35
Pölzer, S., Schnelle-Walka, D., Pöll, D., Heumader, P., Miesenberger, K.: Making brainstorming meetings accessible for blind users. In: AAATE Conference (2013)
Quigley, M., et al.: ROS: an open-source robot operating system. In: ICRA Workshop on Open Source Software, Kobe, Japan, vol. 3, p. 5 (2009)
Schnelle-Walka, D., Alavi, A., Ostie, P., Mühlhäuser, M., Kunz, A.: A mind map for brainstorming sessions with blind and sighted persons. In: Miesenberger, K., Fels, D., Archambault, D., Peňáz, P., Zagler, W. (eds.) ICCHP 2014. LNCS, vol. 8547, pp. 214–219. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-08596-8_34
Stormboard. https://www.stormboard.com/. Accessed 14 Apr 2020
Stormz - meeting software for demanding facilitators. https://stormz.me/de. Accessed 14 Apr 2020
Wang, H.C., Cosley, D., Fussell, S.R.: Idea expander: supporting group brainstorming with conversationally triggered visual thinking stimuli. In: Proceedings of the 2010 ACM Conference on Computer Supported Cooperative Work, pp. 103–106 (2010)
Acknowledgements
This work has been supported by the Swiss National Science Foundation (SNF) under the grant no. 200021E 177542/1. It is part of a joint project between TU Darmstadt, ETH Zurich, and JKU Linz with the respective funding organizations DFG (German Research Foundation), SNF (Swiss National Science Foundation) and FWF (Austrian Science Fund).
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2020 The Author(s)
About this paper
Cite this paper
Dhingra, N., Koutny, R., Günther, S., Miesenberger, K., Mühlhäuser, M., Kunz, A. (2020). Pointing Gesture Based User Interaction of Tool Supported Brainstorming Meetings. In: Miesenberger, K., Manduchi, R., Covarrubias Rodriguez, M., Peňáz, P. (eds) Computers Helping People with Special Needs. ICCHP 2020. Lecture Notes in Computer Science(), vol 12377. Springer, Cham. https://doi.org/10.1007/978-3-030-58805-2_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-58804-5
Online ISBN: 978-3-030-58805-2