Publications

Reports

Workshop Report: Digital Solutions for Inclusive Mobility?

Claudia Loitsch & Karin Müller

This workshop report was produced as part of “Mensch und Computer 2020”, the largest German-speaking professional conference in the field of human-computer interaction. It summarises the workshop held during the conference, in which various projects in the field of digital mobility were presented and discussed. The report gives an insight into the following topics:

  • AccessibleMaps – Accessible Indoor Maps
  • Linked Data for Accessibility
  • ASSIST ALL: Virtual assistant for indoor orientation for people with disabilities
  • DYNAMIK: A requirement-oriented navigation app
  • WheelShare: Development of an accessible outdoor map using machine learning and crowdsourcing

The report is available in German only and can be downloaded below in an accessible format:

Challenges for people with impairments in unfamiliar buildings

Christin Engel

This report presents the results of an online survey with 136 participants with blindness, visual impairment or mobility impairment, conducted in early 2020 as part of the research project’s target group analysis and needs assessment. The aim of the study was to analyse how people with blindness, visual impairment or mobility impairment currently orient themselves and find their way in unfamiliar buildings. In addition to orientation strategies, the survey examined challenges in orientation and the need for information about the accessibility of buildings. It also covers the participants’ experiences with unfamiliar buildings, navigation applications and maps, and records requirements for maps as well as preferred map formats and types. The results of the survey form the basis for designing a target-group-oriented, needs-based map application for buildings; within the project, requirements for the development of a mobile, digital map application are derived from them.

This report is currently only available in German.

Published Papers

HIDA: Towards Holistic Indoor Understanding for the Visually Impaired via Semantic Instance Segmentation with a Wearable Solid-State LiDAR Sensor

Huayao Liu, Ruiping Liu, Kailun Yang, Jiaming Zhang, Kunyu Peng, Rainer Stiefelhagen. International Workshop on Assistive Computer Vision and Robotics (ACVR) with IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, Canada (Virtual), October 2021.

Abstract: Independently exploring unknown spaces or finding objects in an indoor environment is a daily but challenging task for visually impaired people. However, common 2D assistive systems lack depth relationships between various objects, making it difficult to obtain an accurate spatial layout and the relative positions of objects. To tackle these issues, we propose HIDA, a lightweight assistive system based on 3D point cloud instance segmentation with a solid-state LiDAR sensor, for holistic indoor detection and avoidance. The entire system consists of three hardware components, two interactive functions (obstacle avoidance and object finding) and a voice user interface. Guided by voice, the user performs an on-site scan that captures a point cloud of the most recent state of the changing indoor environment. In addition, we design a point cloud segmentation model with dual lightweight decoders for semantic and offset predictions, which preserves the efficiency of the whole system. After the 3D instance segmentation, we post-process the segmented point cloud by removing outliers and projecting all points onto a top-view 2D map representation. The system integrates the information above and interacts with users intuitively through acoustic feedback. The proposed 3D instance segmentation model achieves state-of-the-art performance on the ScanNet v2 dataset. Comprehensive field tests with various tasks in a user study verify the usability and effectiveness of our system for assisting visually impaired people in holistic indoor understanding, obstacle avoidance and object search.

https://openaccess.thecvf.com/content/ICCV2021W/ACVR/html/Liu_HIDA_Towards_Holistic_Indoor_Understanding_for_the_Visually_Impaired_via_ICCVW_2021_paper.html
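
The post-processing step the abstract describes (outlier removal followed by a top-view projection) can be illustrated with a minimal sketch. This is not the authors’ code; the statistical-outlier criterion, the grid resolution and all names are illustrative assumptions.

    # Illustrative sketch, not the authors' implementation: remove outliers from
    # a labeled point cloud, then project the remaining points onto a top-view
    # 2D grid in which each cell keeps the label of its highest point.
    import numpy as np

    def top_view_map(points, labels, cell_size=0.05):
        """points: (N, 3) array, x/y ground plane, z up; labels: (N,) instance ids."""
        # Crude statistical outlier removal: drop points far from the centroid.
        d = np.linalg.norm(points - points.mean(axis=0), axis=1)
        keep = d < d.mean() + 2.0 * d.std()
        points, labels = points[keep], labels[keep]

        # Discretise x/y coordinates into grid indices.
        idx = ((points[:, :2] - points[:, :2].min(axis=0)) / cell_size).astype(int)
        h, w = idx.max(axis=0) + 1

        grid = np.full((h, w), -1, dtype=int)      # -1 marks free space
        height = np.full((h, w), -np.inf)
        for (i, j), z, lab in zip(idx, points[:, 2], labels):
            if z > height[i, j]:                   # keep the topmost point per cell
                height[i, j], grid[i, j] = z, lab
        return grid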


Trans4Trans: Efficient Transformer for Transparent Object Segmentation To Help Visually Impaired People Navigate in the Real World

Jiaming Zhang, Kailun Yang, Angela Constantinescu, Kunyu Peng, Karin Müller, Rainer Stiefelhagen. International Workshop on Assistive Computer Vision and Robotics (ACVR) with IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, Canada (Virtual), October 2021.

Abstract: Common fully glazed facades and transparent objects present architectural barriers and impede the mobility of people with low vision or blindness: for instance, a path detected behind a glass door is inaccessible unless the door is correctly perceived and reacted to. However, segmenting such safety-critical objects is rarely covered by conventional assistive technologies. To tackle this issue, we construct a wearable system with a novel dual-head Transformer for Transparency (Trans4Trans) model, which is capable of segmenting general and transparent objects and performing real-time wayfinding to help people walking alone travel more safely. In particular, both decoders, created by our proposed Transformer Parsing Module (TPM), enable effective joint learning from different datasets. Moreover, the efficient Trans4Trans model, composed of a symmetric transformer-based encoder and decoder, requires little computational expense and is readily deployed on portable GPUs. Our Trans4Trans model outperforms state-of-the-art methods on the test sets of the Stanford2D3D and Trans10K-v2 datasets, obtaining mIoU scores of 45.13% and 75.14%, respectively. Through various pre-tests and a user study conducted in indoor and outdoor scenarios, the usability and reliability of our assistive system have been extensively verified.

https://openaccess.thecvf.com/content/ICCV2021W/ACVR/html/Zhang_Trans4Trans_Efficient_Transformer_for_Transparent_Object_Segmentation_To_Help_Visually_ICCVW_2021_paper.html
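
The dual-head design described above can be sketched schematically: one shared encoder feeds two task-specific decoders, each trained on its own dataset. The sketch below only illustrates that idea; the real Trans4Trans uses transformer-based components (the TPM), not these stand-in convolutional blocks.

    # Schematic sketch of a dual-head segmentation model: a shared encoder and
    # two decoders, one for general objects and one for transparent objects.
    # Stand-in conv blocks replace the paper's transformer encoder and TPM decoders.
    import torch.nn as nn

    class DualHeadSegmenter(nn.Module):
        def __init__(self, n_general, n_transparent):
            super().__init__()
            self.encoder = nn.Sequential(          # stand-in for the transformer encoder
                nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            )
            def head(n_classes):                   # stand-in for a TPM-based decoder
                return nn.Sequential(
                    nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(64, n_classes, 1),
                    nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
                )
            self.general_head = head(n_general)
            self.transparent_head = head(n_transparent)

        def forward(self, x):
            features = self.encoder(x)             # shared representation
            return self.general_head(features), self.transparent_head(features)

Each head can then be supervised with its own dataset and loss, which is the joint-learning setup the abstract refers to.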


Panoptic Lintention Network: Towards Efficient Navigational Perception for the Visually Impaired

Wei Mao, Jiaming Zhang, Kailun Yang, Rainer Stiefelhagen. IEEE International Conference on Real-time Computing and Robotics (RCAR), Xining, China, July 2021.

Abstract: Classic computer vision algorithms, instance segmentation and semantic segmentation cannot provide a holistic understanding of the surroundings for the visually impaired. In this paper, we utilize panoptic segmentation to assist the navigation of visually impaired people by efficiently offering awareness of both things and stuff in their proximity. To this end, we propose an efficient attention module, Lintention, which can model long-range interactions in linear time using linear space. Based on Lintention, we then devise a novel panoptic segmentation model, which we term Panoptic Lintention Net. Experiments on the COCO dataset indicate that the Panoptic Lintention Net raises the Panoptic Quality (PQ) from 39.39 to 41.42, a 4.6% performance gain, while requiring 10% fewer GFLOPs and 25% fewer parameters in the semantic branch. Furthermore, a real-world test with our compact wearable panoptic segmentation system indicates that a system based on the Panoptic Lintention Net achieves relatively stable and remarkably good panoptic segmentation in real-world scenes.

https://ieeexplore.ieee.org/document/9517615
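
Attention in linear time and space, as claimed for Lintention above, is commonly achieved by replacing the softmax with a kernel feature map so the key-value product can be pre-aggregated once. The sketch below shows that generic trick; it is an assumption for illustration, not the paper’s actual Lintention formulation.

    # Generic linear-attention sketch: O(N) time and memory in sequence length
    # via a positive kernel feature map (elu + 1). Illustrative only; this is
    # not the Lintention module proposed in the paper.
    import torch
    import torch.nn.functional as F

    def linear_attention(q, k, v, eps=1e-6):
        """q, k, v: (batch, length, dim) tensors."""
        phi_q = F.elu(q) + 1                          # positive feature maps
        phi_k = F.elu(k) + 1
        kv = torch.einsum("bnd,bne->bde", phi_k, v)   # aggregate keys/values once, O(N)
        norm = torch.einsum("bnd,bd->bn", phi_q, phi_k.sum(dim=1)) + eps
        return torch.einsum("bnd,bde->bne", phi_q, kv) / norm.unsqueeze(-1)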


Perception Framework through Real-Time Semantic Segmentation and Scene Recognition on a Wearable System for the Visually Impaired

Yingzhi Zhang, Haoye Chen, Kailun Yang, Jiaming Zhang, Rainer Stiefelhagen. IEEE International Conference on Real-time Computing and Robotics (RCAR), Xining, China, July 2021.

Abstract: As scene information, including objectness and scene type, is important for people with visual impairment, in this work we present an efficient multi-task perception system for scene parsing and scene recognition. Building on a compact ResNet backbone, our network architecture has two paths with shared parameters. In this structure, the semantic segmentation path integrates fast attention, with the aim of harvesting long-range contextual information in an efficient manner. Simultaneously, the scene recognition path infers the scene type by passing the semantic features into semantic-driven attention networks and combining the semantically extracted representations with the RGB-extracted representations through a gated attention module. In the experiments, we have verified the system’s accuracy and efficiency on both public datasets and real-world scenes. The system runs on a wearable belt with an Intel RealSense LiDAR camera and an Nvidia Jetson AGX Xavier processor, which can accompany visually impaired people and provide assistive scene information in their navigation tasks.

https://ieeexplore.ieee.org/document/9517086
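
A gated attention module of the kind the abstract mentions can be sketched as a learned, sigmoid-weighted blend of the two feature streams. This is a minimal illustration under assumed layer sizes, not the paper’s architecture.

    # Minimal sketch of gated fusion of semantic and RGB feature maps: a learned
    # sigmoid gate decides, per channel and location, how much each stream
    # contributes. Illustrative assumption, not the paper's exact module.
    import torch
    import torch.nn as nn

    class GatedFusion(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.gate = nn.Sequential(nn.Conv2d(2 * dim, dim, 1), nn.Sigmoid())

        def forward(self, semantic_feat, rgb_feat):
            g = self.gate(torch.cat([semantic_feat, rgb_feat], dim=1))
            return g * semantic_feat + (1 - g) * rgb_feat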


Helping the Blind to Get through COVID-19: Social Distancing Assistant Using Real-Time Semantic Segmentation on RGB-D Video

Manuel Martinez, Kailun Yang, Angela Constantinescu, Rainer Stiefelhagen. Sensors, 2020.

Abstract: The current COVID-19 pandemic is having a major impact on our daily lives. Social distancing is one of the measures that has been implemented with the aim of slowing the spread of the disease, but it is difficult for blind people to comply with. In this paper, we present a system that helps blind people to maintain a physical distance from other persons using a combination of RGB and depth cameras. We use a real-time semantic segmentation algorithm on the RGB camera to detect where persons are and use the depth camera to assess the distance to them; we then provide audio feedback through bone-conducting headphones if a person is closer than 1.5 m. Our system warns the user only if persons are nearby and does not react to non-person objects such as walls, trees or doors; thus, it is not intrusive, and it is possible to use it in combination with other assistive devices. We have tested our prototype system with one blind and four blindfolded persons, and found that the system is precise, easy to use, and imposes a low cognitive load.

https://www.mdpi.com/1424-8220/20/18/5202
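
The core warning logic described above is simple enough to sketch: combine the person mask from the segmentation network with the aligned depth image and warn below the 1.5 m threshold. The function below is an illustrative sketch, not the published implementation.

    # Sketch of the distance-warning check: warn only when a segmented person
    # pixel has a valid depth reading below the 1.5 m threshold. Because only
    # the person class is checked, walls, trees or doors never trigger it.
    import numpy as np

    WARN_DISTANCE_M = 1.5

    def should_warn(person_mask, depth_m):
        """person_mask: (H, W) bool array from the segmentation network;
        depth_m: (H, W) depth in metres, 0 where the sensor has no reading."""
        person_depths = depth_m[person_mask & (depth_m > 0)]
        return person_depths.size > 0 and person_depths.min() < WARN_DISTANCE_M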


Analyzing the Design of Tactile Indoor Maps

Engel C., Weber G. (2021) Analyzing the Design of Tactile Indoor Maps. In: Ardito C. et al. (eds) Human-Computer Interaction – INTERACT 2021. INTERACT 2021. Lecture Notes in Computer Science, vol 12932. Springer, Cham. https://doi.org/10.1007/978-3-030-85623-6_26

Abstract: Tactile maps can increase the mobility of people with blindness and help them acquire spatial information about unknown environments. Exploring tactile maps, however, can be a hard task. Research on the design of tactile maps, especially the design and meaningfulness of tactile symbols, mostly addresses outdoor environments. The design of tactile indoor maps has been studied less frequently, although indoor environments differ significantly from outdoor ones. Therefore, in this paper, 58 tactile indoor maps are investigated in terms of the design of the headline, additional map information, legend, walls and the information presentation types used. In addition, the design of common objects in indoor environments, such as doors, entrances and exits, toilets, stairs and elevators, is examined in more detail and commonly used symbols are extracted. These findings form the basis for further user studies to gain insights into the effective design of indoor maps.


Travelling more independently: A requirements analysis for accessible journeys to unknown buildings for people with visual impairments

Christin Engel, Karin Müller, Angela Constantinescu, Claudia Loitsch, Vanessa Petrausch, Gerhard Weber, Rainer Stiefelhagen
The 22nd International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS 2020)

Abstract:
It is much more difficult for people with visual impairments to plan and implement a journey to unknown places than for sighted people, because in addition to the usual travel arrangements, they also need to know whether the different parts of the travel chain are accessible at all. The need for information is therefore presumably very high, ranging from knowledge about the accessibility of public transport to that of outdoor and indoor environments. However, to the best of our knowledge, there is no study that examines in depth the requirements of both the planning of a trip and its implementation, looking separately at the particular needs of people with low vision and blindness. In this paper, we present a survey of 106 people with visual impairments, in which we examine the strategies they use to prepare for a journey to unknown buildings, how they orient themselves in unfamiliar buildings and what materials they use. Our analysis shows that the requirements of people with blindness and low vision differ. The feedback from the participants reveals a large information gap, especially for orientation in buildings, regarding maps, the accessibility of buildings and supporting systems. In particular, indoor maps are largely unavailable.

https://doi.org/10.1145/3373625.3417022

Talk on YouTube: https://www.youtube.com/watch?v=wJDUHmUGjws


AccessibleMaps: Addressing Gaps in Maps for People with Visual and Mobility Impairments

Claudia Loitsch, Karin Müller, Christin Engel, Gerhard Weber and Rainer Stiefelhagen
17th International Conference on Computers Helping People with Special Needs

Abstract:
Persons with visual and mobility impairments often have problems when planning and implementing a trip to unknown buildings due to the inaccessibility of the built environment, the unavailability of reliable information, and missing mobility-supporting applications for indoor environments. One reason is the lack of barrier-free indoor maps enriched with accessibility information to support the diverse needs of people with disabilities. This paper provides a comprehensive review of user requirements, mobility-related applications and digital maps. We identify different gaps in supporting indoor mobility, i.e., a lack of (i) dedicated requirement analyses for mobility in unknown buildings, (ii) procedures to improve the coverage of digital indoor maps, (iii) standards for barrier-free map representations and (iv) location-based indoor services that meet the needs of people with disabilities. In addition, we introduce the AccessibleMaps project, which addresses some of these gaps by automatically generating indoor maps enriched with accessibility features.

https://doi.org/10.1007/978-3-030-58805-2_34


Analysis of Indoor Maps Accounting the Needs of People with Impairments

Julian Striegl, Claudia Loitsch, Jan Schmalfuß-Schwarz and Gerhard Weber
17th International Conference on Computers Helping People with Special Needs

Abstract:
Digital indoor maps are still in an early stage of development, but the demand for indoor location-based services is increasing continuously. Especially people with disabilities can benefit from accurate indoor maps with information regarding the accessibility of indoor environments. Currently, there are no widely accepted open standards for the expression of accessibility information in indoor maps. Furthermore, there is a lack of methods to assess whether indoor maps comply with the requirements of people with disabilities in terms of orientation and indoor navigation. To address this problem, this paper presents a first analysis of the quantity and quality of indoor maps in OpenStreetMap for selected example cities. The results show that the number of mapped indoor environments in OpenStreetMap is still sparse. On average, only one building per city has a completely mapped indoor environment, and the number of buildings with accessibility information is even smaller. This indicates that crowd-sourcing approaches should be supported by automated mapping processes, and that an ongoing analysis of indoor maps accounting for the needs of people with disabilities should be conducted in order to ensure the quality of the provided indoor geospatial information.

https://doi.org/10.1007/978-3-030-58805-2_36
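
One simple way to reproduce such a quantity analysis is to query OpenStreetMap’s Overpass API for elements carrying the indoor tag within a city boundary. The query below is an illustrative sketch of that approach, not the paper’s exact methodology; the city name is a placeholder.

    # Sketch: count OpenStreetMap elements tagged with the "indoor" key inside a
    # city's administrative boundary via the public Overpass API. Illustrative
    # only; the paper's actual analysis methodology may differ.
    import requests

    QUERY = """
    [out:json][timeout:60];
    area["name"="Dresden"]["boundary"="administrative"]->.city;
    nwr["indoor"](area.city);
    out count;
    """

    response = requests.post("https://overpass-api.de/api/interpreter",
                             data={"data": QUERY})
    response.raise_for_status()
    # The count statement returns one pseudo-element whose tags hold the totals.
    print(response.json()["elements"][0]["tags"])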


Considering Time-critical Barriers in Indoor Routing for People with Disabilities

Jan Schmalfuß-Schwarz, Claudia Loitsch and Gerhard Weber
17th International Conference on Computers Helping People with Special Needs

Abstract:
The usage of indoor map applications is growing, and their importance is also increasing within the group of people with disabilities. Therefore, different approaches have already been developed to support users on their way. However, these solutions do not prevent them from running into a dead end because of unknown insurmountable barriers. These barriers are often not included in the data set of a building, since they have no fixed location and are temporary. For this reason, it is important to classify them by their characteristics and to develop a system that detects them and makes them available to routing applications. To address this, we present in this paper a classification of barriers based on their time dependency and show an exemplary subdivision based on the three barrier types stairs, defective elevator and wet floor. Furthermore, we draft a first proposal for an adaptive system for routing people with disabilities that captures time-critical barriers.

https://doi.org/10.1007/978-3-030-58805-2_37
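
The time-dependency classification sketched in the abstract lends itself to a small data model: each barrier carries a time-dependency class, and a router filters out edges blocked by barriers the user cannot overcome. The sketch below uses the paper’s three example barriers; all type names and fields are illustrative assumptions.

    # Illustrative data model for time-dependent barriers and a routing filter.
    # Class names, fields and example values are assumptions for this sketch.
    from dataclasses import dataclass
    from enum import Enum

    class TimeDependency(Enum):
        STATIC = "static"            # permanent, e.g. stairs
        SEMI_DYNAMIC = "semi"        # lasts hours or days, e.g. a defective elevator
        DYNAMIC = "dynamic"          # short-lived, e.g. a wet floor

    @dataclass(frozen=True)
    class Barrier:
        kind: str                    # "stairs", "defective elevator", "wet floor", ...
        dependency: TimeDependency
        edge_id: str                 # edge of the indoor route network it blocks

    def passable_edges(edge_ids, barriers, insurmountable_kinds):
        """Drop edges blocked by a barrier the user cannot overcome. A routing
        service would re-run this whenever a dynamic barrier is reported."""
        blocked = {b.edge_id for b in barriers if b.kind in insurmountable_kinds}
        return [e for e in edge_ids if e not in blocked]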


Can We Unify Perception and Localization in Assisted Navigation? An Indoor Semantic Visual Positioning System for Visually Impaired People

Haoye Chen, Yingzhi Zhang, Kailun Yang, Manuel Martinez, Karin Müller and Rainer Stiefelhagen
17th International Conference on Computers Helping People with Special Needs

Abstract:
Navigation assistance has made significant progress in recent years with the emergence of different approaches that allow users to perceive their surroundings and localize themselves accurately, which greatly improves the mobility of visually impaired people. However, most existing systems address each of these tasks individually, which increases the response time, a clear drawback for a safety-critical application. In this paper, we aim to cover the scene perception and visual localization needed by navigation assistance in a unified way. We present a semantic visual localization system that helps visually impaired people to be aware of their location and surroundings in indoor environments. Our method relies on 3D reconstruction and semantic segmentation of RGB-D images captured from a pair of wearable smart glasses. We can inform the user of an upcoming object via audio feedback so that the user can be prepared to avoid obstacles or interact with the object, which means that visually impaired people can be more active in an unfamiliar environment.

https://doi.org/10.1007/978-3-030-58796-3_13
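
The announcement step mentioned in the abstract (informing the user of an upcoming object) can be illustrated by a small geometric check against a semantically labeled map. The function below is a sketch under assumed thresholds and data shapes, not the published system’s logic.

    # Sketch: given the user's 2D position, a unit heading vector and labeled
    # object positions, pick the nearest object inside a viewing cone and build
    # an announcement string. All thresholds are illustrative assumptions.
    import numpy as np

    def upcoming_object(user_pos, heading, objects, max_dist=3.0, fov_deg=60.0):
        """objects: iterable of (label, (x, y)) pairs in map coordinates."""
        best = None
        for label, pos in objects:
            v = np.asarray(pos, dtype=float) - np.asarray(user_pos, dtype=float)
            dist = np.linalg.norm(v)
            if dist == 0 or dist > max_dist:
                continue
            angle = np.degrees(np.arccos(np.clip(np.dot(v / dist, heading), -1.0, 1.0)))
            if angle < fov_deg / 2 and (best is None or dist < best[1]):
                best = (label, dist)
        return None if best is None else f"{best[0]} ahead, {best[1]:.1f} metres"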


Presentations

W3C Workshop Maps for the Web 2020

Accessible Indoor Maps: Information Need and Automated Solutions to Address Gaps in Maps for People with Disabilities

Julian Striegl, Claudia Loitsch

Workshop Report:
https://www.w3.org/2020/maps/report