Computer Vision Publications

In the AccessibleMaps project, the detection of accessibility features in buildings using a camera or other sensors and methods based on artificial intelligence is being researched, and new approaches are being developed. A large proportion of the developed methods can be used both indoors and outdoors.

Several mobile prototypes have been developed, tested, and evaluated. Since blind people work on the project, the use of these prototypes by blind people is also examined.

The following aspects, each covered in its own section below, are investigated in this work.

During the project, the following articles on the various topics have so far been published with the involvement of AccessibleMaps project staff:

Robust Semantic Scene Understanding

In this research branch, we develop robust algorithms for semantic scene understanding with the goal of extracting and recognizing objects and materials from real-world scenes efficiently, reliably, and comprehensively, using data captured by various sensors.
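
To make concrete what semantic scene understanding delivers, the following minimal sketch runs an off-the-shelf segmentation network from torchvision on a single image and assigns a category to every pixel. The model choice and the input file name are placeholder assumptions and do not reflect the networks developed in the publications below.

```python
# Minimal sketch: per-pixel semantic segmentation with a generic pretrained
# torchvision model (requires torchvision >= 0.13). "scene.jpg" is a placeholder.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("scene.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)["out"]          # shape: (1, num_classes, H, W)

# Each pixel is assigned its most likely category.
labels = logits.argmax(dim=1).squeeze(0)
print(labels.shape, labels.unique())
```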

MASS: Multi-Attentional Semantic Segmentation of LiDAR Data for Dense Top-View Understanding. Kunyu Peng, Juncong Fei, Kailun Yang, Alina Roitberg, Jiaming Zhang, Frank Bieder, Philipp Heidenreich, Christopher Stiller, Rainer Stiefelhagen. IEEE Transactions on Intelligent Transportation Systems, 2022.

High-performance Panoramic Annular Lens Design for Real-time Semantic Segmentation on Aerial Imagery. Jia Wang, Kailun Yang, Shaohua Gao, Lei Sun, Chengxi Zhu, Kaiwei Wang, Jian Bai. Optical Engineering, 2022.

Event-Based Fusion for Motion Deblurring with Cross-modal Attention. Lei Sun, Christos Sakaridis, Jingyun Liang, Qi Jiang, Kailun Yang, Peng Sun, Yaozu Ye, Kaiwei Wang, Luc Van Gool. In European Conference on Computer Vision (ECCV), Tel Aviv, Israel, October 2022.

Bending Reality: Distortion-aware Transformers for Adapting to Panoramic Semantic Segmentation. Jiaming Zhang, Kailun Yang, Chaoxiang Ma, Simon Reiß, Kunyu Peng, Rainer Stiefelhagen. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, United States, June 2022.

Towards Robust Semantic Segmentation of Accident Scenes via Multi-Source Mixed Sampling and Meta-Learning. Xinyu Luo, Jiaming Zhang, Kailun Yang, Alina Roitberg, Kunyu Peng, Rainer Stiefelhagen. In Workshop on Autonomous Driving (WAD) with IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, United States, June 2022.

Transfer beyond the Field of View: Dense Panoramic Semantic Segmentation via Unsupervised Domain Adaptation. Jiaming Zhang, Chaoxiang Ma, Kailun Yang, Alina Roitberg, Kunyu Peng, Rainer Stiefelhagen. IEEE Transactions on Intelligent Transportation Systems, 2021.

Exploring Event-driven Dynamic Context for Accident Scene Segmentation. Jiaming Zhang, Kailun Yang, Rainer Stiefelhagen. IEEE Transactions on Intelligent Transportation Systems, 2021.

Is Context-Aware CNN Ready for the Surroundings? Panoramic Semantic Segmentation in the Wild. Kailun Yang, Xinxin Hu, Rainer Stiefelhagen. IEEE Transactions on Image Processing, 2021.

Polarization-driven Semantic Segmentation via Efficient Attention-bridged Fusion. Kaite Xiang, Kailun Yang, Kaiwei Wang. Optics Express, 2021.

NLFNet: Non-Local Fusion Towards Generalized Multimodal Semantic Segmentation across RGB-Depth, Polarization, and Thermal Images. Ran Yan, Kailun Yang, Kaiwei Wang. In IEEE International Conference on Robotics and Biomimetics (ROBIO), Sanya, China, December 2021.

ISSAFE: Improving Semantic Segmentation in Accidents by Fusing Event-based Data. Jiaming Zhang, Kailun Yang, Rainer Stiefelhagen. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic (Virtual), September 2021.

DensePASS: Dense Panoramic Semantic Segmentation via Unsupervised Domain Adaptation with Attention-Augmented Context Exchange. Chaoxiang Ma, Jiaming Zhang, Kailun Yang, Alina Roitberg, Rainer Stiefelhagen. In IEEE International Conference on Intelligent Transportation Systems (ITSC), Indianapolis, IN, United States (Virtual), September 2021.

Aerial-PASS: Panoramic Annular Scene Segmentation in Drone Videos. Lei Sun, Jia Wang, Kailun Yang, Kaikai Wu, Xiangdong Zhou, Kaiwei Wang, Jian Bai. In European Conference on Mobile Robots (ECMR), Bonn, Germany (Virtual), August 2021.

Panoramic Panoptic Segmentation: Towards Complete Surrounding Understanding via Unsupervised Contrastive Learning. Alexander Jaus, Kailun Yang, Rainer Stiefelhagen. In IEEE Intelligent Vehicles Symposium (IV), Nagoya, Japan (Virtual), July 2021. Best Paper Award.

Capturing Omni-Range Context for Omnidirectional Segmentation. Kailun Yang, Jiaming Zhang, Simon Reiß, Xinxin Hu, Rainer Stiefelhagen. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, United States (Virtual), June 2021.

Omnisupervised Omnidirectional Semantic Segmentation. Kailun Yang, Xinxin Hu, Yicheng Fang, Kaiwei Wang, Rainer Stiefelhagen. IEEE Transactions on Intelligent Transportation Systems, 2020.

Real-time Fusion Network for RGB-D Semantic Segmentation Incorporating Unexpected Obstacle Detection for Road-driving Images. Lei Sun, Kailun Yang, Xinxin Hu, Weijian Hu, Kaiwei Wang. IEEE Robotics and Automation Letters with IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, United States (Virtual), October 2020.

In Defense of Multi-Source Omni-Supervised Efficient ConvNet for Robust Semantic Segmentation in Heterogeneous Unseen Domains. Kailun Yang, Xinxin Hu, Kaiwei Wang, Rainer Stiefelhagen. In IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, United States (Virtual), October 2020.

DS-PASS: Detail-Sensitive Panoramic Annular Semantic Segmentation through SwaftNet for Surrounding Sensing. Kailun Yang, Xinxin Hu, Hao Chen, Kaite Xiang, Kaiwei Wang, Rainer Stiefelhagen. In IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, United States (Virtual), October 2020.

Universal Semantic Segmentation for Fisheye Urban Driving Images. Yaozu Ye, Kailun Yang, Kaite Xiang, Juan Wang, Kaiwei Wang. In IEEE International Conference on Systems, Man, and Cybernetics (SMC), Toronto, Canada (Virtual), October 2020.

Localization of Detected Objects

In this research branch, we develop visual localization, odometry, and mapping methods for navigation and mobility support, which can help determine one's position in an unknown scene and link the recognized information to a map.
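
The core step of visual odometry can be illustrated with a few lines of OpenCV: match features between two consecutive camera frames and recover the relative camera motion from them. The frame file names and the intrinsic matrix K below are assumptions for illustration; the sketch is far simpler than the methods in the publications that follow.

```python
# Minimal visual-odometry step: ORB matching + essential matrix (OpenCV).
# "frame0.png", "frame1.png" and the intrinsics K are placeholders.
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])

img0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp0, des0 = orb.detectAndCompute(img0, None)
kp1, des1 = orb.detectAndCompute(img1, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des0, des1), key=lambda m: m.distance)

pts0 = np.float32([kp0[m.queryIdx].pt for m in matches])
pts1 = np.float32([kp1[m.trainIdx].pt for m in matches])

# Essential matrix with RANSAC, then rotation R and translation direction t
# (monocular translation is only known up to scale).
E, mask = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts0, pts1, K, mask=mask)
print("R =\n", R, "\nt =", t.ravel())
```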

LF-VIO: A Visual-Inertial-Odometry Framework for Large Field-of-View Cameras with Negative Plane. Ze Wang, Kailun Yang, Hao Shi, Peng Li, Fei Gao, Kaiwei Wang. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, October 2022.

Indoor Navigation Assistance for Visually Impaired People via Dynamic SLAM and Panoptic Segmentation with an RGB-D Sensor. Wenyan Ou, Jiaming Zhang, Kunyu Peng, Kailun Yang, Gerhard Jaworek, Karin Müller, Rainer Stiefelhagen. In Joint International Conference on Digital Inclusion, Assistive Technology & Accessibility (ICCHP-AAATE), Lecco, Italy, July 2022.

CSFlow: Learning Optical Flow via Cross Strip Correlation for Autonomous Driving. Hao Shi, Yifan Zhou, Kailun Yang, Xiaoting Yin, Kaiwei Wang. In IEEE Intelligent Vehicles Symposium (IV), Aachen, Germany, June 2022.

Panoramic annular SLAM with loop closure and global optimization. Hao Chen, Weijian Hu, Kailun Yang, Jian Bai, Kaiwei Wang. Applied Optics, 2021.

Semantic Visual Odometry based on Panoramic Annular Imaging. Hao Chen, Kailun Yang, Weijian Hu, Jian Bai, Kaiwei Wang. Acta Optica Sinica, 2021.

A Panoramic Localizer Based on Coarse-to-Fine Descriptors for Navigation Assistance. Yicheng Fang, Kailun Yang, Ruiqi Cheng, Lei Sun, Kaiwei Wang. Sensors, 2020.

CFVL: A Coarse-to-Fine Vehicle Localizer with Omnidirectional Perception across Severe Appearance Variations. Yicheng Fang, Kaiwei Wang, Ruiqi Cheng, Kailun Yang. In IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, United States (Virtual), October 2020.

Estimating the Distances of Objects in Space

In this research branch, we develop panoramic, efficient, and robust depth estimation algorithms with the goal of obtaining complete 3D information about real-world scenes, including the distances of objects in space.
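
As a minimal illustration of the task, the sketch below runs the publicly available MiDaS model via torch.hub on a single image and produces a relative depth map. It is not one of the networks from the publications below; the input file name is a placeholder, and absolute distances in metres would require a metric sensor or additional calibration.

```python
# Minimal sketch: relative monocular depth with MiDaS via torch.hub
# (downloads the model on first use; the timm package must be installed).
import cv2
import torch

model = torch.hub.load("intel-isl/MiDaS", "MiDaS_small").eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.cvtColor(cv2.imread("room.jpg"), cv2.COLOR_BGR2RGB)  # placeholder image
batch = transform(img)

with torch.no_grad():
    depth = model(batch)                         # inverse relative depth
    depth = torch.nn.functional.interpolate(
        depth.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False).squeeze()

# Larger values correspond to points closer to the camera.
print(depth.shape, float(depth.min()), float(depth.max()))
```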

Indoor Navigation Assistance for Visually Impaired People via Dynamic SLAM and Panoptic Segmentation with an RGB-D Sensor. Wenyan Ou, Jiaming Zhang, Kunyu Peng, Kailun Yang, Gerhard Jaworek, Karin Müller, Rainer Stiefelhagen. In Joint International Conference on Digital Inclusion, Assistive Technology & Accessibility (ICCHP-AAATE), Lecco, Italy, July 2022.

Panoramic Depth Estimation via Supervised and Unsupervised Learning in Indoor Scenes. Keyang Zhou, Kailun Yang, Kaiwei Wang. Applied Optics, 2021.

PADENet: An Efficient and Robust Panoramic Monocular Depth Estimation Network for Outdoor Scenes. Keyang Zhou, Kaiwei Wang, Kailun Yang. In IEEE Intelligent Transportation Systems Conference (ITSC), Rhodes, Greece (Virtual), September 2020.

A Robust Monocular Depth Estimation Framework Based on Light-Weight ERF-PSPNet for Day-Night Driving Scenes. Keyang Zhou, Kaiwei Wang, Kailun Yang. In International Conference on Graphics, Images and Interactive Techniques (CGIIT), Sanya, China (Virtual), February 2020.

Detecting Changes of Objects in the Environment

In this research branch, we develop change detection algorithms that help recognize updates and changes in real-world scenes.
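
The underlying task can be illustrated with a very simple baseline: a pixel-wise difference between two registered images of the same place, followed by thresholding. The sketch below uses OpenCV with placeholder file names and is, of course, far less robust than the learned method in the publication below.

```python
# Naive change detection between two registered images ("scene_t0.png",
# "scene_t1.png" are placeholders); threshold and kernel size are arbitrary.
import cv2

before = cv2.imread("scene_t0.png", cv2.IMREAD_GRAYSCALE)
after = cv2.imread("scene_t1.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(before, after)
_, change_mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

# Remove small speckles so that only larger changed regions remain.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
change_mask = cv2.morphologyEx(change_mask, cv2.MORPH_OPEN, kernel)

cv2.imwrite("change_mask.png", change_mask)
```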

DR-TANet: Dynamic Receptive Temporal Attention Network for Street Scene Change Detection. Shuo Chen, Kailun Yang, Rainer Stiefelhagen. In IEEE Intelligent Vehicles Symposium (IV), Nagoya, Japan (Virtual), July 2021.

Obstacle Detection and Object Search for Blind People

In this research branch, we develop mobile assistance systems with the goal of helping people with visual impairments move safely and independently in the real world, explore unknown spaces, avoid obstacles, and search for objects.
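
One building block of such assistance systems can be sketched in a few lines: flagging nearby obstacles in the depth image of an RGB-D camera and deriving a coarse warning direction. The file name, the 1.2 m warning distance, and the three-way split of the image are illustrative assumptions, not the behaviour of the systems described below.

```python
# Coarse obstacle warning from an RGB-D depth map (values in metres).
# "depth_frame.npy" and all thresholds are placeholders for illustration.
import numpy as np

depth_m = np.load("depth_frame.npy")          # hypothetical H x W depth map
valid = depth_m > 0                           # zero usually means "no measurement"

warning_distance = 1.2
obstacle_mask = valid & (depth_m < warning_distance)

h, w = depth_m.shape
left = obstacle_mask[:, : w // 3].mean()
center = obstacle_mask[:, w // 3 : 2 * w // 3].mean()
right = obstacle_mask[:, 2 * w // 3 :].mean()

# The region with the largest fraction of close pixels determines the warning.
direction = ["left", "ahead", "right"][int(np.argmax([left, center, right]))]
if max(left, center, right) > 0.05:
    print(f"Obstacle {direction}, closer than {warning_distance} m")
```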

Trans4Trans: Efficient Transformer for Transparent Object and Semantic Scene Segmentation in Real-World Navigation Assistance. Jiaming Zhang, Kailun Yang, Angela Constantinescu, Kunyu Peng, Karin Müller, Rainer Stiefelhagen. IEEE Transactions on Intelligent Transportation Systems, 2022.

Flying Guide Dog: Walkable Path Discovery for the Visually Impaired Utilizing Drones and Transformer-based Semantic Segmentation. Haobin Tan, Chang Chen, Xinyu Luo, Jiaming Zhang, Constantin Seibold, Kailun Yang, Rainer Stiefelhagen. In IEEE International Conference on Robotics and Biomimetics (ROBIO), Sanya, China, December 2021.

HIDA: Towards Holistic Indoor Understanding for the Visually Impaired via Semantic Instance Segmentation with a Wearable Solid-State LiDAR Sensor. Huayao Liu, Ruiping Liu, Kailun Yang, Jiaming Zhang, Kunyu Peng, Rainer Stiefelhagen. In International Workshop on Assistive Computer Vision and Robotics (ACVR) with IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, Canada (Virtual), October 2021.

Trans4Trans: Efficient Transformer for Transparent Object Segmentation to Help Visually Impaired People Navigate in the Real World. Jiaming Zhang, Kailun Yang, Angela Constantinescu, Kunyu Peng, Karin Müller, Rainer Stiefelhagen. In International Workshop on Assistive Computer Vision and Robotics (ACVR) with IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, Canada (Virtual), October 2021.

Panoptic Lintention Network: Towards Efficient Navigational Perception for the Visually Impaired. Wei Mao, Jiaming Zhang, Kailun Yang, Rainer Stiefelhagen. In IEEE International Conference on Real-time Computing and Robotics (RCAR), Xining, China, July 2021.

Perception Framework through Real-Time Semantic Segmentation and Scene Recognition on a Wearable System for the Visually Impaired. Yingzhi Zhang, Haoye Chen, Kailun Yang, Jiaming Zhang, Rainer Stiefelhagen. In IEEE International Conference on Real-time Computing and Robotics (RCAR), Xining, China, July 2021.

Helping the Blind to Get through COVID-19: Social Distancing Assistant Using Real-Time Semantic Segmentation on RGB-D Video. Manuel Martinez, Kailun Yang, Angela Constantinescu, Rainer Stiefelhagen. Sensors, 2020.

Can We Unify Perception and Localization in Assisted Navigation? An Indoor Semantic Visual Positioning System for Visually Impaired People. Haoye Chen, Yingzhi Zhang, Kailun Yang, Manuel Martinez, Karin Müller, Rainer Stiefelhagen. In International Conference on Computers Helping People with Special Needs (ICCHP), Lecco, Italy (Virtual), September 2020.

Conveying Information to Blind People

In this research branch, we investigate sonification- and interaction-oriented methods that help people with visual impairments easily understand the information captured by the video analysis systems.
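
As a minimal example of sonification, the sketch below encodes an object's horizontal position as stereo panning and its distance as pitch, and writes a short audio cue to a WAV file. The concrete mapping (300-900 Hz, 0.3 s tone) is an illustrative choice, not the scheme used in the cited papers.

```python
# Toy sonification: position -> stereo panning, distance -> pitch.
import math
import struct
import wave

def sonify(position: float, distance_m: float, path: str = "cue.wav") -> None:
    """position: -1.0 (far left) .. 1.0 (far right); distance_m in metres."""
    rate, duration = 44100, 0.3
    freq = 900.0 - 600.0 * min(distance_m, 5.0) / 5.0   # nearer -> higher pitch
    left_gain = (1.0 - position) / 2.0
    right_gain = (1.0 + position) / 2.0

    frames = bytearray()
    for i in range(int(rate * duration)):
        s = math.sin(2.0 * math.pi * freq * i / rate)
        frames += struct.pack("<hh", int(32767 * s * left_gain),
                              int(32767 * s * right_gain))

    with wave.open(path, "wb") as wav:
        wav.setnchannels(2)   # stereo
        wav.setsampwidth(2)   # 16-bit samples
        wav.setframerate(rate)
        wav.writeframes(bytes(frames))

sonify(position=-0.5, distance_m=1.5)   # object slightly to the left, 1.5 m away
```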

Seeing through Events: Real-Time Moving Object Sonification for Visually Impaired People using Event-Based Camera. Zihao Ji, Weijian Hu, Ze Wang, Kailun Yang, Kaiwei Wang. Sensors, 2021.

Affect-DML: Context-Aware One-Shot Recognition of Human Affect using Deep Metric Learning. Kunyu Peng, Alina Roitberg, David Schneider, Marios Koulakis, Kailun Yang, Rainer Stiefelhagen. In IEEE International Conference on Automatic Face and Gesture Recognition (FG), Jodhpur, India (Virtual), December 2021.

Pose2Drone: A Skeleton-Pose-based Framework for Human-Drone Interaction. Zdravko Marinov, Stanka Vasileva, Qing Wang, Constantin Seibold, Jiaming Zhang, Rainer Stiefelhagen. In European Signal Processing Conference (EUSIPCO), Dublin, Ireland (Virtual), May 2021.

A Comparative Study in Real-Time Scene Sonification for Visually Impaired People. Weijian Hu, Kaiwei Wang, Kailun Yang, Ruiqi Cheng, Yaozu Ye, Lei Sun, Zhijie Xu. Sensors, 2020.