Explainable AI for driving assistance systems
This project aims to demonstrate the potential and value of Explainable AI in the safety-critical domain of lane change prediction by making state-of-the-art machine learning models explainable.
The goal is to develop a machine learning model that can predict and justify lane changes using data from the Providentia++ Digital Twin.
The model's reasoning shall be made explicit in order to build trust and to justify its predictions in this safety-critical domain.
Finally, the model's predictions and explanations are visualized in a live web application that demonstrates the potential of Explainable AI in safety-critical domains to clients.
Layer-normalized Long Short-Term Memory (LSTM) models are identified as robust, state-of-the-art machine learning models for real-time lane change prediction. However, their inner workings are too complex to be understood by an observer; layer-normalized LSTMs are therefore so-called black boxes. For safety-critical applications such as lane change prediction, their reasoning needs to be made explicit.
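For illustration, the sketch below shows one common form of a layer-normalized LSTM cell, in which layer normalization is applied to the gate pre-activations and to the cell state. It is a minimal sketch under these assumptions; the class and parameter names are illustrative and do not reproduce the project's actual implementation.

```python
# Minimal sketch of a layer-normalized LSTM cell (illustrative, not the
# project's implementation). Layer normalization is applied to the summed
# gate pre-activations and to the cell state before the output non-linearity.
import torch
import torch.nn as nn


class LayerNormLSTMCell(nn.Module):
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.hidden_size = hidden_size
        # One linear map produces the pre-activations of all four gates.
        self.input_map = nn.Linear(input_size, 4 * hidden_size, bias=False)
        self.hidden_map = nn.Linear(hidden_size, 4 * hidden_size, bias=False)
        self.ln_gates = nn.LayerNorm(4 * hidden_size)
        self.ln_cell = nn.LayerNorm(hidden_size)

    def forward(self, x, state):
        h, c = state
        gates = self.ln_gates(self.input_map(x) + self.hidden_map(h))
        i, f, g, o = gates.chunk(4, dim=-1)
        c_new = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h_new = torch.sigmoid(o) * torch.tanh(self.ln_cell(c_new))
        return h_new, c_new
```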
Several attribution methods are compared for explaining the model's behaviour. Layer-wise Relevance Propagation (LRP) is identified as particularly suitable for real-time explanation due to its robustness and low computational cost. However, LRP had not previously been extended to the layer-normalized LSTM architecture. Extending LRP to layer-normalized LSTMs is the project's main research contribution.
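To give a sense of how LRP redistributes a prediction backwards through a network, the sketch below shows the basic epsilon rule for a single linear layer. This is only a minimal illustration under common assumptions; the project's actual propagation rules for gates and layer normalization are not reproduced here, and all names are hypothetical.

```python
# Minimal numpy sketch of the LRP epsilon rule for one linear layer
# (illustrative only; not the project's extension to layer-normalized LSTMs).
import numpy as np


def lrp_epsilon_linear(x, w, b, relevance_out, eps=1e-3):
    """Redistribute relevance from a layer's outputs back to its inputs.

    x:             input activations, shape (in_features,)
    w:             weight matrix, shape (out_features, in_features)
    b:             bias vector, shape (out_features,)
    relevance_out: relevance assigned to the outputs, shape (out_features,)
    """
    z = w @ x + b                        # forward pre-activations
    z_stab = z + eps * np.sign(z)        # epsilon stabiliser avoids division by zero
    # Each output unit distributes its relevance to the inputs in proportion
    # to their contribution w_ij * x_j to the pre-activation z_i.
    contributions = w * x[np.newaxis, :]              # shape (out, in)
    relevance_in = contributions.T @ (relevance_out / z_stab)
    return relevance_in
```

In an LSTM, a rule of this kind covers the linear maps; the multiplicative gate interactions and the layer normalization steps require additional propagation rules, which is where the project's contribution lies.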
The project was carried out in collaboration with fortiss research fellow Prof. Dr. Ute Schmid, University of Bamberg.
01.01.2021 - 01.06.2021