KI Absicherung

Safe AI for Automated Driving

Development and investigation of methods and measures for assuring AI-based functions for highly automated driving. Based on the use case of pedestrian detection, an exemplary argumentation and process chain for assuring a complex AI function is developed.

Project description

Autonomous vehicles must be able to perceive their environment and react appropriately to it. Error-free and reliable environment perception that correctly identifies and classifies all relevant road users is a basic prerequisite for implementing autonomous driving functions. This is especially true for perception in complex urban traffic situations, where methods of artificial intelligence (AI) are increasingly being used. AI function modules based on machine learning are thus becoming a key technology.

One of the greatest challenges in integrating these technologies into highly automated vehicles is ensuring the level of functional safety customary in previous systems, even though no driver is available to take over the driving task in an emergency. Existing, established safety assurance processes cannot simply be transferred to machine learning methods.

The KI Absicherung project is working on establishing a stringent and provable safety argumentation so that AI-based function modules (AI modules) for highly automated driving can be assured and validated.

Research contribution

The fortiss teams are contributing to a number of research tracks, including:

  • Developing a coverage-guided fuzz-testing framework for testing the robustness of deep neural network (DNN) components, together with a set of metrics to measure the completeness of test datasets for DNNs (a simplified coverage-guided fuzzing sketch follows this list)
  • Defining a safety argumentation approach for a DNN-based perception component in order to specify safety requirements for the DNN
  • Constructing safety case evidence with DNN black-box metrics and providing a strategy for a sufficient specification from a safety perspective
  • Developing and implementing a mechanism to measure how neural networks react to perturbations of the input and to compute the maximum perturbation that is tolerable while still maintaining correct detections (see the perturbation sketch below)
  • Constructing a Bayesian neural network to evaluate output uncertainty and designing a metric to indicate a network's generalization ability (see the uncertainty sketch below)
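
The coverage-guided fuzzing track can be illustrated with a toy sketch. The code below is a hypothetical, simplified example, not the project's actual framework: the model, the neuron-coverage criterion with its activation threshold, and the random mutation strategy are all illustrative assumptions. It measures which ReLU units a test corpus activates and keeps a mutated input only if it increases that coverage.

```python
# Minimal sketch of coverage-guided fuzzing for a DNN (illustrative assumptions,
# not the project's framework): keep mutated inputs that raise neuron coverage.
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """Stand-in classifier; a real perception DNN would be far larger."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(32, 64), nn.ReLU(),
            nn.Linear(64, 10),
        )

    def forward(self, x):
        return self.layers(x)

def neuron_coverage(model, inputs, threshold=0.25):
    """Fraction of ReLU units whose activation exceeds `threshold`
    for at least one input in the batch (a simple coverage criterion)."""
    activated = []

    def hook(_module, _inp, out):
        activated.append((out > threshold).any(dim=0))

    handles = [m.register_forward_hook(hook)
               for m in model.modules() if isinstance(m, nn.ReLU)]
    with torch.no_grad():
        model(inputs)
    for h in handles:
        h.remove()
    covered = torch.cat([a.flatten() for a in activated])
    return covered.float().mean().item()

def fuzz(model, seeds, rounds=100, noise=0.05):
    """Grow a test corpus by keeping mutations that increase coverage."""
    corpus = seeds.clone()
    best = neuron_coverage(model, corpus)
    for _ in range(rounds):
        idx = torch.randint(len(corpus), (1,))
        candidate = corpus[idx] + noise * torch.randn_like(corpus[idx])
        cov = neuron_coverage(model, torch.cat([corpus, candidate]))
        if cov > best:
            corpus = torch.cat([corpus, candidate])
            best = cov
    return corpus, best

model = SmallNet().eval()
corpus, cov = fuzz(model, torch.randn(8, 32))
print(f"final corpus size: {len(corpus)}, neuron coverage: {cov:.2f}")
```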
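
The perturbation-tolerance mechanism can be sketched in a similar spirit. The snippet below is an assumption-laden illustration rather than the project's implementation: it bisects the magnitude of a fixed perturbation direction to estimate the largest perturbation for which the prediction stays unchanged, under the simplifying assumption that the prediction flips only once as the magnitude grows.

```python
# Minimal sketch of estimating the largest tolerable input perturbation
# (hypothetical; the project's actual mechanism and metrics may differ).
import torch
import torch.nn as nn

def max_tolerable_perturbation(model, x, direction, lo=0.0, hi=1.0, steps=20):
    """Largest epsilon (found by bisection) such that
    model(x + epsilon * direction) still predicts the same class as model(x)."""
    model.eval()
    with torch.no_grad():
        label = model(x).argmax(dim=1)
        for _ in range(steps):
            mid = (lo + hi) / 2
            pred = model(x + mid * direction).argmax(dim=1)
            if torch.equal(pred, label):
                lo = mid   # still correct: try a larger perturbation
            else:
                hi = mid   # prediction flipped: shrink the interval
    return lo

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
x = torch.randn(1, 32)
direction = torch.randn(1, 32)
direction = direction / direction.norm()   # unit-norm perturbation direction
eps = max_tolerable_perturbation(model, x, direction)
print(f"estimated tolerable perturbation magnitude: {eps:.4f}")
```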
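
For the uncertainty track, the following sketch uses Monte Carlo dropout as a simple stand-in for a Bayesian neural network (the publication listed below instead uses a Laplace approximation with a diagonalized Hessian). It averages the softmax outputs over several stochastic forward passes and reports the entropy of the mean as an uncertainty score; all names and parameters are illustrative.

```python
# Minimal sketch of predictive uncertainty via Monte Carlo dropout
# (a stand-in for the project's Bayesian neural network approach).
import torch
import torch.nn as nn

class DropoutNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(32, 64), nn.ReLU(), nn.Dropout(p=0.2),
            nn.Linear(64, 10),
        )

    def forward(self, x):
        return self.layers(x)

def predictive_entropy(model, x, samples=50):
    """Mean softmax over `samples` stochastic forward passes (dropout kept
    active) and the entropy of that mean as a per-input uncertainty score."""
    model.train()  # keep dropout active at prediction time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=1) for _ in range(samples)])
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=1)
    return mean_probs, entropy

model = DropoutNet()
mean_probs, entropy = predictive_entropy(model, torch.randn(4, 32))
print("per-input predictive entropy:", entropy)
```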

Project duration

01.07.2019 - 30.06.2022

Your contact

Dr. Holger Pfeifer

+49 89 3603522 29
pfeifer@fortiss.org

Publications

  • Ming Gui, Ziqing Zhao, Tianming Qiu and Hao Shen. Laplace Approximation with Diagonalized Hessian for Over-parameterized Neural Networks. NeurIPS 2021 Bayesian Deep Learning Workshop, 2021.