To date, the use of complex AI applications in aviation has been held back by the lack of appropriate certification procedures. Before this technology can be deployed, procedures must be developed to certify AI systems and thereby demonstrate their safety. The effort required is considerable, as aircraft components are highly safety-critical and are therefore subject to very strict quality and safety standards.
The KIEZ 4-0 project (Artificial Intelligence European Certification under Industry 4.0) has made a significant contribution to the question of how AI systems can be certified in aviation. Within the project, demonstrators and use cases were employed to present a method for certifying the reliability of avionics applications. In addition, the project analyzed to what extent AI is suitable for certification in aviation and what adjustments are necessary.
The joint project was funded by the Federal Ministry for Economic Affairs and Climate Action (BMWK) as part of the Aviation Research Program (LuFo VI-1) and led by Airbus Defence and Space GmbH. In addition to Airbus, the German Aerospace Center (DLR), several Fraunhofer Institutes, German Air Traffic Control (DFS) and other partners were involved in the three-year project. The consortium also worked with the European Union Aviation Safety Agency (EASA) to advance certification at an international level.
Incorporating certification requirements into early development phases
Dr. Yuanting Liu, head of the fortiss competence field Human-centered Engineering, led the fortiss contribution to the project and coordinated the collaboration between researchers from the competence fields Software Dependability and Human-centered Engineering. They first developed formal verification-based solutions to prove the safety and reliability of AI-based systems, specifically using model checking and theorem proving techniques. These techniques serve to discharge the proof obligations that arise when the safety and correctness of AI systems in avionics must be demonstrated. In this way, the safety of AI-based systems can be verified while maintaining, or even raising, the industry's high safety standards.
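To illustrate what such a formal proof can look like in practice, the minimal sketch below uses interval bound propagation, a standard technique for verifying output bounds of small neural networks. It is a hypothetical illustration, not the method developed in KIEZ 4-0; the toy network, the input region and the safety threshold safe_max are all assumptions chosen for this example.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an input box [lo, hi] through the affine layer W @ x + b.

    Splitting W into its positive and negative parts yields sound
    (over-approximating) output bounds for every x in the box.
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def verify_output_bound(layers, lo, hi, safe_max):
    """Check that a ReLU network stays below safe_max for ALL inputs
    in the box [lo, hi]. Returns True only if the property provably holds."""
    for W, b in layers[:-1]:
        lo, hi = interval_affine(lo, hi, W, b)
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    W, b = layers[-1]                                      # linear output layer
    lo, hi = interval_affine(lo, hi, W, b)
    return bool(np.all(hi <= safe_max))

# Hypothetical toy network: 2 inputs -> 3 hidden units (ReLU) -> 1 output.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(3, 2)), np.zeros(3)),
          (rng.normal(size=(1, 3)), np.zeros(1))]

# Verify: does the output stay below 5.0 for all inputs in [-1, 1]^2?
print(verify_output_bound(layers, np.array([-1.0, -1.0]),
                          np.array([1.0, 1.0]), safe_max=5.0))
```

Because the computed intervals over-approximate the network's true behavior, a result of True is a sound proof that the property holds for every input in the region, whereas False is inconclusive rather than a counterexample.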
The fortiss competence field Human-centered Engineering worked on concrete guidelines for the certification of such systems. Current human factors certification focuses primarily on avoiding human error. The application of AI in more complex situations, however, introduces a new potential source of error, so the possibility of AI errors must be considered during the system development process. Rather than focusing on how AI can optimally fulfill its task, developers should first clarify what task the AI should fulfill in the first place. This question is crucial and should be addressed early on by involving operators in a human-centered design approach.
From this work, recommendations were derived for the certification of components based on symbolic AI and, in particular, for human factors, and these were shared with EASA.
AI visions for avionics
Realizing future aviation concepts such as single-pilot aircraft and air taxis poses a particular challenge, especially with regard to the integration of AI systems. The interaction between the AI and the human pilot or passenger plays a decisive role here. This undertaking requires not only further technological development, but also a review and, where necessary, adaptation of certification methods and processes. The specific requirements concern not only aircraft technology, but also the interaction with airports and air traffic control, in order to ensure the smooth functioning of the complex overall system.