In the future, AI systems should continuously learn and develop new skills in order to adapt to new situations. However, this development will lead to new and unpredictable behaviour in such self-learning AI systems. fortiss has put together a team of experts to take on this challenge. The team will develop radically new approaches for the design, development, safeguarding and certification of AI-based systems.
Today's self-learning systems are unreliable and unsafe, and are therefore not yet used in operational, business-critical systems. They lack a fall-back option to a responsible (human) operator who can intervene in an emergency. Under current practices and standards, such AI systems cannot be authorised and are therefore not marketable.
The question for scientists and engineers is therefore: how can continuously self-learning, AI-based software systems be developed and operated reliably and securely?
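One building block often discussed in this context is a runtime safety monitor that keeps a learned component inside a predefined safe envelope and hands control to a conservative fall-back policy, with a notification to a responsible human operator, whenever it leaves that envelope. The following is a minimal sketch of that pattern under stated assumptions; all names, thresholds and units (SafetyEnvelope, learned_policy, fallback_policy, notify_operator) are illustrative and do not refer to any fortiss tooling.

```python
# Minimal sketch: runtime safety monitor with human/fall-back escalation.
# All identifiers and limits below are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class SafetyEnvelope:
    """Static bounds the monitored output must respect (assumed units)."""
    max_speed: float = 10.0      # m/s
    max_steering: float = 0.4    # rad


def within_envelope(action: dict, envelope: SafetyEnvelope) -> bool:
    """Return True if the learned controller's action stays inside the envelope."""
    return (abs(action.get("speed", 0.0)) <= envelope.max_speed
            and abs(action.get("steering", 0.0)) <= envelope.max_steering)


def step(observation: dict,
         learned_policy: Callable[[dict], dict],
         fallback_policy: Callable[[dict], dict],
         envelope: SafetyEnvelope,
         notify_operator: Optional[Callable[[str], None]] = None) -> dict:
    """Run the self-learning component, overriding it when it leaves the safe envelope."""
    action = learned_policy(observation)
    if within_envelope(action, envelope):
        return action
    # Escalate: inform the responsible operator and apply the conservative fall-back.
    if notify_operator is not None:
        notify_operator(f"Envelope violation by learned policy: {action}")
    return fallback_policy(observation)
```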
The need for research
There are a number of open fields of research. These concern all phases of traditional systems engineering, including not only specification but also architecture design, implementation, testing and verification. The need for research extends to the data-driven adaptation and optimisation of AI systems and the dynamic certification of business-critical self-learning systems during operation.
The work carried out by the fortiss team of experts will contribute to the following aspects:
- a validated procedure model for the controllable development and operation of trustworthy, cognitive cyber-physical systems
- a consistently reliable and secure AI architecture for mission-critical autonomous systems
- new approaches to design, verification and certification for continuously self-learning and robust AI-based software systems
- management of transparent human-AI interaction through intelligent user interfaces