Rigorous validation and verification for dependable and safe software systems
The central challenge lies in the development and operation of safe and secure learning-enabled cyber-physical systems (CPS). These systems are capable of autonomous response and decision-making, coupled with the ability to learn from data and experience using artificial intelligence (AI) methods. As a result, these cognitive systems can adapt their behavior to changing environments. However, the increasing complexity and connectivity of modern digital systems add further challenges to ensuring safety and security. The underlying prerequisite for the successful deployment of autonomous CPS-based products is therefore the availability of effective and affordable validation methods.
In its research activities, fortiss relies primarily on formal methods, such as model checking, static analysis, and constraint solving, and on their integration with complementary methods such as testing to enable efficient analysis of complex software. The competence field strives not only to develop novel methods but also to implement them as research prototypes and evaluate their performance in real-world use cases. While these use cases often come from the automotive domain, we also address the aerospace domain. A special focus is placed on scenario-based testing and its combination with formal verification methods, which allows us to argue for higher test coverage and for the completeness of the resulting dependability assurance arguments.
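To give a flavor of how constraint solving supports such analyses, the following minimal sketch encodes a bounded model checking query with the Z3 solver. The modeled system (a saturating speed controller), the unrolling depth, and the safety threshold are illustrative assumptions chosen for this example; they are not a fortiss artifact or a specific use case from the text.

```python
# Minimal sketch: bounded model checking via constraint solving with Z3.
# The system model (a saturating speed controller), the bound K, and the
# safety limit are illustrative assumptions.
from z3 import Int, Solver, And, Or, If, sat

K = 10             # unrolling depth (assumed bound)
SPEED_LIMIT = 50   # safety threshold (assumed)

s = Solver()
speed = [Int(f"speed_{i}") for i in range(K + 1)]

# Initial condition: the vehicle starts at rest.
s.add(speed[0] == 0)

# Transition relation: each step the speed may increase by at most 5,
# but the controller clamps it to the limit.
for i in range(K):
    delta = Int(f"delta_{i}")
    s.add(And(delta >= 0, delta <= 5))
    s.add(speed[i + 1] == If(speed[i] + delta > SPEED_LIMIT,
                             SPEED_LIMIT,
                             speed[i] + delta))

# Negated safety property: is any state within the bound above the limit?
s.add(Or([speed[i] > SPEED_LIMIT for i in range(K + 1)]))

if s.check() == sat:
    print("Counterexample found:", s.model())
else:
    print(f"Safety property holds up to bound K={K}")
```

In practice, such solver encodings are generated automatically from system or scenario models rather than written by hand, which is what makes the combination with scenario-based testing and automated test generation feasible at scale.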
The research activities of the competence field SD contribute to the development of novel methods for validating and certifying autonomous cyber-physical systems, thus enabling the deployment of AI technologies in safety-critical autonomous software systems and services.