Interview

Working together to make AI safer, more robust and more comprehensible

Dr. Fathiyeh Faghih and Dr. Tewodros Beyene have recently taken over the co-leadership of the Software Dependability field of competence at fortiss. In this interview, they talk about exciting developments in the field of software dependability and offer insights into the challenges of developing secure, reliable software for cyber-physical systems (CPS).

Can you provide an overview of the Software Dependability field of competence and its key objectives within fortiss?

Dr. Tewodros Beyene: Within fortiss, the Software Dependability field of competence is dedicated to pioneering research in the novel methods, algorithms, and tools essential for the manageable and reliable development of secure software in CPS. To this end, we focus in particular on mathematical-logical approaches and formal methods: verification techniques that ensure the correctness of software, metrics and testing approaches that measure the quality of components employing machine learning algorithms and provide guarantees for their robustness, and techniques for systematically constructing algorithms that are correct with respect to their specification. Furthermore, we investigate how current development practices need to be augmented to enable the practical application of such analysis methods in integrated, evidence-based safety assurance processes, and we develop the necessary tool infrastructure.

What are the current challenges in this area and what unique contribution do you want to make in the area of software dependability?

Dr. Fathiyeh Faghih: In the realm of software dependability, the foremost challenge lies in the development and operation of cognitive CPS capable of autonomous decision-making and learning from data using artificial intelligence (AI) methods. These systems must adapt their behaviour to dynamic environments, presenting a complex set of challenges. Key industries, such as automotive, aerospace, and industrial automation, are actively exploring the integration of autonomous CPS into their products and processes, for example highly automated driving in automobiles, autonomous flight in aerospace, and adaptable factory automation.

Dr. Tewodros Beyene: However, ensuring the reliability and safety of these autonomously acting CPS presents a significant hurdle. Existing assurance methods and industrial safety standards, designed for traditional CPS, struggle to accommodate the unique characteristics of cognitive systems, particularly those employing learning-based AI components. Certifying the conformity of such systems is paramount for their successful deployment. Our research endeavours are dedicated to addressing this critical gap by developing novel methods for assuring and certifying autonomously acting CPS. By pioneering innovative approaches tailored to the complexities of cognitive systems, we aim to enable the integration of AI technologies into mission-critical autonomous software systems and services. Our goal is to facilitate the widespread adoption of autonomous CPS across various industries while upholding the highest standards of reliability and safety.

What background and experience do you bring with you as co-head of the department to further strengthen this competence at fortiss?

Dr. Fathiyeh Faghih: With a Ph.D. in computer science, specializing in formal methods from the University of Waterloo, and subsequent postdoctoral research at McMaster University in Canada focusing on automated synthesis techniques for fault-tolerant algorithms, I bring a robust academic foundation to fortiss. My six-year tenure as an assistant professor at the University of Tehran further enriched my expertise in software and AI dependability. Working closely with B.Sc., M.Sc., and Ph.D. students, I've supervised research encompassing explainability, formal verification, testing, and static analysis. My commitment to advancing software dependability through interdisciplinary research and mentorship aligns seamlessly with fortiss's objectives.

Dr. Tewodros Beyene: I did my PhD at the Chair for Theoretical Computer Science at TUM, which gave me a strong background in developing novel and efficient methods for solving computationally hard formal verification challenges. However, I was also interested in implementing my methods in prototypes and applying them to real-world problems. As a doctoral research intern at Microsoft Research in Cambridge (UK), I was able to apply my prototypes to real verification problems and further develop my method. Since joining fortiss in 2015, my sole focus has been on developing research prototypes that can help our industrial partners solve their verification and assurance problems. As the challenges of software dependability are split between devising novel methods and implementing them in usable tools, I believe my strong background in theory and my extensive experience in industrial applications provide a sound foundation for pushing our research lines forward.

Could you elaborate on the specific challenges associated with ensuring the software dependability of CPS in today's increasingly interconnected digital landscape?

Dr. Tewodros Beyene: The emergence of highly interconnected digital infrastructure presents both unprecedented opportunities and significant challenges in ensuring the dependability of software within CPS. One major challenge lies in the theoretical domain, where specifying such distributed and dynamic systems demands sophisticated mathematical formalisms and logics. However, the complexity inherent in such logics necessitates application-specific heuristics and insights to reduce it to a manageable level. Another pressing challenge revolves around scalability, as the number of interconnected smart CPS continues to grow rapidly. Developing scalable methods that can effectively ensure the safety and security of this interconnected infrastructure is crucial. Additionally, efficiency is paramount, particularly considering that interconnected smart CPS may operate with limited computing power compared to traditional computing devices.

Dr. Fathiyeh Faghih: Furthermore, a significant contemporary challenge is ensuring the dependability of AI systems. While AI holds immense potential, the critical question remains: how can we trust AI-driven systems? Various techniques are being explored in the literature to address this question, with our research group focusing on leveraging software engineering techniques such as explainability, testing, and formal verification to tackle the challenge of AI dependability.

How does the research group plan to address the complexities of developing and operating safe and secure learning-enabled CPS, particularly considering their autonomous decision-making capabilities?

Dr. Tewodros Beyene: To tackle the intricate challenges inherent in developing and operating safe and secure learning-enabled CPS, our research group employs a multifaceted approach. One of the primary techniques we utilize is formal methods, a cornerstone of computer science. Formal methods encompass mathematical techniques and approaches used to specify, develop, and verify software and hardware systems. By leveraging formal methods, we establish a rigorous framework for reasoning about the behaviour of CPS, enabling engineers and researchers to analyse and ensure the correctness, reliability, and safety of software and hardware designs. In addition to formal methods, our research group also relies on testing as another essential technique. While testing has long been utilized in the software engineering community, our research endeavours focus on extending testing methodologies to the domain of AI-enabled systems. Drawing upon the extensive testing experience of the software community, we aim to adapt and refine testing practices for the safe development of AI-enabled software within the context of CPS.
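To make the formal-methods idea concrete, here is a minimal, hypothetical sketch (not the group's actual tooling) that uses the Python API of the Z3 SMT solver to prove a toy safety property: the saturated output of a simple controller can never exceed its bound. The controller gain and bound are invented for illustration; the solver searches for an input that violates the property, and an "unsat" answer means no such input exists.

```python
# Illustrative only: a toy safety proof with the Z3 SMT solver.
from z3 import Real, If, Solver, And, Not, unsat

U_MAX = 2.0                    # hypothetical actuation limit
GAIN = 1.5                     # hypothetical proportional gain
error = Real('error')          # unconstrained input to the controller

raw = GAIN * error
# Saturated controller output: clipped to the interval [-U_MAX, U_MAX].
u = If(raw > U_MAX, U_MAX, If(raw < -U_MAX, -U_MAX, raw))

s = Solver()
# Ask the solver for a counterexample: an input for which the bound fails.
s.add(Not(And(u <= U_MAX, u >= -U_MAX)))

if s.check() == unsat:
    print("Safety property holds: |u| <= U_MAX for every possible input.")
else:
    print("Counterexample found:", s.model())
```

The same "unsat means proved" pattern underlies many formal verification workflows, scaled up from toy properties like this one to model- and code-level specifications.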

Can you discuss any ongoing or planned research projects within the field of competence, especially those that demonstrate the practical application of its methodologies in real-world scenarios?

Dr. Tewodros Beyene: Within our research group, we are actively engaged in several research projects aimed at addressing real-world challenges and advancing the practical application of our methodologies. One prominent project is the Evidential Tool Bus (ETB), which aimed to develop a tool integration framework that provides specification languages and integration engines for the creation and continual maintenance of assurance cases. The resulting ETB framework is semantically neutral in the sense that it can be used for a wide range of assurance and compliance activities. ETB is being further developed and applied in several projects spanning diverse topics and domains: integrated continuous code analysis (e.g. with the avionics industry), the creation of assurance cases from integrated analysis tools (in the avionics domain, with SRI International, formerly the Stanford Research Institute), and incremental safety case creation (in the automotive and healthcare domains, within the FOCETA project). We have also recently submitted a project proposal for the EU Horizon programme in which ETB is further positioned to be applied to the safe use of generative AI and Large Language Models (LLMs).

Dr. Fathiyeh Faghih: In today's digital landscape, regulatory requirements play a crucial role in ensuring secure data exchange. However, translating legislation into technical controls poses significant challenges, often resulting in errors and complexities. One of our projects seeks to bridge this gap by automating the creation of data sharing policies, thereby facilitating compliance with regulations and easing the burden on companies. Additionally, we are developing automated methods for analysing and monitoring these policies to ensure compliance from an end-user perspective. Through collaboration with small and medium-sized enterprises (SMEs) based in Bavaria, we aim to validate and implement our project results in real-world industry scenarios.

Furthermore, we are dedicated to enhancing test effectiveness for neural networks, a vital area in the realm of AI. While various testing techniques have been proposed in the literature to evaluate neural networks, determining the adequacy of test data remains a primary concern, particularly in the context of machine learning. Our research aims to bridge this gap by formulating a methodology for evaluating coverage criteria specific to neural networks. By assessing and comparing the effectiveness of prominent coverage criteria in the current literature, we strive to ensure robust testing practices that align with the unique challenges posed by neural network systems.
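As an illustration of what such a coverage criterion can look like, the sketch below computes neuron coverage, one criterion proposed in the literature: the fraction of neurons driven above an activation threshold by at least one test input. The toy two-layer network, the threshold, and the random test set are placeholders for this example, not the group's benchmarks or evaluation methodology.

```python
# Illustrative sketch of neuron coverage as a test-adequacy metric for
# neural networks; the network and data here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer ReLU network with random weights.
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)
W2, b2 = rng.normal(size=(4, 16)), rng.normal(size=4)

def activations(x):
    """Return the per-layer activations for a single input vector."""
    h1 = np.maximum(0.0, W1 @ x + b1)
    h2 = np.maximum(0.0, W2 @ h1 + b2)
    return [h1, h2]

def neuron_coverage(test_inputs, threshold=0.1):
    """Fraction of neurons activated above `threshold` by any test input."""
    covered = [np.zeros_like(b1, dtype=bool), np.zeros_like(b2, dtype=bool)]
    for x in test_inputs:
        for layer, act in enumerate(activations(x)):
            covered[layer] |= act > threshold
    total = sum(c.size for c in covered)
    return sum(c.sum() for c in covered) / total

tests = rng.normal(size=(50, 8))   # placeholder test set
print(f"neuron coverage: {neuron_coverage(tests):.2%}")
```

Comparing criteria of this kind asks, for example, whether higher coverage actually correlates with a test set's ability to expose faulty behaviour.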

In what ways do you anticipate the findings and innovations from the Software Dependability field of competence to impact industries beyond automotive and aerospace?

Dr. Fathiyeh Faghih: While industries such as automotive and aerospace have their distinct assurance standards, methodologies, and cultures, the core of their assurance processes typically involves verifying compliance between software products and relevant certification standards. At the Software Dependability field of competence, our focus on automation in the assurance process equips us with tools and frameworks that transcend these specific industries. As a result, our innovations hold promise for application across a wide range of domains beyond automotive and aerospace. By offering automated solutions that streamline the assurance process, we anticipate that our findings and innovations will positively impact industries such as healthcare, finance, telecommunications, and beyond. These sectors can leverage our tools and frameworks to enhance the reliability, security, and safety of their software systems, ultimately fostering greater trust and efficiency in their operations.

Lastly, could you please share your vision for the future impact of Software Dependability and how you envision it shaping the landscape of software dependability, safety and security in both research and industry?

Dr. Tewodros Beyene: The vision of the Software Dependability research group is to cultivate an ecosystem of techniques, methods, and tools for the provably safe and secure development of software. This encompasses two key aspects: firstly, the identification and specification of critical safety and security considerations derived from various certification standards; and secondly, the development of a continuous assurance process to ensure compliance with these standards. The industry currently has an urgent need for such specification and assurance methods to navigate the increasingly complex landscape of software reliability, safety and security.

Moreover, as technology evolves, new challenges emerge, necessitating novel research approaches. For instance, in the realm of autonomous driving, there is a growing demand for safety verification and validation methods that offer provably acceptable coverage. Our vision extends to addressing these emerging challenges through pioneering research initiatives that push the boundaries of software dependability. By bridging the gap between theoretical advancements and practical applications, we envision Software Dependability shaping the landscape of software safety and security in both research and industry, fostering a future where software systems are not only reliable and secure but also adaptable to evolving technological landscapes.

Your contact

Marketing & press

presse@fortiss.org