Mr Pfeifer, the Center for AI was created two years ago with the aim of developing reliable and safe AI technologies for industry and society. What results have been achieved so far?
We established a series of research lines in which we started multiple projects in areas such as human-centered machine learning, robotics, government services and distributed learning, as well as anomaly detection in building management.
In the human-centered machine learning (HCML) area, we examine how applications that utilize machine learning can better meet the requirements of a human user. On the one hand, that means making AI-based decisions easier for humans to comprehend; on the other, it allows individual differences between users to flow directly into the development process of the learning algorithms. These issues are being examined within a scenario that involves how firefighters manage stress in the field. Because people react differently depending on the stress situation, learning algorithms have to be personalized – in other words, they have to be able to recognize the stress level of each firefighter on an individual basis. Conversely, the AI signals have to be presented to the platoon leaders in such a way that they can easily recognize which members of their platoon are currently in a highly stressful situation. In this area we are developing new stress recognition models on the basis of various biosignals and so-called self-supervised learning.
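To make the self-supervised idea concrete, here is a minimal sketch in the spirit of transformation-recognition pretraining on biosignal windows: an encoder first learns to tell which transformation was applied to an unlabeled signal, and is later fine-tuned per user for stress recognition. Everything in it (the toy encoder, the three transformations, the random stand-in data) is a hypothetical illustration, not the Center's actual models.

```python
# Hypothetical sketch of transformation-recognition self-supervised
# pretraining for 1-D biosignal windows (not the Center's actual models).
import torch
import torch.nn as nn

def augment(batch):
    """Apply a random transformation to each signal window and return
    the transformed batch plus the transformation label (the pretext task)."""
    labels = torch.randint(0, 3, (batch.size(0),))
    out = batch.clone()
    out[labels == 1] = out[labels == 1].flip(-1)   # label 1: time reversal
    out[labels == 2] = -out[labels == 2]           # label 2: sign flip
    return out, labels                             # label 0: identity

encoder = nn.Sequential(                  # shared feature extractor
    nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(8), nn.Flatten())
pretext_head = nn.Linear(16 * 8, 3)       # predicts which transform was applied
stress_head = nn.Linear(16 * 8, 2)        # later fine-tuned per user: stressed / not

opt = torch.optim.Adam(list(encoder.parameters()) + list(pretext_head.parameters()))
loss_fn = nn.CrossEntropyLoss()

for _ in range(10):                       # pretraining on unlabeled windows
    windows = torch.randn(32, 1, 256)     # stand-in for real biosignal windows
    x, y = augment(windows)
    loss = loss_fn(pretext_head(encoder(x)), y)
    opt.zero_grad(); loss.backward(); opt.step()
# Afterwards, `encoder` is fine-tuned with a small amount of per-user
# labeled data to train `stress_head` (the personalization step).
```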
In the field of manufacturing robots, we are developing easy-to-manage and economically viable configuration and analysis solutions that are also suitable for small-to-medium enterprises. To do that we developed semantic models that describe which work steps the robots are supposed to carry out. This information is coupled with data from the robot activities. By using machine learning, the system is then in a position to learn continuously, recognize deviations and, when required, generate a warning, ideally before a malfunction occurs.
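A minimal sketch of what coupling semantic work steps with robot data can look like: a separate "normal" profile is learned per work step, and deviations trigger a warning. The class, step names, feature and threshold below are hypothetical simplifications of the approach described above.

```python
# Hypothetical sketch: per-work-step anomaly detection on robot telemetry.
# The semantic model tells us which work step produced each measurement,
# so we learn a separate normal profile per step and warn on deviations.
import numpy as np

class StepAnomalyDetector:
    def __init__(self, threshold=3.0):
        self.profiles = {}          # step id -> (mean, std) of a feature, e.g. motor current
        self.threshold = threshold  # flag values more than `threshold` std devs from the mean

    def fit(self, step_id, samples):
        samples = np.asarray(samples)
        self.profiles[step_id] = (samples.mean(), samples.std())

    def check(self, step_id, value):
        mean, std = self.profiles[step_id]
        z = abs(value - mean) / (std + 1e-9)
        if z > self.threshold:
            print(f"Warning: step {step_id!r} deviates (z-score {z:.1f})")
        return z

detector = StepAnomalyDetector()
detector.fit("tighten_screw", np.random.normal(1.2, 0.05, 500))  # normal motor current [A]
detector.check("tighten_screw", 1.21)   # typical value, no warning
detector.check("tighten_screw", 1.60)   # unusual load, triggers a warning
```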
In terms of government services, we examined how these services can be designed so that citizens can take advantage of them online without the hassle of filling out forms, and how they can be made available proactively, for instance by eliminating the need to specifically apply for services tied to certain life events. We demonstrated how this can work with two application scenarios: applying for child allowances and permits for opening restaurants.
And finally, we recently began work on an application for recognizing water damage in buildings. We utilize sensors installed in the IBM Watson Center building in Munich to dynamically collect moisture data and link it to static building information and environmental data. We use special AI algorithms to detect events in the dynamic sensor data that could point to water intrusion; with the help of the ontological data, the water can not only be located more precisely, but the cause can also be better qualified. That means you could determine whether the event was caused by an open window or a broken pipe, for example.
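The following sketch illustrates the principle: a jump in the moisture readings is detected as an event, and a deliberately tiny, hypothetical stand-in for the building ontology is used to localize it and list candidate causes. The real application draws on far richer ontological and environmental data.

```python
# Hypothetical sketch: linking a moisture-sensor event to static building
# knowledge to localize water intrusion and qualify its likely cause.
import numpy as np

BUILDING = {  # minimal stand-in for the static building model (the "ontology")
    "sensor_17": {"room": "3.014", "near": ["window_A", "pipe_B"]},
}
WINDOW_OPEN = {"window_A": True}  # environmental data, e.g. from contact sensors

def detect_event(readings, jump=5.0):
    """Flag a sudden rise in relative humidity as a possible water event."""
    return np.diff(readings).max() > jump

def qualify(sensor_id):
    """Use the static model to locate the event and list plausible causes."""
    ctx = BUILDING[sensor_id]
    causes = [e for e in ctx["near"]
              if not (e.startswith("window") and not WINDOW_OPEN.get(e))]
    return ctx["room"], causes

readings = np.array([45, 46, 45, 47, 58, 61], dtype=float)  # % relative humidity
if detect_event(readings):
    room, causes = qualify("sensor_17")
    print(f"Possible water intrusion in room {room}; candidate causes: {causes}")
```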
You have been collaborating with IBM for several years. What makes IBM a good partner for these issues and what is the objective of the effort?
I believe it has to do with the fact that IBM is a global leader in the field of AI and boasts a large worldwide network of renowned research centers and business partners that understand what is required in practice. Apart from IBM’s leading industry expertise, the ethics aspect is also very important. IBM leaves it up to the partners and customers to decide which data the AI should use (transparency) and how they can use their data to build their own cognitive solutions.
What challenges do you see with the current state of AI technologies?
AI systems are learning-based, which means their functions and behaviors result from the data used to train them. These methods can lead to unforeseeable behavior during operation, which presents a major challenge for the manageability of such systems. Furthermore, AI solutions are only as good as the data they are trained with. This raises the question, first, of the availability of suitable data and, second, of its quality; the keywords here are representativeness and bias. AI systems can develop further through continuous learning and adapt or optimize themselves to new conditions based on experience. This makes development, and especially testing and validation, extremely difficult. Established software and systems engineering methods work under the premise that precise specifications of the system behavior and operational environment are available, and that the systems undergo extensive verification prior to going online, which is not feasible for AI systems. That's why such methods are not directly transferable. Furthermore, AI systems often operate in unknown or unreliable environments and must be robust enough to handle situations that have not been modeled in advance, or only partially, despite imprecise or unreliable input data.
For many people, AI is currently a black box with many unknowns. How do you intend to gain people's trust given this issue?
A key aspect of trustworthiness is the extent to which AI decisions can be comprehended. As a user, can I understand how or why an AI system makes a particular decision? The focus here is on developing AI methods so that they can also provide the user with an explanation of the results. At the IBM fortiss Center for AI, we recently launched a project to develop an intelligent driving assistant that recommends which driving lane to use and when to switch lanes. The system should also be able to explain why a particular recommendation is being made. The data underlying a lane-change recommendation often stems from sections of the road that the driver cannot yet see, such as an accident or traffic congestion around a curve, so to gain drivers' trust it's important that they can comprehend the recommendation.
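As a simple illustration of a recommendation that carries its own explanation, here is a hypothetical rule-based sketch. The actual assistant is a learning-based research system, so the function, event format and wording below are invented purely for illustration.

```python
# Hypothetical sketch: a lane recommendation that explains itself.
def recommend_lane(current_lane, road_events):
    """road_events: list of (lane, distance_m, kind) tuples ahead of the driver,
    e.g. reported by infrastructure the driver cannot yet see."""
    for lane, distance, kind in sorted(road_events, key=lambda e: e[1]):
        if lane == current_lane:
            other = "left" if current_lane == "right" else "right"
            return other, (f"Switch to the {other} lane: {kind} reported "
                           f"{distance} m ahead in your lane, beyond your line of sight.")
    return current_lane, "Stay in your lane: no obstruction reported ahead."

lane, why = recommend_lane("right", [("right", 800, "accident")])
print(lane, "-", why)   # left - Switch to the left lane: accident reported 800 m ahead ...
```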
At what point do you define AI as robust or trustworthy?
We don’t view any AI solution as robust, even if it still functions reliably and delivers good results in unknown environments or unpredictable situations. Here we might have to deal with a range of uncertainties – or better yet call them unknowns. This can be tied for instance to the fact that all possible situations or environmental influence was observed or even detected while developing and training the AI system.
It can also happen that an AI component makes a wrong decision if the input deviates only slightly from known situations. We're familiar with these kinds of problems in the area of image recognition, for instance, where just a few minor pixel changes can lead to a red traffic light being perceived as green or to a pedestrian not being recognized. This of course opens up the risk of malicious manipulation, in which a hacker tries to disrupt the AI through targeted input. To that extent, it also becomes a question of the safety of the AI application and thus its trustworthiness. To employ AI-based applications in safety-critical environments such as autonomous driving or medical robotics, we have to be able to trust that such misbehavior is virtually impossible.
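A well-known method from the adversarial-examples literature for crafting exactly this kind of small pixel perturbation is the fast gradient sign method (FGSM). The sketch below shows the principle on a toy, untrained classifier; it is a generic illustration, not a technique described in the interview.

```python
# Illustrative FGSM sketch: nudge each pixel by epsilon in the direction
# that increases the classifier's loss, producing a near-identical image
# that the model may misclassify (toy model, stand-in image).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))  # toy classifier
image = torch.rand(1, 3, 32, 32, requires_grad=True)            # stand-in traffic-light image
true_label = torch.tensor([0])                                  # class 0: "red"

loss = nn.CrossEntropyLoss()(model(image), true_label)
loss.backward()                                          # gradient w.r.t. the pixels

epsilon = 0.03                                           # perturbation budget per pixel
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
print("prediction before:", model(image).argmax().item(),
      "after:", model(adversarial).argmax().item())      # may flip to class 1
```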
What kind of solutions is the Center for AI working on to make government services more direct and more service-oriented?
The future of public administration services lies in a proactive approach without interaction. Government services should be provided automatically, without the need for an application and without any interaction on the citizen's part. With our DR&P project (Digital Readiness Assessment and Piloting for German Public Services), we want to put the concept of proactive, non-interactive government services into practice. To do that we developed and applied an analysis method for assessing the readiness of specific services, enhanced existing software frameworks and created two demonstrators: child allowance and restaurant permit applications. Through our research we offer government practitioners a structured engineering approach that couples visionary service design with advanced technologies in order to achieve a higher level of service quality for individuals and companies. A government office that proactively delivers a service is viewed as user-friendly and improves service quality because it delivers a service to the user (user-centric) instead of merely approving it (government-centric). To make such proactive and non-interactive services available, the technology basis combines machine learning-based intelligent data processing with accountability-based data exchange that relies on distributed ledger technology (DLT).
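As a highly simplified, hypothetical sketch of the proactive, non-interactive idea: a recorded life event automatically triggers the matching service, with no application filed by the citizen. The event names and service mapping below are invented for illustration; the real demonstrators involve far more elaborate eligibility checks and data exchange.

```python
# Hypothetical sketch: an event-driven, proactive service trigger.
SERVICES = {  # invented mapping of life events to services
    "birth_registered": "child allowance",
    "restaurant_registered": "restaurant permit review",
}

def on_life_event(event, citizen):
    """Initiate the matching service automatically; the citizen files no application."""
    service = SERVICES.get(event)
    if service:
        print(f"Proactively starting '{service}' for {citizen}; "
              "required data is fetched via the accountable data-exchange layer.")

on_life_event("birth_registered", "citizen #4711")
```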
How can government agencies share information in a federated environment while respecting data privacy but without releasing specific data?
The keywords here are federated machine learning (FML) and accountability. FML is an approach that enables multiple parties to use their data to collaborate on the creation of a shared machine learning model without having to share the data itself. The idea is that all parties carry out machine learning tasks on their own private datasets and then share the resulting model updates in order to create a combined model over the entire data. This way the data remains private and the parties share only model updates and test data for evaluating the quality of the learned models. IBM Research developed a framework for federated learning that we have now linked with the fortiss Evidentia system as part of our collaborative AFML (accountable federated machine learning) project. With Evidentia, you can specify distributed workflows such as federated learning and document the execution of the individual steps in a tamper-proof way. This makes it clear and verifiable at all times which participant did precisely what and at what time, which decisions were made and why, and whether the actual actions of the participants correspond to the agreed-upon learning process. This allows a consortium of parties to collaborate in a verifiably accountable way, even if there is no mutual trust.
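The core FML loop can be sketched in a few lines: each party trains on its private data and shares only the resulting model update, which a coordinator averages into the combined model. This is a generic federated-averaging illustration with made-up data, not the IBM framework or Evidentia itself.

```python
# Generic federated-averaging sketch: raw data never leaves a party;
# only model updates are shared and combined.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """A few gradient steps of logistic regression on one party's private data."""
    w = weights.copy()
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))         # local predictions
        w -= lr * X.T @ (p - y) / len(y)     # gradient step on private data
    return w

rng = np.random.default_rng(0)
parties = [(rng.normal(size=(100, 5)), rng.integers(0, 2, 100)) for _ in range(3)]
global_w = np.zeros(5)

for round_ in range(10):                     # each round: local training, then averaging
    updates = [local_update(global_w, X, y) for X, y in parties]
    global_w = np.mean(updates, axis=0)      # coordinator combines only the updates
```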
Which sectors profit the most?
At the Center for AI, our initial research activities are industry-independent, or they span all industries. There are countless fields of application for AI. Wherever data accumulates, AI, and especially machine learning, has the potential to be deployed profitably. There is significant potential especially in the manufacturing industry, given its high share of recurring and predictable activities. And of course we're also working on concrete applications in specific fields, such as the use of machine learning techniques for detecting anomalies in robot-based production.
How can companies gain access to these technologies?
As a collaborative research organization between IBM and fortiss, the Center for AI researches and develops AI solutions for specific challenges. For companies that wish to cooperate with us, the fortiss SME (small-to-medium enterprise) area is the starting point. This is where we bundle our services for companies and where we are happy to tell them about the different forms of cooperation that are available.