AI-Blueprint for Deep Neural Networks

Ernest Wozniak, Henrik J. Putzer, and Carmen Cârlan

Proceedings of the Workshop on Artificial Intelligence Safety (SafeAI '21),

February 2021

abstract

Development of trustworthy (e.g., safety- and/or security-critical) hardware/software-based systems needs to rely on well-defined process models. However, the engineering of trustworthy systems implemented with artificial intelligence (AI) is still poorly discussed. This is, to a large extent, due to the standpoint that AI is merely a technique applied within software engineering. This work follows a different viewpoint, in which AI represents a third kind of technology (next to software and hardware), with close connections to software. Consequently, the contribution of this paper is the presentation of a process model tailored to AI engineering. Its objective is to support the development of trustworthy systems for which parts of their safety- and/or security-critical functionality are implemented with AI. As such, it considers methods and metrics at different AI development phases that shall be used to achieve higher confidence in the satisfaction of trustworthiness properties of a developed system.

subject terms: Safety Case, Model-based Systems Engineering, MbSE

url: http://ceur-ws.org/Vol-2808/Paper_22.pdf