BARK: Open Behavior Benchmarking in Multi-Agent Environments

Julian Bernhard, Klemens Esterle, Patrick Hart and Tobias Kessler

Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),

October 2020 · Las Vegas, NV, USA

Abstract

Predicting and planning interactive behaviors in complex traffic situations is a challenging task. Especially in scenarios with multiple, densely interacting traffic participants, autonomous vehicles still struggle to interpret the situation and to ultimately achieve their own mission goal. As driving tests are costly and challenging scenarios are hard to find and reproduce, simulation is widely used to develop, test, and benchmark behavior models. However, most simulations rely on datasets and simplistic behavior models for traffic participants and do not cover the full variety of real-world, interactive human behaviors. In this work, we introduce BARK, an open-source behavior benchmarking environment designed to mitigate these shortcomings. In BARK, behavior models are (re-)used for planning, prediction, and simulation. A range of models is currently available, such as Monte-Carlo Tree Search and Reinforcement Learning-based behavior models. We use a public dataset and sampling-based scenario generation to demonstrate the interchangeability of behavior models in BARK. We evaluate how well the models cope with interactions and how robust they are to the exchange of behavior models. Our evaluation shows that BARK provides a suitable framework for the systematic development of behavior models.
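To make the notion of interchangeable behavior models concrete, the sketch below shows how a single behavior-model interface can be reused for planning the ego vehicle and for simulating or predicting the other agents. This is a minimal, hypothetical Python example, not BARK's actual API: the names BehaviorModel, ObservedWorld, ConstantVelocityBehavior, plan(), and step_world() are assumptions made purely for illustration.

```python
# Minimal sketch (hypothetical, not BARK's actual API) of a shared
# behavior-model interface: the same model type can drive the ego
# agent or stand in for any other traffic participant.
import math
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class ObservedWorld:
    """Hypothetical per-agent view of the world: the agent's (x, y, theta, v)."""
    ego_state: Tuple[float, float, float, float]


@dataclass
class Trajectory:
    """Planned states over a short horizon as (t, x, y, theta, v) tuples."""
    states: List[Tuple[float, float, float, float, float]]


class BehaviorModel(ABC):
    """Common interface: any implementation can serve as the ego planner
    or as the prediction/simulation model of another agent."""

    @abstractmethod
    def plan(self, delta_time: float, observed_world: ObservedWorld) -> Trajectory:
        ...


class ConstantVelocityBehavior(BehaviorModel):
    """Trivial baseline: keep the current heading and speed."""

    def plan(self, delta_time, observed_world):
        x, y, theta, v = observed_world.ego_state
        end = (delta_time,
               x + v * delta_time * math.cos(theta),
               y + v * delta_time * math.sin(theta),
               theta, v)
        return Trajectory(states=[(0.0, x, y, theta, v), end])


def step_world(models: Dict[int, BehaviorModel],
               worlds: Dict[int, ObservedWorld],
               dt: float) -> Dict[int, Trajectory]:
    """Advance every agent with its own behavior model. Because all models
    share one interface, the model behind any agent id can be swapped,
    which is the interchangeability a benchmark can exploit."""
    return {aid: model.plan(dt, worlds[aid]) for aid, model in models.items()}


if __name__ == "__main__":
    models = {0: ConstantVelocityBehavior(), 1: ConstantVelocityBehavior()}
    worlds = {0: ObservedWorld((0.0, 0.0, 0.0, 10.0)),
              1: ObservedWorld((5.0, 2.0, math.pi / 2, 8.0))}
    print(step_world(models, worlds, dt=0.2))
```

In such a setup, benchmarking robustness to exchanged behavior models amounts to replacing the entries of the models dictionary (e.g., a learned policy for the ego agent, rule-based or data-driven models for the others) while the simulation loop stays unchanged.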