Automated Factual Benchmarking for In-Car Conversational Systems using Large Language Models

Rafael Giebisch, Ken E. Friedl, Lev Sorokin and Andrea Stocco

Proceedings of the 36th IEEE Intelligent Vehicles Symposium, 8 pages

July 2025

Abstract

In-car conversational systems promise to improve the in-vehicle user experience. Modern conversational systems are based on Large Language Models (LLMs), which makes them prone to errors such as hallucinations, i.e., inaccurate, fictitious, and therefore factually incorrect information. In this paper, we present an LLM-based methodology for the automatic factual benchmarking of in-car conversational systems. We instantiate our methodology with five LLM-based methods, leveraging ensembling techniques and diverse personae to enhance agreement and minimize hallucinations. We use our methodology to evaluate CarExpert, an in-car retrieval-augmented conversational question answering system, with respect to its factual correctness against a vehicle's manual. We produced a novel dataset specifically created for the in-car domain, and tested our methodology against an expert evaluation. Our results show that the combination of GPT-4 with Input-Output Prompting achieves over 90% factual correctness agreement rate with expert evaluations, while also being the most efficient approach, yielding an average response time of 4.5s. Our findings suggest that LLM-based testing constitutes a viable approach for the validation of conversational systems regarding their factual correctness.