Critical Conversation - The risks and benefits of chatting with AI while driving

Over the last couple of years, Large Language Models (LLMs) have grown in capability and sophistication, becoming embedded in everyday workflows and activities for many people. Whether drafting text, summarising documents, sense-checking arguments or even acting as a sounding board for half-formed ideas, they have to some degree become part of our ‘cognitive furniture’.

The growth of a compelling, attention-seeking technology is a pattern we have seen before. My early work on driving simulators examined the sensory, cognitive and physical distractions caused by handheld and handsfree mobile phone conversations and text messaging whilst driving. A system that provides useful interactions outside of a car can become an irresistible temptation within it.

It is therefore entirely predictable that people will seek to engage with LLMs whilst driving. Anyone with memories of the TV series Knight Rider or the HAL-9000 computer from the movie 2001 will recognise the appeal of chatting to a computerised companion. Fluent, flexible conversations with an LLM could be made more personalised or purposeful than conversations with a passenger and more interactive than listening to a podcast. As an example, voice interaction with AI systems such as Grok in Tesla vehicles is already making such conversations a reality. However, there are risks here that may not be obvious.

Cognitive distraction whilst driving

Psychology research over many decades has shown that human attention is not a switch. Instead, drivers continuously trade attention between the driving task and everything else they are doing and thinking about. Conversations, especially demanding ones and those that specifically engage the cognitive resources we need for driving, can occupy our working memory, slow our reaction times and reduce our ability to respond in the right way at the right time to unexpected hazards. Allowing drivers to engage in unrestricted conversation with an LLM could create sustained cognitive distraction, increasing the likelihood that urgent responses in critical situations are slower or poorer than they otherwise would be.

However, we also know that the conversation between a driver and a passenger within their vehicle is characteristically different to that with a remote partner. This seems to be the result of two related mechanisms. Firstly, a passenger can pick up on cues from the driving situation and from the behaviour of the driver to detect situations of high workload and moderate the conversation accordingly. Secondly, the etiquette and expectations around a conversation are different when driver and passenger are together compared to when a driver is having a phone conversation. It is completely acceptable for a driver to pause a conversation with their passenger in response to challenging driving conditions, whereas a 30-second pause in a telephone conversation might feel awkward and uncomfortable.

Furthermore, research from 2018 has already shown that driving performance improved in a simulator when drivers were allowed to engage with a ‘digital assistant’ compared to driving without. The authors noted that this technique shows promise as a countermeasure for task-related driving fatigue. However, they also recognised that their driving task was made deliberately very simple for the participants. In more challenging situations, the risk of verbal interactions causing cognitive overload and distraction was identified as a concern.

Can we chat and drive?

With that in mind, could conversations with AI be made compatible with driving? Interactions with an in-vehicle conversational AI would need to be more like those with a good passenger: sensitive to context, aware of the driving situation and willing to stop talking when the road demands it. In practice, this would require something quite radical. The system would need to continuously manage the interaction in response to both the road environment and the state of the driver by:

  • Continuously estimating driving task demand (road type, traffic complexity, weather, speed etc.).

  • Monitoring driver state (glance behaviour, steering inputs, workload trends, conversation fluency etc.).

  • Actively moderating conversation — deciding when to speak, how much to say and when to defer or interrupt.

When the system perceives low-demand driving conditions, interactions with it might feel natural and conversational. As demand rises, system responses would become shorter, simpler, or paused entirely, reducing the driver’s cognitive load. In high-demand environments, silence would be the default. When the system attenuates conversation in this way, it should give the driver suitable feedback on why; failing to do so would risk frustration, mistrust, and workarounds. Doing all of this intuitively and seamlessly is not a trivial task!
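Purely as an illustration, the demand-to-policy mapping described above could be sketched in a few lines. Everything here is a hypothetical assumption (the input signals, the weightings and the thresholds are invented for the example and do not reflect any vendor's implementation); a real system would need validated workload measures and far richer sensing.

```python
from dataclasses import dataclass

# Illustrative sketch only: signal names, weights and thresholds are
# hypothetical assumptions, not a real in-vehicle API.

@dataclass
class DrivingContext:
    traffic_complexity: float  # 0 (empty road) .. 1 (dense urban traffic)
    weather_severity: float    # 0 (clear) .. 1 (heavy rain or fog)
    speed_kmh: float

@dataclass
class DriverState:
    glance_off_road_ratio: float  # fraction of recent time eyes were off the road
    steering_variability: float   # 0 (smooth inputs) .. 1 (erratic inputs)

def estimate_demand(ctx: DrivingContext, driver: DriverState) -> float:
    """Combine situational and driver-state cues into a 0..1 demand score."""
    situational = (0.5 * ctx.traffic_complexity
                   + 0.3 * ctx.weather_severity
                   + 0.2 * min(ctx.speed_kmh / 130.0, 1.0))
    driver_load = (0.5 * driver.glance_off_road_ratio
                   + 0.5 * driver.steering_variability)
    return min(1.0, 0.6 * situational + 0.4 * driver_load)

def moderation_mode(demand: float) -> str:
    """Map estimated demand to a conversation policy."""
    if demand < 0.3:
        return "full_conversation"      # natural, open-ended interaction
    if demand < 0.6:
        return "brief_responses"        # shorter, simpler replies
    return "pause_with_explanation"     # silence, plus brief feedback on why
```

On a quiet motorway (low traffic, clear weather, smooth driving) this sketch permits full conversation, while a busy urban junction in rain with erratic steering would trigger a pause. The hard part, of course, is not this mapping but estimating demand reliably in the first place.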

The evolution of ADAS and automated driving functionality creates a further opportunity here. During conversation with a vehicle-based system, its ADAS could be primed to be more responsive to emerging hazards, recognising that a driver’s perceptions and responses may be delayed by the associated cognitive load.

A new problem emerges…

However, when a system operates to manage attentional demand, it becomes safety-critical. If a serious incident occurred while a system-authorised conversation was in progress, uncomfortable questions around liability would follow. Should the system have picked up a foreseeable risk at the critical moment? Should it have moderated interactions more tightly in response? Did the system unnecessarily increase the driver’s cognitive load exactly when an urgent response was required?

As we have seen with partial vehicle automation, once systems begin to shape behaviour, user expectations can change. When incidents occur during conversation with an in-vehicle system, scrutiny from bodies such as UNECE, NHTSA, NTSB or EuroNCAP would likely follow, exploring whether such systems are sufficiently conservative under conditions of uncertainty.

A familiar pattern — and an open opportunity

When new technologies arrive, people use them in ways designers did not fully anticipate. Risk can emerge from malfunctions but also from normal use in the hands of the everyday consumer. Conversational AI in vehicles is coming, whether through built-in systems or personal devices. The question is whether it arrives as an unmanaged distraction or as something deliberately shaped around the realities of human attention and road risk. As is so often the case, I fear that human factors thinking will be bolted on later in an attempt to mitigate this issue, rather than integrated up front as a key element of system design.

The opportunity to speak with AI whilst driving is an exciting development but one that risks underestimating how deceptively demanding driving really is for us. Safe driving requires accurate perceptions, predictions and responses in an infinitely variable environment where cognitive demand is uneven and unpredictable. Adding further conversational demands may create risk at the wrong times. A smart interactive system that accounts for predictable changes in task demand could create safer human drivers and more enjoyable, productive driving experiences - but it will take much work to get this right.
