Saving lives with AI

09 August 2023

Duncan Swan, chief operating officer, British APCO

The use of artificial intelligence (AI) by public safety agencies is growing rapidly. After all, AI is the science of making things smart – and if being smarter lets us save lives, speed up decision-making, and achieve the best possible outcome in emergency situations, then why wouldn't AI become all-pervading?

In all walks of life, the use of AI presents regulatory and ethical challenges, and public safety is no different. Working within robust ethical frameworks, the benefits will be huge. But even where AI is a valuable tool in public safety, there will always be a human in the loop making the final decision. Fundamentally, AI should augment, not replace, human decision-making.

Europe and the United States have their own emergency number associations, EENA and NENA respectively – and at each of their recent annual conferences AI was a key topic in both conference papers and out on the exhibition floor. AI is having a transformational impact on public safety – not forgetting that early forms of AI have been used for over 20 years in honing predictive algorithms to deliver faster emergency response.

The world over, voice communication remains the primary means by which citizens ask the emergency services for help. And AI can play an important role here – improving speech recognition for those with impaired speech, or detecting when a caller is speaking a foreign language. Natural language processing allows machines to understand words much as humans do, using rule-based systems and machine learning techniques – all core elements of AI.

Project Euphonia is a Google Research initiative focused on helping people with non-standard speech be better understood – for example, those with speech impairments caused by neurological conditions such as stroke, multiple sclerosis, traumatic brain injury and Parkinson's disease. The approach is centred on analysing speech recordings to better train speech recognition models – with the key focus on accessibility. And we are seeing control room suppliers starting to integrate live audio translation into their platforms. Anything that can help the emergency call taker better understand the call for help will be an asset – and will help get a responder to an incident faster and ultimately save lives.
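As a simple illustration of the language-detection side of this, the Python sketch below runs a transcribed caller utterance through the open-source langdetect package and flags calls that look likely to need live translation. The upstream transcription step, the choice of library and the confidence threshold are all assumptions for illustration, not a description of any particular control room product.

```python
# A minimal sketch: flag a transcribed emergency call for live translation
# when the caller does not appear to be speaking English.
# Assumptions: the audio has already been transcribed upstream, the
# open-source `langdetect` package is a stand-in for a production
# language-identification model, and the 0.8 threshold is illustrative.

from langdetect import detect_langs  # pip install langdetect


def needs_translation(transcript: str, threshold: float = 0.8) -> bool:
    """Return True if the utterance is confidently identified as non-English."""
    if not transcript.strip():
        return False  # nothing to analyse yet
    candidates = detect_langs(transcript)  # e.g. [pl:0.97, en:0.02]
    top = candidates[0]                    # highest-probability language first
    return top.lang != "en" and top.prob >= threshold


# Example usage with a (hypothetical) transcribed utterance.
utterance = "Proszę o pomoc, mój mąż stracił przytomność"
if needs_translation(utterance):
    print("Non-English caller detected - route to live translation support")
```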

The emergency services control room sits at the operational core, interacting with citizens and responders alike. Voice communication is captured and recorded; data is entered automatically or manually; interactions are noted. The ability to capture conversational data in real time provides a powerful resource: the emergency call taker or dispatcher can search on keywords to get quickly to the right point in a conversation, while post-call analysis helps to understand demand drivers and trends.
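A toy version of that keyword search might look like the Python below: real-time transcript segments are stored with timestamps, and the call taker's search term returns the points in the conversation where it occurs. The data structure and function names are illustrative assumptions, not any supplier's API.

```python
# A minimal sketch of keyword search over a live call transcript.
# Each segment is (seconds_from_call_start, text); the structure and names
# are illustrative assumptions, not a real control room API.

from typing import List, Tuple

Transcript = List[Tuple[float, str]]

call_transcript: Transcript = [
    (2.5, "ambulance please, my neighbour has collapsed"),
    (9.1, "he is breathing but not responding to me"),
    (17.4, "we are at 12 Station Road, the door is unlocked"),
]


def find_keyword(transcript: Transcript, keyword: str) -> List[float]:
    """Return the timestamps (seconds) of segments containing the keyword."""
    keyword = keyword.lower()
    return [ts for ts, text in transcript if keyword in text.lower()]


# The call taker searches for "breathing" and jumps straight to ~9 seconds in.
print(find_keyword(call_transcript, "breathing"))  # -> [9.1]
```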

Dealing with an emergency incident increases the physiological and environmental pressure on control room staff, and that pressure only builds with the desire to achieve the right outcome as emotions come into play. AI has a role in these situations in helping to reduce data overload – allowing control room staff to focus on the information they need, in context. Enhanced decision support can trigger specific questions for the call taker to ask – based on agency policies, where keywords or voice analytics identify a call as highly likely to be, for instance, mental health related. AI can also help to fill operational blind spots in complex, unfolding emergencies, providing ongoing assessment and surfacing actionable insights that might otherwise go unnoticed.
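A highly simplified sketch of that kind of policy-driven prompting is shown below: keywords detected in the live transcript map to questions the agency wants the call taker to ask. The keyword lists, categories and questions are invented for illustration; a deployed system would rely on trained classifiers and voice analytics rather than plain string matching.

```python
# A minimal sketch of policy-driven question prompting.
# Keywords, categories and questions are invented for illustration;
# a deployed system would rely on trained classifiers / voice analytics
# rather than simple substring matching.

POLICY_PROMPTS = {
    "mental_health": {
        "keywords": {"overdose", "self-harm", "suicidal"},
        "questions": [
            "Is the person alone right now?",
            "Have they taken anything, and if so, what and when?",
        ],
    },
    "road_traffic_collision": {
        "keywords": {"crash", "collision", "motorway"},
        "questions": [
            "How many vehicles are involved?",
            "Is anyone trapped or not breathing?",
        ],
    },
}


def suggest_questions(live_transcript: str) -> list:
    """Return agency-defined questions triggered by keywords in the transcript."""
    text = live_transcript.lower()
    prompts = []
    for policy in POLICY_PROMPTS.values():
        if any(keyword in text for keyword in policy["keywords"]):
            prompts.extend(policy["questions"])
    return prompts


print(suggest_questions("my brother says he feels suicidal and has taken pills"))
```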

Machines process huge amounts of data faster than humans can, and they learn continuously to improve. The ability to analyse data efficiently and establish patterns (including voice and video analysis) will lead to faster resolution and the right outcome sooner. Serious incidents – such as multi-vehicle road traffic collisions – have a major impact on emergency control rooms, with often hundreds of well-meaning callers overwhelming call takers. At the same time, citizens experiencing unrelated emergencies struggle to get through as call queues grow and response times increase – and in the UK, as in many other countries, callers who hang up in frustration must be called back to confirm that they are okay and not in need of emergency help. In the United States, we are seeing suppliers solve these issues by using AI to triage calls. When call volume spikes are detected, the call handling system automatically begins to triage calls, advising callers that the emergency agency is aware of the incident. It also sharply reduces the problem of abandoned calls – reducing the need for additional follow-up calls.
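The spike-detection step behind that kind of automated triage can be sketched very simply: count calls in a rolling window and, when the rate far exceeds the normal baseline, switch new callers to a triage announcement. The window length, baseline and multiplier below are illustrative assumptions, not figures from any deployed system.

```python
# A minimal sketch of call-volume spike detection for automated triage.
# The 60-second window, baseline rate and 3x multiplier are illustrative
# assumptions, not parameters of any real call handling platform.

from collections import deque
import time

WINDOW_SECONDS = 60
BASELINE_CALLS_PER_WINDOW = 10   # typical volume for this control room
SPIKE_MULTIPLIER = 3             # treat 3x baseline as a probable major incident

recent_calls = deque()


def register_call(now=None):
    """Record an incoming call; return True if triage mode should activate."""
    now = time.time() if now is None else now
    recent_calls.append(now)
    # Drop calls that have fallen outside the rolling window.
    while recent_calls and now - recent_calls[0] > WINDOW_SECONDS:
        recent_calls.popleft()
    return len(recent_calls) >= BASELINE_CALLS_PER_WINDOW * SPIKE_MULTIPLIER


# Simulate a burst of calls about the same incident, one per second.
start = time.time()
for i in range(35):
    triage_active = register_call(start + i)
print("Triage announcement active:", triage_active)
```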

And we are starting to see AI incident detection and verification where emergencies are proactively identified, agencies made aware, and first responders notified before the first emergency calls are received – but this level of video analysis starts to raise key questions around ethical intelligence.

It is probably fair to conclude that in an emergency, citizens would not easily accept directions from a 'non-human', but they would almost certainly accept AI assisting emergency agencies in making better informed and quicker decisions. And if data analysis can provide meaningful insights that reduce pressure in the control room and optimise the first response, then so much the better.