AI based on ChatGPT to manage ‘112’ emergency calls by 2025

Posted on: 21 Jun 2023

Technology to be used “in times of congestion”

Artificial intelligence based on ChatGPT technology will handle the answering of 112 emergency calls in times of congestion from 2025 onwards, the deputy secretary-general of the Ministry of Internal Administration (MAI), António Pombeiro, told reporters today.

“In principle, if the pilot goes well, we are prepared, from 2025, to start using” this system to answer calls, said Pombeiro on the sidelines of MAI Tech, a technology conference in the areas of security and civil protection, organised by the Ministry of Internal Administration in Porto.

The government official conceded this is “a very recent technology”, with a “need to do many tests”, admitting that for now “we are very much in the unknown”.

“In certain situations we have waiting periods caused by call congestion, when there are incidents or events that attract a lot of publicity and a lot of people watching what is happening, and everyone takes the initiative to call 112”, he explained, giving the example of urban fires.

As resources are “sized for normal situations”, periods of call congestion can mean waits of “five or six minutes” before people hear a voice at the other end of the line (i.e. hardly an emergency response). The idea is to “create a first interface that answers the call, evaluates what kind of problem it relates to and what kind of report” is needed, but with “an answer in natural language”.

According to Pombeiro, “the caller will not realise that he or she is talking to a system, to a machine, to a robot”. The system will “use the new ChatGPT technology” and is still undergoing a period of testing with simulated calls.

“The second party always has to be a human”, he assured. The system (that is, the robot) never takes a call through to the end and will only operate “in times of greater congestion”.
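
Taken together, the design Pombeiro describes is a congestion-triggered triage layer: under normal load calls go straight to a human, while during congestion an AI front end answers first, works out the type of incident in natural language, and then always transfers the call to an operator. The sketch below only illustrates that handoff logic; the names, the congestion threshold and the keyword-based stub standing in for the ChatGPT layer are hypothetical and do not reflect the actual 112 system.

```python
from dataclasses import dataclass

# Hypothetical congestion threshold: hand the first contact to the AI layer
# only when more calls are waiting than operators can promptly answer.
CONGESTION_QUEUE_THRESHOLD = 5


@dataclass
class TriageResult:
    incident_type: str   # e.g. "urban fire", "medical / accident", "unknown"
    summary: str         # short natural-language report for the operator


def ai_first_interface(transcript: str) -> TriageResult:
    """Stand-in for the ChatGPT-based layer: reads the caller's own words
    and produces a provisional classification plus a summary.

    A real system would call a language model here; this stub just keys
    on a few words so the example runs on its own.
    """
    text = transcript.lower()
    if "fire" in text:
        incident = "urban fire"
    elif "accident" in text or "injured" in text:
        incident = "medical / accident"
    else:
        incident = "unknown"
    return TriageResult(incident_type=incident,
                        summary=f"Caller reports: {transcript.strip()}")


def handle_call(transcript: str, calls_waiting: int) -> str:
    """Route a 112 call according to the described design:
    - normal load: straight to a human operator;
    - congestion: the AI answers first, triages, then ALWAYS
      transfers to a human (it never finishes the call itself).
    """
    if calls_waiting <= CONGESTION_QUEUE_THRESHOLD:
        return "routed directly to human operator"

    triage = ai_first_interface(transcript)
    # Mandatory handoff: the second party is always a human.
    return (f"AI triage -> {triage.incident_type}; "
            f"transferred to human operator with note: {triage.summary}")


if __name__ == "__main__":
    print(handle_call("There is a fire in the building next door", calls_waiting=2))
    print(handle_call("There is a fire in the building next door", calls_waiting=12))
```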

Questioned as to whether the long-term plan is to gradually replace people in the ‘112’ network, the deputy secretary-general rejected this scenario, saying that the development involves “strengthening operational means”. It is “always necessary to have a human in the background”, he said.

As for false or prank calls, which account for around 60% of calls received, “a very high number”, these too will need to be factored into the system's “learning”, and handling them constitutes “a goal” of the new project. (In other words, the robots will be programmed to sift out prank or false emergencies…)

Source material: LUSA
