
Olympics-How France Plans To Use AI To Keep Paris 2024 Safe


France this week tested, at a Depeche Mode concert, the artificial intelligence-driven video surveillance technology that will be deployed during the Olympic Games, and called the exercise a success.

French legislation passed in 2023 permits the use of AI video surveillance for a trial period covering the Games to detect abnormal events or human behaviour at large-scale events.

The technology could be pivotal to thwarting an attack like the bombing at the 1996 Olympics in Atlanta or the Nice truck attack in 2016, officials say.

Rights campaigners warn the technology poses a threat to civil liberties.

WHAT IS AI-POWERED SURVEILLANCE?

Algorithmic video surveillance uses computer software to analyse images captured by video surveillance cameras in real time.

Four companies — Videtics, Orange Business, ChapsVision and Wintics — have developed AI software that uses algorithms to analyse video streams from existing video surveillance systems and help identify potential threats in public spaces.

The algorithms are trained to detect pre-determined “events” and abnormal behaviour and send alerts accordingly. Human beings then decide if the alert is real and whether to act on it.
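To make the detect-then-review flow described above concrete, here is a minimal, purely illustrative Python sketch. The event names, detection records and confidence cut-off are assumptions for illustration, not any vendor's actual API; the point is that the software only raises alerts, and a human operator makes the final call.

```python
from dataclasses import dataclass

# Hypothetical sketch of the detect-then-review flow; event names and
# detection records are illustrative, not a vendor's real interface.

MONITORED_EVENTS = {"crowd_surge", "abandoned_object", "weapon", "person_on_ground", "fire"}

@dataclass
class Alert:
    event: str        # which pre-determined event was detected
    camera_id: str    # existing CCTV camera that produced the frames
    score: float      # model confidence, not a final decision

def alerts_for_review(detections):
    """Filter raw detections down to alerts a human operator must confirm."""
    return [
        Alert(d["event"], d["camera"], d["score"])
        for d in detections
        if d["event"] in MONITORED_EVENTS and d["score"] >= 0.8  # assumed cut-off
    ]

# Example: only the monitored, high-confidence detection becomes an alert;
# an operator then decides whether it is real and whether to act on it.
detections = [
    {"event": "abandoned_object", "camera": "cam_12", "score": 0.93},
    {"event": "jaywalking", "camera": "cam_07", "score": 0.99},
]
print(alerts_for_review(detections))
```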

WHAT WILL THE ALGORITHMS BE LOOKING FOR?

The law allows for eight different “events” to be flagged by AI surveillance software during the Games, including: crowd surges; abnormally heavy crowds; abandoned objects; presence or use of weapons; a person on the ground; a fire breaking out; contravention of rules on traffic direction.

Within these categories, specific thresholds (number of people, type of vehicle, timing etc) can be set manually to cater for each individual event, location or threat.
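As a rough illustration of what manually set thresholds could look like in practice, the following Python sketch maps a location and event to a limit. The category names echo the law, but the location names, parameters and numbers are invented for the example.

```python
# Hypothetical per-location thresholds; numbers and location names are invented.
THRESHOLDS = {
    ("stade_de_france", "abnormally_heavy_crowd"): {"people_per_m2": 4.0},
    ("stade_de_france", "abandoned_object"): {"unattended_seconds": 120},
    ("metro_line_14", "abandoned_object"): {"unattended_seconds": 60},
}

def should_alert(location: str, event: str, measurement: dict) -> bool:
    """Return True when a measurement crosses the manually set threshold."""
    limits = THRESHOLDS.get((location, event))
    if limits is None:
        return False  # this event is not monitored at this location
    return any(measurement.get(name, 0) >= limit for name, limit in limits.items())

# Example: an object left unattended for three minutes on line 14 triggers an alert.
print(should_alert("metro_line_14", "abandoned_object", {"unattended_seconds": 180}))  # True
```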

WHO WILL USE AI-POWERED SURVEILLANCE?

National and local police, firefighters and public transport security agents will all have access to AI-powered surveillance.

Software developed by Wintics, and tested at the Depeche Mode concert, will be deployed in the Paris region and on public transport.

Paris Police chief Laurent Nunez described the trial as largely a success.

“Everything went relatively well, all the lights are green (for future use),” he said.

WILL FACIAL RECOGNITION BE USED?

It should not be. The new law continues to ban facial recognition in most cases, and French authorities have said it is a red line not to be crossed.

Nonetheless, rights campaigners are concerned that mission creep could set in down the line.

“Software that enables AI-powered video surveillance can easily enable facial recognition. It’s simply a configuration choice,” said Katia Roux of Amnesty International France.

The legal framework regulating facial recognition remains too fuzzy, and technical and legal safeguards are insufficient, according to Amnesty International.

Wintics Co-founder Matthias Houllier said his software’s algorithms were not trained for facial recognition.

“There’s no personal identification method in our algorithms,” he said. “It’s technically excluded.”

HOW WILL PRIVACY BE PROTECTED?

France’s Interior Ministry has created an evaluation committee to keep tabs on civil liberties throughout the trial period.

Led by a high-ranking official within France’s top administrative court, the committee also comprises the head of the country’s privacy watchdog, CNIL, four lawmakers and a mayor.

(Reporting by Juliette Jabkhiro and Julien Pretot; editing by Richard Lough and Toby Davis)
