
A legal review of the AI surveillance software at the Paris 2024 Olympics

As the Paris 2024 Olympic Games approach, the balance between deploying new counterterrorism technology and protecting civil liberties has become a topic of legal debate, specifically regarding France’s passage of legislation permitting “the use of AI video surveillance for a trial period covering the Games to detect abnormal events or human behaviour at large-scale events.” Under this new law, police will be able to use CCTV algorithms to detect events such as fights and crowd rushes. Videtics, Orange Business, Chapsvision and Wintics are the companies responsible for developing the AI software that analyses video streams from existing surveillance systems. This could be revolutionary in preventing attacks like the bombing at the 1996 Atlanta Olympics. These Games will also be unlike any before, featuring an exposed, open-air opening ceremony that heightens the demand for security.


French President Emmanuel Macron stated in a recent television interview that the Paris 2024 Olympics and the planned ceremony will “show France under its best light.” The opening ceremony is planned to be a beautiful moment of unity, taking place in the heart of Paris, which will essentially become ‘bunkerised’ on 26 July. This is to ensure the safety of half a million spectators, joined by viewers watching from apartments, as they witness a moment of Olympic history: athletes sailing along the River Seine to open the Games. Anne Hidalgo, Paris’s mayor, and other government officials have expressed confidence in the security plan. This will be the most exposed opening ceremony to date, as previous ceremonies have been held inside stadiums, placing the event at a higher risk of terrorism.


Paris’s history of terrorism is part of why the city chose to hold such an event in the first place; as Hidalgo said, “If we don’t do it because we’re afraid, then they’ve won. And they didn’t win.” Paris has thus pursued an unparalleled security agenda for the event, of which AI surveillance is only one measure. It is, however, the measure that has prompted the most questions about the legality of screening individuals who have not consented.


The history of EU law on AI and surveillance will be analysed in detail below. But first, the use of AI must be placed in the context of the other security measures being implemented and the heightened demand for an effective counter-terrorism surveillance system.


To briefly summarise, The New York Times reported that the Paris 2024 opening ceremony security plan entails closing the airspace within a 93-mile radius (including Charles de Gaulle Airport). Closer to the event, there will be a QR-code entry requirement for the 20,000 people who live and work nearby, and the River Seine will be closed to navigation. Police will patrol underground tunnels and sewers, and metro stations and businesses around the area will be closed. Four helicopters will monitor the closed airspace and shoot down drones if necessary. Moreover, there will be soldiers checking all boats during the parade and 100 divers specialised in bomb detection checking the Seine. Additionally, there will be 650 officers from specialised anti-terrorist units, over 700 firefighters specialised in nuclear and chemical attacks and 2,000 private security guards. There will also be 2,500 foreign officers (some with bomb-detecting dogs) and about 45,000 police and military officers in central Paris and its suburbs. In sum, security is being taken extremely seriously. Guillaume Farde, a security expert who teaches at Sciences Po Paris, stated that “We have never seen anything like this before.”


On 31 May, the Guardian reported that French security services had arrested an 18-year-old Chechen man who was plotting an “Islamist-inspired” attack on the Geoffroy-Guichard Stadium during an Olympic football match. The attack was planned to target spectators and security forces, and to end with the perpetrator dying as a martyr. This was the first threat made specifically against the Olympics, although the heightened terrorism alert is largely due to France’s history of being a target of Islamist terrorism. One of the deadliest attacks took place in Paris on 13 November 2015 and claimed the lives of 130 people.


However, Ghislain Réty, the head of one of France’s anti-terrorism units, stated that “a huge amount of intelligence work has been done” and that he thinks it “will be a beautiful party.” Paris’s commitment to these Games will be a symbol of strength and a statement of moving forward.


The security risk and the sheer magnitude of the security plan contextualise why France’s AI video surveillance has been permitted. Paris’s AI security cameras are set to detect abandoned bags, mass movements, suspicious gatherings and various crimes; a human operator will then ultimately decide whether something is a threat after receiving an alert. What is an exciting new use of AI for counter-terrorism is also being criticised as “scary” and an invasion of the privacy of French citizens.
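To make that detection-and-review flow concrete, here is a minimal sketch of a human-in-the-loop alerting pipeline of the kind described above. Every name in it (the event categories, the `Detection` class, the confidence threshold) is a hypothetical illustration, not the actual API of Videtics, Orange Business, Chapsvision or Wintics.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical event categories mirroring those named in the coverage:
# abandoned objects, crowd surges, abnormally large gatherings, and so on.
PERMITTED_CATEGORIES = {
    "abandoned_bag",
    "crowd_surge",
    "abnormal_gathering",
    "fire_or_smoke",
}

@dataclass
class Detection:
    """One alert raised by the video-analysis software (illustrative only)."""
    camera_id: str
    category: str
    confidence: float
    timestamp: datetime

def raise_alert(detection: Detection) -> bool:
    """Flag only event types the trial law allows, above an assumed threshold."""
    return detection.category in PERMITTED_CATEGORIES and detection.confidence >= 0.8

def review_alert(detection: Detection, operator_confirms: bool) -> str:
    """A human operator, not the algorithm, makes the final call."""
    if not operator_confirms:
        return "dismissed"
    return f"escalated: {detection.category} on camera {detection.camera_id}"

# Example: the software flags a possible crowd surge; an operator reviews it.
event = Detection("seine-cam-12", "crowd_surge", 0.91, datetime.now())
if raise_alert(event):
    print(review_alert(event, operator_confirms=True))
```

The structural point of the sketch is that no action follows from a detection alone: every escalation passes through a human review step, which is the safeguard French authorities emphasise.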


The European Parliament passed the AI Act this spring, stating:


“The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases.” 


This was further elaborated under the “Law enforcement exemptions” section, which outlines that:


“The use of biometric identification systems (RBI) by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations. ‘Real-time’ RBI can only be deployed if strict safeguards are met, e.g. its use is limited in time and geographic scope and subject to specific prior judicial or administrative authorisation. Such uses may include, for example, a targeted search of a missing person or preventing a terrorist attack. Using such systems post-facto (‘post-remote RBI’) is considered a high-risk use case, requiring judicial authorisation being linked to a criminal offence.”


This new act passed by the European Parliament holds France’s AI cameras to a strict legal standard: no facial recognition. Even though France’s new law permits the AI algorithms, France must still abide by the European Parliament’s decision.


Katia Roux of Amnesty International France expressed her concerns, noting that “software that enables AI-powered video surveillance can easily enable facial recognition. It's simply a configuration choice.” Amnesty International further expressed concern that “the legal framework regulating facial recognition remained too fuzzy and technical and legal safeguards were insufficient.”
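Roux’s point that facial recognition is “simply a configuration choice” can be illustrated with a hedged sketch: in a typical video-analytics pipeline, enabling an extra analysis module is often little more than a flag in a configuration file. The keys below are hypothetical and are not drawn from any of the vendors’ actual software.

```python
# Hypothetical deployment configuration for an AI video-analysis pipeline.
# Critics argue the safeguard is legal rather than technical, because the
# same software could switch on an additional module with a one-line change.
pipeline_config = {
    "video_sources": ["rtsp://camera-network/stream-12"],  # placeholder URL
    "modules": {
        "abandoned_object_detection": True,
        "crowd_density_estimation": True,
        "facial_recognition": False,  # banned under the French trial law and the EU AI Act
    },
    "alerting": {
        "human_review_required": True,
    },
}
```

This is the kind of configuration-level boundary that Amnesty International argues is too easy to move, which is why it calls for clearer technical and legal safeguards around it.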


France’s Interior Ministry will monitor civil liberties during the trial period through an evaluation committee. French authorities have further announced that, because the new law bans facial recognition, France will not cross this legal barrier. Wintics’s software was trialled at a recent Depeche Mode concert, and Paris’s police chief, Laurent Nuñez, declared the trial a great success.


The trial period for the software will extend past the Games, raising concerns that this will become a new reality in Paris.


Beyond facial recognition, a further concern raised by Noémie Levain, a digital rights activist, is that the software enables mass control and that people lose the right to be anonymous, even without facial recognition. She points out that the right not to be watched matters even in public spaces, and she claims the system is just as scary as “what is happening in China.” François Mattens, owner of a Paris-based AI company, has countered: “We are not China; we do not want to be Big Brother.”


Therefore, despite concerns over civil liberties, the European Parliament’s law protects individuals from facial recognition. Moreover, the AI software should provide further assurance to spectators, athletes and the police tasked with monitoring suspicious activity during the Games. As discussed, a heightened terrorist threat calls for innovation and solutions so that people from all over the world can celebrate sport and their countries without fear.


Next year, as the trial period comes to an end, observing how EU and French laws around AI evolve will be important for those interested in counter-terrorism efforts and the legal regulation of AI. As AI becomes increasingly incorporated into society, legal regulation is essential to protecting civil liberties.


