The promise of AI to improve operational efficiency across different professions is well-documented. In policing, its ability to analyse large datasets in real time means AI-enhanced cameras can help match faces against police databases, monitor loitering in hotspots, and detect suspicious or antisocial behaviour that might otherwise have gone unnoticed.
In many countries, economic austerity is constraining law enforcement funding, making AI-enhanced surveillance an attractive option for governments.
AI and public surveillance in China’s smart cities
In smart cities the world over, particularly in China, mass public surveillance is already common. Chongqing, Shenzhen, Shanghai, and Urumqi are among the most surveilled cities in the world.
Independent intelligence vendor IPVM reported that Xuhui District planned to install 2,500 additional facial recognition cameras in 2024, up from 1,200, with estimates that the system could analyse 25.9 million faces daily. The report also said that files would be created for each person and analysed for profiles on “social relationships, activities, and other comprehensive information,” with search filters including “gender, age group, and Uyghur ethnicity.”
American surveillance
In the US, surveillance technology is used in a less authoritarian manner than in China, but its reach is growing. San Francisco banned facial recognition in 2019, but in March 2024, voters approved Proposition E, a ballot measure that expands police surveillance powers. The measure allows the San Francisco Police Department to more easily install public security cameras and deploy drones without oversight from the Police Commission or Board of Supervisors.
The American Civil Liberties Union (ACLU) of Northern California noted that while the 2019 facial recognition ban remains, Proposition E weakens other safeguards, stating: “Proposition E also dramatically increases secret surveillance by allowing the police to track and monitor San Francisco residents without safety policies, public input, or oversight to protect our rights.” The ACLU also warned that Proposition E reduces the information police must collect, which could worsen discriminatory problems in policing.
The situation in the UK
In the UK, facial recognition has so far been used periodically via clearly signed mobile units.
However, for the first time, fixed facial recognition cameras have been deployed in Croydon, South London, while £3m ($3.92m) has been approved to introduce AI-assisted cameras and facial recognition into the CCTV systems of the London borough of Hammersmith and Fulham.
UK law currently lacks specific regulation on police use of this technology, leaving citizens to rely on data protection, human rights, and anti-discrimination laws.
The Metropolitan Police have said that any biometric data not connected to watch lists is deleted, and testing by the National Physical Laboratory reportedly found no significant race or gender bias.
However, public opinion is mixed: some citizens feel safer amid rising crime and underfunded police departments, while others are concerned about privacy and fairness.
AI can optimise policing efficiency, reducing the time officers spend reviewing hours of CCTV footage and allowing them to focus on intelligence-led investigations. However, specific legislation is needed to protect citizens’ privacy. Careful measures must be implemented to ensure AI does not amplify existing biases and discriminatory practices within law enforcement.

