Safety and regulating risks: who is safe in ‘safe cities’?
This week the UK is set to host its first AI Safety Summit. It aims to bring together international governments, leading AI companies, civil society groups and research experts to consider the risks of AI and how they can be mitigated through governance and international collaboration.
[Image: street surveillance camera with a graphical representation of facial recognition screening]
The summit calls for internationally coordinated action, informed by a shared understanding of the risks posed by frontier AI. Among other objectives, it aims to identify areas for potential collaboration on AI safety research, including evaluating model capabilities and developing new standards to support governance.
So what can be learnt about these risks from countries already deploying AI surveillance technology? Our latest African Digital Rights Network (ADRN) report maps the supply of AI-enabled surveillance technologies to five African countries – Nigeria, Ghana, Morocco, Malawi, and Zambia.
Can AI keep us safe?
Our research focused on five categories of digital surveillance technology foregrounded by our previous studies and our review of surveillance law in Africa: internet interception, mobile phone interception, social media surveillance, biometric ID surveillance, and safe city technologies.
‘Safe city’ or ‘smart city’ technologies are used for the surveillance of public spaces. Ghana, Morocco, and Zambia have each spent over US$250m on mass surveillance ‘safe cities’ using equipment from Chinese companies. The report shows that Chinese banks are offering huge loans to African governments to buy packages of surveillance technologies from Chinese companies including Huawei and ZTE.
Safe city packages often include the installation of thousands of closed-circuit television (CCTV) cameras with AI-enabled facial recognition and car licence plate recognition capabilities. These packages typically also include a surveillance command-and-control room in a ‘data centre’ from which police and security forces can watch citizens moving through public space in real time.
Ghana is among the African countries implementing a comprehensive smart city project that cuts across different aspects of societal life. Of particular interest within this project is its facial recognition CCTV camera component. These CCTV cameras are being installed around Accra, Ghana’s capital city, its regional capitals, entry ports, and other state infrastructure, and are powered by Chinese company Huawei’s facial recognition AI.
In our Mapping the supply of surveillance technologies to Africa report, we explain that the Government of Ghana signed a contract with Beijing Everyway Traffic & Lighting Technology and Huawei Technologies in 2012 for Phase 1 to install 800 CCTV cameras. The contract was worth US$176m. The contract for Phase 2 of the project, to install 8,400 CCTV cameras, was signed in 2018. Phase 2 was financed with US$200m from the Export-Import Bank of China and US$35.5m from Barclays Bank of Ghana.
Other components of the project include 50 automatic number plate recognition (ANPR) devices at checkpoint sites, the expansion of an existing data centre, the establishment of a backup data centre, a video transmission network, and an intelligent video analysis system.
Understanding the risks
The central problem with such AI technologies is that they rely on probabilistic pattern recognition and therefore produce many false positives. These mistakes can lead to arrests and other repercussions for innocent civilians. Worse, the mistakes fall disproportionately on people with black or brown skin because of bias in the training data – automating racial profiling and discrimination.
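To see why false positives are inevitable at city scale, consider some back-of-the-envelope arithmetic. The sketch below is illustrative only: every figure in it is an assumption chosen for the example, not a measurement of any deployed system.

```python
# Illustrative base-rate arithmetic for face matching at city scale.
# All numbers are hypothetical assumptions, not measurements of any
# deployed system.

daily_scans = 1_000_000      # faces scanned per day across a city
watchlist_hits = 20          # genuinely wanted people among those scans
false_match_rate = 0.001     # 0.1% false match rate (an optimistic figure)
true_match_rate = 0.90       # 90% of wanted people correctly flagged

false_alarms = (daily_scans - watchlist_hits) * false_match_rate
true_alarms = watchlist_hits * true_match_rate
precision = true_alarms / (true_alarms + false_alarms)

print(f"False alarms per day: {false_alarms:,.0f}")          # ~1,000
print(f"Genuine matches flagged per day: {true_alarms:,.0f}") # ~18
print(f"Share of alerts that are correct: {precision:.1%}")   # ~1.8%
```

Even with an optimistic 0.1% false match rate, roughly 1,000 innocent people would be flagged for every 18 genuine matches – fewer than 2% of alerts would be correct.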
Huawei maintains that its surveillance systems are intended for public safety and improved security, but abuse of the technology in other countries raises concerns. In Uganda, for example, the same Huawei AI-powered facial recognition technology was used to identify and arrest hundreds of supporters of opposition politician Bobi Wine. Because living under such pervasive surveillance leads to the datafication of citizens’ private lives, there is a fear that governments, corporations, and hackers will use remote access to this data for illegal or harmful purposes.
In the case of Morocco, the authorities placed the regulation of biometric facial recognition software in the hands of the Moroccan National Commission for the Control of Personal Data Protection (CNDP), which had announced a moratorium on its use by public or private entities. The CNDP raised concerns over the technology’s impact on people’s privacy and human rights and called for extended consultations. The moratorium lapsed and, in August 2022, Morocco began tendering for facial recognition systems to be installed in the capital’s Rabat-Salé Airport, reportedly the first time the technology will be used in the country.
Who is protected?
Despite privacy rights being guaranteed in the constitutions of the countries we studied, in practice, those rights are being violated. In the absence of adequate regulatory frameworks, the new systems that are being implemented do not ensure protection or privacy. Furthermore, there is no legal mechanism for appeal or recourse in the case of abuse.
These issues are compounded by the unreliability of surveillance AI, which produces racially biased false positives against black and brown people. Such mistakes are well documented, and as a result facial recognition technologies have been banned outright in multiple US cities.
In 2019 San Francisco in the USA became the first city to ban the use of facial recognition surveillance in public spaces, in recognition of its in-built racial bias and false positives. Civil society and human rights groups are calling on governments to ban facial recognition on the basis that it is unreliable, unjust, and a threat to basic rights and safety.
In the European Parliament, there have been attempts to regulate such technologies and to put measures in place that capture the benefits of AI without its discriminatory components. Requiring a human in the loop who can disregard, override or reverse the output of a high-risk AI system is one safeguard that has proved effective.
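As a concrete illustration, here is a minimal sketch of what such a human-in-the-loop gate might look like, assuming a face-matching system that reports a confidence score. The threshold, names and structure below are hypothetical, not any vendor’s actual API.

```python
from dataclasses import dataclass

# A minimal, hypothetical sketch of a human-in-the-loop gate for
# face-match alerts. The threshold and names are illustrative
# assumptions, not any real system's API.

REVIEW_THRESHOLD = 0.99  # below this, a human must review before any action

@dataclass
class Match:
    person_id: str
    confidence: float  # model's similarity score, 0.0 to 1.0

def route_alert(match: Match) -> str:
    """Decide how a face-match alert is handled."""
    if match.confidence < REVIEW_THRESHOLD:
        # Never act automatically on an uncertain match: queue it for
        # a trained reviewer who can disregard or override it.
        return "human_review"
    # Even high-confidence matches only notify an operator; a human
    # still decides whether any intervention is justified.
    return "notify_operator"

print(route_alert(Match("subject-042", 0.87)))   # -> human_review
print(route_alert(Match("subject-042", 0.995)))  # -> notify_operator
```

The design point is that the system never triggers an intervention on its own: uncertain matches go to a trained reviewer, and even confident matches only inform a human operator who retains the final decision.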
Need for transparency and accountability
However, our research cautions against the use of such technologies until there is an adequate legal and regulatory environment or the technology no longer discriminates against people based on their skin colour. There is also a need to increase public awareness of expanding surveillance and of the digital rights implications of safe cities and biometric identification. Greater transparency is needed regarding the procurement and use of surveillance technologies, through the publication of annual reports by an independent oversight body. Until adequate safeguards are in place, facial recognition technology should be banned to protect human rights.
This blog was originally posted on the IDS website on 30th October 2023.