Taming State Surveillance: Reconciling Camera Surveillance Technology with Human Rights Obligations

Reading Time: 5 minutes

(Available in French: Encadrer la surveillance exercée par les États : Concilier l’utilisation des technologies de vidéosurveillance avec les obligations en matière de respect des droits de la personne)

Centralized state camera surveillance is but one component of a burgeoning practice of personal data collection paired with artificial intelligence (AI). Camera surveillance is not inherently unlawful and has long been used at border crossings, airports, and other high-security areas. However, recent technological advances have contributed to the spread of a more intrusive form of video surveillance that includes powerful, if imperfect, facial recognition abilities and AI decision-making.

While the technology offers states the ability to, among other things, locate lost children, identify criminals, and monitor threats, this new capacity also raises significant human rights issues.

[Infographic: Surveillance cameras in a public setting can track individuals and categorize them based on clothing, gender, race, age, and actions; identify individuals through facial recognition and by their gait; and detect people loitering or entering forbidden areas.]

New Technologies and Mass Surveillance by States

The use of camera surveillance has grown with leaps in technology, including the introduction of videocassette recorders in the 1970s and the internet in the 1990s. While the limited human capacity to monitor video feeds once constrained their utility, improved AI, combined with facial recognition software, cloud storage, and high-definition cameras, has significantly increased what can be done with an image feed.

Video analytics now endow computers with the ability to make real-time decisions about objects and actions captured on camera. Advanced surveillance systems can be programmed to categorize people based on clothing, gait, or race, or detect when a vehicle or person enters a forbidden area.
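
As a concrete illustration, the short sketch below shows how one such rule-based check might work, flagging any detected person whose bounding box enters a predefined forbidden zone. It uses the open-source OpenCV library’s built-in pedestrian detector; the video source and zone coordinates are hypothetical placeholders, not taken from any real deployment.

```python
# A minimal sketch of rule-based video analytics: flag any detected person
# whose bounding box overlaps a predefined "forbidden" zone.
import cv2

FORBIDDEN_ZONE = (400, 200, 640, 480)  # hypothetical zone: x1, y1, x2, y2

def overlaps(box, zone):
    """Return True if a detection box (x, y, w, h) intersects the zone."""
    x, y, w, h = box
    zx1, zy1, zx2, zy2 = zone
    return x < zx2 and x + w > zx1 and y < zy2 and y + h > zy1

# OpenCV ships a pre-trained HOG-based pedestrian detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

capture = cv2.VideoCapture("camera_feed.mp4")  # placeholder video source
while True:
    ok, frame = capture.read()
    if not ok:
        break
    # Detect people in the frame; each detection is an (x, y, w, h) box.
    boxes, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for box in boxes:
        if overlaps(box, FORBIDDEN_ZONE):
            print("Alert: person detected in forbidden zone")
capture.release()
```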

Ultra-high-resolution cameras can pick out a face in a crowd of thousands and provide AI algorithms with increasingly detailed information. Not only are cameras getting better at zooming in from afar, but the added detail they capture also markedly improves the decisions computers can make.

The popularity of the technology is growing quickly. The United States (U.S.), United Kingdom (U.K.), United Arab Emirates, Australia, Germany, Russia, and India all have cities among the 20 most-surveilled locations in the world.

China, however, has embraced surveillance camera technology more than any other country, with nine Chinese cities among the 20 most-surveilled. In November 2019, the International Consortium of Investigative Journalists reported on a series of leaked Chinese government documents (the China Cables) purporting to confirm intensive use of surveillance technology targeting the Uyghur Muslim minority.

Chinese firms, some of which have little independence from the country’s central government, have been among the world’s most important exporters of surveillance technology. Uganda, Algeria, Serbia, Mauritius, and states in Central Asia are just some of their customers. Many of these customers have adopted “smart city” systems, in which facial recognition cameras and sensors placed throughout a city transmit data to command centers.

In April 2019, the New York Times revealed the pervasiveness of Ecuador’s Chinese-made surveillance system. Such cases raise the question of whether exporters take measures to ensure that human rights safeguards are in place in receiving countries, or whether they ignore oppressive applications of the technology they sell.

Surveillance technology from U.S. firms, including IBM, Palantir, and Cisco, has been found in 26 countries. Domestically, however, U.S. legislators are proving more cautious towards the technology. San Francisco, Oakland (California), and Somerville (Massachusetts) have banned the use of facial recognition software by police and other agencies. Portland, Oregon, is considering a full ban on the technology, including private use. The Massachusetts Senate is currently considering a moratorium on the use of the technology by law enforcement, as well as guidelines for legislation on its future use.

Human Rights at Issue

Although citizens have reduced privacy expectations in public, the right to privacy nevertheless exists in public spaces and is protected to varying degrees by national and international instruments, including the widely ratified International Covenant on Civil and Political Rights.

Under most international human rights instruments, a state may infringe privacy rights only within strict limits: surveillance may occur only if a publicly accessible domestic legal framework allows it, if the interests justifying surveillance are legitimate, and if the interference with privacy rights is proportionate to the aim pursued.

For example, the European Convention on Human Rights provides one of the more detailed privacy rights provisions in Article 8, noting that interference with the right by a public authority can only occur if it is:

  • in accordance with the law; and,
  • necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.

In interpreting the Article, the European Court of Human Rights has proved particularly sensitive to privacy issues, in both the private and public spheres. In 2018, for example, it ruled that video monitoring of university auditoriums violated professors’ right to private life.

The advent of more powerful video surveillance greatly raises the potential for rights violations, including violations of freedom of expression, of association, and of movement. States with the technology may, for instance, be tempted to over-enforce minor offences or to use it to control political opponents. Moreover, citizens who understand that their faces can be detected through facial recognition technology may self-censor to conform to state expectations, actual or perceived, when attending political demonstrations, for example. Absent accompanying safeguards, the presence of the technology can therefore have a significant “chilling effect” on individual expression. While proponents of the new technology tout its benefits for crime-fighting and public safety, these benefits alone cannot justify the accompanying infringement of rights by the state.

Beyond how the technology is applied, weaknesses in the design of mass surveillance technologies raise concern. Unintentional bias can be introduced into an AI system when developers program in their personal, often unconscious, biases. Bias can also arise from the data fed into an AI system: if that data is not representative of all groups in a population, AI recommendations may favour one group over another.

Gender bias and racial bias continue to be a concern in video analytics, notably due to higher false-positive rates for minorities and for women. Because of this, states incorporating AI into their surveillance technology must also take special care to avoid violating equality rights.
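
One way such disparities can be surfaced is by auditing a system’s false-positive rate for each demographic group. The sketch below illustrates that calculation with made-up placeholder records; it is not drawn from any real system or dataset.

```python
# A minimal sketch of a fairness audit: compare false-positive rates of a
# face-matching system across demographic groups, using illustrative
# placeholder records rather than real data.
from collections import defaultdict

# Each record: (group label, system flagged a match?, was it a true match?)
records = [
    ("group_a", True, False),   # false positive
    ("group_a", False, False),  # correct rejection
    ("group_b", True, True),    # correct match
    ("group_b", True, False),   # false positive
    ("group_b", True, False),   # false positive
]

counts = defaultdict(lambda: {"false_pos": 0, "negatives": 0})
for group, flagged, actual_match in records:
    if not actual_match:  # only true non-matches can yield false positives
        counts[group]["negatives"] += 1
        if flagged:
            counts[group]["false_pos"] += 1

for group, c in counts.items():
    rate = c["false_pos"] / c["negatives"]
    print(f"{group}: false-positive rate = {rate:.0%}")
```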

Canada and Privacy Rights

The Canadian Charter of Rights and Freedoms does not specifically include the protection of privacy rights. However, it provides some privacy protection under the right to life, liberty and security (section 7) and under the right to be secure against unreasonable search or seizure (section 8). Privacy rights are further protected under the federal Privacy Act, the federal Personal Information Protection and Electronic Documents Act (PIPEDA), and by an assortment of territorial and provincial laws. The Supreme Court of Canada has interpreted the Privacy Act as quasi-constitutional legislation.

Given that the Privacy Act has not seen substantive updates since 1983, the Privacy Commissioner of Canada recently suggested that the legislation be updated and urged Parliament to grant his office more powers under both the Privacy Act and PIPEDA. The Office of the Privacy Commissioner of Canada (OPC) has released several technology-related guidelines over the past two decades. Its Guidelines for the Use of Video Surveillance of Public Places by Police and Law Enforcement Authorities, prepared in 2006, remain relevant today.

On 21 February 2020, following reports that several Canadian law enforcement agencies were using facial recognition software, the OPC, along with privacy protection authorities in Quebec, British Columbia and Alberta, announced a joint investigation into software provider Clearview AI. In its press release, the OPC noted that privacy regulators in all provinces and territories agreed to develop “guidance for organizations – including law enforcement – on the use of biometric technology, including facial recognition.”

Additional Resources

Jay Stanley, The Dawn of Robot Surveillance, American Civil Liberties Union, June 2019.

Steven Feldstein, The Global Expansion of AI Surveillance, Carnegie Endowment for International Peace, September 2019.

Human Rights in the Age of Artificial Intelligence, Access Now, 2018.

Guidance for the use of body-worn cameras by law enforcement authorities, Office of the Privacy Commissioner of Canada, 2015.

Automated Facial Recognition in the Public and Private Sectors, Office of the Privacy Commissioner of Canada, 2013.

Benjamin J. Goold, “CCTV and Human Rights” in Citizens, Cities and Video Surveillance: Towards a Democratic and Responsible Use of CCTV (Paris: European Forum for Urban Security, 2010) 27.

Guidelines for the Use of Video Surveillance of Public Places by Police and Law Enforcement Authorities, Office of the Privacy Commissioner of Canada, 2006.

Author: Brendan Naef, Library of Parliament



Categories: Information and communications, Law, justice and rights, Science and technology
