Today, artificial intelligence (AI) is everywhere. It can be found in search engines, connected cars, smartphones and chatbots such as ChatGPT. Although AI offers many benefits, it also poses a number of risks, particularly to privacy.
Privacy Risks
An AI system poses privacy risks throughout its lifecycle. These risks occur right from the initial collection of the data that trains the AI model driving the system. This data collection can take place in a variety of ways, some more legitimate than others.
For example, in his 2021 joint investigation of Clearview AI carried out with his counterparts from Quebec, British Columbia and Alberta, the Privacy Commissioner of Canada (the commissioner) criticized the mass collection of online data and its use in an AI system. He determined that the mass collection of online biometric data by Clearview to develop its facial recognition software without the knowledge or consent of the people involved violated the Personal Information Protection and Electronic Documents Act (PIPEDA).
Once on the market, some powerful AI systems can pose additional privacy risks. For example, they can aggregate unrelated personal information to create new insights about a person that the person has not disclosed themselves. AI systems can also manipulate sensitive biometric information, such as a person's voice or face, to create deepfakes, which may fuel the growth of online misinformation and disinformation.
Federal Legislative Framework and Other Measures
Public Sector
The Privacy Act applies to federal government institutions. It was passed in 1983 and has not undergone any significant reforms since then. Not surprisingly, the Privacy Act makes no mention of AI. However, the law applies to the collection, use and disclosure of personal information by a federal institution, including when those activities are related to an AI system.
In 2020, Justice Canada published a discussion paper and held consultations on the modernization of the Privacy Act. Among other issues, the paper discusses possible amendments to bring the Act into the digital age; however, no bill to reform it has been introduced in Parliament.
Federal institutions must also comply with the Treasury Board of Canada’s policies and directives, including the Directive on Privacy Practices, which sets out an obligation to conduct privacy impact assessments for new programs and activities that include the use of personal information. This assessment must be updated when significant changes are made to a program or activity. That includes cases involving the use of “any new or modified information technology or other process” or any “automated decision system that would require compliance with the Directive on Automated Decision-Making.”
The Directive on Automated Decision-Making seeks to reduce the risks related to the federal government’s deployment of an automated decision-making system, or in other words, “[a]ny technology that either assists or replaces the judgment of human decision-makers.” The requirements that a federal institution must meet before developing or acquiring such a system vary in number and scope depending on the system’s algorithmic impact level (see Table 1). The impact level is assessed by the federal institution. Privacy is one factor that must be considered.
Table 1 – Algorithmic Impact Level of an Automated Decision-Making System and Impact Level Requirements
| Algorithmic Impact Level | Requirements |
| --- | --- |
| The algorithmic impact level is determined by assessing the impact of the decision (from very low to very high) on one or more factors set out in the directive. | One or more requirements set out in the directive apply, with varying degrees of detail, depending on the algorithmic impact level. |
Source: Table prepared by the Library of Parliament using information obtained from Government of Canada, Directive on Automated Decision-Making.
Since the Treasury Board directives are not legal obligations under the Privacy Act, the commissioner cannot investigate a federal institution’s failure to comply with a directive.
Private Sector
PIPEDA was passed in 2000. It applies to the collection, use and disclosure of personal information by private-sector organizations in the course of their commercial activities, including companies working in the field of AI. PIPEDA applies across Canada, except in provinces that have passed legislation that is “substantially similar” (Alberta, British Columbia and Quebec). PIPEDA makes no mention of AI.
Recent Bills
Bills seeking to modernize PIPEDA have been introduced in Parliament in recent years. In November 2020, the government introduced Bill C-11, the Digital Charter Implementation Act, 2020, but it died on the Order Paper in 2021. This bill sought to replace Part 1 of PIPEDA with the Consumer Privacy Protection Act (CPPA) and to create the Personal Information and Data Protection Tribunal. In June 2022, Bill C-27, the Digital Charter Implementation Act, 2022, was introduced in Parliament. It proposed the passage of a CPPA that differed in certain respects from the one set out in Bill C-11 and to create the above-mentioned tribunal. It also proposed the passage of the Artificial Intelligence and Data Act (AIDA), which was not included in the previous version of the bill.
The CPPA would have introduced algorithmic transparency obligations by establishing a "right to explanation" in the law. Under this Act, organizations would have been required to make available a general account of their use of any automated decision system to make predictions, recommendations or decisions that could have a significant impact on individuals. The Act would also have enabled individuals to ask the organization, in writing, for an explanation of the prediction, recommendation or decision in question.
AIDA sought to regulate AI systems and impose certain measures to mitigate the risk of harm and biased results related to high-impact AI systems. Algorithmic biases tend to have more of an impact on certain groups of people, such as racialized people, as noted in the House of Commons Standing Committee on Access to Information, Privacy and Ethics' 2022 report on facial recognition technology and the growing power of artificial intelligence. AIDA would have applied only to the private sector.
Bill C-27 died on the Order Paper in January 2025.
Work of the Office of the Privacy Commissioner of Canada
In its 2024–2027 strategic plan, the Office of the Privacy Commissioner of Canada (OPC) set out three strategic priorities, including the importance of “[a]ddressing and advocating for privacy in this time of technological change.” The OPC collaborated with its provincial and territorial counterparts to publish principles for responsible, trustworthy and privacy-protective generative AI technologies. It is also conducting joint AI-related investigations, such as its investigation into ChatGPT.
The OPC also participates in the activities of the Global Privacy Assembly (GPA), which published a resolution on generative artificial intelligence systems in 2023. The OPC is a member of the GPA’s International Enforcement Cooperation Working Group, which published the Concluding Joint Statement on Data Scraping and the Protection of Privacy in 2024, and it also participates in the Roundtable of the G7 Data Protection and Privacy Authorities, which recently released its Statement on the Role of Data Protection Authorities in Fostering Trustworthy AI.
Further Reading
Carnegie Mellon University. CyLab Researchers Develop a Taxonomy for AI Privacy Risks. News release, 27 March 2024.
Charland, Sabrina, Alexandra Savoie and Ryan van den Berg. Legislative Summary of Bill C-27: An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts, Publication No. 44-1-C27-E. Library of Parliament, 12 July 2022.
Government of Canada. Artificial intelligence ecosystem.
Government of Canada. Responsible use of artificial intelligence in government.
Government of Canada. Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems.
By Alexandra Savoie, Library of Parliament
Categories: Information and communications, Law, justice and rights, Science and technology