Teresa Scassa - Blog


On May 13, 2024, the Ontario government introduced Bill 194. The bill addresses a catalogue of digital issues for the public sector, including cybersecurity, artificial intelligence governance, the protection of the digital information of children and youth, and data breach notification requirements. Consultation on the Bill closes on June 11, 2024. Below is my submission to the consultation. The legislature has now risen for the summer, so debate on the bill will not move forward until the fall.

 

Submission to the Ministry of Public and Business Service Delivery on the Consultation on proposed legislation: Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024

Teresa Scassa, Canada Research Chair in Information Law and Policy, University of Ottawa

June 4, 2024

I am a law professor at the University of Ottawa, where I hold the Canada Research Chair in Information Law and Policy. I research and write about legal issues relating to artificial intelligence and privacy. My comments on Bill 194 are made on my own behalf.

Bill 194 has two schedules. Schedule 1, the Enhancing Digital Security and Trust Act, 2024, has three parts: the first relates to cybersecurity, the second to the use of AI in the broader public service, and the third to the use of digital technology affecting individuals under 18 years of age in the context of Children’s Aid Societies and School Boards. Schedule 2 contains a series of amendments to the Freedom of Information and Protection of Privacy Act (FIPPA). My comments are addressed to each of the Schedules. Please note that all examples provided as illustrations are my own.

Summary

Overall, I consider this to be a timely Bill that addresses important digital technology issues facing Ontario’s public sector. My main concerns relate to the sections on artificial intelligence (AI) systems and on digital technologies affecting children and youth. I recommend the addition of key principles to the AI portion of the Bill in both a reworked preamble and a purpose section. In the portion dealing with digital technologies and children and youth, I note the overlap created with existing privacy laws, and recommend reworking certain provisions so that they enhance the powers and oversight of the Privacy Commissioner rather than creating a parallel and potentially conflicting regime. I also recommend shifting the authority to prohibit or limit the use of certain technologies in schools to the Minister of Education, and considering the role of public engagement in such decision-making. A summary of recommendations is found at the end of this document.

Schedule 1 - Cybersecurity

The first section of the Enhancing Digital Security and Trust Act (EDSTA) creates a framework for cybersecurity obligations that is largely left to be filled by regulations. Those regulations may also provide for the adoption of standards. The Minister will be empowered to issue mandatory Directives to one or more public sector entities. There is little detail provided as to what any specific obligations might be, although section 2(1)(a) refers to a requirement to develop and implement “programs for ensuring cybersecurity” and s. 2(1)(c) anticipates requirements on public sector entities to submit reports to the Minister regarding cybersecurity incidents. Beyond this, details are left to regulations. These details may relate to roles and responsibilities, reporting requirements, education and awareness measures, response and recovery measures, and oversight.

The broad definition of a “public sector entity” to which these obligations apply includes hospitals, school boards, government ministries, and a wide range of agencies, boards and commissions at the provincial and municipal level. This scope is important, given the significance of cybersecurity concerns.

Although there is scant detail in Bill 194 regarding actual cybersecurity requirements, this manner of proceeding seems reasonable given the very dynamic cybersecurity landscape. A combination of regulations and standards will likely provide greater flexibility in a changeable context. Cybersecurity is clearly in the public interest and requires rules and requirements backed by appropriate training and oversight. This portion of Bill 194 would create a framework for doing so, and it seems a reasonable way to address public sector cybersecurity, although its effectiveness will, of course, depend upon the timeliness and content of any regulations.

Schedule 1 – Use of Artificial Intelligence Systems

Schedule 1 of Bill 194 also contains a series of provisions that address the use of AI systems in the public sector. These will apply to AI systems that meet a definition that maps onto the Organization for Economic Co-operation and Development (OECD) definition. Since this definition is one to which many others are being harmonized (including a proposed amendment to the federal AI and Data Act, and the EU AI Act), this seems appropriate. The Bill goes on to indicate that the use of an AI system in the public sector includes the use of a system that is publicly available, that is developed or procured by the public sector, or that is developed by a third party on behalf of the public sector. This is an important clarification. It means, for example, that the obligations under the Act could apply to the use of general-purpose AI that is embedded within workplace software, as well as purpose-built systems.

Although the AI provisions in Bill 194 will apply to “public sector entities” – defined broadly in the Bill to include hospitals and school boards as well as both provincial and municipal boards, agencies and commissions – the AI provisions will only apply to a public sector entity that is “prescribed for the purposes of this section if they use or intend to use an artificial intelligence system in prescribed circumstances” (s. 5(1)). The regulations might also apply to some systems (e.g., general purpose AI) only when they are being used for a particular purpose (e.g., summarizing or preparing materials used to support decision-making). Thus, while potentially quite broad in scope, the actual impact will depend on which public sector entities – and which circumstances – are prescribed in the regulations.

Section 5(2) of Bill 194 will require a public sector entity to which the legislation applies to provide information to the public about the use of an AI system, but the details of that information are left to regulations. Similarly, there is a requirement in s. 5(3) to develop and implement an accountability framework, but the necessary elements of the framework are left to regulations. Under s. 5(4) a public sector entity to which the Act applies will have to take steps to manage risks in accordance with regulations. It may be that the regulations will be tailored to different types of systems posing different levels of risk, so some of this detail would be overwhelming and inflexible if included in the law itself. However, it is important to underline just how much of the normative weight of this law depends on regulations.

Bill 194 will also make it possible for the government, through regulations, to prohibit certain uses of AI systems (s. 5(6) and s. 7(f) and (g)). Interestingly, what is contemplated is not a ban on particular AI systems (e.g., facial recognition technologies (FRT)); rather, it is a potential ban on particular uses of those technologies (e.g., FRT in public spaces). Since the same technology can have uses that are beneficial in some contexts but rights-infringing in others, this flexibility is important. Further, the ability to ban certain uses of FRT on a province-wide basis, including at the municipal level, allows for consistency across the province when it comes to issues of fundamental rights.

Section 6 of the bill provides for human oversight of AI systems. Such a requirement would exist only when a public entity uses an AI system in circumstances set out in the regulations. The obligation will require oversight in accordance with the regulations and may include additional transparency obligations. Essentially, the regulations will be used to customize obligations relating to specific systems or uses of AI for particular purposes.

Like the cybersecurity measures, the AI provisions in Bill 194 leave almost all details to regulations. Although I have indicated that this is an appropriate way to address cybersecurity concerns, it may be less appropriate for AI systems. Cybersecurity is a highly technical area where measures must adapt to a rapidly evolving security landscape. In the cybersecurity context, the public interest is in the protection of personal information and government digital and data infrastructures. Risks are either internal (having to do with properly training and managing personnel) or adversarial (where the need is for good security measures to be in place). The goal is to put in place measures that will ensure that the government’s digital systems are robust and secure. This can be done via regulations and standards.

By contrast, the risks with AI systems will flow from decisions to deploy them, their choice and design, the data used to train the systems, and their ongoing assessment and monitoring. Flaws at any of these stages can lead to errors or poor functioning that can adversely impact a broad range of individuals and organizations who may interact with government via these systems. For example, an AI chatbot that provides information to the public about benefits or services, or an automated decision-making system for applications by individuals or businesses for benefits or services, interacts with and impacts the public in a very direct way. Some flaws may lead to discriminatory outcomes that violate human rights legislation or the Charter. Others may adversely impact privacy. Errors in output can lead to improperly denied (or allocated) benefits or services, or to confusion and frustration. There is therefore a much more direct impact on the public, with effects on both groups and individuals. There are also important issues of transparency and trust. This web of considerations makes it less appropriate to leave the governance of AI systems entirely to regulations. The legislation should, at the very least, set out the principles that will guide and shape those regulations. The Ministry of Public and Business Service Delivery has already put considerable work into developing a Trustworthy AI Framework and a set of (beta) principles. This work could be used to inform guiding principles in the statute.

Currently, the guiding principles for the whole of Bill 194 are found in the preamble. Only one of these directly relates to the AI portion of the bill, and it states that “artificial intelligence systems in the public sector should be used in a responsible, transparent, accountable and secure manner that benefits the people of Ontario while protecting privacy”. Interestingly, this statement only partly aligns with the province’s own beta Principles for Ethical Use of AI. Perhaps most importantly, the second of these principles, “good and fair”, refers to the need to develop systems that respect the “rule of law, human rights, civil liberties, and democratic values”. Yet Bill 194 is entirely silent with respect to issues of bias and discrimination, which are widely recognized as profoundly important concerns with AI systems and which have been flagged as such by Ontario’s privacy and human rights commissioners. At the very least, the preamble to Bill 194 should address these specific concerns. Privacy is clearly not the only human rights consideration at play when it comes to AI systems. The preamble to the federal government’s Bill C-27, which contains the proposed Artificial Intelligence and Data Act, states: “that artificial intelligence systems and other emerging technologies should uphold Canadian norms and values in line with the principles of international human rights law”. The preamble to Bill 194 should similarly address the importance of human rights values in the development and deployment of AI systems for the broader public sector.

In addition, the bill would benefit from a new provision setting out the purpose of the part dealing with public sector AI. Such a clause would shape the interpretation of the scope of delegated regulation-making power and would provide additional support for a principled approach. This is particularly important where legislation only provides the barest outline of a governance framework.

In this regard, this bill is similar to the original version of the federal AI and Data Act, which was roundly criticized for leaving the bulk of its normative content to the regulation-making process. The provincial government’s justification is likely to be similar to that of the federal government – it is necessary to remain “agile”, and not to bake too much detail into the law regarding such a rapidly evolving technology. Nevertheless, it is still possible to establish principle-based parameters for regulation-making. To do so, this bill should more clearly articulate the principles that guide the adoption and use of AI in the broader public service. A purpose provision could read:

The purpose of this Part is to ensure that artificial intelligence systems adopted and used by public sector entities are developed, adopted, operated and maintained in a manner that is transparent and accountable and that respects the privacy and human rights of Ontarians.

Unlike AIDA, the federal statute which will apply to the private sector, Bill 194 is meant to apply to the operations of the broader public service. The flexibility in the framework is a recognition of both the diversity of AI systems and the diversity of services and activities carried out in this context. It should be noted, however, that this bill does not contemplate any bespoke oversight for public sector AI. There is no provision for a reporting or complaints mechanism for members of the public who have concerns with an AI system. Presumably they will have to complain to the department or agency that operates the AI system. Even then, there is no obvious requirement for the public sector entity to record complaints or to report them for oversight purposes. All of this may be provided for in s. 5(3)’s requirement for an accountability framework, but the details of this have been left to regulation. It is therefore entirely unclear from the text of Bill 194 what recourse – if any – the public will have when they have problematic encounters with AI systems in the broader public service. Section 5(3) could be amended to read:

5(3) A public sector entity to which this section applies, shall, in accordance with the regulations, develop and implement an accountability framework respecting their use of the artificial intelligence system. At a minimum, such a framework will include:

a) The specification of reporting channels for internal or external complaints or concerns about the operation of the artificial intelligence system;

b) Record-keeping requirements for complaints and concerns raised under subparagraph 5(3)(a), as well as for responses thereto.

Again, although a flexible framework for public sector AI governance may be an important goal, key elements of that framework should be articulated in the legislation.

Schedule 1 – Digital Technology Affecting Individuals Under Age 18

The third part of Schedule 1 addresses digital technology affecting individuals under age 18. This part of Bill 194 applies to children’s aid societies and school boards. Section 9 enables the Lieutenant Governor in Council to make regulations regarding “prescribed digital information relating to individuals under age 18 that is collected, used, retained or disclosed in a prescribed manner”. Significantly, “digital information” is not defined in the Bill.

The references to digital information are puzzling, as it seems to be nothing more than a subset of personal information – which is already governed under both the Municipal Freedom of Information and Protection of Privacy Act (MFIPPA) and FIPPA. Personal information is defined in both these statutes as “recorded information about an identifiable individual”. It is hard to see how “digital information relating to individuals under age 18” is not also personal information (which has received an expansive interpretation). If it is meant to be broader, it is not clear how. Further, the activities to which this part of Bill 194 will apply are the “collection, use, retention or disclosure” of such information. These are activities already governed by MFIPPA and FIPPA – which apply to school boards and children’s aid societies respectively. What Bill 194 seems to add is a requirement (in s. 9(b)) to submit reports to the Minister regarding the collection, use, retention and disclosure of such information, as well as the enablement of regulations in s. 9(c) to prohibit collection, use, retention or disclosure of prescribed digital information in prescribed circumstances, for prescribed purposes, or subject to certain conditions. Nonetheless, the overlap with FIPPA and MFIPPA is potentially substantial – so much so, that s. 14 provides that in case of conflict between this Act and any other, the other Act would prevail. What this seems to mean is that FIPPA and MFIPPA will trump the provisions of Bill 194 in case of conflict. Where there is no conflict, the bill seems to create an unnecessary parallel system for governing the personal information of children.

The need for more to be done to protect the personal information of children and youth in the public school system is clear. In fact, this is a strategic priority of the current Information and Privacy Commissioner (IPC), whose office has recently released a Digital Charter for public schools setting out voluntary commitments that would improve children’s privacy. The IPC is already engaged in this area. Not only does the IPC have the necessary expertise in the area of privacy law, but it is also able to provide guidance, accountability and independent oversight. In any event, since the IPC will still have oversight over the privacy practices of children’s aid societies and school boards notwithstanding Bill 194, the new system will mean that these entities will have to comply with regulations set by the Minister on the one hand, and the provisions of FIPPA and MFIPPA on the other. The fact that conflicts between the two regimes will be resolved in favour of privacy legislation means that it is even conceivable that the regulations could set requirements or standards that are lower than what is required under FIPPA or MFIPPA – creating an unnecessarily confusing and misleading system.

Another odd feature of the scheme is that Bill 194 will require “reports to be submitted to the Minister or a specified individual in respect of the collection, use, retention and disclosure” of digital information relating to children or youth (s. 9(b)). It is possible that the regulations will specify that the reports are to be submitted to the Privacy Commissioner. If so, it is once again difficult to see why a parallel regime is being created. If not, the Commissioner will be continuing her oversight of privacy in schools and children’s aid societies without access to all the relevant data that might be available.

It seems as if Bill 194 contemplates two separate sets of measures. One addresses the proper governance of the digital personal information of children and youth in schools and children’s aid societies. This is a matter for the Privacy Commissioner, who should be given any additional powers she requires to fulfil the government’s objectives. Sections 9 and 10 of Bill 194 could be incorporated into FIPPA and MFIPPA, with modifications to require reporting to the Privacy Commissioner. This would automatically bring oversight and review under the authority of the Privacy Commissioner. The second objective of the bill seems to be to provide the government with the opportunity to issue directives regarding the use of certain technologies in the classroom or by school boards. This is not unreasonable, but it is something that should be under the authority of the Minister of Education (not the Minister of Public and Business Service Delivery). It is also something that might benefit from a more open and consultative process. I would recommend that the framework be reworked accordingly.

Schedule 2: FIPPA Amendments

Schedule 2 consists of amendments to the Freedom of Information and Protection of Privacy Act. These are important amendments that will introduce data breach notification and reporting requirements for public sector entities in Ontario that are governed by FIPPA (although, interestingly, not those covered by MFIPPA). For example, a new s. 34(2)(c.1) will require the head of an institution to include in their annual report to the Commissioner “the number of thefts, losses or unauthorized uses or disclosures of personal information recorded under subsection 40.1”. The new subsection 40.1(8) will require the head of an institution to keep a record of any such data breach. Where a data breach reaches the threshold of creating a “real risk that a significant harm to an individual would result” (or where any other circumstances prescribed in regulations exist), a separate report shall be made to the Commissioner under s. 40.1(1). This report must be made “as soon as feasible” after it has been determined that the breach has taken place (s. 40.1(2)). New regulations will specify the form and contents of the report. There is a separate requirement for the head of the institution to notify individuals affected by any breach that reaches the threshold of a real risk of significant harm (s. 40.1(3)). The notification to the individual will have to contain, along with any prescribed information, a statement that the individual is entitled to file a complaint with the Commissioner with respect to the breach, and the individual will have one year to do so (ss. 40.1(4) and (5)). The amendments also identify the factors relevant in determining if there is a real risk of significant harm (s. 40.1(7)).

The proposed amendments also provide for a review by the Commissioner of the information practices of an institution where a complaint has been filed under s. 40.1(4), or where the Commissioner “has other reason to believe that the requirements of this Part are not being complied with” (s. 49.0.1). The Commissioner can decide not to review an institution’s practices in circumstances set out in s. 49.0.1(3). Where the Commissioner determines that there has been a contravention of the statutory obligations, she has order-making powers (s. 49.0.1(7)).

Overall, this is a solid and comprehensive scheme for addressing data breaches in the public sector (although it does not extend to those institutions covered by MFIPPA). In addition to the data breach reporting requirements, the proposed amendments will provide for whistleblower protections. They will also specifically enable the Privacy Commissioner to consult with other privacy commissioners (new s. 59(2)), and to coordinate activities, enter into agreements, and to provide for handling “of any complaint in which they are mutually interested.” (s. 59(3)). These are important amendments given that data breaches may cross provincial lines, and Canada’s privacy commissioners have developed strong collaborative relationships to facilitate cooperation and coordination on joint investigations. These provisions make clear that such co-operation is legally sanctioned, which may avoid costly and time-consuming court challenges to the commissioners’ authority to engage in this way.

The amendments also broaden s. 61(1)(a) of FIPPA which currently makes it an offence to wilfully disclose personal information in contravention of the Act. If passed, it will be an offence to wilfully collect, use or disclose information in the same circumstances.

Collectively the proposed FIPPA amendments are timely and important.

Summary of Recommendations:

On artificial intelligence in the broader public sector:

1. Amend the Preamble to Bill 194 to address the importance of human rights values in the development and deployment of AI systems for the broader public sector.

2. Add a purpose section to the AI portion of Bill 194 that reads:

The purpose of this Part is to ensure that artificial intelligence systems adopted and used by public sector entities are developed, adopted, operated and maintained in a manner that is transparent and accountable and that respects the privacy and human rights of Ontarians.

3. Amend s. 5(3) to read:

5(3) A public sector entity to which this section applies, shall, in accordance with the regulations, develop and implement an accountability framework respecting their use of the artificial intelligence system. At a minimum, such a framework will include:

a) The specification of reporting channels for internal or external complaints or concerns about the operation of the artificial intelligence system;

b) Record-keeping requirements for complaints and concerns raised under subparagraph 5(3)(a), as well as for responses thereto.

On Digital Technology Affecting Individuals Under Age 18:

1. Incorporate the contents of ss. 9 and 10 into FIPPA and MFIPPA, with the necessary modification to require reporting to the Privacy Commissioner.

2. Give the authority to issue directives regarding the use of certain technologies in the classroom or by school boards to the Minister of Education and ensure that an open and consultative public engagement process is included.

Published in Privacy

Apologies for a somewhat longer than usual post - but the Supreme Court of Canada's decision in R. v. Bykovets is both interesting and important....

The Supreme Court of Canada’s decision in R v. Bykovets is significant for two reasons. The first is that it affirms an understanding of privacy that is in keeping with the realities of contemporary and emerging technologies. The second is that it does so by the narrowest of margins, laying bare the tension between two very different ways of understanding privacy in a technological age. While this is a victory for privacy rights, it should leave celebrants in a sober mood.

The appellant, Bykovets, was convicted of 14 offences relating to credit card fraud and unlawful credit card purchases. During their investigation, Calgary police approached Moneris, a third-party payment processing company, to obtain the IP address linked to specific fraudulent online purchases. Moneris complied with the request. Police then sought a production order to compel the relevant internet service provider (ISP) to provide the customer name and address (CNA) information associated with the IP address. With this information, they were able to obtain search warrants for the accused’s home. At trial, the appellant challenged these search warrants, arguing that when the police obtained his IP address from Moneris without a production order, they violated his right to privacy under the Canadian Charter of Rights and Freedoms. The trial judge found that there was no reasonable expectation of privacy in an IP address because an IP address on its own did not disclose a “biographical core” of information (at para 24), and Bykovets was convicted. The majority of the Court of Appeal agreed, over a strong dissent from Justice Veldhuis.

R v. Bykovets builds on the 2014 decision of the Supreme Court of Canada in R. v. Spencer. In Spencer, the Court tackled an issue that had bedeviled lower courts for several years, resulting in inconsistent decisions. The issue was whether there was a reasonable expectation of privacy in CNA information. Until Spencer, it was unclear whether police could simply ask ISPs for CNA information linked to an IP address without the need for a production order. The argument was that a person had no reasonable expectation of privacy in their name and address, and so police did not require judicial authorization to access it. The Supreme Court of Canada ruled in Spencer that a request for this information in a context where it would be linked to online activities raised a reasonable expectation of privacy. Bykovets addresses the issue of the status of the address itself – prior to its linkage with CNA information.

Justice Karakatsanis, writing for a majority of the Supreme Court of Canada in Bykovets, emphasized the importance of a robust right to privacy in a data-driven society. The first line of her decision states: “The Internet has shifted much of the human experience from physical spaces to cyberspace” (at para 1). The IP address is a vital connector between online activities and the individual who engages in them. Justice Karakatsanis rejects an approach that assesses privacy rights in this information “based on police’s stated intention to use the information they gather in only one way” (at para 6), namely to obtain a production order to further link the IP address to an ISP who can provide the CNA information. In her view, the reasonable expectation of privacy must be understood according to a normative standard, which focuses on “what privacy should be – in a free, democratic and open society – balancing the individual’s right to be left alone against the community’s insistence on protection” (at para 7). In her view, an IP address can be linked to deeply personal information about online activities that can, on its own, reveal the identity of the individual even if a further production order for CNA information is not sought. According to Justice Karakatsanis, “an IP address is the first digital breadcrumb that can lead the state on the trail of an individual’s Internet activity” (at para 9). It is “the key that can lead the state through the maze of a user’s Internet activity and is the link through which intermediaries can volunteer that user’s information to the state.” (at para 13). She goes on to note that “[i]f s. 8 is to meaningfully protect the online privacy of Canadians in today’s overwhelmingly digital world, it must protect their IP addresses” (at para 28).

All parties agreed that there was a subjective expectation of privacy in IP addresses. The real issue was whether this expectation was objectively reasonable. In order to assess the reasonableness of the expectation, it is necessary first to define the subject matter of the search. The Crown characterized it as an IP address that would allow police to continue their investigation. Justice Karakatsanis found that the Crown’s description was “artificially narrow” (at para 37) and rejected an approach that focused on the declared intent of an agent of the state. In her view, additional caution is warranted when the subject matter of a search relates to digital data. She noted that the police were not really interested in an IP address; rather, they were interested in what it would reveal. Although the police planned to get a Spencer warrant before linking the IP address to CNA information, Justice Karakatsanis observed that this was not the only way in which an IP address could be used to derive information about an individual. She stated: “Online activity associated with the IP address may itself betray highly personal information without the safeguards of judicial pre-authorization” (at para 43).

The majority next considered other relevant factors in the assessment of a reasonable expectation of privacy, including the place where the search takes place. In the U.S., an individual cannot have a reasonable expectation of privacy in information in the hands of third parties. Justice Karakatsanis affirmed the Supreme Court of Canada’s rejection of this ‘third-party doctrine’ in section 8 jurisprudence. Control is not a determinative factor. In the context of ISPs, the only way to keep an IP address out of the hands of third parties is to not use the internet – which in today’s society is not a meaningful choice.

Although the place of a search can be relevant to the reasonableness of an expectation of privacy, it is also not determinative. Justice Karakatsanis noted that “‘online spaces are qualitatively different’ from physical spaces” (at para 49, citing R. v. Ramelson at para 49). She referred to the internet as creating “a broad, accurate, and continuously expanding permanent record” (at para 50) that can be more revealing than most physical spaces. As a result, the fact that the search did not intrude on the territorial privacy rights of the accused was not significant.

Another factor is the private nature of the subject matter, often referred to as the “biographical core of personal information which individuals in a free and democratic society would wish to maintain and control from dissemination to the state” (at para 51, quoting R. v. Plant at p. 293). Justice Karakatsanis adopted a normative approach with aspirational qualities. On this view, a reasonable expectation of privacy “cannot be assessed according to only one use of the evidence” (at para 53) as asserted by the police. She stated: “The unique and heightened privacy interests in personal computer data flows from its potential to expose deeply revealing information” (at para 55). This is not a suggestion that police hide behind innocuous explanations of purported use; rather, the key is “the potential of a particular subject matter to reveal an individual’s biographical core to the state” (at para 57). According to Justice Karakatsanis,

. . . the ever-increasing intrusion of the Internet into our private lives must be kept in mind in deciding this case. It is widely accepted that the Internet is ubiquitous and that vast numbers of Internet users leave behind them a trail of information that others gather up to different ends, information that may be pieced together to disclose deeply private details. [. . . ] This social context of the digital world is necessary to a functional approach in defining the privacy interest afforded under the Charter to the information that could be revealed by an IP address (at para 58).

Justice Karakatsanis rebuffed arguments by the Crown that the IP address is useless without the CNA obtained with a Spencer warrant. An IP address can convey intimate information about online user activity even absent CNA data. Further, the online activity can be correlated with other available data which could ultimately lead to the identification of the individual. In such a context, a Spencer warrant offers little practical protection. It is the IP address which is “the key to unlocking an Internet user’s online activity” (at para 69).

Given this analysis, it is unsurprising that the majority of the Court concludes that there is a reasonable expectation of privacy in IP addresses. The majority centres the role of the private sector in the amassing of information about online activities, giving these third parties “immense informational power” (at para 75). Justice Karakatsanis observes that “By concentrating this mass of information with private third parties and granting them the tools to aggregate and dissect that data, the Internet has essentially altered the topography of privacy under the Charter. It has added a third party to the constitutional ecosystem, making the horizontal relationship between the individual and state tripartite” (at para 78). The result is that the state has an enhanced information capacity, as it has many routes of access to this information. She notes that these companies “respond to frequent requests by law enforcement and can volunteer all activity associated with the requested IP address. Private corporate citizens can volunteer granular profiles of an individual user’s Internet activity over days, weeks, or months without ever coming under the aegis of the Charter” (at para 10).

The majority acknowledges that the important privacy concerns flowing from this massive concentration of personal information need to be balanced against the legitimate interest in “[s]afety, security and the suppression of crime” (at para 11, citing R v. Tessling, at para 17). Justice Karakatsanis notes that digital technologies have enhanced the ability of criminals to perpetrate crime and to evade law enforcement. However, she observes that judicial authorization is “readily available” (at para 11). She characterizes the burden on state authorities to obtain the necessary authorizations as “not onerous” (at para 12), given the increased availability of telewarrants. Further, she states that “the burden imposed on the state by recognizing a reasonable expectation of privacy in IP addresses pales compared to the substantial privacy concerns implicated in this case” (at para 86).

Justice Côté writes for the four dissenting justices. The difference in approach between majority and dissent could hardly be more stark. While the majority opinion begins with a discussion of how closely linked IP addresses are to the details of our online activities, the dissenting opinion opens with a discussion of the police investigation into fraudulent activities that led to the charges against the accused. For the dissent, retrieving the IP address from the financial intermediary was just a first step in the investigation. Justice Côté framed the issue as “whether the appellant had a reasonable expectation of privacy in the IP addresses alone – without any other information linking the addresses to him as an Internet user – in the circumstances of this case” (at para 95). This is the crux of the difference between majority and dissenting opinions – how to characterize the information accessed by the police in this case.

The dissenting justices accept that an IP address links an individual to their online activities, but they find that there are two ways to make that connection. One is by asking an ISP to provide the CNA information linked to the IP address (as was the case here). The other is to connect an individual to the IP address by linking their various online activities. For the dissenting justices, if the first method is used, and if a warrant will later be obtained to require an ISP to provide the necessary CNA information, an initial warrant is not needed to obtain the IP address from the intermediary. Whether a warrant is needed, then, depends upon the steps the police plan to take – a matter which is not transparent to the company that must decide whether to voluntarily share the information.

In reaching their conclusion, the dissenting justices differ from the majority on the issue of reasonable expectation of privacy. In particular, Justice Côté takes a different approach to characterizing the subject matter of the search, and the reasonable expectation of privacy. On the question of the subject matter of the search, she emphasized that it was important to consider “what the police were really after” (at para 123, citing R v. Marakah, at para 15). In her view, this means considering “the capacity of the precise information sought to give rise to inferences or to reveal further information” (at para 123). In her view, Spencer aligns with this approach – once an IP address is linked to CNA information, then it can reveal the individual’s online activities. In this case, the precise information sought by police was the “raw IP addresses alone” (at para 128), which in isolation reveal very little information. A subsequent production order would be sought to match these addresses to CNA information.

The dissenting justices dismissed the majority’s concerns that the IP address could be used to identify an individual from their online activities. First, they note, this was not what the police did in this case. Second, if the police were to use the second method to identify an individual, they would need a warrant. However, according to Justice Côté, this “is an issue for another day in a case where the situation actually arises on the facts” (at para 135). In her view, the police followed a clear series of steps, and the IP address was only one step, with the identification of the individual as a further step for which a production order would be obtained. According to the dissent, “to effectively hold that any step taken in an investigation engages a reasonable expectation of privacy . . . would upset the careful balance that this Court has struck between the interest of Canadians in actual privacy and the interest of Canadians in not hindering law enforcement” (at para 139).

On the issue of the reasonable expectation of privacy, Justice Côté dismissed the idea that the IP address was itself ‘private’ information. She emphasized that ‘on these facts’, the IP address did not reveal any core biographical information. She insisted that the case be decided only on the actual evidentiary record, not on speculation about what might have been done.

The dissenting justices analogized between leaving behind fingerprints at a crime scene and leaving behind one’s IP address on websites one visits online. Justice Côté writes “[i]t cannot be seriously suggested that a police investigation that involves dusting for fingerprints and keeping them – without more – could engage a reasonable expectation of privacy. The same – again, without more – is true of obtaining an IP address” (at para 154). What this overlooks, however, is the fact that obtaining an IP address requires a request to a private sector organization that holds that information, and that has privacy obligations to its customers. Although the Personal Information Protection and Electronic Documents Act (PIPEDA) allows for the sharing of information with law enforcement without knowledge or consent, this is tricky territory for organizations. It is also different from collecting fingerprints from a crime scene to which the police have access. The very issue before the Court was what steps are necessary in order to gain access to the information held by private sector companies.

For the dissenting justices, another factor in assessing a reasonable expectation of privacy – and another point of difference with the majority – is the place of the search. This is tied to territorial notions of privacy under which the strongest protection is with respect to a person’s home. According to the dissent, the place of the search is the database of the credit card processor, and this diminishes any objectively reasonable expectation of privacy on the part of the accused. With respect, in a context in which people in their homes interact in digital environments on a daily and routine basis, this is 19th century reasoning that is a poor fit for the information age.

The approach of the dissenting justices also overlooks the fact that laws such as PIPEDA are permissive when it comes to data sharing by organizations with law enforcement. Under section 7(3)(c.1) of PIPEDA, an organization may disclose personal information without the knowledge or consent of the individual to a government actor upon request by that actor where the purpose is law enforcement or investigation. The only check on this data sharing without knowledge or consent is the Charter. If there is a reasonable expectation of privacy in the data being shared, then police require judicial authorization. Charter rights in this context are extremely important – particularly given the vast quantities of often highly sensitive personal information in the hands of private sector organizations. This volume and variety of information has only been increasing and will continue to do so exponentially. To say that the police can request the digital equivalent of a skeleton key from a private organization without a warrant so long as they only intend to use that key to open a particular lock, is to effectively surrender essential Charter rights to privacy in exchange for a “trust me” approach to policing that runs counter to the very idea of Charter rights. The private sector organization is required to trust the police when handing over the information, and society must trust that the police will only use this data appropriately. Yet, the right to be free from unreasonable search or seizure is premised on the very idea that some searches and seizures are unreasonable. Charter rights set important boundaries. In a digital society, the boundary between agents of the state and everything one does online is a fundamentally important one. It deserves to be guarded against intrusion.

Charter cases often arise in contexts in which persons have been accused of dangerous and/or antisocial activities that we wish to see stopped. In cases such as Bykovets, it is easy to be impatient with adding superficially unnecessary steps to complicate investigations. But we need also to bear in mind the research and reporting we see on systemic racism in policing in Canada, of the misuse of police powers to stalk or harass women, and the potential for abuse of personal information when it is made too readily available to authorities. Although Charter rights may be cast as an interference in legitimate investigations, they are also a crucial safeguard against excess and abuse of authority. The digital data held by private sector companies can render us naked in the eyes of state authorities. The Charter is not a blindfold that leaves police fumbling in the dark. Rather, it is a protective cloak that each of us wears – until judicial authorization directs otherwise.

For the majority in Bykovets, the goal is not to interfere with online investigations; rather, it is to “better reflect what each reasonable Canadian expects from a privacy perspective and from a crime control perspective” (at para 86). Finding a reasonable expectation of privacy in IP addresses “significantly reduces the potential of any “arbitrary and even discriminatory” exercises of discretion” (at para 87) by the state. It also removes from the private sector decision-making about what information (and how much of it) to disclose to the state. The majority characterizes its approach as ensuring “that the veil of privacy all Canadians expect when they access the Internet is only lifted when an independent judicial officer is satisfied that providing this information to the state will serve a legitimate law enforcement purpose.” (at para 90)

 

Published in Privacy

A battle over the protection of personal information in the hands of federal political parties (FPPs) has been ongoing now for several years in British Columbia. The BC Supreme Court has just released a decision which marks a significant defeat for the FPPs in their quest to ensure that only minimal privacy obligations apply to their growing collection, use and disclosure of personal information. Although the outcome only green-lights the investigation by BC’s Office of the Information and Privacy Commissioner into the Liberal, New Democrat and Conservative parties’ compliance with the province’s Personal Information Protection Act (PIPA), it is still an important victory for the complainants. The decision affirms the constitutional applicability of PIPA to the FPPs. The tone of the decision also sends a message. It opens with: “The ability of an individual to control their personal information is intimately connected to their individual autonomy, dignity and privacy.” Justice Weatherill confirms that “These fundamental values lie at the heart of democracy” (at para 1).

The dispute originated with complaints brought in 2019 by three BC residents (the complainants) who sought access under PIPA to their personal information in the hands of each of the three main FPPs in their BC ridings. They wanted to know what information had been collected about them, how it was being used, and to whom it was being disclosed. This access right is guaranteed under PIPA. By contrast no federal law – whether relating to privacy or to elections – provides an equivalent right with respect to political parties. The Canada Elections Act (CEA) was amended in 2018 to include a very limited obligation for FPPs to have privacy policies approved by the Chief Electoral Officer (CEO), published, and kept up to date. These provisions did not include access rights, oversight, or a complaints mechanism. When the responses of the FPPs to the complainants’ PIPA requests proved inadequate, the complainants filed complaints with the OIPC, which initiated an investigation.

Disappointingly, the FPPs resisted this investigation from the outset. They challenged the constitutional basis for the investigation, arguing that the BC law could not apply to FPPs. This issue was referred to an outside adjudicator, who heard arguments and rendered a decision in March 2022. He found that the term “organization” in PIPA included FPPs that collected information about BC residents and that PIPA’s application to the FPPs was constitutional. In April 2022, the FPPs individually filed applications for judicial review of this decision. The adjudicator ruled that he would pause his investigation until the constitutional issues were resolved.

In June of 2023, while the judicial review proceedings were ongoing, the government tabled amendments to the CEA in Bill C-47. These amendments (now passed) permit FPPs to “collect, use, disclose, retain and dispose of personal information in accordance with the party’s privacy policy” (s. 385.1). Section 385.2(3) states: “The purpose of this section is to provide for a national, uniform, exclusive and complete regime applicable to registered parties and eligible parties respecting their collection, use, disclosure, retention and disposal of personal information”. The amendments were no doubt intended to reinforce the constitutional arguments being made in the BC litigation.

In his discussion of these rather cynical amendments, Justice Weatherill quoted extensively from statements of the Chief Electoral Officer of Canada before the Senate Standing Committee on Legal and Constitutional Affairs in which he discussed the limitations of the privacy provisions in the CEA, including the lack of substantive rights and the limited oversight/enforcement. The CEO is quoted as stating “Not a satisfactory regime, if I’m being perfectly honest” (at para 51).

Support for extending privacy obligations to political parties has been gaining momentum, particularly in light of increasingly data-driven strategies, the use of profiling and targeting by political parties, concerns over the security of such detailed information, and general frustration over politicians being able to set their own rules for conduct that would be considered unacceptable for any other actor in the public or private sectors. Perhaps sensing this growing frustration, the federal government introduced Bill C-65 in March of 2024. Among other things, this bill would provide some enforcement powers to the CEO with respect to the privacy obligations in the CEA. Justice Weatherill declined to consider this Bill in his decision, noting that it might never become law and was thus irrelevant to the proceedings.

Justice Weatherill ruled that BC’s PIPA applies to organizations, and that FPPs active in the province fall within the definition of “organization”. The FPPs argued that PIPA should be found inoperative to the extent that it is incompatible with federal law under the constitutional doctrine of paramountcy. They maintained that the CEA addressed the privacy obligations of political parties and that the provincial legislation interfered with that regime. Justice Weatherill disagreed, citing the principle of cooperative federalism. Under this approach, the doctrine of paramountcy receives a narrow interpretation, and where possible “harmonious interpretations of federal and provincial legislation should be favoured over interpretations that result in incompatibility” (at para 121). He found that while PIPA set a higher standard for privacy protection, the two laws were not incompatible. PIPA did not require FPPs to do something that was prohibited under the federal law – all it did was provide additional obligations and oversight. There was no operational conflict between the laws – FPPs could comply with both. Further, there was nothing in PIPA that prevented the FPPs from collecting, using or disclosing personal information for political purposes. It simply provided additional protections.

Justice Weatherill also declined to find that the application of PIPA to FPPs frustrated a federal purpose. He found that there was no evidence to support the argument that Parliament intended “to establish a regime in respect of the collection and use of personal information by FPPs” (at para 146). He also found that the evidence did not show that it was a clear purpose of the CEA privacy provisions “to enhance, protect and foster the FPPs’ effective participation in the electoral process”. He found that the purpose of these provisions was simply to ensure that the parties had privacy policies in place. Nothing in PIPA frustrated that purpose; rather, Justice Weatherill found that even if there was a valid federal purpose with respect to the privacy policies, “PIPA is in complete alignment with that purpose” (at para 158).

Justice Weatherill also rejected arguments that the doctrine of interjurisdictional immunity meant that the federal government’s legislative authority over federal elections could not be allowed to be impaired by BC’s PIPA. According to this argument the Chief Electoral Officer was to have the final say over the handling of personal information by FPPs. The FPPs argued that elections could be disrupted by malefactors who might use access requests under PIPA in a way that could lead to “tying up resources that would otherwise be focused on the campaign and subverting the federal election process” (at para 176). Further, if other provincial privacy laws were extended to FPPs, it might mean that FPPs would have to deal with multiple privacy commissioners, bogging them down even further. Justice Weatherill rejected these arguments, stating:

Requiring FPPs to disclose to British Columbia citizens, on request, the personal information they have about the citizen, together with information as to how it has been used and to whom it has been disclosed has no impact on the core federal elections power. It does not “significantly trammel” the ability of Canadian citizens to seek by lawful means to influence fellow electors, as was found to have been the case in McKay. It does not destroy the right of British Columbians to engage in federal election activity. At most, it may have a minimal impact on the administration of FPPs. This impact is not enough to trigger interjurisdictional immunity. All legislation carries with it some burden of compliance. The petitioners have not shown that this burden is so onerous as to impair them from engaging with voters. (at para 182).

Ultimately, Justice Weatherill ruled that there was no constitutional barrier to the application of PIPA. The result is that the matter goes back to the OIPC for investigation and determination on the merits. It has been a long, drawn-out and expensive process so far, but at least this decision is an unequivocal affirmation of the application of basic privacy principles (at least in BC) to the personal information handling practices of FPPs. It is time for Canada’s political parties to accept obligations similar to those imposed on private sector organizations. If they want to collect, use and disclose data in increasingly complex data-driven voter profiling and targeting activities, they need to stop resisting the commensurate obligations to treat that information with care and to be accountable for their practices.

Published in Privacy

Artificial intelligence technologies have significant potential to impact human rights. Because of this, emerging AI laws make explicit reference to human rights. Already-deployed AI systems are raising human rights concerns – including bias and discrimination in hiring, healthcare, and other contexts; disruptions of democracy; enhanced surveillance; and hateful deepfake attacks. Well-documented human rights impacts also flow from the use of AI technologies by law enforcement and the state, and from the use of AI in armed conflicts.

Governments are aware that human rights issues with AI technologies must be addressed. Internationally, this is evident in declarations by the G7, UNESCO, and the OECD. It is also clear in emerging national and supranational regulatory approaches. For example, human rights are tackled in the EU AI Act, which not only establishes certain human-rights-based no-go zones for AI technologies, but also addresses discriminatory bias. The US’s NIST AI Risk Management Framework (a standard, not a law – but influential nonetheless) also addresses the identification and mitigation of discriminatory bias.

Canada’s Artificial Intelligence and Data Act (AIDA), proposed by the Minister of Industry, Science and Economic Development (ISED) is currently at the committee stage as part of Bill C-27. The Bill’s preamble states that “Parliament recognizes that artificial intelligence systems and other emerging technologies should uphold Canadian norms and values in line with the principles of international human rights law”. In its substantive provisions, AIDA addresses “biased output”, which it defines in terms of the prohibited grounds of discrimination in the Canadian Human Rights Act. AIDA imposes obligations on certain actors to assess and mitigate the risks of biased output in AI systems. The inclusion of these human rights elements in AIDA is positive, but they are also worth a closer look.

Risk Regulation and Human Rights

Requiring developers to take human rights into account in the design and development of AI systems is important, and certainly many private sector organizations already take seriously the problems of bias and the need to identify and mitigate it. After all, biased AI systems will be unable to perform properly, and may expose their developers to reputational harm and possibly legal action. However, such attention has not been universal, and has been pursued with different degrees of commitment. Legislated requirements are thus necessary, and AIDA will provide these. AIDA creates obligations to identify and mitigate potential harms at the design and development stage, along with additional documentation and some transparency requirements. The enforcement of AIDA obligations can come through audits conducted or ordered by the new AI and Data Commissioner, as well as through administrative monetary penalties for non-compliance, although what this scheme will look like will depend very much on regulations that have yet to be developed. AIDA, however, has some important limitations when it comes to human rights.

Selective Approach to Human Rights

Although AIDA creates obligations around biased output, it does not address human rights beyond the right to be free from discrimination. Unlike the EU AI Act, for example, there are no prohibited practices related to the use of AI in certain forms of surveillance. A revised Article 5 of the EU AI Act will prohibit real-time biometric surveillance by law enforcement agencies in publicly accessible spaces, subject to carefully-limited exceptions. The untargeted scraping of facial images for the building or expansion of facial recognition databases (as occurred with Clearview AI) is also prohibited. Emotion recognition technologies are banned in some contexts, as are some forms of predictive policing. Some applications that are not outright prohibited are categorized as high risk and have limits imposed on the scope of their use. These “no-go zones” reflect concerns over a much broader range of human rights and civil liberties than what we see reflected in Canada’s AIDA. It is small comfort to say that the Canadian Charter of Rights and Freedoms remains as a backstop against government excess in the use of AI tools for surveillance or policing; ex ante AI regulation is meant to head off problems before they become manifest. No-go zones reflect limits on what society is prepared to tolerate; AIDA sets no such limits. Constitutional litigation is expensive, time-consuming and uncertain in outcome (just look at the 5-4 split in the recent R. v. Bykovets decision of the Supreme Court of Canada). Further, the military and intelligence services are expressly excluded from AIDA’s scope (as is the federal public service).

Privacy is an important human right, yet privacy rights are not part of the scope of AIDA. The obvious response is that such rights are dealt with under privacy legislation for the public and private sectors and at federal, provincial and territorial levels. However, such privacy statutes deal principally with data protection (in other words, they govern the collection, use and disclosure of personal information). AIDA could have addressed surveillance more directly: after all, the EU has best-in-class data protection laws, but still places limits on the use of AI systems for certain types of surveillance activities. Further, privacy laws in Canada (and there are many of them) are, apart from Quebec’s, largely in a state of neglect and disrepair. Privacy commissioners at federal, provincial, and territorial levels have been issuing guidance as to how they see their laws applying in the AI context, and findings and rulings in privacy complaints involving AI systems are starting to emerge. The commissioners are thoughtfully adapting existing laws to new circumstances, but there is no question that there is a need for legislative reform. In issuing its recent guidance on Facial Recognition and Mugshot Databases, the Office of the Information and Privacy Commissioner of Ontario specifically identified the need to issue the guidance in the face of legislative gaps and inaction that “if left unaddressed, risk serious harms to individuals’ right to privacy and other fundamental human rights.”

Along with AIDA, Bill C-27 contains the Consumer Privacy Protection Act (CPPA), which will reform Canada’s private sector data protection law, the Personal Information Protection and Electronic Documents Act (PIPEDA). However, the CPPA has only one AI-specific amendment – a somewhat tepid right to an explanation of automated decision-making. It does not, for example, address the data scraping issue at the heart of the Clearview AI investigation (the core findings of which the investigated company continues to dispute), an issue that prompted the articulation of a no-go zone for data scraping for certain purposes in the EU AI Act.

High Impact AI and Human Rights

AIDA will apply only to “high impact” AI systems. Among other things, such systems can adversely impact human rights. While the original version of AIDA in Bill C-27 left the definition of “high impact” entirely to regulations (generating considerable and deserved criticism), the Minister of ISED has since proposed amendments to C-27 that set out a list of categories of “high impact” AI systems. While this list at least provides some insight into what the government is thinking, it creates new problems as well. It identifies several areas in which AI systems could have significant impacts on individuals, including in healthcare and in some court or tribunal proceedings. Also included on the list are the use of AI at all stages of the employment context and the use of AI in making decisions about who is eligible for services and at what price. Left off the list, however, is the use of AI systems – already occurring – to determine who is selected as a tenant for rental accommodation. Such tools have extremely high impact. Yet, since residential tenancies are interests in land, and not services, they are simply not captured by the current “high impact” categories. This is surely an oversight – yet it is one that highlights the rather slapdash construction of AIDA and its proposed amendments. As a further example, a high-impact category addressing the use of biometrics to assess an individual’s behaviour or state of mind could be interpreted to capture affect recognition systems or the analysis of social media communications, but this is less clear than it should be. It also raises the question of whether the best approach, from a human rights perspective, is to regulate such systems as high impact or whether limits need to be placed on their use and deployment.

Of course, a key problem is that this bill is housed within ISED. It is not a centrally developed bill that takes a broader approach to the federal government and its powers. Under AIDA, medical devices are excluded from the category of “high impact” uses of AI in the healthcare context because it is Health Canada that will regulate AI-enabled medical devices, and ISED must avoid treading on its toes. Perhaps ISED also seeks to avoid encroaching on the mandates of the Minister of Justice or the Minister of Public Safety. This may help explain some of the crabbed and clunky framing of AIDA compared to the EU AI Act. It does, however, raise the question of why Canada chose this route – adopting a purportedly comprehensive risk-management framework housed under the constrained authority of the Minister of ISED.

Such an approach is inherently flawed. As discussed above, AIDA is limited in the human rights it is prepared to address, and it raises concerns about how human rights will be both interpreted and framed. On the interpretation side of things, the incorporation of the Canadian Human Rights Act’s definition of discrimination into AIDA, combined with ISED’s power to interpret and apply the proposed law, will give ISED interpretive authority over the definition of discrimination without the accompanying expertise of the Canadian Human Rights Commission. Further, it is not clear that ISED is a place for expansive interpretations of human rights; human rights are not a core part of its mandate – although fostering innovation is.

All of this should leave Canadians with some legitimate concerns. AIDA may well be passed into law – and it may prove to be useful in the better governance of AI. But when it comes to human rights, it has very real limitations. AIDA cannot be allowed to end the conversation around human rights and AI at the federal level – nor at the provincial level either. Much work remains to be done.

Published in Privacy

Ontario’s Information and Privacy Commissioner has released a report on an investigation into the use by McMaster University of artificial intelligence (AI)-enabled remote proctoring software. In it, Commissioner Kosseim makes findings and recommendations under the province’s Freedom of Information and Protection of Privacy Act (FIPPA) which applies to Ontario universities. Interestingly, noting the absence of provincial legislation or guidance regarding the use of AI, the Commissioner provides additional recommendations on the adoption of AI technologies by public sector bodies.

AI-enabled remote proctoring software saw a dramatic uptake in use during the pandemic as university classes migrated online. It was also widely used by professional societies and accreditation bodies. Such software monitors those writing online exams in real time, recording both audio and video, and using AI to detect anomalies that may indicate that cheating is taking place. Certain noises or movements generate ‘flags’ that lead to further analysis by AI and ultimately by the instructor. If the flags are not resolved, academic integrity proceedings may ensue. Although many universities, including the respondent McMaster, have since returned to in-person exam proctoring, AI-enabled remote exam surveillance remains an option where in-person invigilation is not possible, including in courses delivered online to students in diverse and remote locations.
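To make the ‘flagging’ mechanism more concrete for readers unfamiliar with these tools, here is a minimal, purely illustrative sketch of how an automated proctoring pipeline of this general kind might score a recorded exam session. It is not based on Respondus’s actual system; the signals, weights and threshold are all invented for the purposes of illustration.

```python
# Hypothetical sketch of an AI-assisted proctoring "flag" pipeline.
# The features, weights and threshold are invented for illustration only;
# they do not reflect how Respondus Monitor actually works.

from dataclasses import dataclass

@dataclass
class SessionSignals:
    face_absent_seconds: float    # time the webcam lost sight of the examinee
    gaze_off_screen_ratio: float  # share of frames with gaze away from the screen
    background_voice_events: int  # count of detected non-examinee voices

def anomaly_score(signals: SessionSignals) -> float:
    """Combine audio/video signals into a single anomaly score between 0 and 1."""
    score = 0.0
    score += min(signals.face_absent_seconds / 60.0, 1.0) * 0.4
    score += min(signals.gaze_off_screen_ratio, 1.0) * 0.4
    score += min(signals.background_voice_events / 5.0, 1.0) * 0.2
    return score

def flag_session(signals: SessionSignals, threshold: float = 0.5) -> bool:
    """Flag the recording for human (instructor) review if the score is high."""
    return anomaly_score(signals) >= threshold

# A noisy or busy home environment can easily trip the threshold.
print(flag_session(SessionSignals(45.0, 0.5, 4)))  # True
```

Even a toy scoring function like this shows how ordinary household noise or movement can generate flags, which is why human review, and a meaningful way for students to explain or challenge flags, matters so much in this context.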

The Commissioner’s investigation related to the use by McMaster University of two services offered by the US-based company Respondus: Respondus Lockdown Browser and Respondus Monitor. Lockdown Browser consists of software downloaded by students onto their computers that blocks access to the internet and to other files on the computer during an exam. Respondus Monitor is the AI-enabled remote proctoring application. This post focuses on Respondus Monitor.

AI-enabled remote proctoring systems have raised concerns about both privacy and broader human rights issues. These include the intrusiveness of the constant audio and video monitoring, the capturing of data from private spaces, uncertainty over the treatment of personal data collected by such systems, adverse impacts on already marginalised students, and the enhanced stress and anxiety that comes from both constant surveillance and easily triggered flags. The broader human rights issues, however, are an uncomfortable fit with public sector data protection law.

Commissioner Kosseim begins with the privacy issues, finding that Respondus Monitor collects personal information that includes students’ names and course information, images of photo identification documents, and sensitive biometric data in audio and video recordings. Because the McMaster University Act empowers the university to conduct examinations and appoint examiners, the Commissioner found that the collection was carried out as part of a lawfully authorized activity. Although exam proctoring had chiefly been conducted in-person prior to the pandemic, she found that there was no “principle of statute or common law that would confine the method by which the proctoring of examinations may be conducted by McMaster to an in-person setting” (at para 48). Further, she noted that even post-pandemic, there might still be reasons to continue to use remote proctoring in some circumstances. She found that the university had a legitimate interest in attempting to curb cheating, noting that evidence suggested an upward trend in academic integrity cases, and a particular spike during the pandemic. She observed that “by incorporating online proctoring into its evaluation methods, McMaster was also attempting to address other new challenges that arise in an increasingly digital and remote learning context” (at para 50).

The collection of personal information must be necessary to a lawfully authorized activity carried out by a public body. Commissioner Kosseim found that the information captured by Respondus Monitor – including the audio and video recordings – was “technically necessary for the purpose of conducting and proctoring the exams” (at para 60). Nevertheless, she expressed concerns over the increased privacy risks that accompany this continual surveillance of examinees. She was also troubled by McMaster’s assertion that it “retains complete autonomy, authority, and discretion to employ proctored online exams, prioritizing administrative efficiency and commercial viability, irrespective of necessity” (at para 63). She found that the necessity requirement in s. 38(2) of FIPPA applied, and that efficiency or commercial advantage could not displace it. She noted that the kind of personal information collected by Respondus Monitor was particularly sensitive, creating “risks of unfair allegations or decisions being made about [students] based on inaccurate information” (at para 66). In her view, “[t]hese risks must be appropriately mitigated by effective guardrails that the university should have in place to govern its adoption and use of such technologies” (at para 66).

FIPPA obliges public bodies to provide adequate notice of the collection of personal information. Commissioner Kosseim reviewed the information made available to students by McMaster University. Although she found overall that it provided students with useful information, students had to locate different pieces of information on different university websites. The need to check multiple sites to get a clear picture of the operation of Respondus Monitor did not satisfy the notice requirement, and the Commissioner recommended that the university prepare a “clear and comprehensive statement either in a single source document, or with clear cross-references to other related documents” (at para 70).

Section 41(1) of FIPPA limits the use of personal information collected by a public body to the purpose for which it was obtained or compiled, or for a consistent purpose. Although the Commissioner found that the analysis of the audio and video recordings to generate flags was consistent with the collection of that information, the use by Respondus of samples of the recordings to improve its own systems – or to allow third-party research – was not. On this point, there was an important difference in interpretation. Respondus appeared to define personal information as personal identifiers such as names and ID numbers; it treated audio and video clips that lacked such identifiers as “anonymized”. However, under FIPPA, audio and video recordings of individuals are personal information. No provision was made for students either to consent to or opt out of this secondary use of their personal information. Commissioner Kosseim noted that Respondus had made public statements that when operating in some jurisdictions (including California and EU member states) it did not use audio or video recordings for research or to improve its products or services. She recommended that McMaster obtain a similar undertaking from Respondus not to use its students’ information for these purposes. The Commissioner also noted that Respondus’ treatment of the audio and video recordings as anonymized data meant that it did not have adequate safeguards in place for this personal information.

Respondus’ Terms of Service provide that the company reserves the right to disclose personal information for law enforcement purposes. Commissioner Kosseim found that McMaster should require, in its contract with Respondus, that Respondus notify it promptly of any compelled disclosure of its students’ personal information to law enforcement or to government, and limit any such disclosure to the specific information it is legally required to disclose. She also set a retention limit for the audio and video recordings at one year, with confirmation to be provided by Respondus of deletions after the end of this period.

One of the most interesting aspects of this report is the section titled “Other Recommendations”, in which the Commissioner addresses the adoption of an AI-enabled technology by a public institution in a context in which “there is no current law or binding policy specifically governing the use of artificial intelligence in Ontario’s public sector” (at para 134). The development and adoption of these technologies is outpacing the evolution of law and policy, leaving important governance gaps. In May 2023, Commissioner Kosseim and Commissioner DeGuire of the Ontario Human Rights Commission issued a joint statement urging the Ontario government to take action to put in place an accountability framework for public sector AI. Even as governments acknowledge that these technologies create risks of discriminatory bias and other potential harms, there remains little to govern AI systems outside the piecemeal coverage offered by existing laws such as, in this case, FIPPA. Although the Commissioner’s interpretation and application of FIPPA addressed issues relating to the collection, use and disclosure of personal information, there remain important issues that cannot be addressed through privacy legislation.

Commissioner Kosseim acknowledged that McMaster University had “already carried out a level of due diligence prior to adopting Respondus Monitor” (at para 138). Nevertheless, given the risks and potential harms of AI-enabled technologies, she made a number of further recommendations. The first was to conduct an Algorithmic Impact Assessment (AIA) in addition to a Privacy Impact Assessment. She suggested that the federal government’s AIA tool could be a useful guide while waiting for one to be developed for Ontario. An AIA could give the adopter of an AI system better insight into the data used to train the algorithms, and could assess impacts on students that go beyond privacy (including discrimination, increased stress, and harms from false positive flags). She also called for meaningful consultation and engagement with those affected by the technology, taking place both before the adoption of the system and on an ongoing basis thereafter. Although the university may have had to react very quickly given that the first COVID shutdown occurred shortly before an exam period, an iterative engagement process even now would be useful “for understanding the full scope of potential issue that may arise, and how these may impact, be perceived, and be experienced by others” (at para 142). She noted that this type of engagement would allow adopters to be alert and responsive to problems both prior to adoption and as they arise during deployment. She also recommended that the consultations include experts in both privacy and human rights, as well as those with technological expertise.

Commissioner Kosseim also recommended that the university consider providing students with ways to opt out of the use of these technologies other than through requesting accommodations related to disabilities. She noted “AI-powered technologies may potentially trigger other protected grounds under human rights that require similar accommodations, such as color, race or ethnic origin” (at para 147). On this point, it is worth noting that the use of remote proctoring software creates a context in which some students may need to be accommodated for disabilities or other circumstances that have nothing to do with their ability to write their exam, but rather that impact the way in which the proctoring systems read their faces, interpret their movements, or process the sounds in their homes. Commissioner Kosseim encouraged McMaster University “to make special arrangements not only for students requesting formal accommodation under a protected ground in human rights legislation, but also for any other students having serious apprehensions about the AI-enabled software and the significant impacts it can have on them and their personal information” (at para 148).

Commissioner Kosseim also recommended that there be an appropriate level of human oversight to address the flagging of incidents during proctoring. Although flags were to be reviewed by instructors before deciding whether to proceed to an academic integrity investigation, the Commissioner found it unclear whether there was a mechanism for students to challenge or explain flags prior to escalation to the investigation stage. She recommended that there be such a procedure, and, if one already existed, that it be explained clearly to students. She further recommended that a public institution’s inquiry into the suitability of an AI-enabled technology for adoption should take into account more than just privacy considerations. For example, the public body’s inquiries should consider the nature and quality of training data. Further, the public body should remain accountable for its use of AI technologies “throughout their lifecycle and across the variety of circumstances in which they are used” (at para 165). Not only should the public body monitor the performance of the tool and alert the supplier to any issues, but the supplier should also be under a contractual obligation to inform the public body of any issues that arise with the system.

The outcome of this investigation offers important lessons and guidance for universities – and for other public bodies – regarding the adoption of third-party AI-enabled services. For the many Ontario universities that adopted remote proctoring during the pandemic, there are recommendations that should push those still using these technologies to revisit their contracts with vendors – and to consider putting in place processes to measure and assess the impact of these technologies. Although some of these recommendations fall outside the scope of FIPPA, the advice is still sage and likely anticipates what one can only hope is imminent guidance for Ontario’s public sector.

Published in Privacy

Ontario is currently holding public hearings on a new bill that, among other things, introduces a provision regarding the use of AI in hiring. Submissions can be made until February 13, 2024. Below is a copy of my submission addressing this provision.

 

The following is my written submission on section 8.4 of Bill 149, titled the Working for Workers Four Act, introduced in the last quarter of 2023. I am a law professor at the University of Ottawa. I am making this submission in my individual capacity.

Artificial intelligence (AI) tools are increasingly common in the employment context. Such tools are used in recruitment and hiring, as well as in performance monitoring and assessment. Section 8.4 would amend the Employment Standards Act to include a requirement for employers to provide notice of the use of artificial intelligence in the screening, assessment, or selection of applicants for a publicly advertised job position. It does not address the use of AI in other employment contexts. This brief identifies several weaknesses in the proposal and makes recommendations to strengthen it. In essence, notice of the use of AI in the hiring process will not offer much to job applicants without a right to an explanation and ideally a right to bring any concerns to the attention of a designated person. Employees should also have similar rights when AI is used in performance assessment and evaluation.

1. Definitions and exclusions

If passed, Bill 149 would (among other things) enact the first provision in Ontario to directly address AI. The proposed section 8.4 states:

8.4 (1) Every employer who advertises a publicly advertised job posting and who uses artificial intelligence to screen, assess or select applicants for the position shall include in the posting a statement disclosing the use of the artificial intelligence.

(2) Subsection (1) does not apply to a publicly advertised job posting that meets such criteria as may be prescribed.

The term “artificial intelligence” is not defined in the bill. Rather, s. 8.1 of Bill 149 leaves the definition to be articulated in regulations. This likely reflects concerns that the definition of AI will continue to evolve along with the rapidly changing technology and that it is best to leave its definition to more adaptable regulations. The definition is not the only thing left to regulations. Section 8.4(2) requires regulations to specify the criteria that will allow publicly advertised job postings to be exempted from the disclosure requirement in s. 8.4(1). The true scope and impact of s. 8.4(1) will therefore not be clear until these criteria are prescribed in regulations. Further, s. 8.4 will not take effect until the regulations are in place.

2. The Notice Requirement

The details of the nature and content of the notice that an employer must provide are not set out in s. 8.4, nor are they left to regulations. Since there are no statutory or regulatory requirements, presumably notice can be as simple as “we use artificial intelligence in our screening and selection process”. It would be preferable if notice had to at least specify the stage of the process and the nature of the technique used.

Section 8.4 is reminiscent of the 2022 amendments to the Employment Standards Act, which required employers with more than 25 employees to provide their employees with notification of any electronic monitoring taking place in the workplace. As with s. 8.4(1), above, the main contribution of this provision was (at least in theory) enhanced transparency. However, the law did not provide for any oversight or complaints mechanism. Section 8.4(1) is similarly weak. If a job posting contains no notice of the use of AI, then either the employer is not using AI in recruitment and hiring, or it is failing to disclose that use. Who will know, and how? A company found non-compliant with the notice requirement, once it is part of the Employment Standards Act, could face a fine under s. 132. However, proceedings by way of an offence are a rather blunt regulatory tool.

3. A Right to an Explanation?

Section 8.4(1) does not provide job applicants with any specific recourse if they apply for a job for which AI is used in the selection process and they have concerns about the fairness or appropriateness of the tool used. One such recourse could be a right to demand an explanation.

The Consumer Privacy Protection Act (CPPA), which is part of the federal government’s Bill C-27, currently before Parliament, provides a right to an explanation to those about whom an automated decision, prediction or recommendation is made. Sections 63(3) and (4) provide:

(3) If the organization has used an automated decision system to make a prediction, recommendation or decision about the individual that could have a significant impact on them, the organization must, on request by the individual, provide them with an explanation of the prediction, recommendation or decision.

(4) The explanation must indicate the type of personal information that was used to make the prediction, recommendation or decision, the source of the information and the reasons or principal factors that led to the prediction, recommendation or decision.

Subsections 63(3) and (4) are fairly basic. For example, they do not include a right of review of the decision by a human. But something like this would still be a starting point for a person seeking information about the process by which their employment application was screened or evaluated. The right to an explanation in the CPPA will extend to decisions, recommendations and predictions made with respect to employees of federal works, undertakings, and businesses. However, it will not apply to the use of AI systems in provincially regulated employment sectors. Without a private sector data protection law of its own – or without a right to an explanation to accompany the proposed s. 8.4 – provincially regulated employees in Ontario will be out of luck.

In contrast, Quebec’s recent amendments to its private sector data protection law provide for a more extensive right to an explanation in the case of automated decision-making – and one that applies to the employment and hiring context. Section 12.1 provides:

12.1. Any person carrying on an enterprise who uses personal information to render a decision based exclusively on an automated processing of such information must inform the person concerned accordingly not later than at the time it informs the person of the decision.

He must also inform the person concerned, at the latter’s request,

(1) of the personal information used to render the decision;

(2) of the reasons and the principal factors and parameters that led to the decision; and

(3) of the right of the person concerned to have the personal information used to render the decision corrected.

The person concerned must be given the opportunity to submit observations to a member of the personnel of the enterprise who is in a position to review the decision.

Section 12.1 thus combines a notice requirement with, at the request of the individual, a right to an explanation. In addition, the affected individual can “submit observations” to an appropriate person within the organization who “is in a position to review the decision”. This right to an explanation is triggered only by decisions that are based exclusively on automated processing of personal information – and the scope of the right to an explanation is relatively narrow. However, it still goes well beyond Ontario’s Bill 149, which creates a transparency requirement with nothing further.

4. Scope

Bill 149 applies to the use of “artificial intelligence to screen, assess or select applicants”. Bill C-27 and Quebec’s law, both referenced above, are focused on “automated decision-making”. Although automated decision-making is generally considered a form of AI (it is defined in C-27 as “any technology that assists or replaces the judgment of human decision-makers through the use of a rules-based system, regression analysis, predictive analytics, machine learning, deep learning, a neural network or other technique”), it is possible that in an era of generative AI technologies, the wording chosen for Bill 149 is more inclusive. In other words, there may be uses of AI that are not decision-making, predicting or recommending, but that can still be used in screening, assessing or hiring processes. However, it should be noted that Ontario’s Bill 149 is also less inclusive than Bill C-27 or Quebec’s law because it focuses only on screening, assessing or selecting applicants for a position. It does not apply to the use of AI tools to monitor, evaluate or assess the performance of existing employees, or to make decisions regarding promotion, compensation, retention, or other employment issues – something which would be covered by Quebec’s law (and by Bill C-27 for employees in federally regulated employment). Although arguably the requirements regarding electronic workplace monitoring added to the Employment Standards Act in 2022 might provide transparency about the existence of electronic forms of surveillance (which could include those used to feed data to AI systems), these transparency obligations apply only in workplaces with more than 25 employees, and there are no employee rights linked to the use of these data in automated or AI-enabled decision-making systems.

5. Discriminatory Bias

A very significant concern with the use of AI systems for decision-making about humans is the potential for discriminatory bias in the output of these systems. This is largely because systems are trained on existing and historical data. Where such data are affected by past discriminatory practices (for example, a tendency to hire men rather than women, or white, able-bodied, heterosexual people over those from equity-deserving communities) then there is a risk that automated processes will replicate and exacerbate these biases. Transparency about the use of an AI tool alone in such a context is not much help – particularly if there is no accompanying right to an explanation. Of course, human rights legislation applies to the employment context, and it will still be open to an employee who believes they have been discriminated against to bring a complaint to the Ontario Human Rights Commission. However, without a right to an explanation, and in the face of proprietary and closed systems, proving discrimination may be challenging and may require considerable resources and expertise. It may also require changes to human rights legislation to specifically address algorithmic discrimination. Without these changes in place, and without adequate resourcing to support the OHRC’s work to address algorithmic bias, recourse under human rights legislation may be extremely challenging.
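To illustrate the mechanism described above – and not any particular vendor’s product – the following toy sketch “trains” a screening score on invented historical hiring records in which one group was systematically favoured. The data, group labels and scoring approach are entirely hypothetical.

```python
# Toy illustration (invented data): a screening score derived from historical
# hiring outcomes reproduces the bias embedded in those outcomes.

from collections import defaultdict

# Hypothetical past hiring records: (group, qualified, hired)
history = [
    ("group_a", True, True), ("group_a", True, True), ("group_a", False, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", True, True),
]

# "Training": learn the historical hire rate for each group.
counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for group, _qualified, hired in history:
    counts[group][1] += 1
    if hired:
        counts[group][0] += 1

def screening_score(group: str) -> float:
    """Score an applicant using the group-level pattern in the historical data."""
    hired, total = counts[group]
    return hired / total if total else 0.0

# Two equally qualified applicants receive very different scores purely
# because of the group-level pattern in the historical record.
print(screening_score("group_a"))  # 1.0
print(screening_score("group_b"))  # ~0.33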

 

6. Conclusion and Recommendations

This exploration of Bill 149’s transparency requirements regarding the use of AI in the hiring process in Ontario reveals the limited scope of the proposal. Its dependence on regulations in order to take effect has the potential to considerably delay its implementation. It provides for notice, but not for a right to an explanation or for human review of AI decisions. There is also a need to make better use of existing regulators (particularly the privacy and human rights commissions). The use of AI in recruitment (or in the workplace more generally in Ontario) may require more than just tweaks to the Employment Standards Act; it may also demand amendments to Ontario’s Human Rights Code and perhaps even specific privacy legislation aimed, at the very least, at the employment sector in Ontario.

Recommendations:

1. Redraft the provision so that the core obligations take effect without need for regulations or ensure that the necessary regulations to give effect to this provision are put in place promptly.

2. Amend s. 8.4 (1) to either include the elements that are required in any notice of the use of an AI system or provide for the inclusion of such criteria in regulations (so long as doing so does not further delay the coming into effect of the provision).

3. Provide for a right to an explanation to accompany s. 8.4(1). An alternative to this would be a broader right to an explanation in provincial private sector legislation or in privacy legislation for employees in provincially regulated sectors in Ontario, but this would be much slower than the inclusion of a basic right to an explanation in s. 8.4. The right to an explanation could also include a right to submit observations to a person in a position to review any decision or outcome.

4. Extend the notice requirement to other uses of AI to assess, evaluate and monitor the performance of employees in provincially regulated workplaces in Ontario. Ideally, a right to an explanation should also be provided in this context.

5. Ensure that individuals who are concerned that they have been discriminated against by the use of AI systems in recruitment (as well as employees who have similar concerns regarding the use of AI in performance evaluation and assessment) have adequate and appropriate recourse under Ontario’s Human Rights Code, and that the Ontario Human Rights Commission is adequately resourced to address these concerns.

Published in Privacy

On October 26, 2023, I appeared as a witness before the INDU Committee of the House of Commons, which is holding hearings on Bill C-27. Although I would have preferred to address the Artificial Intelligence and Data Act, it was clear that the Committee was prioritizing study of the Consumer Privacy Protection Act, in part because the Minister of Industry had yet to produce the text of amendments to the AI and Data Act which he had previously outlined in a letter to the Committee Chair. It is my understanding that witnesses will not be called twice. As a result, I will be posting my comments on the AI and Data Act on my blog.

The other witnesses heard at the same time included Colin Bennett, Michael Geist, Vivek Krishnamurthy and Brenda McPhail. The recording of that session is available here.

__________

Thank you, Mr Chair, for the invitation to address this committee.

I am a law professor at the University of Ottawa, where I hold the Canada Research Chair in Information Law and Policy. I appear today in my personal capacity. I have concerns with both the CPPA and AIDA. Many of these have been communicated in my own writings and in the report submitted to this committee by the Centre for Digital Rights. My comments today focus on the Consumer Privacy Protection Act. I note, however, that I have very substantial concerns about the AI and Data Act and would be happy to answer questions on it as well.

Let me begin by stating that I am generally supportive of the recommendations of Commissioner Dufresne for the amendment of Bill C-27 set out in his letter of April 26, 2023, to the Chair of this Committee. I will also address 3 other points.

The Minister has chosen to retain consent as the backbone of the CPPA, with specific exceptions to consent. One of the most significant of these is the “legitimate interest” exception in s. 18(3). This allows organizations to collect or use personal information without knowledge or consent if it is for an activity in which an organization has a legitimate interest. There are guardrails: the interest must outweigh any adverse effects on the individual; it must be one which a reasonable person would expect; and the information must not be collected or used to influence the behaviour or decisions of the individual. There are also additional documentation and mitigation requirements.

The problem lies in the continuing presence of “implied consent” in section 15(5) of the CPPA. PIPEDA allowed for implied consent because there were circumstances where it made sense, and there was no “legitimate interest” exception. However, in the CPPA, the legitimate interest exception does the work of implied consent. Leaving implied consent in the legislation provides a way to get around the guardrails in s. 18(3) (an organization can opt for the ‘implied consent’ route instead of legitimate interest). It will create confusion for organizations that might struggle to understand which is the appropriate approach. The solution is simple: get rid of implied consent. I note that “implied consent” is not a basis for processing under the GDPR. Consent must be express or processing must fall under another permitted ground.

My second point relates to s. 39 of the CPPA, which is an exception to an individual’s knowledge and consent where information is disclosed to a potentially very broad range of entities for “socially beneficial purposes”. Such information need only be de-identified – not anonymized – making it more vulnerable to reidentification. I question whether there is social licence for sharing de-identified rather than anonymized data for these purposes. I note that s. 39 was carried over verbatim from C-11, when “de-identify” was defined to mean what we understand as “anonymize”.
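For readers less familiar with the distinction, the following deliberately simplified sketch illustrates why de-identified records remain more vulnerable to reidentification than aggregated, anonymized outputs. The field names and records are invented and are not drawn from any real data sharing arrangement.

```python
# Hypothetical illustration of de-identification vs. aggregation.
# De-identified rows keep individual-level detail that can be matched
# against other datasets; aggregate counts do not.

records = [
    {"name": "A. Smith", "postal_fsa": "K1A", "birth_year": 1980, "visits": 7},
    {"name": "B. Jones", "postal_fsa": "K1A", "birth_year": 1991, "visits": 2},
    {"name": "C. Lee",   "postal_fsa": "V6B", "birth_year": 1980, "visits": 5},
]

def de_identify(rows):
    """Strip direct identifiers but keep one row per person (linkable detail)."""
    return [{k: v for k, v in r.items() if k != "name"} for r in rows]

def aggregate(rows):
    """Reduce the data to counts per area: no individual-level rows remain."""
    counts = {}
    for r in rows:
        counts[r["postal_fsa"]] = counts.get(r["postal_fsa"], 0) + 1
    return counts

print(de_identify(records))  # individual-level rows: reidentification risk remains
print(aggregate(records))    # {'K1A': 2, 'V6B': 1}: far less revealing
```

The de-identified rows still describe individuals and can be linked to other data sources; the aggregate counts cannot, which is why the shift in the meaning of “de-identify” matters so much for s. 39.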

Permitting disclosure for socially beneficial purposes is a useful idea, but s. 39, especially with the shift in meaning of “de-identify”, lacks necessary safeguards. First, there is no obvious transparency requirement. If we are to learn anything from the ETHI Committee inquiry into PHAC’s use of Canadians’ mobility data, transparency is fundamentally important. At the very least, there should be a requirement that written notice of data sharing for socially beneficial purposes be given to the Privacy Commissioner of Canada; ideally there should also be a requirement for public notice. Further, s. 39 should provide that any such sharing be subject to a data sharing agreement, which should also be provided to the Privacy Commissioner. None of this is too much to ask where Canadians’ data are conscripted for public purposes. Failure to ensure transparency and some basic measure of oversight will undermine trust and legitimacy.

My third point relates to the exception to knowledge and consent for publicly available personal information. Bill C-27 reproduces PIPEDA’s provision on publicly available personal information, providing in s. 51 that “An organization may collect, use or disclose an individual’s personal information without their knowledge or consent if the personal information is publicly available and is specified by the regulations.” We have seen the consequences of data scraping from social media platforms in the case of Clearview AI, which used scraped photographs to build a massive facial recognition database. The Privacy Commissioner takes the position that personal information on social media platforms does not fall within the “publicly available personal information” exception. Yet not only could this approach be upended in the future by the new Personal Information and Data Protection Tribunal, it could also easily be modified by new regulations. Recognizing the importance of s. 51, former Commissioner Therrien had recommended amending it to add that the publicly available personal information be such “that the individual would have no reasonable expectation of privacy”. An alternative is to incorporate the text of the current Regulations Specifying Publicly Available Information into the CPPA, revising them to clarify scope and application in our current data environment. I would be happy to provide some sample language.

This issue should not be left to regulations. The amount of publicly available personal information online is staggering, and it is easily susceptible to scraping and misuse. It should be clear and explicit in the law that personal data cannot be harvested from the internet, except in limited circumstances set out in the statute.

Finally, I add my voice to those of so many others in saying that the data protection obligations set out in the CPPA should apply to political parties. It is unacceptable that they do not.

Published in Privacy

The following is a short excerpt from a new paper which looks at the public sector use of private sector personal data (Teresa Scassa, “Public Sector Use of Private Sector Personal Data: Towards Best Practices”, forthcoming in (2024) 47:2 Dalhousie Law Journal). The full pre-print version of the paper is available here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4538632

Governments seeking to make data-driven decisions require the data to do so. Although they may already hold large stores of administrative data, their ability to collect new or different data is limited both by law and by practicality. In our networked, Internet of Things society, the private sector has become a source of abundant data about almost anything – but particularly about people and their activities. Private sector companies collect a wide variety of personal data, often in high volumes, rich in detail, and continuously over time. Location and mobility data, for example, are collected by many different actors, from cellular service providers to app developers. Financial sector organizations amass rich data about the spending and borrowing habits of consumers. Even genetic data is collected by private sector companies. The range of available data is constantly broadening as more and more is harvested, and as companies seek secondary markets for the data they collect.

Public sector use of private sector data is fraught with important legal and public policy considerations. Chief among these is privacy since access to such data raises concerns about undue government intrusion into private lives and habits. Data protection issues implicate both public and private sector actors in this context, and include notice and consent, as well as data security. And, where private sector data is used to shape government policies and actions, important questions about ethics, data quality, the potential for discrimination, and broader human rights questions also arise. Alongside these issues are interwoven concerns about transparency, as well as necessity and proportionality when it comes to the conscription by the public sector of data collected by private companies.

This paper explores issues raised by public sector access to and use of personal data held by the private sector. It considers how such data sharing is legally enabled and within what parameters. Given that laws governing data sharing may not always keep pace with data needs and public concerns, this paper also takes a normative approach which examines whether and in what circumstances such data sharing should take place. To provide a factual context for discussion of the issues, the analysis in this paper is framed around two recent examples from Canada that involved actual or attempted access by government agencies to private sector personal data for public purposes. The cases chosen are different in nature and scope. The first is the attempted acquisition and use by Canada’s national statistics organization, Statistics Canada (StatCan), of data held by credit monitoring companies and financial institutions to generate economic statistics. The second is the use, during the COVID-19 pandemic, of mobility data by the Public Health Agency of Canada (PHAC) to assess the effectiveness of public health policies in reducing the transmission of COVID-19 during lockdowns. The StatCan example involves the compelled sharing of personal data by private sector actors, while the PHAC example involves a government agency that contracted for the use of anonymized data and analytics supplied by private sector companies. Each of these instances generated significant public outcry, and the negative publicity no doubt exceeded what either agency anticipated. Both believed that they had a legal basis to gather and/or use the data or analytics, and both believed that their actions served the public good. Yet the outcry is indicative of underlying concerns that had not properly been addressed.

Using these two quite different cases as illustrations, the paper examines the issues raised by the use of private sector data by government. Recognizing that such practices are likely to multiply, it also makes recommendations for best practices. Although the examples considered are Canadian and are shaped by the Canadian legal context, most of the issues they raise are of broader relevance. Part I of this paper sets out the two case studies that are used to tease out and illustrate the issues raised by public sector use of private sector data. Part II discusses the different issues and makes recommendations.

The full pre-print version of the paper is available here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4538632

Published in Privacy

A recent decision of the Federal Court of Canada ends (subject to any appeal) the federal Privacy Commissioner’s attempt to obtain an order against Facebook in relation to personal information practices linked to the Cambridge Analytica scandal. Following a joint investigation with British Columbia’s Information and Privacy Commissioner, the Commissioners had issued a Report of Findings in 2019. The Report concluded that Facebook had breached Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) and B.C.’s Personal Information Protection Act by failing to obtain appropriate consent, failing to adequately safeguard the data of its users and failing to be accountable for the data under its control. Under PIPEDA, the Privacy Commissioner has no order-making powers and can only make non-binding recommendations. For an order to be issued under PIPEDA, an application must be made to the Federal Court under s. 15, either by the complainant, or by the Privacy Commissioner with the complainant’s permission. The proceeding before the court is de novo, meaning that the court renders its own decision on whether there has been a breach of PIPEDA based upon the evidence presented to it.

The Cambridge Analytica scandal involved a researcher who developed a Facebook app. Through this app, the developer collected user data, ostensibly for research purposes. That data was later disclosed to third parties who used it to develop “psychographic” models for purposes of targeting political messages towards segments of Facebook users (at para 35). It is important to note here that the complaint was not against the app developer, but rather against Facebook. Essentially, the complainants were concerned that Facebook did not adequately protect its users’ privacy. Although it had put in place policies and requirements for third-party app developers, the complainants were concerned that it did not adequately monitor third-party compliance with its policies.

The Federal Court dismissed the Privacy Commissioner’s application largely because of a lack of evidence to establish that Facebook had failed to meet its PIPEDA obligations to safeguard its users’ personal information. Referring to an “evidentiary vacuum” (at para 71), Justice Manson found that there was a lack of expert evidence regarding what Facebook might have done differently. He also found that there was no evidence from users regarding their expectations of privacy on Facebook. The Court chastised the Commissioner, stating “ultimately it is the Commissioner’s burden to establish a breach of PIPEDA on the basis of evidence, not speculation and inferences derived from a paucity of material facts” (at para 72). Justice Manson found the evidence presented by the Commissioner to be unpersuasive and speculative, and to require the court to draw “unsupported inferences”. He was unsympathetic to the Commissioner’s explanation that the office did not use its statutory powers to compel evidence (under s. 12.1 of PIPEDA) because “Facebook would not have complied or would have had nothing to offer” (at para 72). Justice Manson noted that had Facebook failed to comply with requests under s. 12.1, the Commissioner could have challenged the refusal.

Yet there is more to this decision than just a dressing down of the Commissioner’s approach to the case. In discussing “meaningful consent” under PIPEDA, Justice Manson frames the question before the court as “whether Facebook made reasonable efforts to ensure users and users’ Facebook friends were advised of the purposes for which their information would be used by third-party applications” (at para 63). This argument is reflected in the Commissioner’s position that Facebook should have done more to ensure that third party app developers on its site complied with their contractual obligations, including those that required developers to obtain consent from app users to the collection of personal data. Facebook’s position was that PIPEDA only requires that it make reasonable efforts to protect the personal data of its users, and that it had done so through its “combination of network-wide policies, user controls and educational resources” (at para 68). It is here that Justice Manson emphasizes the lack of evidence before him, noting that it is not clear what else Facebook could have reasonably been expected to do. In making this point, he states:

There is no expert evidence as to what Facebook could feasibly do differently, nor is there any subjective evidence from Facebook users about their expectations of privacy or evidence that any user did not appreciate the privacy issues at stake when using Facebook. While such evidence may not be strictly necessary, it would have certainly enabled the Court to better assess the reasonableness of meaningful consent in an area where the standard for reasonableness and user expectations may be especially context dependent and ever-evolving. (at para 71) [My emphasis].

This passage should be deeply troubling to those concerned about privacy. By referring to the reasonable expectation of privacy in terms of what users might expect in an ever-evolving technological context, Justice Manson appears to abandon the normative dimensions of the concept. His comments lead towards a conclusion that the reasonable expectation of privacy is an ever-diminishing benchmark as it becomes increasingly naïve to expect any sort of privacy in a data-hungry surveillance society. Yet this is not the case. The concept of the “reasonable expectation of privacy” has significant normative dimensions, as the Supreme Court of Canada reminds us in R. v. Tessling and in the case law that follows it. In Tessling, Justice Binnie noted that subjective expectations of privacy should not be used to undermine the privacy protections in s. 8 of the Charter, stating that “[e]xpectation of privacy is a normative rather than a descriptive standard.” Although this comment is made in relation to the Charter, a reasonable expectation of privacy that is based upon the constant and deliberate erosion of privacy would be equally meaningless in data protection law. Although Justice Manson’s comments about the expectation of privacy may not have affected the outcome of this case, they are troublesome in that they might be picked up by subsequent courts or by the Personal Information and Data Protection Tribunal proposed in Bill C-27.

The decision also contains at least two observations that should set off alarm bells with respect to Bill C-27, a bill to reform PIPEDA. Justice Manson engages in some discussion of the duty of an organization to safeguard information that it has disclosed to a third party. He finds that PIPEDA imposes obligations on organizations with respect to information in their possession and information transferred for processing. In the case of prospective business transactions, an organization sharing information with a potential purchaser must enter into an agreement to protect that information. However, Justice Manson interprets this specific reference to a requirement for such an agreement to mean that “[i]f an organization were required to protect information transferred to third parties more generally under the safeguarding principle, this provision would be unnecessary” (at para 88). In Bill C-27, for example, s. 39 permits organizations to share de-identified (not anonymized) personal information with certain third parties, without the knowledge or consent of individuals, for ‘socially beneficial’ purposes, without imposing any requirement to put in place contractual provisions to safeguard that information. The comments of Justice Manson clearly highlight the deficiencies of s. 39, which must be amended to include a requirement for such safeguards.

A second issue relates to the human-rights based approach to privacy which both the former Privacy Commissioner Daniel Therrien and the current Commissioner Philippe Dufresne have openly supported. Justice Manson acknowledges that the Supreme Court of Canada has recognized the quasi-constitutional nature of data protection laws such as PIPEDA, because “the ability of individuals to control their personal information is intimately connected to their individual autonomy, dignity, and privacy” (at para 51). However, neither PIPEDA nor Bill C-27 takes a human-rights based approach. Rather, they place personal and commercial interests in personal data on the same footing. Justice Manson states: “Ultimately, given the purpose of PIPEDA is to strike a balance between two competing interests, the Court must interpret it in a flexible, common sense and pragmatic manner” (at para 52). The government has made rather general references to privacy rights in the preamble of Bill C-27 (though not in any preamble to the proposed Consumer Privacy Protection Act) but has steadfastly refused to reference the broader human rights context of privacy in the text of the Bill itself. We are left with a purpose clause that acknowledges “the right of privacy of individuals with respect to their personal information” in a context in which “significant economic activity relies on the analysis, circulation and exchange of personal information”. The purpose clause finishes with a reference to the need of organizations to “collect, use or disclose personal information for purposes that a reasonable person would consider appropriate in the circumstances.” While this reference to the “reasonable person” should highlight the need for a normative approach to reasonable expectations as discussed above, the interpretive approach adopted by Justice Manson also makes clear the consequences of not adopting an explicit human-rights based approach. Privacy is thrown into a balance with commercial interests without fundamental human rights to provide a firm backstop.

Justice Manson seems to suggest that the Commissioner’s approach in this case may flow from frustration with the limits of PIPEDA. He describes the Commissioner’s submissions as “thoughtful pleas for well-thought-out and balanced legislation from Parliament that tackles the challenges raised by social media companies and the digital sharing of personal information, not an unprincipled interpretation from this Court of existing legislation that applies equally to a social media giant as it may apply to the local bank or car dealership” (at para 90). It is said that bad cases make bad law; but bad law might also make bad cases. The challenge is to ensure that Bill C-27 does not reproduce or amplify deficiencies in PIPEDA.

 

Published in Privacy

A recent decision of the Federal Court of Canada exposes the tensions between access to information and privacy in our data society. It also provides important insights into how reidentification risk should be assessed when government agencies or departments respond to requests for datasets with the potential to reveal personal information.

The case involved a challenge by two journalists to Health Canada’s refusal to disclose certain data elements in a dataset of persons permitted to grow medical marijuana for personal use under the licensing scheme that existed before the legalization of cannabis. [See journalist Molly Hayes’ report on the story here]. Health Canada had agreed to provide the first character of the Forward Sortation Area (FSA) of the postal codes of licensed premises but declined to provide the second and third characters or the names of the cities in which licensed production took place. At issue was whether these location data constituted “personal information” – which the government cannot disclose under s. 19(1) of the Access to Information Act (ATIA). A second issue was the degree of effort required of a government department or agency to maximize the release of information in a privacy-protective way. Essentially, this case is about “the appropriate analytical approach to measuring privacy risks in relation to the release of information from structured datasets that contain personal information” (at para 2).

The licensing scheme was available to those who wished to grow their own marijuana for medical purposes or to anyone seeking to be a “designated producer” for a person in need of medical marijuana. Part of the licence application required the disclosure of the medical condition that justified the use of medical marijuana. Where a personal supply of medical marijuana is grown at the user’s home, location information could easily be linked to that individual. Both parties agreed that the last three characters in a six-character postal code would make it too easy to identify individuals. The dispute concerned the first three characters – the FSA. The first character represents a postal district. For example, Ontario, Canada’s largest province, has five postal districts. The second character indicates whether an area within the district is urban or rural. The third character identifies either a “specific rural region, an entire medium-sized city, or a section of a major city” (at para 12). FSAs differ in size; StatCan data from 2016 indicated that populations in FSAs ranged from no inhabitants to over 130,000.
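Because the dispute turns on exactly how much geography each additional FSA character discloses, a small illustrative sketch may help. The function below simply decomposes a postal code along the lines described above; the “second character 0 means rural” shorthand and the sample district are my own glosses for illustration, not anything drawn from the decision.

```python
# Purely illustrative: the 'second character 0 = rural' convention and the
# district example are my own shorthand, not findings from the decision.

def describe_fsa(postal_code: str) -> dict:
    """Break out the three FSA characters into the progressively finer
    geographic levels described above."""
    fsa = postal_code.replace(" ", "").upper()[:3]
    return {
        "postal_district": fsa[0],   # e.g. 'K' covers a postal district in Eastern Ontario
        "urban_or_rural": "rural" if fsa[1] == "0" else "urban",
        "local_area": fsa,           # a rural region, a mid-sized city, or a section of a major city
    }

print(describe_fsa("K1A 0A9"))
# {'postal_district': 'K', 'urban_or_rural': 'urban', 'local_area': 'K1A'}
```

Each additional character released therefore shrinks the pool of people who share the disclosed location, which is why the parties fought over the second and third characters.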

Information about medical marijuana and its production in a rapidly evolving public policy context is a subject in which there is a public interest. In fact, Health Canada proactively publishes some data on its own website regarding the production and use of medical marijuana. Yet, even where a government department or agency publishes data, members of the public can use the ATI system to request different or more specific data. This is what happened in this case.

In his decision, Justice Pentney emphasized that both access to information and the protection of privacy are fundamental rights. The right of access to government information, however, does not include a right to access the personal information of third parties. Personal information is defined in the ATIA as “information about an identifiable individual” (s. 3). This means that all that is required for information to be considered personal is that it can be used – alone or in combination with other information – to identify a specific individual. Justice Pentney reaffirmed that the test for personal information from Gordon v. Canada (Health) remains definitive. Information is personal information “where there is a serious possibility that an individual could be identified through the use of that information, alone or in combination with other available information.” (Gordon, at para 34, emphasis added). More recently, the Federal Court has defined a “serious possibility” as “a possibility that is greater than speculation or a ‘mere possibility’, but does not need to reach the level of ‘more likely than not’” (Public Safety, at para 53).

Geographic information is strongly linked to reidentification. A street address is, in many cases, clearly personal information. However, a city, town or even province of residence would only be personal information if it could be used, in combination with other available data, to link to a specific individual. In Gordon, the Federal Court upheld a decision not to release province of residence data for those who had suffered reported adverse drug reactions because these data could be combined with other available data (including obituary notices and even the observations of ‘nosy neighbours’) to identify specific individuals.

The Information Commissioner argued that to meet the ‘serious possibility’ test, Health Canada should be able to concretely demonstrate identifiability by connecting the dots between the data and specific individuals. Justice Pentney disagreed, noting that in the case before him, the expert opinion combined with evidence about other available data and the highly sensitive nature of the information at issue made proof of actual linkages unnecessary. However, he cautioned that “in future cases, the failure to engage in such an exercise might well tip the balance in favour of disclosure” (at para 133).

Justice Pentney also ruled that, because the proceeding before the Federal Court is a hearing de novo, he was not limited to considering the data that were available at the time of the ATIP request. A court can take into account data made available after the request and even after the decision of the Information Commissioner. This makes sense. The rapidly growing availability of new datasets, as well as of new tools for the analysis and dissemination of data, demands a timelier assessment of identifiability. Nevertheless, any pending or possible future ATI requests would be irrelevant to assessing reidentification risk, since these would be hypothetical. Justice Pentney noted: “The fact that a more complete mosaic may be created by future releases is both true and irrelevant, because Health Canada has an ongoing obligation to assess the risks, and if at some future point it concludes that the accumulation of information released created a serious risk, it could refuse to disclose the information that tipped the balance” (at para 112).

The court ultimately agreed with Health Canada that disclosing anything beyond the first character of the FSA could lead to the identification of some individuals within the dataset, and thus would amount to personal information. Health Canada had identified three categories of other available data: data that it had proactively published on its own website; StatCan data about population counts and FSAs; and publicly available data that included data released in response to previous ATIP requests relating to medical marijuana. In this latter category, the court noted that there had been a considerable number of prior requests that provided various categories of data, including “type of license, medical condition (with rare conditions removed), dosage, and the issue date of the licence” (at para 64). Other released data included the licensee’s “year of birth, dosage, sex, medical condition (rare conditions removed), and province (city removed)” (at para 64). Once released, these data are in the public domain, and can contribute to a “mosaic effect”, which allows data to be combined in ways that might ultimately identify specific individuals. Health Canada had provided evidence of an interactive map of Canada published on the internet that showed the licensing of medical marijuana by FSA between 2001 and 2007. Justice Pentney noted that “[a]n Edmonton Journal article about the interactive map provided a link to a database that allowed users to search by medical condition, postal code, doctor’s speciality, daily dosage, and allowed storage of marijuana” (at para 66). He stated: “the existence of evidence demonstrating that connections among disparate pieces of relevant information have previously been made and that the results have been made available to the public is a relevant consideration in applying the serious possibility test” (at para 109). Justice Pentney observed that members of the public might already have knowledge (such as the age, gender or address) of persons they know who consume medical marijuana, which they could combine with other released data to learn the person’s underlying medical condition. Further, he noted that “the pattern of requests and the existence of the interactive map show a certain motivation to glean more information about the administration of the licensing regime” (at para 144).
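The mosaic effect is easy to see in a toy example. The records below are entirely invented; the point is only that separate releases sharing quasi-identifiers (here FSA, birth year and sex) can be joined, and that a little background knowledge then singles out one record and its sensitive attribute.

```python
# Entirely hypothetical records, invented for illustration of the 'mosaic effect'.
release_a = [  # e.g. one ATIP release: location, birth year, sex, dosage
    {"fsa": "K1N", "birth_year": 1962, "sex": "F", "dosage_g_per_day": 5},
    {"fsa": "K1N", "birth_year": 1985, "sex": "M", "dosage_g_per_day": 3},
]
release_b = [  # e.g. a later release: the same quasi-identifiers plus medical condition
    {"fsa": "K1N", "birth_year": 1962, "sex": "F", "condition": "chronic pain"},
    {"fsa": "M5V", "birth_year": 1970, "sex": "M", "condition": "epilepsy"},
]

def key(record):
    # The quasi-identifiers shared across releases.
    return (record["fsa"], record["birth_year"], record["sex"])

# Join the two releases on their shared quasi-identifiers.
linked = [{**a, **b} for a in release_a for b in release_b if key(a) == key(b)]

# A 'nosy neighbour' who already knows these three facts about someone
# can now look up that person's dosage and underlying medical condition.
known = ("K1N", 1962, "F")
matches = [r for r in linked if key(r) == known]
if len(matches) == 1:
    print("Unique match - condition revealed:", matches[0]["condition"])
```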

Health Canada had commissioned Dr. Khaled El Emam to produce an expert report. Dr. El Emam determined that “there are a number of FSAs that are high risk if either three or two characters of the FSA are released, there are no high-risk FSAs if only the first character is released” (at para 80). Relying on this evidence, Justice Pentney concluded that “releasing more than the first character of an FSA creates a significantly greater risk of reidentification” (at para 157). This risk would meet the “serious possibility” threshold, and therefore the information amounts to “personal information” and cannot be disclosed under the legislation.
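The decision does not set out Dr. El Emam’s methodology, but the intuition behind his conclusion can be illustrated with a simple and entirely hypothetical small-cell screen: the coarser the geographic unit, the more people each cell contains, and the fewer cells fall below a minimum-population rule. The populations and the threshold below are invented for illustration only.

```python
# Hypothetical populations and threshold; this is NOT the expert methodology,
# only the intuition that coarser geography leaves fewer 'small cells'.
MIN_CELL_SIZE = 100  # an assumed minimum-population rule, in the spirit of
                     # small-cell suppression used in disclosure control

population_by_cell = {
    "K1N": 8_000, "K2P": 1_200, "X0A": 40,   # three-character FSAs (invented figures)
    "K": 1_500_000, "X": 45_000,             # one-character postal districts (invented)
}

def high_risk(cells):
    """Flag cells with too few residents for a licensee to 'hide in the crowd'."""
    return [c for c in cells if population_by_cell[c] < MIN_CELL_SIZE]

print(high_risk(["K1N", "K2P", "X0A"]))  # -> ['X0A']: some full FSAs are too small
print(high_risk(["K", "X"]))             # -> []: whole districts are large enough
```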

The Information Commissioner raised issues about the quality of other available data, suggesting that incomplete and outdated datasets would be less likely to create reidentification risk. For example, since cannabis laws had changed, there are now many more people cultivating marijuana for personal use. This would make it harder to connect the knowledge that a particular person was cultivating marijuana with other data that might lead to the disclosure of a medical condition. Justice Pentney was unconvinced, since the quantities of marijuana required for ongoing medical use might exceed the general personal use amounts, and thus would still require a licence, creating continuity in the medical cannabis licensing data before and after the legalization of cannabis. He noted: “The key point is not that the data is statistically comparable for the purposes of scientific or social science research. Rather, the question is whether there is a significant possibility that this data can be combined to identify particular individuals” (at para 118). Justice Pentney therefore distinguished between data quality as a data science concern and data quality as it matters to someone seeking to identify specific individuals. He stated: “the fact that the datasets may not be exactly comparable might be a problem for a statistician or social scientist, but it is not an impediment to a motivated user seeking to identify a person who was licensed for personal production or a designated producer under the medical marijuana licensing regime” (at para 119).

Justice Pentney emphasized the relationship between sensitivity of information and reidentification risk, noting that “the type of personal information in question is a central concern for this type of analysis” (at para 107). This is because “the disclosure of some particularly sensitive types of personal information can be expected to have particularly devastating consequences” (at para 107). With highly sensitive information, it is important to reduce reidentification risk, which means limiting disclosure “as much as is feasible” (at para 108).

Justice Pentney also dealt with a further argument that Health Canada should not be able to apply the same risk assessment to all the FSA data; rather, it should assess reidentification risk based on the size of the area identified by the different FSA characters. The legislation allows for severance of information from disclosed records, and the journalists argued that Health Canada could have used severance to reduce the risk of reidentification while releasing more data where the risks were acceptably low. Health Canada responded that to do a more fine-grained analysis of the reidentification risk by FSA would impose an undue burden because of the complexity of the task. In its submissions as intervenor in the case, the Office of the Privacy Commissioner suggested that other techniques could be used to perturb the data so as to significantly lower the risk of reidentification. Such techniques are used, for example, where data are anonymized.
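The decision does not specify what kind of perturbation the Office of the Privacy Commissioner had in mind, so the sketch below is only one possible approach: adding random noise to per-FSA licence counts and suppressing very small cells before release. The counts, noise scale and suppression threshold are all invented for illustration.

```python
# One possible perturbation approach (noise addition plus small-cell suppression);
# the counts, noise scale and threshold are invented for illustration and are not
# drawn from the decision or from the OPC's submissions.
import random

licence_counts = {"K1N": 12, "K2P": 3, "M5V": 27, "X0A": 1}  # hypothetical counts per FSA

def perturb(counts, noise_scale=2.0, min_cell=5, seed=42):
    random.seed(seed)  # fixed seed so the example is reproducible
    out = {}
    for cell, n in counts.items():
        noisy = max(0, n + round(random.gauss(0, noise_scale)))  # add random noise
        out[cell] = noisy if noisy >= min_cell else None          # suppress tiny cells
    return out

print(perturb(licence_counts))  # suppressed cells appear as None
```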

Justice Pentney noted that the effort required by a government department or agency was a matter of proportionality. Here, the data at issue were highly sensitive. The already-disclosed first character of the FSA provided general location information about the licences. Given these facts, “[t]he question is whether a further narrowing of the lens would bring significant benefits, given the effort that doing so would require” (at para 181). He concluded that it would not, noting the lack of in-house expertise at Health Canada to carry out such a complex task. Regarding the suggestion of the Privacy Commissioner that anonymization techniques should be applied, he found that, while this was not precluded by the ATIA, it was a complex task that, on the facts before him, went beyond what the law requires in terms of severance.

This is an interesting and important decision. First, it reaffirms the test for ‘personal information’ in a more complex data society context than the earlier jurisprudence. Second, it makes clear that the sensitivity of the information at issue is a crucial factor that will influence an assessment not just of the reidentification risk, but of tolerance for the level of risk involved. This is entirely appropriate. Not only is personal health information highly sensitive; at the time these data were collected, licensing was also an important means of gaining access to medical marijuana for people suffering from serious and ongoing medical issues. Their sharing of data with the government was driven by their need and vulnerability. Failure to robustly protect these data would only compound that vulnerability. The decision also clarifies the evidentiary burden on government to demonstrate reidentification risk – something that will vary according to the sensitivity of the data. It highlights the dynamic and iterative nature of reidentification risk assessment, as the risk will change as more data are made available.

Indirectly, the decision also casts light on the challenges of using the ATI system to access data, and perhaps on the need to overhaul that system to provide better access to high-quality public-sector information for research and other purposes. Although Health Canada has engaged in proactive disclosure (interestingly, such disclosures were a factor in assessing the ‘other available data’ that could lead to reidentification in this case), more should be done by governments (both federal and provincial) to support and ensure proactive disclosure that better meets the needs of data users while properly protecting privacy. Done properly, this would require an investment in capacity and infrastructure, as well as legislative reform.

Published in Privacy