
I was invited to appear before the Standing Committee on Access to Information, Privacy and Ethics (ETHI) on February 10, 2022. The Committee was conducting hearings into the use of de-identified, aggregate mobility data by the Public Health Agency of Canada. My opening statement to the committee is below. The recording of this meeting (as well as all of the other meetings on this topic) can be found here: https://www.ourcommons.ca/Committees/en/ETHI/Meetings

Thank you for the invitation to address this Committee on this important issue.

The matter under study by this Committee involves a decision by the Public Health Agency of Canada (PHAC) to use de-identified aggregate mobility data sourced from the private sector to inform public health decision-making during a pandemic.

This use of mobility data – and the reaction to it – highlight some of the particular challenges of our digital and data society:

· It confirms that people are genuinely concerned about how their data are used. It also shows that they struggle to keep abreast of the volume of collection, the multiple actors engaged in collection and processing, and the ways in which their data are shared with and used by others. In this context, consent alone is insufficient to protect individuals.

· The situation also makes clear that data are collected and curated for purposes that go well beyond maintaining customer relationships. Data are the fuel of analytics, profiling, and AI. Some of these uses are desirable and socially beneficial; others are harmful or deeply exploitative. The challenge is to facilitate the positive uses and to stop the harmful and exploitative ones.

· The situation also illustrates how easily data now flow from the private sector to the public sector in Canada. Our current legal framework governs public and private sector uses of personal data separately. Our laws need to be better adapted to address the flow of data across sectors.

Governments have always collected data and used it to inform decision-making. Today they have access to some of the same tools for big data analytics and AI as the private sector, and they have access to vast quantities of data to feed those analytics.

We want governments to make informed decisions based on the best available data, but we want to prevent excessive intrusions upon privacy.

Both PIPEDA and the Privacy Act must be modernized so that they can provide appropriate rules and principles to govern the use of data in a transformed and transforming digital environment. The work of this Committee on the mobility data issue could inform this modernization process.

As you have heard already from other witnesses, PIPEDA and the Privacy Act currently apply only to data about identifiable individuals. This creates an uncomfortable grey zone for de-identified data. The Privacy Commissioner must have some capacity to oversee the use of de-identified data, at the very least to ensure that re-identification does not take place. For example, the province of Ontario addressed this issue in 2019 amendments to its public sector data protection law. Those amendments defined de-identified information for the purposes of use by government, required the development of data standards for de-identified data, and provided specific penalties for the re-identification of de-identified personal data.

The Discussion Paper on the Modernization of the Privacy Act speaks of the need for a new framework to facilitate the use of de-identified personal information by government, but we await a Bill to know what form that might take.

The former Bill C-11 – the bill to amend the Personal Information Protection and Electronic Documents Act that died on the Order Paper last fall – specifically defined de-identified personal information. It also created exceptions to the requirements of knowledge and consent to enable organizations to de-identify personal information in their possession, and to use or disclose it in some circumstances – also without knowledge and consent. It would have required de-identification measures proportional to the sensitivity of the information, and would have prohibited the re-identification of de-identified personal information – with stiff penalties.

The former Bill C-11 would also have allowed private sector organizations to share de-identified data without knowledge or consent, with certain entities (particularly government actors), for socially beneficial purposes. This provision would have applied to the specific situation before this committee right now – it would have permitted this kind of data sharing – and without the knowledge or consent of the individuals whose data were de-identified and shared.

This same provision, or a revised version of it, will likely be in the next bill to reform PIPEDA that is introduced into Parliament. When that happens, the important questions will include the scope of the provision (how should socially beneficial purposes be defined?); the degree of transparency that should be required of organizations that share our de-identified information; and how the sharing of information for socially beneficial purposes by private sector organizations with the government will dovetail with any new obligations for the public sector – including whether there should be any prior review or approval of plans to acquire and/or use the data, and what degree of transparency is required. I hope that the work of this Committee on the mobility data issue will help to inform these important discussions.

Published in Privacy

 

On December 7, 2021, the privacy commissioners of Quebec, British Columbia and Alberta issued orders against the US-based company Clearview AI, following its refusal to voluntarily comply with the findings in the joint investigation report they issued along with the federal privacy commissioner on February 3, 2021.

Clearview AI gained worldwide attention in early 2020 when a New York Times article revealed that its services had been offered to law enforcement agencies for use in a largely non-transparent manner in many countries around the world. Clearview AI’s technology also has the potential for many different applications including in the private sector. It built its massive database of over 10 billion images by scraping photographs from publicly accessible websites across the Internet, and deriving biometric identifiers from the images. Users of its services upload a photograph of a person. The service then analyzes that image and compares it with the stored biometric identifiers. Where there is a match, the user is provided with all matching images and their metadata, including links to the sources of each image.
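
The description of the service can be made concrete with a rough sketch of the kind of pipeline involved: scraped images are converted into biometric vectors ("facial embeddings"), stored alongside their source links, and compared against the embedding of an uploaded probe photo. The sketch below is purely illustrative – the embedding model, similarity threshold, and data structures are assumptions on my part, not Clearview AI's actual implementation.

```python
# Minimal, hypothetical sketch of a face-matching pipeline of the kind described
# above. Embeddings here are random stand-ins for the output of a real face
# recognition model; nothing below reflects Clearview AI's actual code.
from dataclasses import dataclass, field

import numpy as np


@dataclass
class IndexedImage:
    embedding: np.ndarray          # biometric vector derived from the face
    source_url: str                # where the image was scraped from
    metadata: dict = field(default_factory=dict)


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def search(probe: np.ndarray, index: list[IndexedImage],
           threshold: float = 0.8) -> list[IndexedImage]:
    """Return every indexed image whose embedding is close enough to the probe."""
    return [img for img in index if cosine_similarity(probe, img.embedding) >= threshold]


# Usage sketch with synthetic embeddings.
rng = np.random.default_rng(0)
index = [IndexedImage(rng.normal(size=128), f"https://example.com/photo/{i}")
         for i in range(1000)]
probe = index[42].embedding + rng.normal(scale=0.01, size=128)  # a near-duplicate face
matches = search(probe, index)
print(len(matches), matches[0].source_url if matches else "no match")
```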

Clearview AI has been the target of investigation by data protection authorities around the world. France’s Commission Nationale de l'Informatique et des Libertés has found that Clearview AI breached the General Data Protection Regulation (GDPR). Australia and the UK conducted a joint investigation which similarly found the company to be in violation of their respective data protection laws. The UK commissioner has since issued a provisional view, stating its intent to levy a substantial fine. Legal proceedings are currently underway in Illinois, a state which has adopted biometric privacy legislation. Canada’s joint investigation report issued by the federal, Quebec, B.C. and Alberta commissioners found that Clearview AI had breached the federal Personal Information Protection and Electronic Documents Act, as well as the private sector data protection laws of each of the named provinces.

The Canadian joint investigation set out a series of recommendations for Clearview AI. Specifically, it recommended that Clearview AI cease offering its facial recognition services in Canada, “cease the collection, use and disclosure of images and biometric facial arrays collected from individuals in Canada”, and delete any such data in its possession. Clearview AI responded by saying that it had temporarily ceased providing its services in Canada, and that it was willing to continue to do so for a further 18 months. It also indicated that if it offered services in Canada again, it would require its clients to adopt a policy regarding facial recognition technology, and it would offer an audit trail of searches.

On the second and third recommendations, Clearview AI responded that it was simply not possible to determine which photos in its database were of individuals in Canada. It also reiterated its view that images found on the Internet are publicly available and free for use in this manner. It concluded that it had “already gone beyond its obligations”, and that while it was “willing to make some accommodations and met some of the requests of the Privacy Commissioners, it cannot commit itself to anything that is impossible and or [sic] required by law.” (Letter reproduced at para 3 of Order P21-08).

In this post I consider three main issues that flow from the orders issued by the provincial commissioners. The first relates to the cross-border reach of Canadian law. The second relates to enforcement (or lack thereof) in the Canadian context, particularly as compared with what is available in other jurisdictions such as the UK and the EU. The third issue relates to the interest shown by the commissioners in a compromise volunteered by Clearview AI in the ongoing Illinois litigation – and what this might mean for Canadians’ privacy.

 

1. Jurisdiction

Clearview AI maintains that Canadian laws do not apply to it. It argues that it is a US-based company with no physical presence in Canada. Although it initially provided its services to Canadian law enforcement agencies (see this CBC article for details of the use of Clearview by Toronto Police Services), it had since ceased to do so – thus, it no longer had clients in Canada. It scraped its data from platform companies such as Facebook and Instagram, and while many Canadians have accounts with such companies, Clearview’s scraping activities involved access to data hosted on platforms outside of Canada. It therefore argued not only that it did not operate in Canada, but also that it had no ‘real and substantial’ connection to Canada.

The BC Commissioner did not directly address this issue. In his Order, he found a hook for jurisdiction by referring to the personal data as having been “collected from individuals in British Columbia without their consent”, although it is clear there was no direct collection. He also noted Clearview’s active contemplation of resuming its services in Canada. Alberta’s Commissioner made only a brief reference to jurisdiction, simply stating that “Provincial privacy legislation applies to any private sector organization that collects, uses and discloses information of individuals within that province” (at para 12). The Quebec Commissioner, by contrast, gave a thorough discussion of the jurisdictional issues. In the first place, she noted that some of the images came from public Quebec sources (e.g., newspaper websites). She also observed that nothing indicates that images scraped from Quebec sources have been removed from the database; they therefore continue to be used and disclosed by the company.

Commissioner Poitras cited the Federal Court decision in Lawson for the principle that PIPEDA could apply to a US-based company that collected personal information from Canadian sources – so long as there is a real and substantial connection to Canada. She found a connection to Quebec in the free accounts offered to, and used by, Quebec law enforcement officials. She noted that the RCMP, which operates in Quebec, had also been a paying client of Clearview’s. When Clearview AI was used by clients in Quebec, those clients uploaded photographs to the service in the search for a match. This also constituted a collection of personal information by Clearview AI in Quebec.

Commissioner Poitras found that the location of Clearview’s business and its servers is not a determinative jurisdictional factor for a company that offers its services online around the world, and that collects personal data from the Internet globally. She found that Clearview AI’s database was at the core of its services, and that a part of that database was comprised of data from Quebec and about Quebeckers. Clearview had offered its service in Quebec, and its activities had a real impact on the privacy of Quebeckers. Commissioner Poitras noted that millions of images of Quebeckers were appropriated by Clearview without the consent of the individuals in the images; these images were used to build a global biometric facial recognition database. She found that it was particularly important not to create a situation where individuals are denied recourse under quasi-constitutional laws such as data protection laws. These elements in combination, in her view, would suffice to create a real and substantial connection.

Commissioner Poitras did not accept that Clearview’s suspension of Canadian activities changed the situation. She noted that information that had been collected in Quebec remained in the database, which continued to be used by the company. She stated that a company could not appropriate the personal information of a substantial number of Quebeckers, commercialise this information, and then avoid the application of the law by saying they no longer offered services in Quebec.

The jurisdictional questions are both important and thorny. This case is different from cases such as Lawson and Globe24h.com, where the connections with Canada were more straightforward. In Lawson, there was clear evidence that the company offered its services to clients in Canada. It also directly obtained some of its data about Canadians from Canadian sources. In Globe24h.com, there was likewise evidence that Canadians were being charged by the Romanian company to have their personal data removed from the database. In addition, the data came from Canadian court decisions that were scraped from websites located in Canada. In Clearview AI, while some of the scraped data may have been hosted on servers located in Canada, most were scraped from offshore social media platform servers. If Clearview AI stopped offering its services in Canada and stopped scraping data from servers located in Canada, what recourse would Canadians have? The Quebec Commissioner attempts to address this question, but her reasons are based on factual connections that might not be present in the future, or in cases involving other data-scraping respondents. There needs to be a theory of real and substantial connection – one that specifically addresses the scraping of data from third-party websites, contrary to those websites’ terms of use and to the legal expectations of the sites’ users – that can anchor the jurisdiction of Canadian law even when the scraper has no other connection to Canada.

Canada is not alone in facing these jurisdictional issues – Australia’s orders to Clearview AI are currently under appeal, and the jurisdiction of the Australian Commissioner to make such orders will be one of the issues on appeal. A jurisdictional case – one that is convincing not just to privacy commissioners but to the foreign courts that may one day have to determine whether to enforce Canadian decisions – needs to be made.

 

2. Enforcement

At the time the facts of the Clearview AI investigation arose, all four commissioners had limited enforcement powers. The three provincial commissioners could issue orders requiring an organization to change its practices. The federal commissioner had no order-making powers, but could apply to the Federal Court to ask that court to issue orders. The relative impotence of the commissioners is illustrated by Clearview’s hubristic response, cited above, indicating that it had already “gone beyond its obligations”. Clearly, it considers that anything the commissioners had to say on the matter did not amount to an obligation.

The Canadian situation can be contrasted with that in the EU, where commissioners’ orders requiring organizations to change their non-compliant practices are now reinforced by the power to levy significant administrative monetary penalties (AMPs). The same situation exists in the UK. There, the data commissioner has just issued a preliminary enforcement notice and a proposed fine of £17M against Clearview AI. As noted earlier, the enforcement situation is beginning to change in Canada – Quebec’s newly amended legislation permits the levying of substantial AMPs. When some version of Bill C-11 is reintroduced in Parliament in 2022, it will likely also contain the power to levy AMPs. BC and Alberta may eventually follow suit. When this happens, the challenge will be first, to harmonize enforcement approaches across those jurisdictions; and second, to ensure that these penalties can meaningfully be enforced against offshore companies such as Clearview AI.

On the enforcement issue, it is perhaps also worth noting that the orders issued by the three Commissioners in this case are all slightly different. The Quebec Commissioner orders Clearview AI to cease collecting images of Quebeckers without consent, and to cease using these images to create biometric identifiers. She also orders the destruction, within 90 days of receipt of the order, of all of the images collected without the consent of Quebeckers, as well as the destruction of the biometric identifiers. Alberta’s Commissioner orders that Clearview cease offering its services to clients in Alberta, cease the collection and use of images and biometrics collected from individuals in Alberta, and delete the same from its databases. BC’s order prohibits Clearview AI from offering, to clients in British Columbia, services that use data collected from British Columbians without their consent. He also orders that Clearview AI use “best efforts” to cease its collection, use and disclosure of images and biometric identifiers of British Columbians without their consent, as well as to use the same “best efforts” to delete images and biometric identifiers collected without consent.

It is to these “best efforts” that I next turn.

 

3. The Illinois Compromise

All three Commissioners make reference to a compromise offered by Clearview AI in the course of ongoing litigation in Illinois under Illinois’ Biometric Information Privacy Act. By referring to “best efforts” in his Order, the BC Commissioner seems to be suggesting that something along these lines would be an acceptable compromise in his jurisdiction.

In its response to the Canadian commissioners, Clearview AI raised the issue that it cannot easily know which photographs in its database are of residents of particular provinces, particularly since these are scraped from the Internet as a whole – and often from social media platforms hosted outside Canada.

Yet Clearview AI has indicated that it has changed some of its business practices to avoid infringing Illinois law. This includes “cancelling all accounts belonging to any entity based in Illinois” (para 12, BC Order). It also includes blocking from any searches all images in the Clearview database that are geolocated in Illinois. In the future, it also offers to create a “geofence” around Illinois. This means that it “will not collect facial vectors from any scraped images that contain metadata associating them with Illinois” (para 12 BC Order). It will also “not collect facial vectors from images stored on servers that are displaying Illinois IP addresses or websites with URLs containing keywords such as “Chicago” or “Illinois”.” Clearview apparently offers to create an “opt-out” mechanism whereby people can ask to have their photos excluded from the database. Finally, it will require its clients to not upload photos of Illinois residents. If such a photo is uploaded, and it contains Illinois-related metadata, no search will be performed.
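
The narrowness of this screen becomes clearer when the proposed rules are written out as filtering logic. The sketch below is hypothetical – the field names, keyword list, and structure are my own assumptions, since Clearview AI's actual rules are not public – but it captures the logic described above.

```python
# Hypothetical sketch of the 'geofence' filtering described above. Field names and
# keywords are illustrative only; Clearview AI's actual implementation is not public.
ILLINOIS_KEYWORDS = {"illinois", "chicago"}


def excluded_from_collection(image: dict) -> bool:
    """Return True if a scraped image would be skipped under the proposed geofence.

    The gap: an image with no geolocation metadata, or one hosted on a server
    outside Illinois, passes through even if it depicts an Illinois resident.
    """
    metadata = image.get("metadata", {})
    if metadata.get("geotag_state") == "IL":
        return True
    source_url = image.get("source_url", "").lower()
    return any(keyword in source_url for keyword in ILLINOIS_KEYWORDS)


# A photo of an Illinois resident posted without location data is still collected.
print(excluded_from_collection(
    {"source_url": "https://social.example/user/123/pic.jpg", "metadata": {}}))  # False
```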

The central problem with accepting the ‘Illinois compromise’ is that it allows a service built on illegally scraped data to continue operating with only a reduced privacy impact. Ironically, it also requires individuals who wish to benefit from this compromise to provide more personal data in their online postings. Many people actually suppress geolocation information from their photographs to protect their privacy, yet the ‘Illinois compromise’ can only exclude photos that contain geolocation data. Even with geolocation turned on, it would not exclude the vacation pics of any BC residents taken outside of BC (for example). Further, limiting scraping of images from Illinois-based sites will not prevent the photos of Illinois-based individuals from being included within the database a) if they are already in there, and b) if the images are posted on social media platforms hosted elsewhere.

Clearview AI is a business built upon data collection practices that are illegal in a large number of countries outside the US. The BC Commissioner is clearly of the opinion that a compromise solution is the best that can be hoped for, and he may be right in the circumstances. Yet it is a bitter pill to think that such flouting of privacy laws will ultimately be rewarded, as Clearview gets to keep and commercialize its facial recognition database. Accepting such a compromise could limit the harms of the improper exploitation of personal data, but it does not stop the exploitation of that data in all circumstances. And even this unhappy compromise may be out of reach for Canadians given the rather toothless nature of our current laws – and the jurisdictional challenges discussed earlier.

If anything, this situation cries out for global and harmonized solutions. Notably, it requires the US to do much more to bring its wild-west approach to personal data exploitation in line with the approaches of its allies and trading partners. It will also require better cooperation on enforcement across borders. It may also call for social media giants to take more responsibility when it comes to companies that flout their terms and conditions to scrape their sites for personal data. The Clearview AI situation highlights these issues – as well as the dramatic impacts data misuse may have on privacy as personal data continues to be exploited for use in powerful AI technologies.

Published in Privacy

 

It has been quite a while since I posted to my blog. The reason has simply been a crushing workload that has kept me from writing anything that did not have an urgent deadline! In the meantime, so much has been going on in terms of digital and data law and policy in Canada and around the world. I will try to get back on track!

Artificial intelligence (AI) has been garnering a great deal of attention globally – for its potential to drive innovation, its capacity to solve urgent challenges, and its myriad applications across a broad range of sectors. In an article that is forthcoming in the Canadian Journal of Law and Technology, Bradley Henderson, Colleen Flood and I examine issues of algorithmic and data bias leading to discrimination in the healthcare context. AI technologies have tremendous potential across the healthcare system – AI innovation can improve workflows, enhance diagnostics, accelerate research and refine treatment. Yet at the same time, AI technologies bring with them many concerns, among them, bias and discrimination.

Bias can take many forms. In our paper, we focus on those manifestations of bias that can lead to discrimination of the kind recognized in human rights legislation and the Charter. Discrimination can arise from flawed assumptions being coded into algorithms, from adaptive AI that makes its own correlations, or from unrepresentative data (or from a combination of these).

There are some significant challenges when it comes to the data used to train AI algorithms. Available data may reflect existing disparities and discrimination within the healthcare system. For example, some communities may be underrepresented in the data because of lack of adequate access to healthcare, or because of a lack of trust in the healthcare system that tends to keep them away until health issues become acute. Lack of prescription drug coverage or access to paid sick leave may also affect when and how people access health care services. Racial or gender bias in how symptoms or concerns are recorded, or in how illness is diagnosed, can also affect the quality and representativeness of existing stores of data. AI applications developed and trained on data from US-based hospitals may reflect the socio-economic biases that shape access to health care in the US, and the extent to which they are generalizable to the Canadian population or to sub-populations may be questionable. In some cases, data about race or ethnicity may be important markers for understanding diseases and how they manifest themselves, but these data may be lacking.
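
One simple way to surface the kind of data-driven bias described above is a subgroup audit: compare a model's error rate across demographic groups in held-out data and flag large gaps. The sketch below is entirely illustrative and is not drawn from the paper; the data, groups and numbers are synthetic.

```python
# Illustrative subgroup audit: compare error rates across groups to surface
# performance gaps that can result from unrepresentative training data.
from collections import defaultdict


def error_rates_by_group(records):
    """records: iterable of (group_label, true_outcome, predicted_outcome) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if truth != prediction:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}


# Synthetic example: the model misses far more often for group B, which was
# underrepresented in the (hypothetical) training data.
records = ([("A", 1, 1)] * 90 + [("A", 1, 0)] * 10 +
           [("B", 1, 1)] * 60 + [("B", 1, 0)] * 40)
print(error_rates_by_group(records))  # {'A': 0.1, 'B': 0.4}
```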

There are already efforts afoot to ensure better access to high quality health data for research and innovation in Canada, and our paper discusses some of these. Addressing data quality and data gaps is certainly one route to tackling bias and discrimination in AI. Our paper also looks at some of the legal and regulatory mechanisms available. On the legal front, we note that there are some recourses available where things go wrong, including human rights complaints, lawsuits for negligence, or even Charter challenges. However, litigating the harms caused by algorithms and data is likely to be complex, expensive, and fraught with difficulty. It is better by far to prevent harms than to push a system to improve itself after costly litigation. We consider the evolving regulatory landscape in Canada to see what approaches are emerging to avoid or mitigate harms. These include regulatory approaches for AI-enabled medical devices, and advanced therapeutic products. However, these systems focus on harms to human health, and would not apply to AI tools developed to improve access to healthcare, manage workflows, conduct risk assessments, and so on. There are regulatory gaps, and we discuss some of these. The paper also makes recommendations regarding improving access to better data for research and innovation, with the accompanying necessary enhancements to privacy laws and data governance regimes to ensure the protection of the public.

One of the proposals made in the paper is that bias and discrimination in healthcare-related AI applications should be treated as a safety issue, bringing a broader range of applications under Health Canada regulatory regimes. We also discuss lifecycle regulatory approaches (as opposed to one-off approvals), and providing warnings about data gaps and limitations. We also consider enhanced practitioner licensing and competency frameworks, requirements at the procurement stage, certification standards and audits. We call for law reform to human rights legislation which is currently not well-adapted to the AI context.

In many ways, this paper is just a preliminary piece. It lays out the landscape and identifies areas where there are legal and regulatory gaps and a need for both law reform and regulatory innovation. The paper is part of the newly launched Machine MD project at uOttawa, which is funded by the Canadian Institutes of Health Research and will run for the next four years.

The full pre-print text of the article can be found here.

Published in Privacy

 

The Federal Court has issued its decision in a reference case brought by the Privacy Commissioner of Canada regarding the interpretation of his jurisdiction under the Personal Information Protection and Electronic Documents Act (PIPEDA). The reference relates to a complaint against Google about its search engine that implicates the so-called ‘right to be forgotten’. Essentially, the complainant in that case seeks an order requiring Google to de-index certain web pages that show up in searches for his name and that contain outdated and inaccurate sensitive information. Google’s response to the complaint was to challenge the jurisdiction of the Commissioner to investigate. It argued that its search engine functions were not a ‘commercial activity’ within the meaning of PIPEDA and that PIPEDA therefore did not apply. It also argued that its search engine was a journalistic or literary function which is excluded from the application of PIPEDA under s. 4(2)(c). The Canadian Broadcasting Corporation (CBC) and the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic (CIPPIC) both intervened.

Associate Chief Justice Gagné ruled that the Commissioner has jurisdiction to deal with the complaint. In this sense, this ruling simply enables the Commissioner to continue with his investigation of the complaint and to issue his Report of Findings – something that could no doubt generate fresh fodder for the courts, since a finding that Google should de-index certain search results would raise interesting freedom of expression issues. Justice Gagné’s decision, however, focuses on whether the Commissioner has jurisdiction to proceed. Her ruling addresses 1) the commercial character of Google’s search engine activity; 2) whether Google’s activities are journalistic in nature; and 3) the relevance of the quasi-constitutional status of PIPEDA. I will consider each of these in turn.

1) The Commercial Character of Google’s Search Engine

Largely for division of powers reasons, PIPEDA applies only to the collection, use or disclosure of personal information in the course of “commercial activity”. Thus, if an organization can demonstrate that it was not engaged in commercial activity, it can escape the application of the law.

Justice Gagné found that Google collected, used and disclosed information in offering its search engine functions. The issue, therefore, was whether it engaged in these practices “in the course of commercial activity”. Justice Gagné noted that Google is one of the most profitable companies in existence, and that most of its profits came from advertising revenues. Although Google receives revenues when a user clicks on an ad that appears in search results, Google argued that not all search results generate ads – this depends on whether other companies have paid to have the particular search terms trigger their ads. In the case of a search for an ordinary user’s name, it is highly unlikely that the search will trigger ads in the results. However, Justice Gagné noted that advertisers can also target ads to individual users of Google’s search engine based on data that Google has collected about that individual from their online activities. According to Justice Gagné, “even if Google provides free services to the content providers and the user of the search engine, it has a flagrant commercial interest in connecting these two players.” (at para 57) She found that search engine users trade their personal data in exchange for the search results that are displayed when they conduct a search. Their data is, in turn, used in Google’s profit-generating activities. She refused to ‘dissect’ Google’s activities into those that are free to users and those that are commercial, stating that the “activities are intertwined, they depend on one another, and they are all necessary components of that business model.” (at para 59) She also noted that “unless it is forced to do so, Google has no commercial interest in de-indexing or de-listing information from its search engine.” (at para 59)

2) Is Google’s Search Engine Function Journalistic in Nature?

PIPEDA does not apply to activities that are exclusively for journalistic purposes. This is no doubt to ensure that PIPEDA does not unduly interfere with the freedom of the press. Google argued that its search engine allowed users to find relevant information, and that in providing these services it was acting for journalistic purposes.

Justice Gagné observed that depending upon the person, a search by name can reveal a broad range of information from multiple and diverse sources. In this way, Google facilitates access to information, but, in her view, it does not perform a journalistic function. She noted: “Google has no control over the content of search results, the search results themselves express no opinion, and Google does not create the content of the search results.” (at para 82) She adopted the test set out in an earlier decision, A.T. v. Globe24h.com, whereby an activity qualifies as journalism if “its purpose is to (1) inform the community on issues the community values, (2) it involves an element of original production, and (3) it involves a ‘self-conscious discipline calculated to provide an accurate and fair description of facts, opinion and debate at play within a situation’.” (at para 83) Applying the test to Google’s activities, she noted that Google did more than just inform a community about matters of interest, and that it did not create or produce content. She observed as well that “there is no effort on the part of Google to determine the fairness or the accuracy of the search results.” (at para 85). She concluded that the search engine functions were not journalistic activity – or that if they were, they were not exclusively so. As a result, the journalistic purposes exception did not exempt Google from the application of PIPEDA.

3) The Relevance of the Quasi-Constitutional Status of PIPEDA

The Supreme Court of Canada has ruled that both public and private sector data protection laws in Canada have quasi-constitutional status. What this means in practical terms is less clear. Certainly it means that they are recognized as laws that protect rights and/or values that are of fundamental importance to a society. For example, in Lavigne, the Supreme Court of Canada stated that the federal Privacy Act served as “a reminder of the extent to which the protection of privacy is necessary to the preservation of a free and democratic society” (at para 25). In United Food and Commercial Workers, the Supreme Court of Canada found that Alberta’s private sector data protection law also had quasi-constitutional status and stated: “The ability of individuals to control their personal information is intimately connected to their individual autonomy, dignity and privacy. These are fundamental values that lie at the heart of a democracy.” (at para 19)

What this means in practical terms is increasingly important as questions are raised about the approach to take to private sector data protection laws in their upcoming reforms. For example, the Privacy Commissioner of Canada has criticized Bill C-11 (a bill to reform PIPEDA) for not adopting a human rights-based approach to privacy – one that is explicitly grounded in human rights values. By contrast, Ontario, in its White Paper proposing a possible private sector data protection law for Ontario, indicates that it will adopt a human rights-based approach. One issue at the federal level might be the extent to which the quasi-constitutional nature of a federal data protection law does the work of a human rights-based approach when it comes to shaping interpretation of the statute. The decision in this reference case suggests that the answer is ‘no’. In fact, the Attorney-General of Canada specifically intervened on this point, arguing that “[t]he quasi-constitutional nature of PIPEDA does not transform or alter the proper approach to statutory interpretation” (at para 30). Justice Gagné agreed. The proper approach is set out in this quote from Driedger in Lavigne (at para 25): “the words of an Act are to be read in their entire context and in their grammatical and ordinary sense harmoniously with the scheme of the Act, the object of the Act, and the intention of Parliament.”

In this case, the relevant words of the Act – “commercial activity” and “journalistic purposes” were interpreted by the Court in accordance with ordinary interpretive principles. I do not suggest that these interpretations are wrong or problematic. I do find it interesting, though, that this decision makes it clear that an implicit human rights-based approach is far inferior to making such an approach explicit through actual wording in the legislation. This is a point that may be relevant as we move forward with the PIPEDA reform process.

Next Steps

Google may, of course, appeal this decision to the Federal Court of Appeal. If it does not, the next step will be for the Commissioner to investigate the complaint and to issue its Report of Findings. The Commissioner has no order-making powers under PIPEDA. If an order is required to compel Google to de-index any sites, this will proceed via a hearing de novo in Federal Court. We are still, therefore, a long way from a right to be forgotten in Canada.

Published in Privacy

 

In June 2021, Ontario issued a White Paper that sets out some proposals, including suggested wording, for a new private sector data protection law for the province. This is part of its overall digital and data strategy. Input on the White Paper is sought by August 3, 2021.

It makes sense to compare the Ontario proposal with the federal government’s Bill C-11 (which will not make it through Parliament in the present sitting, and which may get some necessary attention over the summer) because, if C-11 passes, any Ontario law would have to be found to be substantially similar to it. The Ontario proposal has clearly been drafted with Bill C-11 in mind. That said, the idea is not to simply copy Bill C-11. The White Paper shows areas where Bill C-11 may be largely copied, but other places where Ontario plans to modify it, add something new, or go in a different direction. Of course, feedback is sought on the contents of the White Paper, and a bill, if and when it is introduced in the Legislature, may look different from what is currently proposed – depending on what feedback the government receives.

I have prepared a Table that compares the Ontario proposal with Bill C-11, with some added commentary. The Table can be found here, with the caveat that the commentary is preliminary – and was generated quite quickly.

Please be sure to respond to the consultation by the August 3 deadline!

Published in Privacy

 

Ontario launched its Digital and Data Strategy on April 30, 2021, in a document titled Building a Digital Ontario. The Strategy – based on a consultation process announced late in 2019 – is built around four main themes. These are “equipped to succeed”, “safe and secure”, “connected” and “supported”.

It is important to note that the digital and data strategy is for the Ontario government. That is, it is predominantly about how government services are delivered to the public and about how government data can be made more readily and usefully available to fuel the data economy. Related objectives are to ensure that Ontarians have sufficient connectivity and digital literacy to benefit from digital government services and that there is a sufficiently skilled workforce to support the digital agenda. That said, there are places where the focus of the strategy is blurred. For example, the discussion of privacy and security shifts between public and private sector privacy issues; similarly, it is unclear whether the discussion of AI governance is about public or private sector uses of AI, or both. Most off-base is that the cybersecurity elements of the strategy focus on individuals and do not address the need for the government to tackle its own cybersecurity issues – particularly in relation to critical digital and data infrastructures in the province. There is a reference to an existing portal with cybersecurity resources for public sector organizations, so presumably that has been checked off the list, though it hardly seems sufficient.

Do we need better digital services from government? Should there be better public sector data sharing infrastructures to support research and innovation while at the same time stringently protecting privacy? Do we need to take cybersecurity more seriously? The answer is clearly yes. Yet in spite of the laudable objectives of the strategy, it remains unsatisfying. I have three main concerns. First, much of what is in this document seems to be more marketing than strategy. Too many of the themes and initiatives have a repackaged feel to them – these are things already underway that are being reverse-engineered into a strategy. Second, the document seems to ignore key actors and sectors – it has the feel of a plan hatched in one part of government with minimal communication with other departments, agencies and partners. Third, much of the strategy is simply vague. It is a bit like saying that the strategy is to do important things. Hard to argue with such a goal, but it is not a strategy – more of an aspiration.

My first concern relates to the fact that so much of what is described in the strategy is work already completed or underway. Each section of the strategy document describes progress already made on existing initiatives, such as the existing Open Data Catalogue, the Cybersecurity Ontario Learning Portal and the Cybersecurity Centre of Excellence. In some cases, the document announces new initiatives that are imminent – for example the launch of a Digital and Data Fellows Innovation Program in summer of 2021, beta principles for responsible AI in spring of 2021, and a new Digital ID for 2021. To be fair, there are a few newish initiatives – for example, the mysterious Data Authority and the development of digital and data standards. But overall, the document is more of an inventory of existing projects framed as a strategy. It feels like marketing.

My second concern is that the document seems to be a catalogue of Ontario Digital Service (ODS) projects rather than a strategy for the province as a whole. We hear that we need to build a skilled workforce, but apart from a reference to the already launched enhancement of STEM learning in elementary schools, there is nothing about funding for education or research in STEM fields, whether in high school, college or university. There is a program to “bring the best of Ontario’s tech sector into government, to help design Ontario's digital future”, but there’s nothing about funding for internships for students in government or industry. The pandemic has raised awareness of massive challenges in the province around health data; that these are not addressed as part of an overall data strategy suggests that the strategy was developed within a still-siloed government framework. The main ‘promises’ in this document are those within the purview of Digital Services.

The most disconnected part of the strategy is that dealing with privacy. Privacy is one of the pillars of the strategy, and as part of that pillar the document announces a new “Know Your Rights” portal “to help Ontarians learn how to better protect their personal data and stay safe online”. Ontario already has an Office of the Information and Privacy Commissioner that provides a wealth of information of this kind. Unless and until Ontario has its own private sector data protection law (a matter on which the “strategy”, incidentally, is completely silent), information on private sector data protection is also found on the website of the Office of the Privacy Commissioner of Canada. It is frankly hard to see how creating a new portal is going to advance the interests of Ontarians – rather than waste their money. It would have made more sense to enhance the budget and mandate of Ontario’s Information and Privacy Commissioner than to create a portal ultimately destined to provide links to information already available on the OIPC website. This promise highlights that this is not really an Ontario strategy; rather it is a compilation of ODS projects.

My third concern is with the vagueness of the strategy overall. One of the few new pieces – the Data Authority – is described in the most general of terms. We are told it will be “responsible for building modern data infrastructure to support economic and social growth at scale, while ensuring that data is private, secure, anonymous and cannot identify people individually.” But what is meant by “data infrastructure” in this instance? What is the role of the “authority”? Is it a regulator? A data repository? A computing facility? A combination of the above? One wonders if it is actually going to be a build-out or rebranding of the Ontario Health Data Platform which was pulled together to facilitate data sharing during the COVID-19 pandemic.

Notwithstanding these criticisms, it is important to note that many of the initiatives, whether already underway or not, are designed to address important challenges in the digital and data economy. The problem lies with calling this a strategy. It is much more like a to-do list. It starts with a few things conveniently crossed off. It includes a number of things that need finishing, and a few that need starting. In contrast, a strategy involves thinking about where we need to be within a targeted period of time (5 years? 10 years?) and then lays out what we need to do, and to put in place, in order to get there. In the covering memo to this document, Minister of Finance and Treasury Board President Bethlenfalvy sets a high bar for the strategy, stating: “I like to say that we are moving Ontario from the digital stone age to a global trailblazer”. Dampening the hyperbole on either side of that metaphor, we are not in the digital stone age, but those expecting to blaze trails should not be surprised to discover discarded Timmy’s cups along the way.

Published in Privacy

 

A joint ruling from the federal Privacy Commissioner and his provincial counterparts in Quebec, B.C., and Alberta has found that U.S.-based company Clearview AI breached Canadian data protection laws when it scraped photographs from social media websites to create the database it used to support its facial recognition technology. According to the report, the database contained the biometric data of “a vast number of individuals in Canada, including children.” Investigations of complaints under public sector data protection laws about police use of Clearview AI’s services are still ongoing.

The Commissioners’ findings are unequivocal. The information collected by Clearview AI is sensitive biometric data. Express consent was required for its collection and use, and Clearview AI did not obtain consent. The company’s argument that consent was not required because the information was publicly available was firmly rejected. The Commissioners described Clearview AI’s actions as constituting “the mass identification and surveillance of individuals by a private entity in the course of commercial activity.” (at para 72) In defending itself, Clearview AI put forward arguments that were clearly at odds with Canadian law. It also resisted the jurisdiction of the Canadian Commissioners, notwithstanding the fact that it collected the personal data of Canadians and offered its commercial services to Canadian law enforcement agencies. Clearview AI did not accept the Commissioners’ findings, and “has not committed to following” the recommendations.

At the time of this report, Bill C-11, a bill to reform Canada’s current data protection law, is before Parliament. The goal of this post is to consider what difference Bill C-11 might make to the outcome of complaints like this one should it be passed into law. I consider both the substantive provisions of the bill and its new enforcement regime.

Consent

Like the current Personal Information Protection and Electronic Documents Act (PIPEDA), consent is a core requirement of Bill C-11. To collect, use or disclose personal information, an organization must either obtain valid consent, or its activities must fall into one of the exceptions to consent. In the Clearview AI case, there was no consent, and the disputed PIPEDA exception to the consent requirement was the one for ‘publicly available personal information’. While this exception seems broad on its face, to qualify, the information must fall within the parameters set out in the Regulations Specifying Publicly Available Personal Information. These regulations focus on certain categories of publicly available information – such as registry information (land titles, for example), court registries and decisions, published telephone directory information, and public business information listings. In most cases, the regulations provide that the use of the information must also relate directly to the purposes for which it was made public. The regulations also contain an exception for “personal information that appears in a publication, including a magazine, book or newspaper, in printed or electronic form, that is available to the public, where the individual has provided the information.” The interpretation of this provision was central to Clearview AI’s defense of its practices. It argued that social media postings were “personal information that appears in a publication.” The Commissioners adopted a narrow interpretation consistent with this being an exception in quasi-constitutional legislation. They distinguished between the types of publications mentioned in the exception and uncurated, dynamic social-media sites. The Commissioners noted that unlike newspapers or magazines, individuals retain a degree of control over the content of their social media sites. They also observed that to find that all information on the internet falls within the publicly available information exception “would create an extremely broad exemption that undermines the control users may otherwise maintain over their information at the source.” (at para 65) Finally, the Commissioners observed that the exception applied to information provided by the data subject, but that photographs were scraped by Clearview AI regardless of whether they were posted by the data subject or by someone else.

Would the result be any different under Bill C-11? In section 51, Bill C-11 replicates the “publicly available information exception” for collection, use or disclosure of personal information. Like PIPEDA, it also leaves the definition of this term to regulations. However, Canadians should be aware that there has been considerable pressure to expand the regulations so that personal information shared on social media sites is exempted from the consent requirement. For example, in past hearings into PIPEDA reform, the House of Commons ETHI Committee at one point appeared swayed by industry arguments that PIPEDA should be amended to include websites and social media within this exception. Bill C-11 does not resolve this issue; but if passed, it might well be on the table in the drafting of regulations. If nothing else, the Clearview AI case provides a stark illustration of just how important this issue is to the privacy of Canadians.

However, data scrapers may be able to look elsewhere in Bill C-11 for an exception to consent. Bill C-11 contains new exceptions to consent for “business operations” which I have criticized here. One of these exceptions would almost certainly be relied upon by a company in Clearview AI’s position if the bill were passed. The exceptions allow for the collection and use of personal information without an individual’s knowledge or consent if, among other things, it is for “an activity in the course of which obtaining the individual’s consent would be impracticable because the organization does not have a direct relationship with the individual.” (18(2)(e)). A company that scrapes data from social media sites to create a facial recognition database would find it impracticable to get consent because it has no direct relationship with any of the affected individuals. The exception seems to fit.

That said, s. 18(1) does set some general guardrails. The one that seems relevant in this case is that the exceptions to consent are only available where “a reasonable person would expect such a collection or use for that activity”. Hopefully, collection of images from social media websites to fuel facial recognition technology would not be something that a reasonable person would expect; certainly, the Commissioners would not find it to be so. In addition, section 12 of Bill C-11 requires that information be collected or used “only for purposes that a reasonable person would consider appropriate in the circumstances” (a requirement carried over from PIPEDA, s. 5(3)). In their findings, the Commissioners ruled that the collection and use of images by Clearview AI was for a purpose that a reasonable person would find inappropriate. The same conclusion could be reached under Bill C-11.

There is reason to be cautiously optimistic, then, that Bill C-11 would lead to the same result on a similar set of facts: the conclusion that the wholesale scraping of personal data from social media sites to build a facial recognition database without consent is not permitted. However, the scope of the exception in s. 18(2)(e) is still a matter of concern. The more exceptions that an organization pushing the boundaries feels it can wriggle into, the more likely it will be to engage in privacy-compromising activities. In addition, there may be a range of different uses for scraped data, and “what a reasonable person would expect” is a rather squishy buffer between privacy and wholesale data exploitation.

Enforcement

Bill C-11 is meant to substantially increase enforcement options when it comes to privacy. Strong enforcement is particularly important in cases where organizations are not interested in accepting the guidance of regulators. This is certainly the case with Clearview AI, which expressly rejected the Commissioners’ findings. Would Bill C-11 strengthen the regulator’s hand?

The Report of Findings in this case reflects the growing trend of having the federal and provincial commissioners that oversee private sector data protection laws jointly investigate complaints involving issues that affect individuals across Canada. This cooperation is important as it ensures consistent interpretation of what is meant to be substantially similar legislation across jurisdictions. Nothing in Bill C-11 would prevent the federal Commissioner from continuing to engage in this cross-jurisdictional collaboration – in fact, subsection 116(2) expressly encourages it.

Some will point to the Commissioner’s new order-making powers as another way to strengthen his enforcement hand. Under Bill C-11, the Commissioner would be able to direct an organization to take measures to comply with the legislation or to cease activities that are in contravention of the legislation (s. 92(2)). This is a good thing. However, these orders would be subject to appeal to the new Personal Information Protection and Data Tribunal (the Tribunal). By contrast, orders of the Commissioners of BC and Alberta are final, subject only to judicial review.

In addition, it is not just the orders of the Commissioner that are appealable under C-11, but also his findings. This raises questions about how the new structure under Bill C-11 might affect cooperative inquiries like the one in this case. Conclusions shared with other Commissioners can be appealed by respondents to the Tribunal, which owes no deference to the Commissioner on questions of law. As I and others have already noted, the composition of the Tribunal is somewhat concerning; Bill C-11 would require only a minimum of one member of the tribunal to have expertise in privacy law. While it is true that proceedings before the Federal Court were de novo, and thus the Commissioner was afforded no formal deference in that context either, access to Federal Court was more limited than the wide-open appeals route to the Tribunal. The Bill C-11 structure really seems to shift the authority to interpret and apply the law away from the Commissioner and to the mysterious and not necessarily expert Tribunal.

Bill C-11 also has a much-touted new power to issue substantial fines for breach of the legislation. Interestingly, however, this does not seem to be the kind of case in which a fine would be available. Fines, provided for under s. 93(1) of Bill C-11, are available only with respect to the breach of certain obligations under the statute (these are listed in s. 93(1)). Playing fast and loose with the requirement to obtain consent is not one of them. This is interesting, given the supposedly central place consent plays within the Bill. Further thought might need to be given to the list of ‘fine-able contraventions’.

Overall, then, although C-11 could lead to a very similar result on similar facts, the path to that result may be less certain. It is also not clear that there is anything in the enforcement provisions of the legislation that will add heft to the Commissioner’s findings. In practical terms, the decisions that matter will be those of the Tribunal, and it remains to be seen how well this Tribunal will serve Canadians.

Published in Privacy

 

This post is the third in a series that considers the extent to which the Digital Charter Implementation Act (Bill C-11), by overhauling Canada’s federal private sector data protection law, implements the principles contained in the government’s Digital Charter. This post addresses the fourth principle of the Charter: Transparency, Portability and Interoperability, which provides that “Canadians will have clear and manageable access to their personal data and should be free to share or transfer it without undue burden.”

Europe’s General Data Protection Regulation (GDPR) introduced the concept of data portability (data mobility) as part of an overall data protection framework. The essence of the data portability right in article 20 of the GDPR is:

(1) The data subject shall have the right to receive the personal data concerning him or her, which he or she has provided to a controller, in a structured, commonly used and machine-readable format and have the right to transmit those data to another controller without hindrance from the controller to which the personal data have been provided [...]

In this version, the data flows from one controller to another via the data subject. There is no requirement for data to be in a standard, interoperable format – it need only be in a common, machine-readable format.

Data portability is not a traditional part of data protection; it largely serves consumer protection and competition law interests. Nevertheless, it is linked to data protection through the concept of individual control over personal information. For example, consider an individual who subscribes to a streaming service for audiovisual entertainment. The service provider acquires considerable data about that individual and their viewing preferences over time. If a new company enters the market, it might offer a better price, but the consumer may be put off by the lack of accurate or helpful recommendations, or of special offers and promotions tailored to their tastes. The difference in the service offered lies in the fact that the incumbent has much more data about the consumer. A data mobility right, in theory, allows an individual to port their data to the new entrant. The resulting more level playing field serves the individual’s interest, and serves the broader public interest by stimulating competition.
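Purely by way of illustration (a minimal sketch, in which the service name, field names and categories are hypothetical and not drawn from any statute or standard), a “structured, commonly used and machine-readable format” could be something as simple as a JSON export of the viewing history described above, which a new entrant could then parse to seed its own recommendations:

import json
from datetime import date

# Hypothetical export of a subscriber's viewing history from an incumbent
# streaming service. The field names are illustrative only; a real mobility
# framework would need to standardize them so that competing services can
# parse the file reliably.
viewing_history = {
    "subscriber_id": "example-123",
    "exported_on": date.today().isoformat(),
    "titles_watched": [
        {"title": "Example Documentary", "genre": "documentary", "rating": 5},
        {"title": "Example Drama", "genre": "drama", "rating": 3},
    ],
}

# JSON is "structured, commonly used and machine-readable": any competing
# service can load it without proprietary tooling.
with open("viewing_history_export.json", "w") as f:
    json.dump(viewing_history, f, indent=2)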

The fourth pillar of the Digital Charter clearly recognizes the idea of control that underlies data mobility, suggesting that individuals should be free to share or transfer their data “without undue burden.” Bill C-11 contains a data mobility provision that is meant to implement this pillar of the Charter. However, this provision is considerably different from what is found in the GDPR.

One of the challenges with the GDPR’s data portability right is that not all data will be seamlessly interoperable from one service provider to another. This could greatly limit the usefulness of the data portability right. It could also impose a significant burden on SMEs who might face demands for the production and transfer of data that they are not sufficiently resourced to meet. It might also place individuals’ privacy at greater risk, potentially spreading their data to multiple companies, some of which might be ill-equipped to provide the appropriate privacy protection.

These concerns may explain why Bill C-11 takes a relatively cautious approach to data mobility. Section 72 of the Consumer Privacy Protection Act portion of Bill C-11 provides:

72 Subject to the regulations, on the request of an individual, an organization must as soon as feasible disclose the personal information that it has collected from the individual to an organization designated by the individual, if both organizations are subject to a data mobility framework provided under the regulations. [My emphasis]

It is important to note that in this version of mobility, data flows from one organization to another rather than through the individual, as is the case under the GDPR. The highlighted portion of s. 72 makes it clear that data mobility will not be a universal right. It will be available only where a data mobility framework is in place. Such frameworks will be provided for in regulations. Section 120 of Bill C-11 states:

120 The Governor in Council may make regulations respecting the disclosure of personal information under section 72, including regulations

(a) respecting data mobility frameworks that provide for

(i) safeguards that must be put in place by organizations to enable the secure disclosure of personal information under section 72 and the collection of that information, and

(ii) parameters for the technical means for ensuring interoperability in respect of the disclosure and collection of that information;

(b) specifying organizations that are subject to a data mobility framework; and

(c) providing for exceptions to the requirement to disclose personal information under that section, including exceptions related to the protection of proprietary or confidential commercial information.

The regulations will provide for frameworks that impose security safeguards on participating organizations and ensure data interoperability. Paragraph 120(b) also suggests that not all organizations within a sector will automatically be entitled to participate in a mobility framework; they may have to qualify by demonstrating that they meet certain security and technical requirements. A final (and interesting) limitation on the mobility framework relates to exceptions to disclosure where information that might otherwise be considered personal information is also proprietary or confidential commercial information. This gets at the distinction between raw and derived data – data collected directly from individuals might be subject to the mobility framework, but profiles or analytics based on that data might not be – even if they pertain to the individual.

It is reasonable to expect that open banking (now renamed ‘consumer-directed finance’) will be the first experiment with data mobility. The federal Department of Finance released a report on open banking in January 2020, and has since been engaged in a second round of consultations. Consumer-directed finance responds to the burgeoning fintech industry, which offers consumers many new and attractive digital financial management services but relies on access to consumer financial data. Currently (and alarmingly), this need for data is met by fintechs asking individuals to share their account passwords so that the fintechs can regularly scrape financial data from multiple sources (accounts, credit cards, etc.) in order to offer their services. A regulated framework for data mobility is seen as much more secure, since safeguards can be built into the system and participants can be vetted to ensure they meet security and privacy standards. Data interoperability between all participants will also enhance the quality of the services provided.

If financial services is the first area for the development of data mobility in Canada, what other areas for data mobility might Canadians expect? The answer is: not many. The kind of scheme contemplated for open banking has already required a considerable investment of time and energy, and it is not yet ready to launch. Of course, financial data is among the most sensitive of personal data; other schemes might be simpler to design and create. But they will still take a great deal of time. One sector where some form of data mobility might eventually be contemplated is telecommunications. (Note that Australia’s comparable “consumer data right” is being rolled out first in open banking, to be followed by initiatives in the telecommunications and energy sectors.)

Data mobility under the CPPA will also be limited by the very stringency of its frameworks. It is no accident that banking and telecommunications fall within federal jurisdiction: the regulations contemplated by s. 120 go beyond simple data protection and affect how companies do business. The federal government will face serious challenges if it attempts to create data mobility frameworks in sectors or industries under provincial jurisdiction. Leadership on this front will have to come from the provinces. Those with their own private sector data protection laws could choose to address data mobility on their own terms. Quebec has already done this in Bill 64, which would amend its private sector data protection law to provide:

112 [. . .] Unless doing so raises serious practical difficulties, computerized personal information collected from the applicant must, at his request, be communicated to him in a structured, commonly used technological format. The information must also be communicated, at the applicant’s request, to any person or body authorized by law to collect such information.

It remains to be seen what Alberta and British Columbia might decide to do – along with Ontario, if in fact it decides to proceed with its own private sector data protection law. As a result, while there might be a couple of important experiments with data mobility under the CPPA, the data mobility right within that framework is likely to remain relatively constrained.

Published in Privacy

 

This post is the second in a series that considers the extent to which the Digital Charter Implementation Act, by overhauling Canada’s federal private sector data protection law, implements the principles contained in the government’s Digital Charter. It addresses the tenth principle of the Charter: Strong Enforcement and Real Accountability. This principle provides that “There will be clear, meaningful penalties for violations of the laws and regulations that support these principles.”

Canada’s current data protection law, the Personal Information Protection and Electronic Documents Act (PIPEDA), has been criticized for the relatively anemic protection it provides for personal information. Although complaints may be filed with the Commissioner, the process ends with a non-binding “report of findings”. After receiving a report, a complainant who seeks either a binding order or compensation must make a further application to the Federal Court. Recourse to the Federal Court is challenging for unrepresented plaintiffs, yet damage awards have been low enough to make it distinctly not worth anyone’s while to hire a lawyer to assist with such a claim. As a result, the vast majority of cases going to the Federal Court have been brought by unrepresented plaintiffs, awards have remained modest, and nobody has been particularly impressed. It is now far more likely that privacy issues – at least where data breaches are concerned – will be addressed through class action lawsuits, which have proliferated across the country.

Of course, the protection of personal information is not all about seeking compensation or court orders. In fact, through the complaints process over the years, the Commissioner has worked to improve data protection practices through a variety of soft compliance measures, including investigating complaints and making recommendations for changes. The Commissioner also uses audit powers and, more recently, compliance agreements, to ensure that organizations meet their data protection obligations. Nevertheless, high profile data breaches have left Canadians feeling vulnerable and unprotected. There is also a concern that some data-hungry companies are making fortunes from personal data and that weak legislative sanctions provide no real incentive to limit their rampant collection, use and disclosure of personal data. Public unease has been augmented by overt resistance to the Commissioner’s rulings in some instances. For example, Facebook was defiant in response to the Commissioner’s findings in relation to the Cambridge Analytica scandal. Even more recently, in an inquiry into the use of facial recognition technologies in shopping malls, the respondent politely declined to accept the Commissioner’s findings that certain of their practices were in breach of PIPEDA.

The Digital Charter Implementation Act is meant to address PIPEDA’s enforcement shortfall. It provides for the enactment of two statutes related to personal data protection: The Consumer Privacy Protection Act (CPPA) and the Personal Information and Data Protection Tribunal Act (the PIDPTA). A government Fact Sheet describes this combination as providing a “Comprehensive and accessible enforcement model”. The CPPA, the revamped version of PIPEDA, would give the Commissioner the power to order an organization to comply with its obligations under the CPPA or to stop collecting or using personal information. This is an improvement, although the order-making powers are subject to a right of appeal to the new Tribunal created by the PIDPTA. At least the Tribunal will owe some deference to the Commissioner on questions of fact or of mixed law and fact – proceedings before the Federal Court under PIPEDA were entirely de novo.

Under the CPPA, the Commissioner will also be able to recommend that the Tribunal impose a fine. Fines are available only for certain breaches of the legislation. These are ones that involve excessive collection of personal information; use or disclosure of personal information for new purposes without consent or exception; making consent to personal data collection a condition of the provision of a product or service (beyond what is necessary to provide that product or service); obtaining consent by deception; improper retention or disposal of personal information; failure to dispose of personal information at an individual’s request; breach of security safeguards; or failure to provide breach notification. The fines can be substantial, with a maximum penalty of the higher of $10,000,000 or 3% of the organization’s gross global revenue for the preceding financial year. Of course, that is the upper end. Fines are discretionary, subject to a number of considerations, and explicitly not meant to be punitive.
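To make the arithmetic of that ceiling concrete, here is a minimal sketch (the revenue figures are invented purely for illustration):

def maximum_penalty(gross_global_revenue: float) -> float:
    """Ceiling described above: the higher of $10,000,000 or 3% of the
    organization's gross global revenue for the preceding financial year."""
    return max(10_000_000.0, 0.03 * gross_global_revenue)

# Illustrative figures only: a firm with $2 billion in gross global revenue
# faces a ceiling of $60 million, while for a firm with $100 million in
# revenue the ceiling is $10 million.
print(maximum_penalty(2_000_000_000))  # 60000000.0
print(maximum_penalty(100_000_000))    # 10000000.0

The sketch only computes the statutory maximum; as noted above, the actual amount of any penalty would remain discretionary and subject to the statutory considerations.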

Within this structure, the Tribunal will play a significant role. It was no doubt created to provide greater distance between the Commissioner and the imposition of fines on organizations. In this respect, it is a good thing. The Commissioner still plays an important role in encouraging organizations to comply voluntarily with the legislation. This role is fairer and easier to perform when there is greater separation between the ombuds functions of the Commissioner and the ability to impose penalties. More problematically, the Tribunal will hear appeals of both findings and orders made by the Commissioner. The appeal layer is new and will add delays to the resolution of complaints. An alternative would have been to leave orders subject to judicial review, with no appeal layer. In theory, going to the Tribunal will be faster and perhaps less costly than a trip to Federal Court. But in practice, the Tribunal’s value will depend on its composition and workload. Under the PIDPTA, the Tribunal will have only six members, not necessarily full-time, and only one of these is required to have experience with privacy. Decisions of the Tribunal cannot be appealed, but they will be subject to judicial review by the Federal Court.

The CPPA also creates a new private right of action. Section 106 provides that an individual affected by a breach of the Act can sue for damages for “loss or injury that the individual has suffered”. However, in order to do so, the individual must first make a complaint. That complaint must be considered by the Commissioner. The Commissioner’s findings and any order must either not have been appealed, or any appeal must have been dealt with by the Tribunal. Note that not all complaints will be considered by the Commissioner: the Commissioner can decline to deal with a complaint for a number of reasons (see s. 83) or can discontinue an investigation (see s. 85). There is also a right of action for loss or injury where an organization has been convicted of an offence under the legislation. An offence requires an investigation, a recommendation, and consideration by the Tribunal. All of these steps will take time. It will be a truly dogged individual who pursues the private right of action under the CPPA.

Ultimately, then, the question is whether this new raft of enforcement-related provisions is an improvement. To get a better sense of how these provisions might work in practice, consider the example of the massive data breach at Desjardins that recently led to a Commissioner’s report of findings. The breach resulted from employees not following internal company policies, flawed training and oversight, and certain employees going ‘rogue’ and using personal data for their own benefit. In the Report of Findings, the Commissioner makes a number of recommendations, most of which have already been implemented by the organization. As a result, the Commissioner has ruled the complaint well-founded and conditionally resolved. Class action lawsuits related to the breach have already been filed.

How might this outcome be different if the new legislation were in place? A complaint would still be filed and investigated. The Commissioner would issue his findings as to whether any provisions of the CPPA were contravened. He would have order-making powers and could decide to recommend that a penalty be imposed. However, if his recommendations are all accepted by an organization, there is no need for an order. The Commissioner might, given the nature and size of the breach, decide to recommend that a fine be imposed. However, considering the factors in the legislation and the organization’s cooperation, he might decide it was not appropriate.

Assuming a recommendation were made to impose a penalty, the Tribunal would have to determine whether to do so. It must consider a number of factors, including the organization’s ability to pay the fine, any financial benefit derived by the organization from the activity, whether individuals have voluntarily been compensated by the organization, and the organization’s history of complying with the legislation. The legislation also specifically provides that “the purpose of a penalty is to promote compliance with this Act and not to punish.” (s. 94(6)) In a case where the organization was not exploiting the data for its own profit, took steps quickly to remedy the issues by complying with the Commissioner’s recommendations, and provided credit monitoring services for affected individuals, it is not obvious that a fine would be imposed. As for the private right of action in the legislation, it is not likely to alter the fact that massive data breaches of this kind will be addressed through class action lawsuits.

The reworking of the enforcement provisions may therefore not be hugely impactful in the majority of cases. This is not necessarily a bad thing, if the lack of impact is due to the fact that the goals of the legislation are otherwise being met. Where it may make a difference is in cases where organizations resist the Commissioner’s findings or where they act in flagrant disregard of data protection rights. It is certainly worth having more tools for enforcement in these cases. Here, the big question mark is the Tribunal – and more particularly, its composition.

But individuals may also feel the consequences of the changes. The Commissioner’s findings – not just any orders he might make – would be subject to appeal to the Tribunal. This will likely undermine his authority and might undercut his ability to achieve soft compliance with the law. It is also likely to delay the resolution of complaints, thus also delaying access to the private right of action contemplated under the legislation. And it shifts the power to determine what constitutes a breach of the legislation from the Commissioner to the new Tribunal. This may ultimately be the most concerning aspect of the legislation. So much will depend on who is appointed to the Tribunal, and the Bill does not require demonstrable privacy expertise as a general prerequisite for membership. At the very least, this should be changed.

Published in Privacy
Monday, 21 December 2020 08:03

The Gutting of Consent in Bill C-11

 

Bill C-11, the bill to reform Canada’s private sector data protection regime, is titled the Digital Charter Implementation Act. The Digital Charter is a 10-point plan set out by the federal government to frame its digital agenda. This is the first of a series of posts that considers Bill C-11 in light of some of the principles of the Digital Charter.

A key pillar of the Digital Charter is “consent”. It states: “Canadians will have control over what data they are sharing, who is using their personal data and for what purposes, and know that their privacy is protected.” A “Fact Sheet” published about Bill C-11 explains: “Modernized consent rules would ensure that individuals have the plain-language information they need to make meaningful choices about the use of their personal information.” How well does this describe what Bill C-11 actually does?

It is now generally well accepted that individuals face an enormous consent burden when it comes to personal information. Personal data is collected from every digitally-enabled transaction; it is collected when we use online platforms and as we surf the internet; it is harvested from our phones, and from every app on our phones; home appliances are increasingly co-opted to harvest data about us; our cars collect and transmit data – the list is endless. There are privacy policies somewhere that govern each of these activities, but we do not have the time to read them. If we did, we would likely struggle to grasp their full meaning. And, in any event, these policies are basically take-it-or-leave-it. Add to this the fact that most people’s preoccupation is necessarily with the actual product or service, and not with the many different ways in which collected data might be used or shared. They are unlikely to be able to fully grasp how all this might at some future point affect them. Consent is thus largely a fiction.

How does Bill C-11 address this problem? It starts by requiring consent, at or before the time that personal information is collected. This consent must be “valid”, and validity will depend on plain language information being provided to the individual about the purpose for the collection, use or disclosure of the information, the way in which it will be collected, used or disclosed, any “reasonably foreseeable consequences” of this collection, use or disclosure, the specific type of personal information to be collected, and the names of any third parties or types of third parties with whom the information may be shared. It requires express consent, unless the organization “establishes that it is appropriate to rely on an individual’s implied consent”. The organization cannot make the provision of a product or service conditional on granting consent to the collection, use or disclosure of personal information, unless that information is necessary to the provision of the product or service. Consent cannot be obtained by fraud or deception. And, finally, individuals have the right to withdraw consent, on reasonable notice, and subject to a raft of other exceptions which include the “reasonable terms of a contract”.

It sounds good until you realize that none of this is actually particularly new. Yes, the law has been tightened up a bit around implied consent and the overall wording has been tweaked. But the basic principles are substantially the same as those in PIPEDA. Some of the tweaks are not necessarily for the better. The plain-language list of information required for “valid consent” under Bill C-11 displaces PIPEDA’s focus on the ability of the target audience for a product or service to properly grasp the nature, purposes and consequences of the collection, use and disclosure of personal data. Because it considers the target audience, the PIPEDA language is likely better adapted to things like protecting children’s privacy.

If, as the government seems to suggest, there is a new implementation of the “consent” principle in Bill C-11, it is not to be found in the main consent provisions. These are largely a rehash of PIPEDA, and, to the extent they are different, they are not obviously better.

What has changed – and ever so much for the worse – are the exceptions to consent, particularly the ones found in sections 18 to 21 of Bill C-11. These exceptions are not the long laundry-list of exceptions to consent that were already found in PIPEDA (those have all made their way into Bill C-11 as well). Sections 18 and 19, in particular, are new in Bill C-11, and they can only be seen as enhancing consent if you conceive of consent as a burden that should be largely eliminated.

Essentially, the government has tackled two different public policy issues in one set of provisions. The first issue is the consent burden described above. This can be summed up as: Privacy policies are too long and complex, and no one has time to read them. The legislative solution is to make them shorter by reducing the information they must contain. The second public policy goal is to make it easier for organizations to use the personal data they have collected in new ways without having to go back to individuals for their consent. The solution, though, is to carve out exceptions that address not just new uses of data already collected, but that are broad enough to include the initial collection of data. When these two solutions are combined, the result is quite frankly a data protection disaster.

A first problem is that these exceptions are not just to consent, but to knowledge and consent. In other words, not only does an organization not need to seek consent for the listed activities, it does not even need to inform the individual about them. It is very hard to hold an organization to account for things about which one has no knowledge.

The first set of exceptions to knowledge and consent, in section 18, is for “business activities”. Perhaps recognizing that this provision creates a kind of open season on personal data, it begins with important limitations. The exception to knowledge and consent created by this provision is available only where the collection or use of the data is for one of the listed business activities; a reasonable person “would expect such a collection or use for that activity”; and “the personal information is not collected or used for the purpose of influencing the individual’s behaviour or decision.” These are important guard rails. But, as noted above, without knowledge of the collection, use or disclosure, it will be difficult to hold organizations to account.

The list of consent-free activities is open ended – it can be added to by regulation. No doubt this is to make the legislation more responsive to changing practices or circumstances, but it is a mechanism by which the list can expand and grow with relative ease. And some of the listed activities have the potential for dramatic privacy impacts. For example, organizations can collect or use personal data without an individual’s knowledge or consent to reduce their commercial risk. This suggests, shockingly, that financial profiling of individuals without their knowledge or consent is fair game. Organizations may also collect personal data without knowledge or consent “that is necessary for the safety of a product or service that the organization provides or delivers”. In an era of connected cars, appliances, medical devices, and home alarm systems, to give just a few examples, the kinds of information that might fall into this category could be surprising. Even more troubling, though, is the provision that allows for collection and use of personal data for activities “in the course of which obtaining the individual’s consent would be impracticable because the organization does not have a direct relationship with the individual.” No one knows what this really means – because it could mean all kinds of things. I will give just one example below.

The next exception, in section 19, allows an organization to transfer an individual’s personal information to a service provider without their knowledge or consent. Let’s say you go to a company’s website and you need customer service. There is a chatbot available on the site to assist you. The chatbot is part of a suite of digital customer services provided to the company by a service provider, and your personal information is transferred to them without your knowledge or consent to enable them to deliver these services. The service provider, on its own behalf, also wants to improve its chatbot AI by recording the chat transcripts, and possibly by collecting other data from you. Based on the exception mentioned above (where knowledge and consent would be impracticable because the service provider does not have a direct relationship with you), it can do this without your knowledge or consent. And you don’t even know about the service provider in the first place because of the exception in section 19. From a service point of view, it’s all very smooth and seamless. But let’s go back to the Digital Charter statement: “Canadians will have control over what data they are sharing, who is using their personal data and for what purposes, and know that their privacy is protected.” How are you feeling about this now?

In fairness, there are other provisions of the Act that govern transfers of data to service providers to ensure privacy protection and accountability. (I may draw a road map in a later post…you will need one to find these provisions which are scattered throughout the Bill). And, in fairness, there is a ‘transparency’ provision in s. 62(2)(b) that requires organizations to “make available” a “general account of how the organization makes use of personal information”. This explicitly includes “how the organization applies the exceptions to the requirement to obtain consent under this Act.” It is difficult to know what this might look like. But a “general account” being “made available” is not the same as a requirement to provide clear information and obtain consent at or before the time that the data is collected and used.

There are ways to reduce the consent burden and to facilitate legitimate uses of data already collected by organizations other than removing the requirements for knowledge or consent in a broad and potentially open-ended list of circumstances. One of these is the concept of “legitimate interests” in art. 6(1) of the EU’s GDPR. Of course, the legitimate interests of organizations under the GDPR are carefully balanced against the “interests or fundamental rights and freedoms of the data subject.” As noted in an earlier post, recognizing the human rights implications of data protection is something that the federal government is simply not prepared to do.

The bottom line is that Bill C-11, in its current form, does not enhance consent. Instead, it will directly undermine it. At the very least, section 18 must be drastically overhauled.

Published in Privacy