Teresa Scassa - Blog


Research for this article was made possible with the support of the Heinrich Boell Foundation Washington, DC.

This piece was originally published by Heinrich Boell Stiftung as part of their series on the broad impacts of the COVID-19 pandemic. The original publication can be found here.

 

 

A strong sense of regional sovereignty in the Canadian health care system may lead to different choices for technologies to track and contain the spread of the coronavirus. A multiplicity of non-interoperable apps could call their effectiveness into question and could create regional differences in approaches to privacy.

By Teresa Scassa

Canada’s national capital Ottawa is located in the province of Ontario but sits on the border with Quebec. As soon as restrictions on movement and activities due to the coronavirus begin to lift, the workforce will once again flow in both directions across a river that separates the two provinces. As with other countries around the world, Canada is debating how to use technology to prevent a second wave of infections. Yet as it stands right now, there is a chance that commuters between Ontario and Quebec could have different contact-tracing apps installed on their phone to track their movements, and that these apps might not be fully interoperable.

Innovation in contact-tracing apps is happening in real time, and amid serious concerns about privacy and security. In Canada, many provinces are on the threshold of adopting contact-tracing apps. Canadian app developers, building on technologies adopted elsewhere, will be offering solutions that rely on decentralized, centralized, or partially centralized data storage. At least one Canadian-built app proposes broader functionalities, including AI-enhancement. And, as is so often the case in Canada, its federal structure could lead to a multiplicity of different apps being adopted across the country. Similar challenges may be faced in the United States.

One app to rule them all?

Canada is a federal state, with 10 provinces and 3 territories. Under its constitution, health care is a matter of provincial jurisdiction, although the federal government regulates food and drug safety. It has also played a role in health care through its spending power, often linking federal health spending to particular priorities. However, when it comes to on-the-ground decision-making around the provision of health care services and public health on a regional level, the provinces are sovereign. Canadian federalism has been tested over the years by Quebec’s independence movement, and more recently by dissatisfaction from Western provinces, particularly Alberta. These tensions mean that co-operation and collaboration are not always top of mind.

When it comes to the adoption of contact-tracing apps, there is the distinct possibility in Canada that different provinces will make different choices. On May 1, Alberta became the first Canadian province to launch a contact-tracing app. There have been reports, for example, that New Brunswick is considering a contact-tracing app from a local app developer, and the government of Newfoundland and Labrador has also indicated it is considering an app. Other governments contemplating contact-tracing apps include Manitoba and Saskatchewan. The possibility that multiple different apps will be adopted across the country is heightened by reports that one municipal entity – Ottawa Public Health – may also have plans to adopt its own version of a contact-tracing app.

Although different contact-tracing apps may not seem like much of an issue with most Canadians under orders to stay home, as restrictions begin to loosen, the need for interoperability will become more acute. If non-interoperable contact-tracing apps were to be adopted in Ontario and Quebec (or even in Ontario, Quebec and Ottawa itself), their individual effectiveness would be substantially undermined. Similar situations could play out in border areas across the country, as well as more generally as Canadians begin to travel across the country.

On May 5, 2020, Doug Ford, the premier of Ontario, Canada’s most populous province, called for a national strategy for contact tracing apps in order to prevent fragmentation. His call for cohesion no doubt recognizes the extent to which Canada’s sometimes shambolic federalism could undermine collective public health goals. Yet with so many provinces headed in so many different directions, often with local app developers as partners, it remains to be seen what can be done to harmonize efforts.

Privacy and contact tracing in Canada

The international privacy debate around contact-tracing apps has centred on limiting the ability of governments to access data that reveals individuals’ patterns of movement and associations. Attention has focused on the differences between centralized and decentralized storage of data collected by contact-tracing apps. With decentralized data storage, all data is locally stored on the app user’s phone; public health authorities are able to carry out contact-tracing based on app data only through a complex technological process that keeps user identities and contacts obscure. This model would be supported by the Google/Apple API, and seems likely to be adopted in many EU states. These apps will erase contact data after it ceases to be relevant, and will cease to function at the end of the pandemic period.

By contrast, with centralized data storage, data about app registrants and their contacts is stored on a central server accessible to public health authorities. A compromise position is found with apps in which data is initially stored only on a user’s phone. If a user tests positive for COVID-19, their data is shared with authorities who then engage in contact-tracing. As an additional privacy protection, express consent can be required before users upload their data to central storage. This is a feature of both the Australian and Alberta models.
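To make the two storage models concrete, here is a minimal sketch in Python. It is illustrative only: every name is an assumption of mine, and the cryptographic key schedules, Bluetooth transport, and identifier-rotation schedules used by real systems (including the Google/Apple API) are elided.

```python
import secrets
import time

RETENTION_SECONDS = 14 * 24 * 3600  # assumed relevance window; real apps set their own

class Phone:
    """Decentralized model: each phone keeps its own contact log."""

    def __init__(self):
        self.heard = []  # (ephemeral_id, timestamp) pairs observed nearby

    def new_ephemeral_id(self) -> str:
        # Rotating random identifiers keep broadcasts unlinkable to the user.
        return secrets.token_hex(16)

    def record_contact(self, ephemeral_id: str) -> None:
        self.heard.append((ephemeral_id, time.time()))

    def purge_stale(self) -> None:
        # Contact data is erased once it ceases to be relevant.
        cutoff = time.time() - RETENTION_SECONDS
        self.heard = [(eid, ts) for eid, ts in self.heard if ts >= cutoff]

    def check_exposure(self, published_diagnosis_ids: set) -> bool:
        # Matching happens on the device; identities never leave it.
        return any(eid in published_diagnosis_ids for eid, _ in self.heard)

    def upload(self, consented: bool, central_store: list) -> None:
        # Hybrid (Australia/Alberta-style) model: contact data moves to
        # central storage only after a positive test AND express consent.
        if consented:
            central_store.extend(self.heard)

# Usage: two phones exchange identifiers; one user is later diagnosed.
alice, bob = Phone(), Phone()
heard_id = alice.new_ephemeral_id()
bob.record_contact(heard_id)
print(bob.check_exposure({heard_id}))  # True: match computed locally
```

The design point of the contrast is visible in the last two methods: in the decentralized model the matching in check_exposure runs on the phone itself, while in the hybrid model data leaves the phone only through a consent-gated upload.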

Decentralized storage has gained considerable traction in the EU where there are deep concerns about function creep and about the risk that user contact data could be used to create ‘social graphs’ of individuals. The European privacy debates are influenced by the General Data Protection Regulation (GDPR) and its shift toward greater individual control over personal data. In Canada, although the federal privacy commissioner has been advancing a ‘privacy as a human right’ approach to data protection, and although there has been considerable public frustration over the state of private sector data protection, little public sentiment seems to have galvanized around contact-tracing apps. Although Canadians have reacted strongly against perceived overcollection of personal data by public sector bodies in the past, in the pandemic context there seems to be a greater public willingness to accept some incursions on privacy for the public good. What incursions will be acceptable remains to be seen. The federal, provincial and territorial privacy commissioners (with the notable exception of the Alberta commissioner whose hands have been somewhat tied by the launch of the Alberta app) have issued a joint statement on the privacy requirements to be met by contact-tracing apps.

The Alberta contact-tracing app has received the cautious endorsement of the province’s Privacy Commissioner who described it as a “less intrusive” approach (presumably than full centralized storage). She noted that she had reviewed the Privacy Impact Assessment (PIA) (a study done to assess the privacy implications of the app), and was still seeking assurances that collected data would not be used for secondary purposes. She also indicated that the government had committed to the publication of a summary of the Privacy Impact Assessment, although no date was provided for its eventual publication.

Given the attention already paid to privacy in Europe and elsewhere, and given that Australia’s similar app was launched in conjunction with the public release of its full PIA, the Alberta launch should set off both privacy and transparency alarms in Canada. In a context in which decisions are made quickly and in which individuals are asked to sacrifice some measure of privacy for the public good, sound privacy decision-making, supported by full transparent PIAs, and an iterative process for rectifying privacy issues as they emerge, seems a minimum requirement. The release of the Alberta app has also created a gap in the common front of privacy commissioners, and raises questions about the interoperability of contact-tracing apps across Canada. It remains to be seen whether Canada’s federal structure will lead not just to different apps in different provinces, but to different levels of transparency and privacy as well.

 

Published in Privacy

On April 15, 2020, Facebook filed an application for judicial review of the Privacy Commissioner’s “decisions to investigate and continue investigating” Facebook, seeking to quash the Report of Findings issued on April 25, 2019. This joint investigation, involving the BC and federal privacy commissioners, was carried out in the wake of the Cambridge Analytica scandal.

The Report of Findings found that Facebook had breached a number of its obligations under the federal Personal Information Protection and Electronic Documents Act (PIPEDA) and B.C.’s Personal Information Protection Act (PIPA). [As I explain here, it is not possible to violate both statutes on the same set of facts, so it is no surprise that nothing further has happened under PIPA]. The Report of Findings set out a series of recommendations. It also contained a section on Facebook’s response to the recommendations in which the commissioners chastised Facebook. The Report led to some strongly worded criticism of Facebook by the federal Privacy Commissioner. On February 6, 2020, the Commissioner referred the matter to Federal Court for a hearing de novo under PIPEDA.

The application for judicial review is surprising. Under the Federal Courts Act, a party has thirty days from the date of a decision affecting it to seek judicial review. For Facebook, that limitation ran out a long time ago. Further, section 18.1 of the Federal Courts Act provides for judicial review of decisions, but a Report of Findings is not a decision. The Commissioner does not have the power to make binding orders. Only the Federal Court can do that, after a hearing de novo. The decisions challenged in the application for judicial review are therefore the “decisions to investigate and to continue investigating” Facebook.

In its application for judicial review Facebook argues that the complainants lacked standing because they did not allege that they were users of Facebook or that their personal information had been impacted by Cambridge Analytica’s activities. Instead, they raised general concerns about Facebook’s practices leading to the Cambridge Analytica scandal. This raises the issue of whether a complaint under PIPEDA must be filed by someone directly affected by a company’s practice. The statute is not clear. Section 11(1) of PIPEDA merely states: “An individual may file with the Commissioner a written complaint against an organization for contravening a provision of Division 1 or 1.1 or for not following a recommendation set out in Schedule 1.” Facebook’s argument is that a specific affected complainant is required even though Facebook’s general practices might have left Canadian users vulnerable. This is linked to a further argument by Facebook that the investigation lacked a Canadian nexus since there was no evidence that any data about Canadians was obtained or used by Cambridge Analytica.

Another argument raised by Facebook is that the investigation amounted to a “broad audit of Facebook’s personal information management practices, not an investigation into a particular PIPEDA contravention” as required by section 11(1) of PIPEDA. Facebook argues that the separate audit power under PIPEDA has built-in limitations, and that the investigation power is much more extensive. They argue, essentially, that the investigation was an audit without the limits. Facebook also argues that the Report of Findings was issued outside of the one-year time limit set in s. 13(1) of PIPEDA. In fact, it was released after thirteen rather than twelve months.

Finally, Facebook argues that the investigation carried out by the Commissioner lacked procedural fairness and independence. The allegations are that the sweeping scope of the complaint made against Facebook was not disclosed until shortly before the report was released and that as a result Facebook had been unaware of the case it had to meet. It also alleges a lack of impartiality and independence on the part of the Office of the Privacy Commissioner in the investigation. No further details are provided.

The lack of timeliness of this application may well doom it. Section 18.1 of the Federal Courts Act sets the thirty-day time limit from the date when the party receives notice of the decision it seeks to challenge; the decision in this case is the decision to initiate the investigation, which would have been communicated to Facebook almost two years ago. Although judges have discretion to extend the limitation period, and although Facebook argues it did not receive adequate communication regarding the scope of the investigation, even then their application comes almost a year after the release of the Report of Findings. Perhaps more significantly, it comes two and a half months after the Commissioner filed his application for a hearing de novo before the Federal Court. The judicial review application seems to be a bit of a long shot.

Long shot though it may be, it may be intended as a shot across the bows of both the Office of the Privacy Commissioner and the federal government. PIPEDA is due for reform in the near future. Better powers of enforcement for PIPEDA have been on the government’s agenda; better enforcement is a pillar of the Digital Charter. The Commissioner and others have raised enforcement as one of the major weaknesses of the current law. In fact, the lack of response by Facebook to the recommendations of the Commissioner following the Report of Findings was raised by the Commissioner as evidence of the need for stronger enforcement powers. One of the sought-after changes is the power for the Commissioner to be able to issue binding orders.

This application for judicial review, regardless of its success, puts on the record concerns about procedural fairness that will need to be front of mind in any reforms that increase the powers of the Commissioner. As pointed out by former Commissioner Jennifer Stoddart in a short article many years ago, PIPEDA creates an ombuds model in which the Commissioner plays a variety of roles, including promoting and encouraging compliance with the legislation, mediating and attempting early resolution of disputes and investigating and reporting on complaints. Perhaps so as to give a degree of separation between these roles and any binding order of compliance, it is left to the Federal Court to issue orders after a de novo hearing. Regardless of its merits, the Facebook application for judicial review raises important procedural fairness issues even within this soft-compliance model, particularly since the Commissioner took Facebook so publicly to task for not complying with its non-binding recommendations. If PIPEDA were to be amended to include order-making powers, then attention to procedural fairness issues will be even more crucial. Order-making powers might require clearer rules around procedures as well as potentially greater separation of functions within the OPC, or possibly the creation of a separate adjudication body (e.g. a privacy tribunal).

Published in Privacy

Given that we are in the middle of a pandemic, it is easy to miss the amendments to Ontario’s Personal Health Information Protection Act (PHIPA) and the Freedom of Information and Protection of Privacy Act (FIPPA) that were part of the omnibus Economic and Fiscal Update Act, 2020 (Bill 188) which whipped through the legislature and received Royal Assent on March 25, 2020.

There is much that is interesting in these amendments. The government is clearly on a mission to adapt PHIPA to the digital age, and many of the new provisions are designed to do just that. For example, although many health information custodians already do this as a best practice, a new provision in the law (not yet in force) will require health information custodians that use digital means to manage health information to maintain an electronic audit log. Such a log must detail the identity of anyone who deals with the information, as well as the date and time of any access or handling of the personal information. The Commissioner may request a custodian to provide him with the log for audit or review. Clearly this is a measure designed to improve accountability for the handling of digital health information and to discourage snooping (which is also further discouraged by an increase in the possible fine for snooping found later in the bill).
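As a rough illustration of what such an audit log captures, here is a minimal sketch; the field names and file format are my assumptions, not the statutory requirements.

```python
import csv
from datetime import datetime, timezone

AUDIT_LOG = "phipa_audit_log.csv"  # illustrative file name

def log_access(user_id: str, record_id: str, action: str) -> None:
    """Append one row for every dealing with an electronic health record."""
    with open(AUDIT_LOG, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),  # date and time of the access
            user_id,    # identity of the person dealing with the information
            record_id,  # which record was touched
            action,     # e.g. "view", "update", "export"
        ])

# Example: a clinician viewing a patient chart.
log_access("clinician-042", "patient-137", "view")
```

A log in this shape is what lets the Commissioner audit who accessed what, and when, which is precisely what makes snooping detectable.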

The amendments will also create new obligations for “consumer electronic service providers”. These companies offer services to individuals to help manage their personal health information. The substance of the obligations remains to be further fleshed out in regulations; the obligations will not take effect until the regulations are in place. The Commissioner will have a new power to order that a health information custodian or class of custodians cease providing personal health information to a consumer electronic service provider. Presumably this will occur in cases where there are concerns about the privacy practices of the provider.

Interestingly, at a time when there is much clamor for the federal Privacy Commissioner to have new enforcement powers to better protect personal information, the PHIPA amendments give the provincial Commissioner the power to levy administrative penalties against “any person” who, in the opinion of the Commissioner, has contravened the Act or its regulations. The administrative penalties are meant either to serve as ‘encouragement’ to comply with the Act, or as a means of “preventing a person from deriving, directly or indirectly, any economic benefit as a result of contravention” of PHIPA. The amount of the penalty should reflect these purposes and must be in accordance with regulations. The amendments also set a two-year limitation period from the date of the most recent contravention for the imposition of administrative penalties. In order to avoid the appearance of a conflict of interest, administrative penalties are paid to the Minister of Finance of the province. These provisions await the enactment of regulations before taking effect.

The de-identification of personal information is a strategy relied upon to carry out research without adversely impacting privacy, but the power of data analytics today raises serious concerns about re-identification risk. It is worth noting that the definition of “de-identify” in PHIPA will be amended, pending the enactment of regulations, so that it can require the removal of any information “in accordance with such requirements as may be prescribed.” The requirements for de-identification will thus be made more adaptable to changes in technology.

The above discussion reflects some of the PHIPA amendments; readers should be aware that there are others, and these can be found in Bill 188. Some take effect immediately; others await the enactment of regulations.

I turn now to the amendments to FIPPA, which is Ontario’s public sector data protection law. To understand these amendments, it is necessary to know that the last set of FIPPA amendments (also pushed through in an omnibus bill) created and empowered “inter-ministerial data integration units”. This was done to facilitate inter-department data sharing with a view to enabling a greater sharing of personal information across the government (as opposed to the more siloed practices of the past). The idea was to allow the government to derive more insights from its data by enabling horizontal sharing, while still protecting privacy.

These new amendments add to the mix the “extra-ministerial data integration unit”, which is defined in the law as “a person or entity, or an administrative division of a person or entity, that is designated as an extra-ministerial data integration unit in the regulations”. The amendments also give to these extra-ministerial data integration units many of the same powers to collect and use data as are available to inter-ministerial data integration units. Notably, however, an extra-ministerial data integration unit, according to its definition, need not be a public-sector body. It could be a person, a non-profit, or even a private sector organization. It must be designated in the regulations, but it is important to note the potential scope. These legislative changes appear to pave the way for new models of data governance in smart city and other contexts.

The Institute for Clinical Evaluative Sciences (ICES) is an Ontario-based independent non-profit organization that has operated as a kind of data trust for health information in Ontario. It is a “prescribed entity” under s. 45 of PHIPA which has allowed it to collect “personal health information for the purpose of analysis or compiling statistical information with respect to the management of, evaluation or monitoring of, the allocation of resources to or planning for all or part of the health system, including the delivery of services.” It is a trusted institution which has been limited in its ability to expand its data analytics to integrate other relevant data by public sector data protection laws. In many ways, these amendments to FIPPA are aimed at better enabling ICES to expand its functions, and it is anticipated that ICES will be designated in the regulations. However, the amendments are cast broadly enough that there is room to designate other entities, enabling the sharing of municipal and provincial data with newly designated entities for the purposes set out in FIPPA, which include: “(a) the management or allocation of resources; (b) the planning for the delivery of programs and services provided or funded by the Government of Ontario, including services provided or funded in whole or in part or directly or indirectly; and (c) the evaluation of those programs and services.” The scope for new models of governance for public sector data is thus expanded.

Both sets of amendments – to FIPPA and to PHIPA – are therefore interesting and significant. They are also buried in an omnibus bill. Last year, the Ontario government launched a Data Strategy Consultation that I have criticized elsewhere for being both rushed and short on detail. The Task Force was meant to report by the end of 2019; not surprisingly, given the unrealistic timelines, it has not yet reported. It is not even clear that a report is still contemplated.

While it is true that technology is evolving rapidly and that there is an urgent need to develop a data strategy, the continued lack of transparency and the failure to communicate clearly about steps already underway is profoundly disappointing. One of the pillars of the data strategy was meant to be privacy and trust. Yet we have already seen two rounds of amendments to the province’s privacy laws pushed through in omnibus bills with little or no explanation. Many of these changes would be difficult for the lay person to understand or contextualize without assistance; some are frankly almost impenetrable. Ontario may have a data strategy. It might even be a good one. However, it seems to be one that can only be discovered or understood by searching for clues in omnibus bills. I realize that we are currently in a period of crisis and resources may be needed elsewhere at the moment, but this obscurity predates the pandemic. Transparent communication is a cornerstone of trust. It would be good to have a bit more of it.

Published in Privacy

The COVID-19 pandemic has sparked considerable debate and discussion about the role of data in managing the crisis. Much of the discussion has centred around personal data, and in these discussions the balance between privacy rights and the broader public interest is often a focus of debate. Invoking the general ratcheting up of surveillance after 9-11, privacy advocates warn of the potential for privacy invasive emergency measures to further undermine individual privacy even after the crisis is over.

This post will focus on the potential for government use of data in the hands of private sector companies. There are already numerous examples of where this has taken place or where it is proposed. The nature and intensity of the privacy issues raised by these uses depends very much on context. For the purposes of this discussion, I have identified three categories of proposed uses of private sector data by the public sector. (Note: My colleague Michael Geist has also written about 3 categories of data – his are slightly different).

The first category involves the use of private sector data to mine it for knowledge or insights. For example, researchers and public health agencies have already experimented with using social media data to detect the presence or spread of disease. Some of this research is carried out on publicly accessible social media data and the identity of specific individuals is not necessary to the research, although geolocation generally is. Many private sector companies sit on a wealth of data that reveals the location and movements of individuals, and this could provide a rich source of data when combined with public health data. Although much could be done with aggregate and deidentified data in this context, privacy is still an issue. One concern is the potential for re-identification. Yet the full nature and scope of concerns could be highly case-specific and would depend upon what data is used, in what form, and with what other data it is combined.

Government might, or might not, be the lead actor when it comes to the use of private sector data in this way. Private sector companies could produce analytics based on their own stores of data. They might do so for a variety of reasons, including experimentation with analytics or AI, a desire to contribute to solutions, or to provide analytics services to public and private sector actors. There is also the potential for public-private collaborations around data.

Private sector companies acting on their own would most likely publish only aggregate or deidentified data, possibly in the form of visualizations. If the published information is not personal information, this type of dissemination is possible, although these companies would need to be attentive to reidentification risks.

In cases where personal data is shared with the public sector, there might be other legal options. The Personal Information Protection and Electronic Documents Act (PIPEDA) contains a research exception that allows organizations to disclose information without consent “for statistical, or scholarly study or research, purposes that cannot be achieved without disclosing the information, [and] it is impracticable to obtain consent”. Such disclosure under s. 7(3)(f) requires that the organization inform the Commissioner in advance of any such disclosure, presumably to allow the Commissioner to weigh in on the legitimacy of what is proposed. The passage of a specific law, most likely on an emergency basis, could also enable disclosure of personal information without consent. Such an option would be most likely to be pursued where the government seeks to compel private sector companies to disclose information to them. Ideally, any such law would set clear parameters on the use and disposal of such data, and could put strict time limits on data sharing to coincide with the state of emergency. A specific law could also provide for oversight and accountability.

The second category is where information is sought by governments in order to specifically identify and track individuals so that authorities can take certain actions with respect to those individuals. An example is where cell phone location data of individuals who have been diagnosed with the disease is sought by government officials so that they can retrospectively track their movements to identify where infected persons have been and with whom they have had contact (contact-tracing). This might be done in order to inform the public of places and times where infected persons have been (without revealing the identity of the infected person), or it might be done to send messages directly to people who were in the vicinity of the infected person to notify them of their own possible infection. In such cases, authorities access and make use of the data of the infected person as well as the data of persons in proximity to them. Such data could also be used to track the movements of infected persons in order to see if they are complying with quarantine requirements. For example, public authorities could combine data from border crossings post-spring break with cell phone data to see if those individuals are complying with directives to self-quarantine for 14 days.

The use of private sector data in this way could be problematic under existing Canadian privacy law. Telcos are subject to PIPEDA, which does not contain an exception to the requirement for consent that would be an easy fit in these circumstances. However, PIPEDA does permit disclosure without consent where it is ‘required by law’. A special law, specific to the crisis, could be enacted to facilitate this sort of data sharing. Any such law should also contain its own checks and balances to ensure that data collection and use is appropriate and proportional.

Israel provides an example of a country that enacted regulations to allow the use of cell phone data to track individuals diagnosed with COVID-19. A podcast on this issue by Michael Geist featuring an interview with Israeli law professor Michael Birnhack exposes some of the challenges with this sort of measure. In a decision issued shortly after the recording of the podcast, the Israeli Supreme Court ruled that the regulations failed to meet the appropriate balance between privacy and the demands of the public health crisis. The case makes it clear that it is necessary to find an appropriate balance between what is needed to address a crisis and what best ensures respect for privacy and civil liberties. It is not an all or nothing proposition – privacy or public health. It is a question of balance, transparency, accountability and proportionality.

It is interesting to note that in this context, at least one country has asked individuals to voluntarily share their location and contact information. Singapore has developed an app called TraceTogether that uses Bluetooth signals to identify the phones of other app users that are within two metres of each user. The design of the app includes privacy protective measures. Sharing personal data with appropriate consent is easily permitted under public and private sector laws so long as appropriate safeguards are in place.
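To illustrate just the proximity step (the constants below are assumptions; real deployments calibrate per handset model, and TraceTogether’s actual protocol involves much more), a log-distance path-loss model can turn a Bluetooth signal-strength reading into a rough distance estimate:

```python
# Illustrative constants: the RSSI at one metre and the path-loss exponent
# vary by device and environment and must be calibrated in practice.
MEASURED_POWER_DBM = -59.0
PATH_LOSS_EXPONENT = 2.0
PROXIMITY_THRESHOLD_M = 2.0

def estimate_distance_m(rssi_dbm: float) -> float:
    """Rough distance estimate from a single Bluetooth RSSI reading."""
    return 10 ** ((MEASURED_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))

def is_close_contact(rssi_dbm: float) -> bool:
    return estimate_distance_m(rssi_dbm) <= PROXIMITY_THRESHOLD_M

# Example: a -65 dBm reading maps to roughly 2 m under these assumptions.
print(round(estimate_distance_m(-65.0), 2), is_close_contact(-65.0))
```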

A third category of use of personal information involves the public sharing of information about the movements of individuals known to be infected with the virus. Ostensibly this is in order to give people information they may need to protect themselves from unwanted exposure. South Korea offers an example of such measures – it has provided highly detailed information about the location and movements of infected persons; the detail provided could lead to identification. Given that, in Canada at least, testing has been limited due to insufficient resources, a decision to release detailed information about those who test positive could serve to stigmatize those persons while giving others a false sense of security. Some have raised concerns that such measures would also discourage individuals from coming forward to be tested or to seek treatment out of concerns over stigmatization. In Canada, the disclosure of specific personal health information of individuals – or information that could lead to their identification – is an extreme measure that breaches basic personal health information protection requirements. It is hard to see on what basis the public release of this type of information could be at all proportionate.

A common theme in all of the debates and discussions around data and privacy in the current context is that exceptional circumstances call for exceptional measures. The COVID-19 pandemic has spurred national and regional governments to declare states of emergency. These governments have imposed a broad range of limitations on citizen activities in a bid to stop the spread of the virus. The crisis is real, and the costs to human life, health and the economy are potentially devastating. Sadly, it is also the case that while many do their best to comply with restrictions, others flout them to greater or lesser extents, undermining the safety of everyone. In this context, it is not surprising that more drastic, less voluntary measures are contemplated, and that some of these will have important implications for privacy and civil liberties. Privacy and civil liberties, however, are crucially important values and should not be casual victims of pandemic panic. A careful balancing of interests can be reflected not just in the measures involving the collection and use of data, but also in issues of oversight, transparency, accountability, and, perhaps most importantly, in limits on the duration of collection and use.

Published in Privacy

Clearview AI and its controversial facial recognition technology have been making headlines for weeks now. In Canada, the company is under joint investigation by federal and provincial privacy commissioners. The RCMP is being investigated by the federal Privacy Commissioner after having admitted to using Clearview AI. The Ontario privacy commissioner has expressed serious concerns about reports of Ontario police services adopting the technology. In the meantime, the company is dealing with a reported data breach in which hackers accessed its entire client list.

Clearview AI offers facial recognition technology to ‘law enforcement agencies.’ The term is not defined on their site, and at least one newspaper report suggests that it is defined broadly, with private security (for example university campus police) able to obtain access. Clearview AI scrapes images from publicly accessible websites across the internet and compiles them in a massive database. When a client provides them with an image of a person, they use facial recognition algorithms to match the individual in the image with images in its database. Images in the database are linked to their sources which contain other identifying information (for example, they might link to a Facebook profile page). The use of the service is touted as speeding up all manner of investigations by facilitating the identification of either perpetrators or victims of crimes.
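Mechanically, services of this kind typically reduce each face image to a numeric embedding and then search a database of embeddings, each linked back to its source page. The sketch below shows only that matching step; the embedding model is abstracted away, and the names and threshold are illustrative assumptions rather than anything Clearview has disclosed.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(probe: np.ndarray, database: list, threshold: float = 0.8) -> list:
    """Return source URLs whose stored face embedding resembles the probe.

    `database` holds (embedding, source_url) pairs scraped from the open web;
    the URL is what links a matched face back to identifying information,
    such as a social media profile page.
    """
    return [url for emb, url in database
            if cosine_similarity(probe, emb) >= threshold]

# Toy example with 4-dimensional "embeddings"; real ones have hundreds of dimensions.
db = [(np.array([0.9, 0.1, 0.0, 0.4]), "https://example.com/profile/123")]
print(match(np.array([0.88, 0.12, 0.01, 0.41]), db))
```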

This post addresses a number of different issues raised by the Clearview AI controversy, framed around the two different sets of privacy investigations. The post concludes with additional comments about transparency and accountability.

1. Clearview AI & PIPEDA

Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) applies to the collection, use and disclosure of personal information by private sector organizations engaged in commercial activities. Although Clearview AI is a U.S. company, PIPEDA will still apply if there is a sufficient nexus to Canada. In this case, the service clearly captures data about Canadians, and the facial recognition services are marketed to Canadian law enforcement agencies. This should be enough of a connection.

The federal Privacy Commissioner is joined in his investigation by the Commissioners of Quebec, B.C. and Alberta. Each of these provinces has its own private sector data protection laws that apply to organizations that collect, use and disclose personal information within the borders of their respective province. The joint investigation signals the positive level of collaboration and co-operation that exists between privacy commissioners in Canada. However, as I explain in an earlier post, the relevant laws are structured so that only one statute applies to a particular set of facts. This joint investigation may raise important jurisdictional questions similar to those raised in the Facebook/Cambridge Analytica joint investigation and that were not satisfactorily resolved in that case. It is a minor issue, but nonetheless one that is relevant and interesting from a privacy governance perspective.

The federal Commissioner’s investigation will focus on whether Clearview AI complied with PIPEDA when it collected, used and disclosed the personal information which populates its massive database. Clearview AI’s position on the legality of its actions is clearly based on U.S. law. It states on its website that: “Clearview searches the open web. Clearview does not and cannot search any private or protected info, including in your private social media accounts.” In the U.S., there is much less in the way of privacy protection for information in ‘public’ space. In Canada however, the law is different. Although there is an exception in PIPEDA (and in comparable provincial private sector laws) to the requirement of consent for the collection, use or disclosure of “publicly available information”, this exception is cast in narrow terms. It is certainly not broad enough to encompass information shared by individuals through social media. Interestingly, in hearings into PIPEDA reform, the House of Commons ETHI Committee at one point seemed swayed by industry arguments that PIPEDA should be amended to include websites and social media within the exception for “publicly available personal information”. In an earlier post, I argued that this was a dangerous direction in which to head, and the Clearview AI controversy seems to confirm this. Sharing photographs online for the purposes of social interaction should not be taken as consent to use those images in commercial facial recognition technologies. What is more, the law should not be amended to deem it to be so.

To the extent, then, that the database contains personal information of Canadians that was collected without their knowledge or consent, the conclusion will likely be that there has been a breach of PIPEDA. The further use and disclosure of personal information without consent will also amount to a breach. An appropriate remedy would include ordering Clearview AI to remove all personal information of Canadians that was collected without consent from its database. Unfortunately, the federal Commissioner does not have order-making powers. If the investigation finds a breach of PIPEDA, it will still be necessary to go to Federal Court to ask that court to hold its own hearing, reach its own conclusions, and make an order. This is what is currently taking place in relation to the Facebook/Cambridge Analytica investigation, and it makes somewhat of a mockery of our privacy laws. Stronger enforcement powers are on the agenda for legislative reform of PIPEDA, and it is to be hoped that something will be done about this before too long.

 

2. The Privacy Act investigation

The federal Privacy Commissioner has also launched an investigation into the RCMP’s now admitted use of Clearview AI technology. The results of this investigation should be interesting.

The federal Privacy Act was drafted for an era in which government institutions generally collected the information they needed and used directly from individuals. Governments, in providing all manner of services, would compile significant amounts of data, and public sector privacy laws set the rules for governance of this data. These laws were not written for our emerging context in which government institutions increasingly rely on data analytics and data-fueled AI services provided by the private sector. In the Clearview AI situation, it is not the RCMP that has collected a massive database of images for facial recognition. Nor has the RCMP contracted with a private sector company to build this service for it. Instead, it is using Clearview AI’s services to make presumably ad hoc inquiries, seeking identity information in specific instances. It is not clear whether or how the federal Privacy Act will apply in this context. If the focus is on the RCMP’s ‘collection’ and ‘use’ of personal information, it is arguable that this is confined to the details of each separate query, and not to the use of facial recognition on a large scale. The Privacy Act might simply not be up to addressing how government institutions should interact with these data-fuelled private sector services.

The Privacy Act is, in fact, out of date and clearly acknowledged to be so. The Department of Justice has been working on reforms and has attempted some initial consultation. But the Privacy Act has not received the same level of public and media attention as has PIPEDA. And while we might see reform of PIPEDA in the not too distant future, reform of the Privacy Act may not make it onto the legislative agenda of a minority government. If this is the case, it will leave us with another big governance gap for the digital age.

If the Privacy Act is not to be reformed any time soon, it will be very interesting to see what the Privacy Commissioner’s investigation reveals. The interpretation of section 6(2) of the Privacy Act could be of particular importance. It provides that: “A government institution shall take all reasonable steps to ensure that personal information that is used for an administrative purpose by the institution is as accurate, up-to-date and complete as possible.” In 2018 the Supreme Court of Canada issued a rather interesting decision in Ewert v. Canada, which I wrote about here. The case involved a Métis man’s challenge to the use of actuarial risk-assessment tests by Correctional Services Canada to make decisions related to his incarceration. He argued that the tests were “developed and tested on predominantly non-Indigenous populations and that there was no research confirming that they were valid when applied to Indigenous persons.” (at para 12). The Corrections and Conditional Release Act contained language very similar to s. 6(2) of the Privacy Act. The Supreme Court of Canada ruled that this language placed an onus on the CSC to ensure that all of the data it relied upon in its decision-making about inmates met that standard – including the data generated from the use of the assessment tools. This ruling may have very interesting implications not just for the investigation into the RCMP’s use of Clearview’s technology, but also for public sector use of private sector data-fueled analytics and AI where those tools are based upon personal data. The issue is whether, in this case, the RCMP is responsible for ensuring the accuracy and reliability of the data generated by a private sector AI system on which they rely.

One final note on the use of Clearview AI’s services by the RCMP – and by other police services in Canada. A look at Clearview AI’s website reveals its own defensiveness about its technologies, which it describes as helping “to identify child molesters, murderers, suspected terrorists, and other dangerous people quickly, accurately, and reliably to keep our families and communities safe.” Police service representatives have also responded defensively to media inquiries, and their admissions of use come with very few details. If nothing else, this situation highlights the crucial importance of transparency, oversight and accountability in relation to these technologies that have privacy and human rights implications. Transparency can help to identify and examine concerns, and to ensure that the technologies are accurate, reliable and free from bias. Policies need to be put in place to reflect clear decisions about what crimes or circumstances justify the use of these technologies (and which ones do not). Policies should specify who is authorized to make the decision to use this technology and according to what criteria. There should be record-keeping and an audit trail. Keep in mind that technologies of this kind, if unsupervised, can be used to identify, stalk or harass strangers. It is not hard to imagine someone using this technology to identify a person seen with an ex-spouse, or even to identify an attractive woman seen at a bar. They can also be used to identify peaceful protestors. The potential for misuse is enormous. Transparency, oversight and accountability are essential if these technologies are to be used responsibly. The sheepish and vague admissions of use of Clearview AI technology by Canadian police services are a stark reminder that there is much governance work to be done around such technologies in Canada even beyond privacy law issues.

Published in Privacy

A recent story in iPolitics states that both the Liberals and the Conservatives support strengthening data protection laws in Canada, although it also suggests they may differ as to the best way to do so.

The Liberals have been talking about strengthening Canada’s data protection laws – both the Privacy Act (public sector) and the Personal Information Protection and Electronic Documents Act (PIPEDA) (private sector) since well before the last election, although their emphasis has been on PIPEDA. The mandate letters of both the Ministers of Justice and Industry contained directions to reform privacy laws. As I discuss in a recent post, these mandate letters speak of greater powers for the Privacy Commissioner, as well as some form of “appropriate compensation” for data breaches. There are also hints at a GDPR-style right of erasure, a right to withdraw consent to processing of data, and rights of data portability. With Canada facing a new adequacy assessment under the EU’s General Data Protection Regulation (GDPR) it is perhaps not surprising to see this inclusion of more EU-style rights.

Weirdly, though, the mandate letters of the Minister of Industry and the Minister of Heritage also contain direction to create the new role of “Data Commissioner” to serve an as-yet unclear mandate. The concept of a Data Commissioner comes almost entirely out of the blue. It seems to have been first raised before the ETHI Committee on February 7, 2019 by Dr. Jeffrey Roy of Dalhousie University. He referenced in support of this idea a new Data Commissioner role being created in Australia as well as the existence of a UK Chief Data Officer. How it got from an ETHI Committee transcript to a mandate letter is still a mystery.

If this, in a nutshell, is the Liberals’ plan, it contains the good, the worrisome, and the bizarre. Strengthening PIPEDA – both in terms of actual rights and enforcement of those rights – is a good thing, although the emphasis in the mandate letters seems very oriented towards platforms and other issues that have been in the popular press. This is somewhat worrisome. What is required is a considered and substantive overhaul of the law, not a few colourful and strategically-placed band-aids.

There is no question that the role of the federal Privacy Commissioner is front and centre in this round of reform. There have been widespread calls to increase his authority to permit him to issue fines and to make binding orders. These measures might help address the fundamental weakness of Canada’s private sector data protection laws, but they will require some careful thinking about the drafting of the legislation to ensure that some of the important advisory and dispute resolution roles of the Commissioner’s office are not compromised. And, as we learned with reform of the Access to Information Act, there are order-making powers and then there are order-making powers. It will not be a solution to graft onto the legislation cautious and complicated order-making powers that increase bureaucracy without advancing data protection.

The bizarre comes in the form of the references to a new Data Commissioner. At a time when we clearly have not yet managed to properly empower the Privacy Commissioner, it is disturbing that we might be considering creating a new bureaucracy with apparently overlapping jurisdiction. The mandate letters suggest that the so-called data commissioner would oversee (among other things?) data and platform companies, and would have some sort of data protection role in this regard. His or her role might therefore overlap with both those of the Privacy Commissioner and the Competition Bureau. It is worth noting that the Competition Bureau has already dipped its toe into the waters of data use and abuse. The case for a new bureaucracy is not evident.

The Conservatives seem to be opposed to the creation of the new Data Commissioner, which is a good thing. However, Michelle Rempel Garner was reported by iPolitics as rejecting “setting up pedantic, out of date, ineffectual and bloated government regulatory bodies to enforce data privacy.” It is not clear whether this is simply a rejection of the new Data Commissioner’s office, or also a condemnation of the current regulatory approach to data protection (think baby and bath water). Instead, the Conservatives seem to be proposing creating a new data ownership right for Canadians, placing the economic value of Canadians’ data in their hands.

This is a bad idea for many reasons. In the first place, creating a market model for personal data will do little to protect Canadians. Instead, it will create a context in which there truly is no privacy because the commercial exchange of one’s data for products and services will include a transfer of any data rights. It will also accentuate existing gaps between the wealthy and those less well off. The rich can choose to pay extra for privacy; others will have no choice but to sell their data. Further, the EU, which has seriously studied data ownership rights (and not just for individuals) has walked away from them each time. Data ownership rights are just too complicated. There are too many different interests in data to assign ownership to just one party. If a company uses a proprietary algorithm to profile your preferences for films or books, is this your data which you own, or theirs because they have created it?

What is much more important is the recognition of different interests in data and the strengthening, through law, of the interests of individuals. This is what the GDPR has done. Rights of data portability and erasure, the right to withdraw consent to processing, and many other rights within the GDPR give individuals much stronger interests in their data, along with enforcement tools to protect those interests. Those strengthened interests are now supporting new business models that place consumers at the centre of data decision-making. Open banking (or consumer-directed banking), currently being studied by the Department of Finance in Canada, is an example of this, but there are others as well.

The fix, in the end, is relatively simple. PIPEDA needs to be amended to both strengthen and expand the existing interests of individuals in their personal data. It also needs to be amended to provide for appropriate enforcement, compensation, and fines. Without accountability, the rights will be effectively meaningless. And it needs to happen sooner rather than later.

 

(With thanks to my RA Émilie-Anne Fleury who was able to find the reference to the Data Commissioner in the ETHI Committee transcripts)

Published in Privacy

The year 2020 is likely to bring with it significant legal developments in privacy law in Canada. Perhaps the most important of these at the federal level will come in the form of legislative change. In new Mandate letters, the Prime Minister has charged both the Minister of Justice and the Minister of Innovation Science and Industry with obligations to overhaul public and private sector data protection laws. It is widely anticipated that a new bill to reform the Personal Information Protection and Electronic Documents Act (PIPEDA) will be forthcoming this year, and amendments to the Privacy Act are also expected at some point.

The mandate letters are interesting in what they both do and do not reveal about changes to come in these areas. In the first place, both mandate letters contain identical wording around privacy issues. Their respective letters require the two Ministers to work with each other:

. . . to advance Canada’s Digital Charter and enhanced powers for the Privacy Commissioner, in order to establish a new set of online rights, including: data portability; the ability to withdraw, remove and erase basic personal data from a platform; the knowledge of how personal data is being used, including with a national advertising registry and the ability to withdraw consent for the sharing or sale of data; the ability to review and challenge the amount of personal data that a company or government has collected; proactive data security requirements; the ability to be informed when personal data is breached with appropriate compensation; and the ability to be free from online discrimination including bias and harassment. [my emphasis]

A first thing to note is that the letters reference GDPR-style rights in the form of data portability and the right of erasure. If implemented, these should give individuals considerably more control over their personal information and will strengthen individual interests in their own data. It will be interesting to see what form these rights take. A sophisticated version of data portability has been contemplated in the context of open banking, and a recent announcement makes it clear that work on open banking is ongoing (even though open banking is notably absent from the mandate letter of the Minister of Finance). GDPR-style portability is a start, though it is much less potent as a means of empowering individuals.

The right of erasure is oddly framed. The letters describe it as “the ability to withdraw, remove and erase basic personal data from a platform” (my emphasis). It is unclear why the right of erasure would be limited to basic information on platforms. Individuals should have the right to withdraw, remove and erase personal data from all organizations that have collected it, so long as that erasure is not inconsistent with the purposes for which it was provided and for which it is still required.

Enhancements to rights of notice and new rights to challenge the extent of data collection and retention will be interesting reforms. The references to “appropriate compensation” suggest that the government is attuned to well-publicized concerns that the consequences of PIPEDA breaches are an insufficient incentive to improve privacy practices. Yet it is unclear what form such compensation will take and what procedures will be in place for individuals to pursue it. It is not evident, for example, whether compensation will only be available for data security breaches, or whether it will extend to breaches of other PIPEDA obligations. It is unclear whether the right to adequate compensation will also apply to breaches of the Privacy Act. The letters are mum as to whether it will involve statutory damages linked to a private right of action, or some other form of compensation fund. It is interesting to note that although the government has talked about new powers for the Commissioner including the ability to levy significant fines, these do not appear in the mandate letters.

Perhaps the most surprising feature of the Minister of Industry’s mandate letter is the direction to work with the Minister of Canadian Heritage to “create new regulations for large digital companies to better protect people’s personal data and encourage greater competition in the digital marketplace.” This suggests that new privacy obligations that are sector-specific and separate from PIPEDA are contemplated for “large digital companies”, whatever that might mean. These rules are to be overseen by a brand new Data Commissioner. Undoubtedly, this will raise interesting issues regarding duplication of resources, as well as divided jurisdiction and potentially different approaches to privacy depending on whether an organization is large or small, digital or otherwise.

Published in Privacy

Class action lawsuits for privacy breaches are becoming all the rage in Canada – this is perhaps unsurprising given the growing number of data breaches. However, a proceeding certified and settled in October 2019 stands out as significantly different from the majority of Canadian privacy class action suits.

Most privacy class action lawsuits involve data breaches. Essentially, an entity trusted with the personal information of large numbers of individuals is sued because it lost data stored on an unsecured device, a rogue employee absconded with the data or repurposed it, a hacker circumvented its security measures, or it simply allowed information to be improperly disclosed due to lax practices or other failings. In each of these scenarios, the common factor is a data breach and improper disclosure of personal information. Haikola v. Personal Insurance Co. is notably different. In Haikola, the alleged misconduct is the overcollection of personal information in breach of the Personal Information Protection and Electronic Documents Act (PIPEDA).

The legal issues in this case arose after the representative class plaintiff, Mr. Haikola, was involved in a car accident. In settling his claim, his insurance company asked him to consent to giving it access to his credit score held by a credit reporting agency. Mr. Haikola agreed, although he felt that he had had no choice but to do so. He followed up with the insurance company on several occasions, seeking more information about why the information was required, but did not receive a satisfactory explanation. He filed a complaint with the Office of the Privacy Commissioner. The subsequent investigation led to a Report of Findings that concluded, in the words of Justice Glustein, that the insurance company’s “collection and use of credit scores during the auto insurance claim assessment process is not something that a reasonable person would consider to be appropriate.” (at para 13) The company eventually changed its practices.

Under PIPEDA, the Commissioner’s findings are not binding. Once a complainant has received a Report of Findings, they can choose to bring an application under s. 14 of PIPEDA to the Federal Court for an order and/or an award of damages. After receiving his Report of Findings, Mr. Haikola took the unusual step of seeking to commence a class action lawsuit under s. 14 of PIPEDA. The defendants argued that the Federal Court had no jurisdiction under s. 14 to certify a class action lawsuit. There is no case law on this issue, and it is not at all clear that class action recourse is contemplated under s. 14.

The parties, in the meantime, negotiated a settlement agreement. However, quite apart from the issue of whether a class action suit could be certified under s. 14 of PIPEDA, it was unclear whether the Federal Court could “make an enforceable order in a PIPEDA class action against a non-governmental entity.” (at para 28) With advice from the Federal Court case management judge, the parties agreed that Mr. Haikola would commence an action in Ontario Superior Court, requesting certification of the class action lawsuit and approval of the settlement. The sole cause of action in the suit initiated in Ontario Superior Court was for breach of contract. The argument was that in the contract between the insurance company and its customers, the insurance company undertook to “‘act as required or authorized by law’ in the collection, use, and disclosure of the Class Members’ personal information – including information from credit reporting agencies.” (at para 56) This would include meeting its PIPEDA obligations.

The class included persons whose credit history was used as part of a claim settlement process. The insurance company identified 8,525 people who fell into this category. The settlement provided for a payout of $2,250,000. The court estimated that if every member of the class filed a valid claim, each would receive approximately $150.
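
As a rough check (my arithmetic, not the court’s): dividing the gross fund evenly across the identified class gives $2,250,000 ÷ 8,525 ≈ $264 per person. The court’s lower estimate of roughly $150 presumably reflects deductions from the fund, such as class counsel fees and administration costs, although the breakdown is not reproduced here.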

In considering whether a class action lawsuit was the preferable procedure, Justice Glustein noted that generally, for this type of privacy complaint, the normal recourse was under PIPEDA. The structure of PIPEDA is such that each affected individual would have to file a complaint; the filing of a complaint and the issuance of a report were both prerequisites to commencing an action in Federal Court. Justice Glustein considered this to be a barrier to access to justice, particularly since most individuals would have claims “of only a very modest value”. (at para 66) He found that “The common law claim proposed is preferable to each Class Member making a privacy complaint, waiting for the resolution of the complaint from the Privacy Commissioner with a formal report, and then commencing a Federal Court action.” (at para 67)

Justice Glustein certified the proceedings and approved the settlement agreement. He was certainly aware of the potential weaknesses of the plaintiff’s case – these were factors he took into account in assessing the reasonableness of the amount of the settlement. Not only were there real issues as to whether a class action lawsuit was a possible recourse for breach of PIPEDA, but a proceeding under s. 14 is also de novo, meaning the court would not be bound by the findings of the Privacy Commissioner. Further, the Federal Court has been parsimonious with damages under PIPEDA, awarding them only in the most “egregious” circumstances. It is, in fact, rare for a Federal Court judge to award damages unless there has been an improper disclosure of personal information. In this case, the insurance company was found to have collected too much information, but there had been no breach or loss of personal data.

This case is interesting because it raises the possibility of class action lawsuits being used for privacy complaints other than data security breaches. This should put fear into the heart of any company whose general practices or policies have led them to collect too much personal information, obtain insufficient consent, or retain data for longer than necessary (to name just a few possible shortcomings). Perhaps the facts in Haikola are exceptional enough to avoid a landslide of litigation. Justice Glustein was clearly sympathetic towards a plaintiff who had doggedly pursued his privacy rights in the face of an insufficiently responsive company, and who had been vindicated by the OPC’s Report of Findings. Justice Glustein noted as well that it was the plaintiff who had sought to initiate the class action lawsuit – he had not been recruited by class counsel.

There is clearly also an element in this decision of frustration and dissatisfaction with the current state of Canadian data protection law. Justice Glustein observed: “If systemic PIPEDA breaches are not rectified by a class procedure, it is not clear what incentive large insurers and others will have to avoid overcollection of information.” (at para 88) Justice Glustein also observed that “While the Privacy Commissioner may encourage or require changes to future practices, it [sic] has very limited powers to enforce compliance through strong regulatory penalties.” (at para 88) This is certainly true, and many (including the Privacy Commissioner) have called for greater enforcement powers to strengthen PIPEDA. This comment, taken together with Justice Glustein’s further observation that the settlement imposes on the Defendants a “meaningful business cost” for the overcollection of personal information, is nothing short of a condemnation of Canada’s private sector data protection regime.

The government has heard such condemnations from the Commissioner himself, as well as from many other critics of PIPEDA. It is now hearing them from the courts. Hopefully it is paying attention. This is not just because PIPEDA obligations need stronger and more diverse enforcement options to provide meaningful privacy protection, but also because class action lawsuits are a blunt tool, ill-designed to serve carefully-tailored public policy objectives in this area.

Published in Privacy

The Ontario Energy Board (OEB) has just released a decision that should be of interest to those concerned about data governance for data sharing. The decision relates to an application by Ontario’s Smart Metering Entity (SME) for a licence to begin sharing Ontario’s smart metering data with third parties. The SME was established in Ontario as part of the governance structure for the data collected through government-mandated smart metering for all electricity consumers in the province.

Smart meters in Ontario collect fine-grained electrical consumption data. There are clear privacy interests in this consumption data as a person’s patterns of electrical consumption can reveal much about their activities, habits and preferences. In theory, fine-grained, aggregate, deidentified electrical consumption data can be useful for a broad range of purposes, including feeding the ever-hungry data economy. The SME was charged with governing this data resource in a way that would meet the needs of third parties (researchers, governments, and the private sector) to have access to the data while respecting consumer privacy. In 2019, Merlynda Vilain and I published a paper about the SME and its mandate to govern smart metering data in the public interest.

In its October 24, 2019 decision, the OEB considers an application by the SME seeking approval for its plan to provide access to smart metering data. The SME’s plan is built around three categories of data. The first, labelled “public offers”, consists of “highly aggregated products”, “such as monthly, seasonal or quarterly consumption data aggregated by postal district (i.e. the first digit of the postal code).” (OEB Order, p. 8) This data would be provided free of charge, and subject to unspecified terms and conditions.

The second category of data is “standard private offerings”. This consists of “pre-designed extracts based on popular data requests”. The examples provided include “Hourly or daily consumption data aggregated by 6, 5, 4 or 3 digit Postal Code at the municipal level, specifying the Distributor Rate Class and Commodity Rate Class”, as well as different types of visualizations. This category of data would be made available subject to a Data Use Agreement and at “market prices”.

The third category of data is “custom private offerings”, which are data sets customized to meet the demands of specific clients. These data sets would be subject to a Data Use Agreement and sold at “market price”.

Market price is, of course, different from a fee for cost recovery. The SME in its application indicated that not only would the fees charged cover the costs of producing the data sets, but any profits from the sale of smart metering data would be put towards lowering the Smart Metering Charge. In other words, the sale of data could potentially result in lower energy costs. This is an example of a plan to sell aggregate consumer data with a view to benefitting the class as a whole, although the extent of any benefits is difficult to assess without more information about market pricing and about the privacy risks and implications of the shared data. On the privacy issues, the SME maintained that shared data would be de-identified, although it acknowledged that there was some (unspecified) reidentification risk. It argued that factors mitigating the reidentification risk would include its work with a privacy consultant, compliance with guidance from the Office of the Information and Privacy Commissioner, the use of Data Use Agreements to limit the actions of the party acquiring the data, and the establishment of an Ethics Review Committee.
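
To make the aggregation and reidentification points concrete, here is a minimal sketch of prefix-based aggregation with a small-cell suppression rule. It is purely illustrative: the postal codes, readings and thresholds are invented, and nothing in the OEB Order indicates that the SME would use this particular method.

from collections import defaultdict

# Illustrative records: (postal_code, monthly_kwh). The postal codes
# and readings are invented for this example.
readings = [
    ("K1A0B1", 450.2), ("K1A0B9", 612.0), ("K2P1L4", 388.5),
    ("M5V2T6", 721.9), ("M5V2T7", 540.3), ("H3Z2Y7", 610.1),
]

def aggregate(readings, prefix_len, min_group_size):
    """Sum consumption by postal-code prefix, suppressing small cells.

    prefix_len=1 approximates the 'public offerings' level (postal
    district); longer prefixes approximate the finer-grained private
    offerings. Cells with fewer than min_group_size meters are
    suppressed (returned as None) -- a common, but here purely
    illustrative, de-identification safeguard.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for code, kwh in readings:
        totals[code[:prefix_len]] += kwh
        counts[code[:prefix_len]] += 1
    return {k: (round(v, 1) if counts[k] >= min_group_size else None)
            for k, v in totals.items()}

# Coarse aggregation leaves usable groups; finer prefixes produce
# small cells that the suppression rule blanks out.
print(aggregate(readings, prefix_len=1, min_group_size=2))
# {'K': 1450.7, 'M': 1262.2, 'H': None}
print(aggregate(readings, prefix_len=6, min_group_size=2))
# every cell suppressed: each full postal code here has a single meter

The privacy/utility trade-off sits precisely in the choice of prefix length and minimum group size: coarser prefixes and larger minimum groups reduce reidentification risk, at the cost of less useful data.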

Those involved in data governance for data sharing will recognize in the SME’s proposal some of the key elements and challenges of the data-sharing context. There is a perceived demand for high-value data, an attempt to meet that demand, privacy issues arising because the data is generated by individual activities and consumption, and a need to think about the terms and conditions of sharing, including cost/price. In this case, the data governance entity is a public body that must act under terms set by the regulator (the OEB), and it requires OEB approval of any data sharing plan. In this proceeding, the OEB heard from the SME as well as a number of interveners, including the Building Owners and Managers Association, the Consumers Council of Canada, the Electricity Distributors Association, Ontario Power Generation Inc., and the Vulnerable Energy Consumers Coalition.

The decision of the OEB is interesting for a number of reasons. First, the approach taken is a precautionary one – the OEB sends the SME back to the drawing board over concerns about privacy and about the pricing scheme. In doing so, it appears to have paid some attention to the sometimes heated data governance discussions that have been taking place in Canada.

The OEB began by noting that none of the interveners objected to the first part of the SME plan – to make its “public offerings” category of data available to the public free of charge. In fact, this was the only part of the plan that received OEB approval. The OEB noted that “As these products would comprise highly aggregated data, they do not raise the same concerns about privacy as more tailored products.” It also concluded that the costs associated with preparing and sharing this data were part of the SME’s normal operations.

More problematic were the other categories of data for which sharing was planned. The OEB accepted that customers have a reasonable expectation of privacy, “albeit a ‘significantly attenuated’ one” (at p. 13), in their energy consumption data. The Board also noted that for some commercial customers, the consumption data might be confidential commercial information. The OEB observed that in spite of the fact that the plan was to de-identify the data, there remained some reidentification risk. It stated that “in light of the concerns expressed by stakeholders in this proceeding, the SME should proceed cautiously with third party access”. (at 13-14) The OEB considered that consumers needed to be well-informed about the collection and sharing of their data, and that while the SME has attempted to consult on these issues, “a more comprehensive consumer engagement process should take place.” (at 14) The OEB noted that “it is not clear from the evidence that consumers support the notion that consumption data (even if de-identified) should be offered for sale to third parties.” (at 14)

This approach reflects a shift in position on the part of the OEB. Past discussions of data sharing have regarded this data primarily as a public asset that should be put to use in the public interest. In the case of third party data sharing, this public interest was largely in the stimulation of the economy and innovation. What is different in this OEB Order is a much greater recognition of the importance of individual and collective consent. In its directions to the SME, the OEB asks for more detail on the SME’s consultations with consumers, “a protocol for receiving and dealing with consumer complaints regarding the release of the data” (at 14), a plan for informing consumers about the release of deidentified information to third parties, and approval “of the basic terms of any Data Use Agreement with third parties.” (at 14)

In addition to these concerns about privacy and consultation, the OEB expressed reservations about the SME’s plans to share data at ‘market prices’. Some of the interveners noted that the SME held a monopoly position with respect to smart metering data, and there was therefore no market price for such data. The OEB called for the SME to develop a marketing plan that “should address pricing to ensure reasonably priced access by commercial and non-commercial users.” (at 14)

This decision is important and interesting for a number of reasons. First, it reflects a precautionary, go-slow approach to data sharing that might not have existed before Ontarians lost their data innocence in the debates over plans for Sidewalk Toronto. The OEB’s concerns include reidentification risk, proper consultation, accountability, and the terms and conditions for data sharing. The need to adequately and appropriately consult those individuals whose data is to be shared is an important theme in this decision. Although the SME claims to have made efforts to include consumer perspectives, the OEB is not satisfied that these efforts went far enough.

The decision also lands in the middle of the Ontario government’s data strategy consultation (which I have written about here, here and here). The consultation process – which lacks detail and is moving far too quickly – is clearly geared towards increasing data sharing and leveraging data for economic development and innovation, all while maintaining public ‘trust and confidence’. The Ontario government clearly wants to make some quick changes. Yet what this OEB decision reflects is a need to adopt a precautionary approach and to ensure adequate consultation and public awareness. As frameworks, models and templates are developed, things can begin to move more quickly – but there is real value in getting things right from the outset.

Published in Privacy

Ontario is currently engaged in a data strategy consultation process. The stated goals are to create economic opportunities and to improve government services by facilitating greater data sharing and by using more analytics and artificial intelligence. The plan is to do this while maintaining the ‘trust and confidence’ of Ontarians. The consultation process has had an extraordinarily low profile considering what is at stake. Moreover, it is happening so quickly that it is easy to miss. Even for those paying attention, the consultation is long on boosterism and short on detail. This post outlines some reasons why Ontarians should be concerned.

1. Major transformation without proper debate/consultation

Developing a data strategy is a good idea. Data-driven innovations are dramatically changing our economy and society. There are many ways for government to become more effective and efficient by embracing new technologies. It could also become more transparent and find new ways to engage citizens. To do these things, some changes to the law and policy infrastructure will be necessary. Businesses seeking to innovate and grow in the digital and data economy will need better access to quality data and, among other things, new models for data governance and data sharing. Data-driven technologies also bring with them risks of harm, and these too may need new legislation or normative frameworks. There is a lot to consider; some of the changes will be transformative, and they will rely upon citizen data. These are all good reasons to consult deeply and broadly, both to seek input and to lay the foundations for a transparent public engagement.

The data strategy consultation was announced in February 2019, with a report to be published before the end of the year. The consultation centres around three discussion papers, the first of which was only made available in mid-August 2019 and the last of which has just been released, with comments due by the end of November. The public meetings held as part of these consultations are taking place on very short notice. The process is hurried, obscure and fails to properly engage the full range of stakeholders.

2. Superficiality

Quite apart from the rushed nature of the process, the discussion papers are woefully inadequate. They are full of assertions of the benefits of what is planned, with the occasional nod to the importance of privacy and trust. There is little detail about the nature, scope or timelines for what the government plans to do.

The discussion papers give only brief glimpses of things that merit a much more detailed treatment. For example, on the issue of broad-scale data sharing between different government departments and agencies, we are told that “while ‘connecting the dots’ between datasets can help government provide better services, there are privacy and cybersecurity risks to be managed.” ‘Connecting the dots’ can mean all sorts of things. Perhaps data matching will be used to find ways to improve service delivery. Data analytics might also be used to discover or anticipate certain citizen behaviours. This could include identifying patterns that suggest certain individuals are fraudulently obtaining benefits or cheating on taxes, or identifying children potentially at risk. The data-matching possibilities are endless. And while the goals might be important, there are significant risks of harm. Beyond privacy concerns, there are issues of discrimination and undue surveillance. What processes will be in place to ensure transparency and accountability as these programs are developed and implemented? The consultation documents are so general and superficial that they fail to identify, let alone invite engagement on, some of the real challenges posed by the government’s (undisclosed) plans.
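
To see how little technical effort ‘connecting the dots’ requires once datasets share a common identifier, consider the following deliberately simplified sketch. Every field, record and threshold in it is invented – it describes no actual Ontario system or program – but it shows why transparency and oversight of such initiatives matter so much.

# A hypothetical illustration of data matching across two departmental
# datasets. All identifiers, fields and records are invented.
benefits = {
    "ON123": {"name": "A. Tremblay", "benefit": "housing support"},
    "ON456": {"name": "B. Singh", "benefit": "disability support"},
}
tax_filings = {
    "ON123": {"reported_income": 78_000},
    "ON456": {"reported_income": 14_500},
}

# Once records share a common identifier, flagging individuals takes
# only a few lines of code.
INCOME_THRESHOLD = 50_000  # an arbitrary rule, invented for the example
for pid, record in benefits.items():
    income = tax_filings.get(pid, {}).get("reported_income")
    if income is not None and income > INCOME_THRESHOLD:
        print(f"flag for review: {pid} ({record['benefit']})")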

An alarming glimpse of what lies beneath the superficial gloss of these documents is found in the second discussion paper which focuses on “Creating Economic Benefits”. The document talks about the value that can be derived by the private sector from data shared by governments. It then casually states “Given that Ontario has a wealth of data in digital health assets, clinical and administrative health data can also be considered as a high-value dataset that may present various opportunities for Ontario.” This suggests that the government is planning to make the personal health data of Ontarians available to the private sector. As the Privacy Commissioner, in his comment on this aspect of the discussion paper, aptly notes, “It is important to distinguish between the high value of health-related data in terms of utilizing it to foster innovation and research, and its high monetary value (that is, health-related data as a commodity to be sold as a source of revenue for the government). The specific scope of what the government may be contemplating is not clear from the discussion paper.”

3. The Gaps

The government is designing a data strategy but its focus is relatively narrow. Ontario’s Privacy Commissioner has pointed out that there are many other data-related reforms that could enhance government transparency, including open contracting and open procurement, as well as other reforms to improve access to government information. While the Simpler, Faster, Better Services Act introduced reforms around open data, open data is not necessarily the best route to transparency, especially with a government that has indicated it wants to be more strategic about its release of open data and that sees it primarily as a driver of the economy.

4. Social impacts

This consultation process relies far too much on the increasingly tired trope of “trust and confidence”. The first consultation document, a truly abstract exercise in asking people what they think about plans that have neither been discussed nor disclosed, is titled “Promoting Public Trust and Confidence”. Trust and confidence must be earned, not promoted.

Although the first discussion paper identifies a broad range of issues, including bias and discrimination, surveillance, data privacy and security, these are raised largely in the abstract. In the subsequent discussion papers on creating economic benefits and smarter government, the issues are boiled down to individual data privacy and security. There needs to be a detailed, robust and informed discussion on the impacts of proposed technological changes on individuals and communities, as well as on limits, oversight and safeguards.

5. ‘Stakeholders’ and the Rest of Us

Another issue that should concern Ontarians in this consultation is whose voices really matter. The lightning-fast consultation hints at some major changes, many of which are driven by industry demands (such as the massive sharing of personal health data with the private sector). Industry clearly has the ear of government and does not need the consultation process in order to be heard.

The data strategy consultation has been poorly publicized. The discussion papers have been published late in the process, contain little detail, and have narrow windows for providing feedback. The paper on trust and confidence was released in mid-August with comments due just after Labour Day. The timing could hardly be worse for ensuring public engagement. The public meetings around the province are scheduled with very short notice. This consultation favours larger organizations with the resources to throw together a quick response or to find someone who can attend a meeting at short notice. It does not favour the general public, nor does it favour civil society groups and academia with limited resources and personnel.

At the same time that this speedy data consultation is taking place, there are closed-door consultations underway with “stakeholders” about reforms to the Personal Health Information Protection Act. While there is no doubt that much could be done to modernize this legislation, the fact that it is taking place behind the scenes of the superficial data strategy consultation is deeply troubling. There is also, reportedly, work being done on an ethical AI strategy for the government. Not only is this not part of any public consultation process, it is only hinted at in the third discussion paper.

It is also profoundly disturbing that institutions that serve the public interest, such as the Office of the Information and Privacy Commissioner, so clearly do not have the ear of government. The Privacy Commissioner’s input has been reduced to letters written in response to the discussion papers. These letters politely invite the government to seek out his expertise on issues that are squarely within his mandate.

Proposed massive technological change, ‘trust us’ assurances about privacy that fall short of the mark, and a disregard for early and inclusive consultations are a recipe for disaster. People are not data cows to be milked by government and industry, and acknowledged with only a pat on the rump and a vague assurance that they will be well looked after. The data strategy must serve all Ontarians and must be built on a foundation of credible and meaningful public engagement. As the Sidewalk Toronto process has demonstrated, people do care, the private sector doesn’t have all the answers, and transformative change needs social legitimacy.

Published in Privacy