Teresa Scassa - Blog

On June 13, 2018, the Supreme Court of Canada handed down a decision that may have implications for how issues of bias in algorithmic decision-making in Canada will be dealt with. Ewert v. Canada is the result of an eighteen-year struggle by Mr. Ewert, a federal inmate and Métis man, to challenge the use of certain actuarial risk-assessment tools to make decisions about his carceral needs and about his risk of recidivism. His concerns, raised in his initial grievance in 2000, have been that these tools were “developed and tested on predominantly non-Indigenous populations and that there was no research confirming that they were valid when applied to Indigenous persons.” (at para 12) After his grievances went nowhere, he eventually sought a declaration in Federal Court that the tests breached his rights to equality and to due process under the Canadian Charter of Rights and Freedoms, and that they were also a breach of the Corrections and Conditional Release Act (CCRA), which requires the Correctional Service of Canada (CSC) to “take all reasonable steps to ensure that any information about an offender that it uses is as accurate, up to date and complete as possible.” (s. 24(1)) Although the Charter arguments were unsuccessful, the majority of the Supreme Court of Canada agreed with the trial judge that the CSC had breached its obligations under the CCRA. Two justices in dissent agreed with the Federal Court of Appeal that neither the Charter nor the CCRA had been breached.

Although this is not explicitly a decision about ‘algorithmic decision-making’ as the term is used in the big data and artificial intelligence (AI) contexts, the basic elements are present. An assessment tool developed and tested using a significant volume of data is used to generate predictive data to aid in decision-making in individual cases. The case also highlights a common concern in the algorithmic decision-making context: that either the data used to develop and train the algorithm, or the assumptions coded into the algorithm, create biases that can lead to inaccurate predictions about individuals who fall outside the dominant group that has influenced the data and the assumptions.
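To make that concern concrete, here is a minimal, purely illustrative sketch in Python using synthetic data. It has nothing to do with the actual instruments at issue in Ewert; the group labels, items and numbers are invented. It simply shows how a risk score fitted on a development sample dominated by one group can exhibit weaker predictive validity for an underrepresented group whose outcomes relate differently to the same inputs:

```python
# Purely illustrative sketch with synthetic data (not the tools at issue in Ewert):
# a risk model fitted on a sample dominated by Group A can show weaker
# predictive validity for an underrepresented Group B whose outcomes relate
# differently to the same assessment items.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def simulate(n, weights):
    """Synthetic records: three assessment items and a binary outcome."""
    X = rng.normal(size=(n, 3))
    p = 1 / (1 + np.exp(-(X @ weights)))   # true outcome probability
    y = rng.binomial(1, p)
    return X, y

# Group A dominates the development sample; for Group B the outcome depends
# on the items differently (the hypothetical 'cross-cultural variance').
X_a, y_a = simulate(5000, np.array([1.2, 0.8, 0.1]))
X_b, y_b = simulate(300, np.array([0.2, 0.1, 1.5]))

model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Predictive validity (here, AUC) measured separately for each group:
# the score discriminates well for Group A and much less well for Group B.
print("AUC, Group A:", roc_auc_score(y_a, model.predict_proba(X_a)[:, 1]))
print("AUC, Group B:", roc_auc_score(y_b, model.predict_proba(X_b)[:, 1]))
```

The point of the sketch is only that a tool can look well validated overall while performing much less well for a subgroup that is poorly represented in, or differently situated from, the data used to build it.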

As such, my analysis is not about the particular circumstances of Mr. Ewert, nor is it about the impact of the judgement within the correctional system in Canada. Instead, I parse the decision to see what it reveals about how courts might approach issues of bias in algorithmic decision-making, and what impact the decision may have in this emerging context.

1. ‘Information’ and ‘accuracy’

A central feature of the decision of the majority in Ewert is its interpretation of s. 24(1) of the CCRA. To repeat the wording of this section, it provides that “The Service shall take all reasonable steps to ensure that any information about an offender that it uses is as accurate, up to date and complete as possible.” [My emphasis] In order to conclude that this provision was breached, it was necessary for the majority to find that Mr. Ewert’s test results were “information” within the meaning of this section, and that the CSC had not taken all reasonable steps to ensure its accuracy.

The dissenting justices took the view that when s. 24(1) referred to “information” and to the requirement to ensure its accuracy, the statute included only the kind of personal information collected from inmates, information about the offence committed, and a range of other information specified in s. 23 of the Act. The dissenting justices preferred the view of the CSC that “information” meant ““primary facts” and not “inferences or assessments drawn by the Service”” (at para 107). The majority disagreed. It found that when Parliament intended to refer to specific information in the CCRA it did so. When it used the term “information” in an unqualified way, as it did in s. 24(1), it had a much broader meaning. Thus, according to the majority, “the knowledge the CSC might derive from the impugned tools – for example, that an offender has a personality disorder or that there is a high risk that an offender will violently reoffend – is “information” about that offender” (at para 33). This interpretation of “information” is an important one. According to the majority, profiles and predictions applied to a person are “information” about that individual.

In this case, the Crown had argued that s. 24(1) should not apply to the predictive results of the assessment tools because it imposed an obligation to ensure that “information” is “as accurate” as possible. It argued that the term “accurate” was not appropriate to the predictive data generated by the tools. Rather, the tools “may have “different levels of predictive validity, in the sense that they predict poorly, moderately well or strongly””. (at para 43) The dissenting justices were clearly influenced by this argument, finding that: “a psychological test can be more or less valid or reliable, but it cannot properly be described as being “accurate” or “inaccurate”.” (at para 115) According to the dissent, all that was required was that accurate records of an inmate’s test scores must be maintained – not that the tests themselves must be accurate. The majority disagreed. In its view, the concept of accuracy could be adapted to different types of information. When applied to psychological assessment tools, “the CSC must take steps to ensure that it relies on test scores that predict risks strongly rather than those that do so poorly.” (at para 43)

It is worth noting that the Crown also argued that the assessment tools were important in decision-making because “the information derived from them is objective and thus mitigates against bias in subjective clinical assessments” (at para 41). While the underlying point is that the tools might produce more objective assessments than individual psychologists who might bring their own biases to an assessment process, the use of the term “objective” to describe the output is troubling. If the tools incorporate biases, or are not appropriately sensitive to cultural differences, then the output is ‘objective’ in only a very narrow sense of the word, and the use of the word masks underlying issues of bias. Interestingly, the majority took the view that if the tools are considered useful “because the information derived from them can be scientifically validated. . . this is all the more reason to conclude that s. 24(1) imposes an obligation on the CSC to take reasonable steps to ensure that the information is accurate.” (at para 41)

It should be noted that while this discussion all revolves around the particular wording of the CCRA, Principle 4.6 of Schedule I of the Personal Information Protection and Electronic Documents Act (PIPEDA) contains the obligation that: “Personal information shall be as accurate, complete, and up-to-date as is necessary for the purposes for which it is to be used.” Further, s. 6(2) of the Privacy Act provides that: “A government institution shall take all reasonable steps to ensure that personal information that is used for an administrative purpose by the institution is as accurate, up-to-date and complete as possible.” A similar interpretation of “information” and “accuracy” in these statutes could be very helpful in addressing issues of bias in algorithmic decision-making more broadly.

2. Reasonable steps to ensure accuracy

According to the majority, “[t]he question is not whether the CSC relied on inaccurate information, but whether it took all reasonable steps to ensure that it did not.” (at para 47) This distinction is important – it means that Mr. Ewert did not have to show that his actual test scores were inaccurate, something that would be quite burdensome for him to do. According to the majority, “[s]howing that the CSC failed to take all reasonable steps in this respect may, as a practical matter, require showing that there was some reason for the CSC to doubt the accuracy of information in its possession about an offender.” (at para 47, my emphasis) The majority noted that the trial judge had found that “the CSC had long been aware of concerns regarding the possibility of psychological and actuarial tools exhibiting cultural bias.” (at para 49) These concerns had led to research being carried out in other jurisdictions about the validity of the tools when used to assess certain other cultural minority groups. The majority also noted that the CSC had carried out research “into the validity of certain actuarial tools other than the impugned tools when applied to Indigenous offenders” (at para 49) and that this research had led to those tools no longer being used. However, in this case, in spite of the concerns, the CSC had taken no steps to assess the validity of the tools, and it continued to apply them to Indigenous offenders.

The majority noted that the CCRA, which sets out guiding principles in s. 4, specifically requires correctional policies and practices to respect cultural, linguistic and other differences and to take into account “the special needs of women, aboriginal peoples, persons requiring mental health care and other groups” (s. 4(g)). The majority found that this principle “represents an acknowledgement of the systemic discrimination faced by Indigenous persons in the Canadian correctional system.” (at para 53) As a result, it found it incumbent on the CSC to give “meaningful effect” to this principle “in performing all of its functions”. In particular, the majority found that “this provision requires the CSC to ensure that its practices, however neutral they may appear to be, do not discriminate against Indigenous persons.” (at para 54)

The majority observed that although it has been 25 years since this principle was added to the legislation, “there is nothing to suggest that the situation has improved in the realm of corrections” (at para 60). It expressed dismay that “the gap between Indigenous and non-Indigenous offenders has continued to widen on nearly every indicator of correctional performance”. (at para 60) It noted that “Although many factors contributing to the broader issue of Indigenous over-incarceration and alienation from the criminal justice system are beyond the CSC’s control, there are many matters within its control that could mitigate these pressing societal problems. . . Taking reasonable steps to ensure that the CSC uses assessment tools that are free of cultural bias would be one.” (at para 61) [my emphasis]

According to the majority of the Court, therefore, what is required by s. 24(1) of the CCRA is for the CSC to carry out research into whether and to what extent the assessment tools it uses “are subject to cross-cultural variance when applied to Indigenous offenders.” (at para 67) Any further action would depend on the results of the research.

What is interesting here is that the onus is placed on the CSC (influenced by the guiding principles in the CCRA) to take positive steps to verify the validity of the assessment tools on which it relies. The Court does not specify who is meant to carry out the research in question, what standards it should meet, or how extensive it should be. These are important issues. It should be noted that discussions of algorithmic bias often consider solutions involving independent third-party assessment of the algorithms or the data used to develop them.
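By way of illustration only, the minimum such validation research would seem to require is an estimate of a tool's predictive validity computed separately for each group, together with some measure of uncertainty, so that one can ask whether any apparent gap exceeds what sampling noise alone would explain. The sketch below is one crude, hypothetical version of that question (synthetic data, invented group labels, a simple bootstrap); real research on these instruments would involve far more careful study design and standards than this:

```python
# Illustrative only: estimate a tool's predictive validity (AUC) separately for
# two hypothetical groups, with bootstrap confidence intervals, to ask whether
# the apparent gap exceeds what sampling noise alone would explain.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def simulate_group(n, strength):
    """Hypothetical tool scores and binary outcomes; 'strength' controls how
    well the score actually predicts the outcome for that group."""
    scores = rng.normal(size=n)
    p = 1 / (1 + np.exp(-strength * scores))
    outcomes = rng.binomial(1, p)
    return scores, outcomes

def bootstrap_auc(scores, outcomes, n_boot=2000):
    """Bootstrap distribution of the AUC for one group."""
    n = len(scores)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if outcomes[idx].min() == outcomes[idx].max():
            continue  # resample drew a single outcome class; skip it
        aucs.append(roc_auc_score(outcomes[idx], scores[idx]))
    return np.array(aucs)

# Group A: large validation sample, score tracks outcomes well.
# Group B: small sample, much weaker relationship.
scores_a, y_a = simulate_group(2000, 1.5)
scores_b, y_b = simulate_group(250, 0.3)

for name, s, y in (("Group A", scores_a, y_a), ("Group B", scores_b, y_b)):
    aucs = bootstrap_auc(s, y)
    lo, hi = np.percentile(aucs, [2.5, 97.5])
    print(f"{name}: AUC = {roc_auc_score(y, s):.2f} (95% bootstrap CI {lo:.2f}-{hi:.2f})")
```

Even this toy example surfaces the questions the Court leaves open: who runs the analysis, on what data, against what threshold of acceptable variance, and with what consequences if validity is found wanting.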

3. The Charter arguments

Two Charter arguments were raised by counsel for Mr. Ewert. The first was a s. 7 due process argument. Counsel for Mr. Ewert argued that reliance on the assessment tools violated his right to liberty and security of the person in a manner that was not in accordance with the principles of fundamental justice. The tools were argued to fall short of the principles of fundamental justice because of their arbitrariness (lacking any rational connection to the government objective) and overbreadth. The court was unanimous in finding that reliance on the tools was not arbitrary, stating that “The finding that there is uncertainty about the extent to which the tests are accurate when applied to Indigenous offenders is not sufficient to establish that there is no rational connection between reliance on the tests and the relevant government objective.” (at para 73) Without further research, the extent and impact of any cultural bias could not be known.

Mr. Ewert also argued that the results of the use of the tools infringed his right to equality under s. 15 of the Charter. The Court gave little time or attention to this argument, finding that there was not enough evidence to show that the tools had a disproportionate impact on Indigenous inmates when compared to non-Indigenous inmates.

The Charter is part of the Constitution and applies only to government action. There are many instances in which governments may come to rely upon algorithmic decision-making. While concerns might be raised about bias and discriminatory impacts from these processes, this case demonstrates the challenge faced by those who would raise such arguments. The decision in Ewert suggests that in order to establish discrimination, it will be necessary either to demonstrate discriminatory impacts or effects, or to show how the algorithm itself and/or the data used to develop it incorporate biases or discriminatory assumptions. Establishing any of these things will impose a significant evidentiary burden on the party raising the issue of discrimination. Even where the Charter does not apply and individuals must rely upon human rights legislation, establishing discrimination involving complex (and likely inaccessible or non-transparent) algorithms and data will be highly burdensome.

Concluding thoughts

This case raises important and interesting issues that are relevant to algorithmic decision-making of all kinds. The result obtained in this case favoured Mr. Ewert, but it should be noted that it took him 18 years to achieve this result, and he required the assistance of a dedicated team of lawyers. There is clearly much work to do to ensure that fairness and transparency in algorithmic decision-making are accessible and realizable.

Mr. Ewert’s success was ultimately based, not upon human rights legislation or the Charter, but upon federal legislation which required the keeping of accurate information. As noted above, PIPEDA and the Privacy Act impose a similar requirement on organizations that collect, use or disclose personal information to ensure the accuracy of that information. Using the interpretive approach of the Supreme Court of Canada in Ewert v. Canada, this statutory language may provide a basis for supporting a broader right to fair and unbiased algorithmic decision-making. Yet, as this case also demonstrates, it may be challenging for those who feel they are adversely impacted to make their case, absent evidence of long-standing and widespread concerns about particular tests in specific contexts.

 

The issue of the application of privacy/data protection laws to political parties in Canada is not new – Colin Bennett and Robin Bayley wrote a report on it for the Office of the Privacy Commissioner of Canada in 2012. It gained new momentum in the wake of the Cambridge Analytica scandal, which brought home to the public in a fairly dramatic way the extent to which personal information might be used not just to profile and target individuals, but to sway their opinions in order to influence the outcome of elections.

In the fallout from Cambridge Analytica there have been a couple of recent developments in Canada around the application of privacy laws to political parties. First, the federal government included some remarkably tepid provisions in Bill C-76 on Elections Act reform. These provisions, which I critique here, require parties to adopt and post a privacy policy, but otherwise contain no normative requirements. In other words, they do not hold political parties to any particular rules or norms regarding their collection, use or disclosure of personal information. There is also no provision for independent oversight. The only complaint that can be made – to the Commissioner of Elections – is about the failure to adopt and post a privacy policy. The federal government has expressed surprise at the negative reaction these proposed amendments have received and has indicated a willingness to do something more, but that something has not yet materialized. Meanwhile, it is being reported that the Bill, even as it stands, is not likely to clear the Senate before the summer recess, putting in doubt the ability of any amendments to be in place and implemented in time for the next election.

Meanwhile, on June 6, 2018, the Quebec government introduced Bill no 188 in the National Assembly. If passed, this Bill would give the Quebec Director General of Elections the duty to examine and evaluate provincial political parties’ practices regarding the collection, use and disclosure of personal information. The Director General must also assess their information security practices. If the Bill is passed into law, he will be required to report his findings to the National Assembly no later than the first of October 2019, and to make any recommendations in that report that he feels are appropriate in the circumstances. The Bill also modifies the laws applicable to municipal and school board elections so that the Director General can be directed by the National Assembly to conduct a similar assessment and report back. While this Bill would not make any changes to current practices in the short term, it is clearly aimed at gathering data with a view to informing any future legislative reform that might be deemed necessary.

 

In the wake of the Cambridge Analytica scandal, Canada’s federal government has come under increased criticism for the fact that Canadian political parties are not subject to existing privacy legislation. This criticism is not new. For example, Prof. Colin Bennett and Robin Bayley wrote a report on the issue for the Office of the Privacy Commissioner of Canada in 2012.

The government’s response, if it can be called a response, has come in Bill C-76, the Act to amend the Canada Elections Act and other Acts and to make certain consequential amendments, which was introduced in the House of Commons on April 30, 2018. This Bill would require all federal political parties to have privacy policies in order to become or remain registered. It also sets out what must be included in the privacy policy.

By way of preamble to this critique of the legislative half-measures introduced by the government, it is important to note that Canada already has both a public sector Privacy Act and a private sector Personal Information Protection and Electronic Documents Act (PIPEDA). Each of these statutes sets out rules for collection, use and disclosure of personal information and each provides for an oversight regime and a complaints process. Both statutes have been the subject of substantial critique for not going far enough to address privacy concerns, particularly in the age of big data. In February 2018, the House of Commons Standing Committee on Access to Information, Privacy and Ethics issued a report on PIPEDA, and recommended some significant amendments to adapt the statute to protecting privacy in a big data environment. Thus, the context in which the provisions regarding political parties’ privacy obligations are introduced is one in which a) we already have privacy laws that set data protection standards; b) these laws are generally considered to be in need of significant amendment to better address privacy; and c) the Cambridge Analytica scandal has revealed just how complex, problematic and damaging the misuse of personal information in the context of elections can be.

Once this context is understood, the privacy ‘obligations’ that the government proposes to place on political parties in the proposed amendments can be seen for what they are: an almost contemptuous and entirely cosmetic quick fix designed to deflect attention from the very serious privacy issues raised by the use of personal information by political parties.

First, the basic requirement placed on political parties will be to have a privacy policy. The policy will also have to be published on the party’s internet site. That’s pretty much it. Are you feeling better about your privacy yet?

To be fair, the Bill also specifies what the policy must contain:

(k) the party’s policy for the protection of personal information [will include]:

(i) a statement indicating the types of personal information that the party collects and how it collects that information,

(ii) a statement indicating how the party protects personal information under its control,

(iii) a statement indicating how the party uses personal information under its control and under what circumstances that personal information may be sold to any person or entity,

(iv) a statement indicating the training concerning the collection and use of personal information to be given to any employee of the party who could have access to personal information under the party’s control,

(v) a statement indicating the party’s practices concerning

(A) the collection and use of personal information created from online activity, and

(B) its use of cookies, and

(vi) the name and contact information of a person to whom concerns regarding the party’s policy for the protection of personal information can be addressed; and

(l) the address of the page — accessible to the public — on the party’s Internet site where its policy for the protection of personal information is published under subsection (4).

It is particularly noteworthy that unlike PIPEDA (or any other data protection law, for that matter), there is no requirement to obtain consent to any collection, use or disclosure of personal information. A party’s policy simply has to tell you what information it collects and how. Political parties are also not subject to any of the other limitations found in PIPEDA. There is no requirement that the purposes for collection, use or disclosure meet a reasonableness standard; there is no requirement to limit collection only to what is necessary to achieve any stated purposes; there is nothing on data retention limits; and there is no right of access or correction. And, while there is a requirement to identify a contact person to whom any concerns or complaints may be addressed, there is no oversight of a party’s compliance with their policy. (Note that it would be impossible to oversee compliance with any actual norms, since none are imposed). There is also no external complaints mechanism available. If a party fails to comply with requirements to have a policy, post it, and provide notice of any changes, it can be deregistered. That’s about it.

This is clearly not good enough. It is not what Canadians need or deserve. It does not even come close to meeting the standards set in PIPEDA, which is itself badly in need of an overhaul. The data resources and data analytics tools available to political parties have created a context in which data protection has become important not just to personal privacy values but to important public values as well, such as the integrity and fairness of elections. Not only are these proposed amendments insufficient to meet the privacy needs of Canadians, they are shockingly cynical in their attempt to derail the calls for serious action on this issue.

What is the proper balance between privacy and the open courts principle when it comes to providing access to the decisions of administrative tribunals? This is the issue addressed by Justice Ed Morgan in a recent Ontario Superior Court decision. The case arose after the Toronto Star brought an application to have parts of Ontario’s Freedom of Information and Protection of Privacy Act (FIPPA) declared unconstitutional. To understand this application, some background may be helpful.

Courts in Canada operate under the “open courts principle”. This principle has been described as “one of the hallmarks of a democratic society” and it is linked to the right of freedom of expression guaranteed by s. 2(b) of the Canadian Charter of Rights and Freedoms. The freedom of expression is implicated because in order for the press and the public to be able to debate and discuss what takes place in open court, they must have access to the proceedings and to the records of proceedings. As Justice Morgan notes in his decision, the open courts principle applies not just to courts, but also to administrative tribunals, since the legitimacy of the proceedings before such tribunals requires similar transparency.

Administrative bodies are established by legislation to carry out a number of different functions. This can include the adjudication of matters related to the subject matter of their enabling legislation. As the administrative arm of government has expanded, so too has the number and variety of administrative tribunals at both the federal and provincial levels. Examples of tribunals established under provincial legislation include landlord-tenant boards, human rights tribunals, securities commissions, environmental review tribunals, workers’ compensation tribunals, labour relations boards, and criminal injury compensation boards – to name just a few. These administrative bodies are often charged with the adjudication of disputes over matters that are of fundamental importance to individuals, impacting their livelihood, their housing, their human rights, and their compensation and disability claims.

Because administrative tribunals are established by provincial legislation, they are public bodies, and as such, are subject to provincial (or, as the case may be, federal) legislation governing access to information and the protection of personal information in the hands of the public sector. The applicability of Ontario’s Freedom of Information and Protection of Privacy Act is at the heart of this case. The Toronto Star brought its application with respect to the 14 administrative tribunals found in the list of institutions to which FIPPA applies in a Schedule to that Act. It complained that because FIPPA applied to these tribunals, the public presumptively had to file access to information requests under that statute in order to access the adjudicative records of the tribunals. It is important to note that the challenge to the legislation was limited a) to administrative tribunals, and b) to their adjudicative records (as opposed to other records that might relate to their operations). Thus the focus was really on the presumptive right of the public, according to the open courts principle, to have access to the proceedings and adjudicative records of tribunals.

Justice Morgan noted that the process under FIPPA requires an applicant to make a formal request for particular records and to pay a fee. The head of the institution then considers the request and has 30 days in which to advise the applicant whether access will be granted. The institution may also notify the applicant that a longer period of time is required to respond to the request. It must give notice to anyone who might be affected by the request and must give that person time in which to make representations. The institution might refuse to disclose records or it might disclose records with redactions; a dissatisfied applicant has a right of appeal to the Information and Privacy Commissioner.

In addition to the time required for this process to unfold, FIPPA also sets out a number of grounds on which access can be denied. Section 42(1) provides that “An institution shall not disclose personal information in its custody or under its control”. While there are some exceptions to this general rule, none of them relates to adjudicative bodies specifically. Justice Morgan noted that the statute provides a broad definition of personal information. While the default rule is non-disclosure, the statute gives the head of an institution some discretion to disclose records containing personal information. Thus, for example, the head of an institution may disclose personal information if to do so “does not constitute an unjustified invasion of personal privacy” (s. 21(1)(f)). The statute sets out certain circumstances in which an unjustified invasion of personal privacy is presumed to occur (s. 21(3)), and these chiefly relate to the sensitivity of the personal information at issue. The list includes many things which might be part of adjudication before an administrative tribunal, including employment or educational history, an individual’s finances, income, or assets, an individual’s eligibility for social service or welfare benefits, the level of such benefits, and so on. The Toronto Star led evidence that “the personal information exemption is so widely invoked that it has become the rule rather than an exemption to the rule.” (at para 27). Justice Morgan agreed, characterizing non-disclosure as having become the default rule.

FIPPA contains a “public interest override” in s. 23, which allows the head of an institution to release records notwithstanding the applicability of an exception to the rule of disclosure, where “a compelling public interest in the disclosure of the record clearly outweighs the purpose of the exemption.” However, Justice Morgan noted that the interpretation of this provision has been so narrow that the asserted public interest must be found to be more important than the broad objective of protecting personal information. In the case of adjudicative records, the Information and Privacy Commissioner’s approach has been to require the requester to demonstrate “that there is a public interest in the Adjudicative Record not simply to inform the public about the particular case, but for the larger societal purpose of aiding the public in making political choices” (at para 31). According to Justice Morgan, “this would eliminate all but the largest and most politically prominent of cases from media access to Adjudicative Records and the details contained therein” (at para 32).

The practice of the 14 adjudicative bodies at issue in this case showed a wide variance in the ways in which they addressed issues of access. Justice Morgan noted that 8 of the 14 bodies did not require a FIPPA application to be made; requests for access to and copies of records could be directed by applicants to the tribunal itself. According to Justice Morgan, this is not a problem. He stated: “their ability to fashion their own mechanism for public access to Adjudicative Records, and to make their own fine-tuned determinations of the correct balance between openness and privacy, fall within the power of those adjudicative institutions to control their own processes” (at para 48). The focus of the court’s decision is therefore on the other 6 adjudicative bodies that require those seeking access to adjudicative records to follow the process set out in the legislation. The Star emphasized the importance of timeliness when it came to reporting on decisions of adjudicative bodies. It led evidence about instances where obtaining access to records from some tribunals took many weeks or months, and that when disclosure occurred, the documents were often heavily redacted.

Justice Morgan noted that the Supreme Court of Canada has already found that s. 2(b) protects “guaranteed access to the courts to gather information” (at para 53, citing Canadian Broadcasting Corp. v. New Brunswick (A.G.)), and that this right includes access to “exhibits entered into evidence, photocopies of all such records, and the ability to disseminate those records by means of broadcast or other publication” (at para 53). He found that FIPPA breaches s. 2(b) because it essentially creates a presumption of non-disclosure of personal information “and imposes an onus on the requesting party to justify the disclosure of the record” (at para 56). He also found that the delay created by the FIPPA system “burdens freedom of the press and amounts in the first instance to an infringement” of s. 2(b) of the Charter (at para 70). However, it is important to note that under the Charter framework, the state can still justify a presumptive breach of a Charter right by showing under s. 1 of the Charter that it is a reasonable limit, demonstrably justified in a free and democratic society.

In this case, Justice Morgan found that the ‘reverse onus’ placed on the party requesting access to an adjudicative record to show why the record should be released could not be justified under s. 1 of the Charter. He noted that in contexts outside of FIPPA – for example, where courts consider whether to impose a publication ban on a hearing – the presumption is openness, and the party seeking to limit disclosure or dissemination of information must show how a limitation would serve the public interest. He stated that the case law makes it clear “that it is the openness of the system, and not the privacy or other concerns of law enforcement, regulators, or innocent parties, that takes primacy in this balance” (at para 90). Put another way, he states that “The open court principle is the fundamental one and the personal information and privacy concerns are secondary to it” (at para 94).

On the delays created by the FIPPA system, Justice Morgan noted that “Untimely disclosure that loses the audience is akin to no disclosure at all” (at para 95). However, he was receptive to submissions made by the Ontario Judicial Council (OJC) which had “admonished the court to be cognizant of the complex task of fashioning a disclosure system for a very diverse body of administrative institutions” (at para 102). The OJC warned the court of the potential for “unintended consequences” if it were to completely remove tribunals from the FIPPA regime. The concern here was not so much for privacy; rather it was for the great diversity of administrative tribunals, many of which are under-resourced and under-staffed, and who might find themselves “overwhelmed in a suddenly FIPPA-free procedural environment” (at para 103). Justice Morgan also noted that while the Toronto Star was frustrated with the bureaucracy involved in making FIPPA applications, “bureaucracy in and of itself is not a Charter violation. It’s just annoying.” (at para 104) He noted that the timelines set out in FIPPA were designed to make the law operate fairly, and that “Where the evidence in the record shows that there have been inordinate delays, the source of the problems may lie more with the particular administrators or decision-makers who extend the FIPPA timelines than with the statutory system itself” (at para 105). He expressed hope that by removing the ‘reverse onus’ approach, any issues of delay might be substantially reduced.

As a result, Justice Morgan found the “presumption of non-disclosure for producing Adjudicative Records containing “personal information” as defined in s. 2(1)” to violate the Charter. Given the complexity of finding a solution to this problem, he gave the legislature one year in which to amend FIPPA. He makes it clear that tribunals are not required to follow the FIPPA request process in providing access to their Adjudicative Records, but it does not breach the Charter for them to do so.

This is an interesting decision that addresses what is clearly a problematic approach to providing access to decisions of administrative tribunals. What the case does not address are the very real issues of privacy that are raised by the broad publication of administrative tribunal decisions. Much ink has already been spilled on the problems with the publication of personal information in court and tribunal decisions. Indeed, the Globe24h case considered by both the Office of the Privacy Commissioner of Canada and the Federal Court reveals some of the consequences for individual privacy when such decisions are published in online repositories. Of course, nothing in Justice Morgan’s decision requires online publication, but openness must be presumed to include such risks. In crafting a new legislative solution for adjudicative records, the Ontario government might be well advised to look at some of the materials produced regarding different strategies to protect privacy in open tribunal decisions and might consider more formal guidance for tribunals in this regard.

 

**********************

Interested in the issues raised by this case? Here is a sampling of some other decisions that also touch on the open courts principle in the context of administrative tribunals:

Canadian Broadcasting Corp. v. Canada (Attorney General)

United Food & Commercial Workers Union Local 1518 v. Sunrise Poultry Processors Ltd.

These three cases deal with individuals trying to get personal information redacted from tribunal decisions destined to be published online in order to protect their personal information: Fowlie v. Canada; A.B. v. Brampton (City); Pakzad v. The Queen

Tuesday, 17 April 2018 08:50

New Study on Whistleblowing in Canada

Earlier this year, uOttawa’s Florian Martin-Bariteau and Véronique Newman released a study titled Whistleblowing in Canada. The study was funded by SSHRC as part of its Knowledge Synthesis program. The goal of this program is to provide an incisive overview of a particular area, synthesizing key research and identifying knowledge gaps. The report they have produced does just that. Given the timeliness of the topic (after all, the Cambridge Analytica scandal was disclosed by a whistleblower), and the relative paucity of legal research in the area, this report is particularly important.

The first part of the report provides an inventory of existing whistleblower frameworks across public and private sectors in Canada, including those linked to administrative agencies. This on its own makes a significant contribution. The authors refer to the existing legislative and policy framework as a “patchwork”. They note that the public sector framework is characterized by fairly stringent criteria that must be met to justify disclosures to authorities. At the same time, there are near universal restrictions against disclosure to the broader public. The authors note that whistleblower protection in the private sector is relatively thin, with a few exceptions in areas such as labour relations, health and environmental standards.

The second part of the report identifies policy issues and knowledge gaps. Observing that Canada lags behind its international partners in providing whistleblower protection, the authors are critical of narrow statutory definitions of whistleblowing, legal uncertainties faced by whistleblowers, and an insufficient framework for the private sector. The authors are also critical of the general lack of protection for public disclosures, noting that “internal mechanisms in government agencies are often unclear or inefficient and may fail to ensure the anonymity of the whistleblower” (at p. 5). Indeed, the authors are also critical of how existing regimes make anonymity difficult or impossible. The authors call for more research on the subject of whistleblowing, and highlight a number of important research gaps.

Among other things, the authors call for research to help draw the line between leaks, hacks and whistleblowing. This too is important given the different ways in which corporate or government wrongdoing has been exposed in recent years. There is no doubt that the issues raised in this study are important, and it is a terrific resource for those interested in the topic.

This post is the second in a series that looks at the recommendations contained in the report on the Personal Information Protection and Electronic Documents Act (PIPEDA) issued by the House of Commons Standing Committee on Access to Information, Privacy and Ethics (ETHI). My first post considered ETHI’s recommendation to retain consent at the heart of PIPEDA with some enhancements. At the same time, ETHI recommended some new exceptions to consent. This post looks at one of these – the exception relating to publicly available information.

Although individual consent is at the heart of the PIPEDA model – and ETHI would keep it there – the growing number of exceptions to consent in PIPEDA is reason for concern. In fact, the last round of amendments to PIPEDA, in the 2015 Digital Privacy Act, saw the addition of ten new exceptions to consent. While some of these were relatively uncontroversial (e.g. making it clear that consent was not needed to communicate with the next of kin of an injured, ill or deceased person), others were much more substantial in nature. In its 2018 report, ETHI has made several recommendations that continue this trend – creating new contexts in which individual consent will no longer be required for the collection, use or disclosure of personal information. In this post, I focus on one of these – the recommendation that the exception to consent for the use of “publicly available information” be dramatically expanded to include content shared by individuals on social media. In light of the recent Facebook/Cambridge Analytica scandal, this recommended change deserves some serious resistance.

PIPEDA already contains a carefully limited exception to consent to the collection, use or disclosure of personal information where it is “publicly available” as defined in the Regulations Specifying Publicly Available Information. These regulations identify five narrowly construed categories of publicly available information. The first is telephone directory information (but only where the subscriber has the option to opt out of being included in the directory). The second is name and contact information that is included in a professional business directory listing that is available to the public; nevertheless, such information can only be collected, used or disclosed without consent where it relates “directly to the purpose for which the information appears in the registry” (i.e. contacting the individual for business purposes). There is a similar exception for information in a public registry established by law (for example, a land titles registry); this information can similarly only be collected, used or disclosed for purposes related to those for which it appears in the record or document. Thus, consent is not required to collect land registry information for the purposes of concluding a real estate transaction. However, it is not permitted to extract personal information from such a registry, without consent, to use for marketing. A fourth category of publicly available personal information is information appearing in court or tribunal records or documents. This respects the open courts principle, but the exception is limited to collection, use or disclosure that relates directly to the purpose for which the information appears in the record or document. This means that online repositories of court and tribunal decisions cannot be mined for personal information; however, personal information can be used without consent to further the open courts principle (for example, a reporter gathering information to use in a newspaper story).

This brings us to the fifth category of publicly available information – the one ETHI would blow wide open to include vast quantities of personal information. Currently, this category reads:

e) personal information that appears in a publication, including a magazine, book or newspaper, in printed or electronic form, that is available to the public, where the individual has provided the information.

ETHI’s recommendation is to make this provision “technologically neutral” by having it include content shared by individuals over social media. According to ETHI, a number of witnesses considered this provision to be “obsolete” (at p. 27). Perhaps not surprisingly, these witnesses represented organizations and associations whose members would love to have unrestricted access to the contents of Canadians’ social media feeds and pages. The Privacy Commissioner was less impressed with the arguments for change. He stated: “we caution against the common misconception that simply because personal information happens to be generally accessible online, there is no privacy interest attached to it.” (at p. 28) The Commissioner recommended careful study with a view to balancing “fundamental individual and societal rights.” This cautious approach seems to have been ignored. The scope of ETHI’s proposed change is particularly disturbing given the very carefully constrained exceptions that currently exist for publicly available information. A review of the Regulations should tell any reader that this was always intended to be a very narrow exception with tightly drawn boundaries; it was never meant to create a free-for-all open season on the personal information of Canadians.

The Cambridge Analytica scandal reveals the harms that can flow from unrestrained access to the sensitive and wide-ranging types and volumes of personal information that are found on social media sites. Yet even as that scandal unfolds, it is important to note that everyone (including Facebook) seems to agree that user consent was both required and abused. What ETHI recommends is an exception that would obviate the need for consent to the collection, use and disclosure of the personal information of Canadians shared on social media platforms. This could not be more unwelcome and inappropriate.

Counsel for the Canadian Life and Health Insurance Association, in addressing ETHI, indicated that the current exception “no longer reflects reality or the expectations of the individuals it is intended to protect.” (at p. 27) A number of industry representatives also spoke of the need to make the exception “technologically neutral”, a line that ETHI clearly bought when it repeated this catch phrase in its recommendation. The facile rhetoric of technological neutrality should always be approached with enormous caution. The ‘old tech’ of books and magazines involved: a) relatively little exposure of personal information; b) carefully mediated exposure (through editorial review, fact-checking, ethical policies, etc.); and c) time and space limitations that tended to focus publication on the public interest. Social media is something completely different. It is a means of peer-to-peer communication and interaction which is entirely different in character and purpose from a magazine or newspaper. To treat it as the digital equivalent is not technological neutrality, it is technological nonsensicality.

It is important to remember that while the exception to consent for publicly available information exists in PIPEDA, the definition of its parameters is found in a regulation. Amendments to legislation require a long and public process; however, changes to regulations can happen much more quickly and with less room for public input. This recommendation by ETHI is therefore doubly disturbing – it could have a dramatic impact on the privacy rights of Canadians, and could do so more quickly and quietly than through the regular legislative process. The Privacy Commissioner was entirely correct in stating that there should be no change to these regulations without careful consideration and a balancing of interests, and perhaps no change at all.

The recent scandal regarding the harvesting and use of the personal information of millions of Facebook users in order to direct content towards them aimed at influencing their voting behaviour raises some interesting questions about the robustness of our data protection frameworks. In this case, a UK-based professor collected personal information via an app, ostensibly for non-commercial research purposes. In doing so he was bound by terms of service with Facebook. The data collection was in the form of an online quiz. Participants were paid to answer a series of questions, and in this sense they consented to and were compensated for the collection of this personal information. However, their consent was to the use of this information only for non-commercial academic research. In addition, the app was able to harvest personal information from the Facebook friends of the study participants – something which took place without the knowledge or consent of those individuals. The professor later sold his app and his data to Cambridge Analytica, which used it to target individuals with propaganda aimed at influencing their vote in the 2016 US Presidential Election.

A first issue raised by this case is a tip-of-the-iceberg issue. Social media platforms – not just Facebook – collect significant amounts of very rich data about users. They have a number of strategies for commercializing these treasure troves of data, including providing access to the platform to app developers or providing APIs on a commercial basis that give access to streams of user data. Users typically consent to some secondary uses of their personal information under the platform’s terms of service (TOS). Social media platform companies also have TOS that set the terms and conditions under which developers or API users can obtain access to the platform and/or its data. What the Cambridge Analytica case reveals is what may (or may not) happen when a developer breaches these TOS.

Because developer TOS are a contract between the platform and the developer, a major problem is the lack of transparency and the grey areas around enforcement. I have written about this elsewhere in the context of another ugly case involving social media platform data – the Geofeedia scandal (see my short blog post here, full article here). In that case, a company under contract with Twitter and other platforms misused the data it contracted for by transforming it into data analytics for police services that allowed police to target protesters against police killings of African American men. This was a breach of contractual terms between Twitter and the developer. It came to public awareness only because of the work of a third party (in that case, the ACLU of California). In the case of Cambridge Analytica, the story also only came to light because of a whistleblower (albeit one who had been involved with the company’s activities). In either instance it is important to ask whether, absent third party disclosure, the situation would ever have come to light. Given that social media companies provide, on a commercial basis, access to vast amounts of personal information, it is important to ask what, if any, proactive measures they take to ensure that developers comply with their TOS. Does enforcement only take place when there is a public relations disaster? If so, what other unauthorized exploitations of personal information are occurring without our knowledge or awareness? And should platform companies that are sources of huge amounts of personal information be held to a higher standard of responsibility when it comes to their commercial dealing with this personal information?

Different countries have different data protection laws, so in this instance I will focus on Canadian law, to the extent that it applies. Indeed, the federal Privacy Commissioner has announced that he is looking into Facebook’s conduct in this case. Under the Personal Information Protection and Electronic Documents Act (PIPEDA), a company is responsible for the personal information it collects. If it shares those data with another company, it is responsible for ensuring proper limitations and safeguards are in place so that any use or disclosure is consistent with the originating company’s privacy policy. This is known as the accountability principle. Clearly, in this case, if the data of Canadians was involved, Facebook would have some responsibility under PIPEDA. What is less clear is how far this responsibility extends. Clause 4.1.3 of Schedule I to PIPEDA reads: “An organization is responsible for personal information in its possession or custody, including information that has been transferred to a third party for processing. The organization shall use contractual or other means to provide a comparable level of protection while the information is being processed by a third party.” [My emphasis] One question, therefore, is whether it is enough for Facebook to simply have in place a contract that requires its developers to respect privacy laws, or whether Facebook’s responsibility goes further. Note that in this case Facebook appears to have directed Cambridge Analytica to destroy all improperly collected data. And it appears to have cut Cambridge Analytica off from further access to its data. Do these steps satisfy Facebook’s obligations under PIPEDA? It is not at all clear that PIPEDA places any responsibility on organizations to actively supervise or monitor companies with which they have shared data under contract. It is fair to ask, therefore, whether, in cases where social media platforms share huge volumes of personal data with developers, the data-sharing framework in PIPEDA is sufficient to protect the privacy interests of the public.

Another interesting question arising from the scandal is whether what took place amounts to a data breach. Facebook has claimed that it was not a data breach – from their perspective, this is a case of a developer that broke its contract with Facebook. It is easy to see why Facebook would want to characterize the incident in this way. Data breaches can trigger a whole other level of enforcement, and can also give rise to liability in class action law suits for failure to properly protect the information. In Canada, new data breach notification provisions (which have still not come into effect under PIPEDA) would impose notification requirements on an organization that experienced a breach. It is interesting to note, though, that the data breach notification requirements are triggered where there is a “real risk of significant harm to an individual” [my emphasis]. Given what has taken place in the Cambridge Analytica scandal, it is worth asking whether the drafters of this provision should have included a real risk of significant harm to the broader public. In this case, the personal information was used to subvert democratic processes, something that is a public rather than an individual harm.

The point about public harm is an important one. In both the Geofeedia and the Cambridge Analytica scandals, the exploitation of personal information was on such a scale and for such purposes that although individual privacy may have been compromised, the greater harms were to the public good. Our data protection model is based upon consent, and places the individual and his or her choices at its core. Increasingly, however, protecting privacy serves goals that go well beyond the interests of any one individual. Not only is the consent model broken in an era of ubiquitous and continuous collection of data, it is inadequate to address the harms that come from improper exploitation of personal information in our big data environment.

In February 2018 the Standing Committee on Access to Information, Privacy and Ethics (ETHI) issued its report based on its hearings into the state of Canada’s Personal Information Protection and Electronic Documents Act. The Committee hearings were welcomed by many in Canada’s privacy community who felt that PIPEDA had become obsolete and unworkable as a means of protecting the personal information of Canadians in the hands of the private sector. The report, titled Towards Privacy by Design: Review of the Personal Information Protection and Electronic Documents Act seems to come to much the same conclusion. ETHI ultimately makes recommendations for a number of changes to PIPEDA, some of which could be quite significant.

This blog post is the first in a series that looks at the ETHI Report and its recommendations. It addresses the issue of consent.

The enactment of PIPEDA in 2001 introduced a consent-based model for the protection of personal information in the hands of the private sector in Canada. The model has at its core a series of fair information principles that are meant to guide businesses in shaping their collection, use and disclosure of personal information. Consent is a core principle; other principles support consent by ensuring that individuals have adequate and timely notice of the collection of personal information and are informed of the purposes of collection.

Unfortunately, the principle of consent has been drastically undermined by advances in technology and by a dramatic increase in the commercial value of personal information. In many cases, personal information is now actual currency and not just the by-product of transactions, changing the very fundamentals of the consent paradigm. In the digital environment, the collection of personal information is also carried out continually. Not only is personal information collected with every digital interaction, it is collected even while people are not specifically interacting with organizations. For example, mobile phones and their myriad apps collect and transmit personal information even while not in use. Increasingly networked and interconnected appliances, entertainment systems, digital assistants and even children’s toys collect and communicate steady streams of data to businesses and their affiliates.

These developments have made individual consent somewhat of a joke. There are simply too many collection points and too many privacy policies for consumers to read. Most of these policies are incomprehensible to ordinary individuals; many are entirely too vague when it comes to information use and sharing; and individuals can easily lose sight of consents given months or years previously to apps or devices that are largely forgotten but that nevertheless continue to harvest personal information in the background. Managing consent in this environment is beyond the reach of most. To add insult to injury, the resignation felt by consumers left without meaningful options for consent is often interpreted as a lack of interest in privacy. As new uses (and new markets) for personal information continue to evolve, it is clear that the old model of consent is no longer adequate to serve the important privacy interests of individuals.

The ETHI Report acknowledges the challenges faced by the consent model; it heard from many witnesses who identified problems with consent and many who proposed different models or solutions. Ultimately, however, ETHI concludes that “rather than overhauling the consent model, it would be best to make minor adjustments and let the stakeholders – the Office of the Privacy Commissioner (OPC), businesses, government, etc. – adapt their practices in order to maintain and enhance meaningful consent.” (at p. 20)

The fact that the list of stakeholders does not include the public – those whose personal information and privacy are at stake – is telling. It signals ambivalence about the importance of privacy within the PIPEDA framework. In spite of being an interest hailed by the Supreme Court of Canada as quasi-constitutional in nature, privacy is still not approached by Parliament as a human right. The prevailing legislative view seems to be that PIPEDA is meant to facilitate the exchange of personal information with the private sector; privacy is protected to the extent that it is necessary to support public confidence in such exchanges. The current notion of consent places a significant burden on individuals to manage their own privacy and, by extension, places any blame for oversharing on poor choices. It is a cynically neo-liberal model of regulation in which the individual ultimately must assume responsibility for their actions notwithstanding the fact that the deck has been so completely and utterly stacked against them.

The OPC recently issued a report on consent which also recommended the retention of consent as a core principle, but recognized the need to take concrete steps to maintain its integrity. The OPC recommendations included using technological tools, developing more accessible privacy policies, adjusting the level of consent required to the risk of harm, creating no-go zones for the use of personal information, and enhancing privacy protection for children. ETHI’s rather soft recommendations on consent may be premised on an understanding that much of this work will go ahead without legislative change.

Among the minor adjustments to consent recommended by ETHI is that PIPEDA be amended to make opt-in consent the default for any use of personal information for secondary purposes. This means that while there might be opt-out consent for the basic services for which a consumer is contracting (in other words, if you provide your name and address for the delivery of an item, it can be assumed you are consenting to the use of the information for that purpose), consumers must agree to the collection, use or disclosure of their personal information for secondary or collateral purposes. ETHI’s recommendation also indicates that opt-in consent might eventually become the norm in all circumstances. Such a change may have some benefits. Opt-out consent is invidious. Think of social media platform default settings that enable a high level of personal information sharing, leaving consumers to find and adjust these settings if they want greater protection for their privacy. An opt-in consent requirement might be particularly helpful in addressing such problems. Nevertheless, it will not be much use in the context of long, complex (and largely unread) privacy policies. Many such policies ask consumers to consent to a broad range of uses and disclosures of personal information, including secondary purposes described in the broadest of terms. A shift to opt-in consent will not help if agreeing to a standard set of unread terms amounts to opting in.

ETHI also considered whether and how individuals should be able to revoke their consent to the collection, use or disclosure of their personal information. The issues are complex. ETHI gave the example of social media, where information shared by an individual might be further disseminated by many others, making it challenging to give effect to a revocation of consent. ETHI recommends that the government “study the issue of revocation of consent in order to clarify the form of revocation required and its legal and practical implications”.

ETHI also recommended that the government consider specific rules around consent for minors, as well as the collection, use and disclosure of their personal information. Kids use a wide range of technologies, but may be particularly vulnerable because of a limited awareness of their rights and recourses, as well as of the long-term impacts of personal information improvidently shared in their youth. The issues are complex and worthy of further study. It is important to note, however, that requiring parental consent is not an adequate solution if the basic framework for consent is not addressed. Parents themselves may struggle to understand the technologies and their implications and may be already overwhelmed by multiple long and complex privacy policies. The second part of the ETHI recommendation which speaks to specific rules around the collection, use and disclosure of the personal information of minors may be more helpful in addressing some of the challenges in this area. Just as we have banned some forms of advertising directed at children, we might also choose to ban some kinds of collection or uses of children’s personal information.

In terms of enhancing consent, these recommendations are thin on detail and do not provide a great deal of direction. They seem to be informed by a belief that a variety of initiatives to enhance consent through improved privacy policies (including technologically enhanced policies) may suffice. They are also influenced by concerns expressed by business about the importance of maintaining the ‘flexibility’ of the current regime. While there is much that is interesting elsewhere within the ETHI report, the discussion of consent feels incomplete and disappointing. Minor adjustments will not make a major difference.

Up next: One of the features of PIPEDA that has proven particularly challenging when it comes to consent is the ever-growing list of exceptions to the consent requirement. In my next post I will consider ETHI’s recommendations that would add to that list, and that also address ‘alternatives’ to consent.

The Office of the Privacy Commissioner of Canada has released its Draft Position on Online Reputation. It’s an important issue and one that is of great concern to many Canadians. In the Report, the OPC makes recommendations for legislative change and proposes other measures (education, for example) to better protect online reputation. However, the report has also generated considerable controversy for the position it has taken on how the Personal Information Protection and Electronic Documents Act currently applies in this context. In this post I will focus on the Commissioner’s expressed view that PIPEDA applies to search engine activities in a way that would allow Canadians to request the de-indexing of personal information from search engines, with the potential to complain to the Commissioner if these requests are refused.

PIPEDA applies to the collection, use and disclosure of personal information in the course of commercial activity. The Commissioner reasons, in this report, that search engines are engaged in commercial activity, even if search functions are free to consumers. An example is the placement of ads in search results. According to the Commissioner, because search engines can provide search results that contain (or lead to) personal information, these search engines are collecting, using and disclosing personal information in the course of commercial activity.

With all due respect, this view seems inconsistent with current case law. In 2010, the Federal Court in State Farm Mutual Automobile Insurance Co. v. Canada (Privacy Commissioner) ruled that an insurance company that collected personal information on behalf of an individual it was representing in a lawsuit was not collecting that information in the course of commercial activity. This was notwithstanding the fact that the insurance company was a commercial business. The Court was of the view that, in essence, the information was being collected on behalf of a private person (the defendant) so that he could defend a legal action (a private and non-commercial matter to which PIPEDA did not apply). Quite tellingly, at para 106, the court stated: “if the primary activity or conduct at hand, in this case the collection of evidence on a plaintiff by an individual defendant in order to mount a defence to a civil tort action, is not a commercial activity contemplated by PIPEDA, then that activity or conduct remains exempt from PIPEDA even if third parties are retained by an individual to carry out that activity or conduct on his or her behalf.”

The same reasoning applies to search engines. Yes, Google makes a lot of money, some of which comes from its search engine functions. However, the search engines are there for anyone to use, and the relevant activities, for the purposes of the application of PIPEDA, are those of the users. If a private individual carries out a Google search for his or her own purposes, that activity does not amount to the collection of personal information in the course of commercial activity. If a company does so for its commercial purposes, then that company – and not Google – will have to answer under PIPEDA for the collection, use or disclosure of that personal information. The view that Google is on the hook for all searches is not tenable. It is also problematic for the reasons set out by my colleague Michael Geist in his recent post.

I also note with some concern the way in which the “journalistic purposes” exception is treated in the Commissioner’s report. This exception is one of several designed to balance privacy with freedom of expression interests. In this context, the argument is that a search engine facilitates access to information, and is a tool used by anyone carrying out online research. This is true, and for the reasons set out above, PIPEDA does not apply unless that research is carried out in the course of commercial activities to which the statute would apply. Nevertheless, in discussing the exception, the Commissioner states:

Some have argued that search engines are nevertheless exempt from PIPEDA because they serve a journalistic or literary function. However, search engines do not distinguish between journalistic/literary material. They return content in search results regardless of whether it is journalistic or literary in nature. We are therefore not convinced that search engines are acting for “journalistic” or “literary” purposes, or at least not exclusively for such purposes as required by paragraph 4(2)(c).

What troubles me here is the statement that “search engines do not distinguish between journalistic and literary material”. They don’t need to. The nature of what is sought is not the issue. The issue is the purpose. If an individual uses Google in the course of non-commercial activity, PIPEDA does not apply. If a journalist uses Google for journalistic purposes, PIPEDA does not apply. The nature of the content that is searched is immaterial. The quote goes on to talk about whether search engines act for journalistic or literary purposes – that too is not the point. Search engines are tools. They are used by actors. It is the purposes of those actors that are material, and it is to those actors that PIPEDA will apply – if they are collecting, using or disclosing personal information in the course of commercial activity.

The Report is open for comment until April 19, 2018.

In October 2016, the data analytics company Geofeedia made headlines when the California chapter of the American Civil Liberties Union (ACLU) issued the results of a major study which sought to determine the extent to which police services in California were using social media data analytics. These analytics were based upon geo-referenced information posted by ordinary individuals to social media websites such as Twitter and Facebook. Information of this kind is treated as “public” in the United States because it is freely contributed by users to a public forum. Nevertheless, the use of social media data analytics by police raises important civil liberties and privacy questions. In some cases, users may not be aware that their tweets or posts contain additional metadata, including geolocation information. In all cases, the power of data analytics permits rapid cross-referencing of data from multiple sources, allowing the construction of profiles that go well beyond the information contributed in single posts.
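To make the metadata and cross-referencing point more concrete, the short Python sketch below shows how geotagged posts gathered from different platforms could be grouped by user to place that user at particular locations and times. It is a minimal illustration only: the records, field names and coordinates are invented, and nothing in it reflects any real platform’s data format or Geofeedia’s actual methods.

from collections import defaultdict
from datetime import datetime

# Hypothetical geotagged posts drawn from different platforms (all values invented).
posts = [
    {"platform": "twitter", "user": "user_a", "lat": 39.29, "lon": -76.61,
     "time": datetime(2015, 4, 27, 18, 5), "text": "downtown is crowded tonight"},
    {"platform": "instagram", "user": "user_a", "lat": 39.29, "lon": -76.61,
     "time": datetime(2015, 4, 27, 18, 20), "text": "photo near city hall"},
    {"platform": "facebook", "user": "user_b", "lat": 38.74, "lon": -90.27,
     "time": datetime(2014, 8, 10, 14, 0), "text": "at the vigil"},
]

def profile_by_user(posts):
    # Group geotagged posts by user across platforms: a crude location profile.
    profiles = defaultdict(list)
    for p in posts:
        profiles[p["user"]].append((p["platform"], p["lat"], p["lon"], p["time"]))
    return profiles

for user, sightings in profile_by_user(posts).items():
    print(user, sightings)

Even a toy example like this shows why aggregation matters: no single post reveals very much, but the combined records place an identifiable user at specific places and times across services, which is precisely the kind of profile-building that escapes notice in the absence of transparency.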

The extent to which social media data analytics are used by police services is difficult to assess because there is often inadequate transparency both about the actual use of such services and the purposes for which they are used. Through a laborious process of filing freedom of information requests the ACLU sought to find out which police services were contracting for social media data analytics. The results of their study showed widespread use. What they found in the case of Geofeedia went further. Although Geofeedia was not the only data analytics company to mine social media data and to market its services to government authorities, its representatives had engaged in email exchanges with police about their services. In these emails, company employees used two recent sets of protests against police as examples of the usefulness of social media data analytics. These protests were those that followed the death in police custody of Freddie Gray, a young African-American man who had been arrested in Baltimore, and the shooting death by police of Michael Brown, an eighteen-year-old African-American man in Ferguson, Missouri. By explicitly offering services that could be used to monitor those who protested police violence against African Americans, the Geofeedia emails aggravated a climate of mistrust and division, and confirmed a belief held by many that authorities were using surveillance and profiling to target racialized communities.

In a new paper, just published in the online, open-access journal SCRIPTed, I use the story around the discovery of Geofeedia’s activities and the backlash that followed to frame a broader discussion of police use of social media data analytics. Although this paper began as an exploration of the privacy issues raised by the state’s use of social media data analytics, it shifted into a paper about transparency. Clearly, privacy issues – as well as other civil liberties questions – remain of fundamental importance. Yet the reality is that without adequate transparency there is simply no easy way to determine whether police are relying on social media data analytics, on what scale and for what purposes. This lack of transparency makes it difficult to hold anyone to account. The ACLU’s work to document the problem in California was painstaking and time-consuming, as was a similar effort by the Brennan Center for Justice, also discussed in this paper. And, while the Geofeedia case provided an important example of the real problems that underlie such practices, it only came to light because Geofeedia’s employees made certain representations by email instead of in person or over the phone. A company need only direct that email not be used for these kinds of communications for such exchanges to disappear from public view.

My paper examines the use of social media data analytics by police services, and then considers a range of different transparency issues. I explore some of the challenges to transparency that may flow from the way in which social media data analytics are described or characterized by police services. I then consider transparency from several different perspectives. In the first place I look at transparency in terms of developing explicit policies regarding social media data analytics. These policies are not just for police, but also for social media platforms and the developers that use their data. I then consider transparency as a form of oversight. I look at the ways in which greater transparency can cast light on the activities of the providers and users of social media data and data analytics. Finally, I consider the need for greater transparency around the monitoring of compliance with policies (those governing police or developers) and the enforcement of these policies.

The full text of my paper is available here under a CC Licence.
