When is Data no longer Data?

Oliver Jack Dean

The Draft Investigatory Powers Bill, first published in 2015, stated that data ceases to be itself when it includes “any information which is not data” (see clause 195(1)).

This curious clause survived passage through both Houses of Parliament and royal assent by Queen Elizabeth II, and was eventually embedded in the Investigatory Powers Act on 29 November 2016.

As is often the case when drafting and enacting statutes, such decisions are constrained by the time and historical context of their inception, in this instance the period between 2015 and 2016.

Yet, clause 195(1) has greatly enlarged an important topic of debate and a relevant problem for us all living in the twenty-first century: When is data no longer data?

Moreover, when have our human rights with regard to personal data truly been breached?

The label “data” would have been relatively simple to define 30 years ago.

By modern standards, across multiple industries and sectors and throughout national courts, a universal definition for the label “data” has proved elusive.

Full consideration should be given to clause 195(1), because the expression “which is not data” raises many questions to which we still have little or no answer.

The label, or to be more precise the label’s definition, has opened up fierce debate across the world, and nothing can disguise the fact that “data” has become multifaceted as technology has spread ever more widely into our everyday lives.

To put these matters under another lens, which may be more suitable given our concern with the label “data”, Xavier Leroy, I think, succinctly summarised this particular struggle in his 2018 inaugural lecture at the Collège de France, “Software, between Mind and Matter”:

On the one hand, programming languages, tools, and methodologies have progressed immensely. On the other, the complexity of software keeps increasing, and we entrust it with more and more responsibilities.

Thus, the central issue with software is no longer to program, but to convince: convince designers, developers, end-users, regulation or certification authorities, perhaps a court of justice (if something bad happens) that the software is correct and harmless.

In other words, our struggle to define “data” as a label or term, which in itself is a historical process, has since acquired a fresh intensity throughout numerous sectors and industries.

This particular intersection, between humans and machines, of our relationship with software and technology, is nothing new.

But as people have gained new power and self-confidence in using everyday modern technology, new standards of human rights and vigorous regulatory measures have followed.

At first, such standards were set at a local level, now they are set at an international level.

In this respect, defining terms or labels, especially those extracted from advanced technical literature, is of great universal importance, as the UK’s current Secretary of State for Culture, Media and Sport, Oliver Dowden, observed:

it is often the case that regulation that starts with the best of intentions can, in its interpretation if you do not get it right, have a life of its own...

This particular statement was made in response to several points raised in the Online Harms White Paper of 8 April 2019, released by the UK government.

This was a ground-breaking document and proposal, not just because of its overall value but, more importantly, because it reignited tensions with wealthy corporations aggravated by the tight control exercised throughout the Online Harms White Paper and its proposed enforcement. This was affirmed in section 4, para 3, whereby:

... companies of all sizes will be in the scope of the regulatory framework. The scope will include… social media companies, public discussion forums, retailers that allow users to review products online, along with non-profit organisations, file sharing sites and cloud hosting providers.

Protection of the Processing of Personal Data

Over the last five to eight years, appalling stories have circulated about how technology corporations have been using personal data.

There is no better way of bringing the issue forward than by referencing the recent European Court of Justice (ECJ) case Maximillian Schrems v Data Protection Commissioner [C-362/14], which brought up the all-too-familiar narrative on the use and interpretation of people’s data across geographical boundaries, primarily known as data retention.

In this particular case, the complaint asked why Facebook user data belonging to residents of the EU, in particular Ireland, needed to be transferred to the USA. In considering this question, the EU courts appear to have concluded that nothing was to be expected from the Digital Rights Ireland case law and that no adequate legal safeguards were present in the USA at the time.

The first judgment was issued in October 2015, and it clearly stated that there is:

a duty on national DPAs and EU institutions to [ensure the protection] of fundamental rights of privacy.

The judgment prevailed and broke down several obstacles.

Most notably, it overturned the EU/US “Safe Harbour” mechanism, which had until then governed data transfer between the two regions; it was subsequently replaced by the EU/US “Privacy Shield” framework.

Yet, even though this was an intensive and heavy-handed manoeuvre by the courts, loopholes remained. Corporations like Facebook adroitly used “standard contractual clauses” (SCCs) to transfer personal data between the EU and the USA, “allowing companies to seek specific consent from users for data to be exported”.

Mr Schrems and his co-complainants sought to remedy this through further litigation, and once again the European Court acknowledged that SCCs appeared to have been devised to circumvent the by-then invalidated EU/US “Privacy Shield”.

Mr Schrems exulted:

I am very happy about the judgment. It seems the Court has followed us in all aspects. This is a total blow to the Irish DPC and Facebook. It is clear that the US will have to seriously change their surveillance laws if US companies want to continue to play a major role [in] the EU market.

Although such cases are great advances in the realm of human rights, the uneasy stand-off between the European courts and large technology corporations will most likely continue.

Anyone reasonably well-read in the necessary historical literature or moreover, a child of the 20th century, would be quick to notice that the practice of a state keeping personal files on individuals was gravely abused by authoritarian regimes throughout continental Europe.

Although the jurisprudence of both the ECJ and the European Court of Human Rights (ECHR) has been sensitive to this topic, until only recently it had great difficulty establishing a rational framework of principle for distinguishing the respective spheres at play with regard to data protection and online privacy.

The Requirement of Foreseeability and Discretionary Power

To establish such a framework, the courts both at the local and national levels have opted for different measures of both creation and integration.

One of the most important measures is the requirement of "Accessibility and Foreseeability".

This is in substance what the courts would utilise to identify when and how the law can be applied under certain contexts.

It is with such a mechanism the courts can potentially decide whether "data is no longer data" and if so, does it subsequently fall outside of law and regulation.

In cases like Schrems, the ECJ emphasised the significance of the law enabling citizens to regulate their conduct, to foresee the consequences of their actions, and to have effective control over their own data.

The difficulty with such a question, though, is that “data” as we know it is susceptible to constant modification regardless of anyone’s discretion, owing to the speed and intensity with which data is captured and applied throughout the world economy.

This technicality is genuinely difficult for the courts to grasp as it currently stands.

Therefore, the degree of precision required of the law in such a case where personal data is breached will depend upon the particular subject matter. This was reiterated in the Malone v United Kingdom (1985) 7 EHRR 14 case.

The context was telephone tapping and whether “communication passing through the services of the Post Office might be intercepted, for police purposes”. The court summarised the following at para 79:

in its present state, the law in England and Wales governing interception of communications for police purposes is somewhat obscure and open to differing interpretations. … it cannot be said with any reasonable certainty what elements of the powers to intercept are incorporated in legal rules and what elements remain within the discretion of the executive [...]

In the opinion of the court, the law of England and Wales does not indicate with reasonable clarity the scope and manner of exercise of the relevant discretion conferred on the public authorities. To that extent, the minimum degree of legal protection to which citizens are entitled under the rule of law in a democratic society is lacking.

In three notable later cases, Amann v Switzerland (2000) 30 EHRR 843, Rotaru v Romania 8 BHRC 449 and S and Marper v United Kingdom (2009) 48 EHRR 50, the same jurisprudence was applied to the retention of data in police records.

In Amann v Switzerland (2000), another telephone-tapping case, the decision was expressed on similar grounds to both Rotaru v Romania 8 BHRC 449 and S and Marper v United Kingdom (2009) 48 EHRR 50.

What was striking in the judgment was that:

Swiss law does not indicate with sufficient clarity the scope and conditions of exercise of the authorities’ discretionary power in the area under consideration.

See para 62 for more.

As can be seen from the cases put forward so far, from the outset the ECJ and ECHR courts have treated the need for safeguards as part of the requirement of foreseeability.

They have applied it as part of the principle of legality in cases where a discretionary power would otherwise be unconstrained and there would be no effective control over its exercise.

Cases such as these may not hold up by today’s standards as “cutting-edge” when compared with the more recent Schrems v Data Protection Commissioner [C-362/14], but their principles have had far-reaching consequences across sectors and industries that depend entirely on consumers entrusting their services, and for which “data” in particular has become the fundamental pillar on which those services stand.

All of this was carefully evaluated by the courts when the Data Protection Act 1998 was first enacted in England and Wales.

Although the act was prevalent at the time, it was not adopted wholeheartedly in Northern Ireland, as MM v United Kingdom (Application No 24029/07), [2012] ECHR 1906, 29 April 2013 highlighted, a case concerned with the retention and disclosure by the police of records of citizens’ cautions.

As was highlighted in court, the applicant had received a caution for child abduction in 2000; this was later disclosed, with the result that the applicant was unable to obtain employment involving care work.

The applicant had accepted the caution on an assurance that it would be deleted from police records after five years, which was the practice at the time.

From 2006 to 2007 the applicant’s attempts to have the caution deleted were unsuccessful. The caution was still retained, and so the applicant went to court to challenge why the police were retaining such information and why such procedures were not regulated.

The case reached the Strasbourg court, which was primarily concerned with:

  • (i) collection of data,
  • (ii) its retention in the records of the authorities, and
  • (iii) its disclosure to third parties.

It turned out that Northern Ireland had adopted the Data Protection Act 1998 as a statutory Code of Practice for certain convictions, but the applicant’s caution was retained and disclosed under common law powers.

On further reflection, this particular case has many parallels with the current crisis surrounding the definition of the label “data” and what it means according to the law. All of this continues to have dramatic effects upon many worldwide sectors and services we depend upon greatly.

In considering this issue, there appear to me to be two central questions.

Does it make sense to have a universal definition of “data” that must be adopted across different contexts? And is the way in which we as a society collect, curate and retain data to be governed as a matter of principle or of law?

Both are genuinely difficult questions.

Perhaps adopting a universal consensus as to what we truly mean by the label “data” can help us.

However, it may be easier to acknowledge its characteristics and amorphous nature: as long as “users” retain full discretionary power and foreseeability over “our data”, we avoid various uncomfortable questions of both proportionality and application in the sectors and services that depend on the curation and use of data.

Yet, perhaps I have spoken too soon?

This may be because our common general knowledge of how we use data as an economy and society is still relatively immature, and disproportionate to the questions already raised in the courts.

But I have no doubt such questions will continue to take centre stage for some time.