Google has begun the process of removing links to “outdated or irrelevant” content from European search results.

Since the European Court of Justice ruling in May 2014, Google has established a “removals team”, and private services have even sprung up to offer support in bids to be forgotten.

Widely hailed as the right to be forgotten, the concept is a bit of a misnomer. Sure, search engines in Europe are compelled to forget certain sites as they relate to the applicant, but the sites themselves can retain the offending information. The ruling has simply made retrieving such information more difficult. You know, sort of like finding something before the internet age, when one had to know where to look, not just have a name to search.

For example, a newspaper that covered a crime in the past is not compelled to remove an archived article from its website, even if the person has had that article removed from Google’s index. It stands to reason, then, that searching the news website itself for mention of the person would still return that article. One would simply need a sense of which sources might have records.

Never mind the irony that Mario Costeja González was fighting to have mention of his past debt problems removed from the internet – and yet now his prior troubles are mentioned in every article referencing his case.

Moreover, with the use of a VPN, a user can appear to be searching from elsewhere in the world and still retrieve such information through, for example, Google in the US.

The Big Grey Search Area

The inconsistencies of enforcing virtual forgetting aside, the ruling itself has left me conflicted.

On one hand, all people are bound to make mistakes at some point in their lives – should they suffer forever, with a trace of some stupid past transgression appearing in search results for their name?

On the other hand, what if all a person seems to do is make “mistakes” – the sort that harm other people and form a pattern rather than a fluke in someone’s behaviour? In that case, a digital trail might prevent others from falling victim.

What happens if the information in question involves more than the applicant? Let’s say it concerns both a culprit and a victim: does the victim have the right to insist the information remain indexed in Google? Would other parties even be consulted?

The other problem with this new regulation is context. Search engines are now required to forget information that is “inadequate, irrelevant or no longer relevant, or excessive in relation to those purposes”. Yet who determines what is relevant or excessive? For now that seems to be Google, which, as The Guardian reports, “has set up an online form where people can request removal of links, and says people can appeal to data protection authorities if they disagree with its decision.”

While privacy advocates applaud the ruling – and push Google to adopt the outcome internationally – it all seems a bit reactionary, with little thought given to implementation or to how to distinguish a legitimate claim from an illegitimate one. Just another symptom of society’s inability to catch up with the changes brought by digital technology.

It will be interesting to see how it all unfolds. Given that this is happening through Google, though, I suspect little will be revealed. Unlike a public institution, the company will not be compelled to release information – and indeed, given privacy regulations, it might not be at liberty to disclose much at all.

This, of course, is but one interpretation. What are your thoughts? Is this good? What about the moral implications? Is it something that can be reasonably implemented? Care to share your predictions?

About Author

La Generalista is the online identity of Alicia Wanless – a researcher and practitioner of strategic communications for social change in a Digital Age. Alicia is the director of the Partnership for Countering Influence Operations at the Carnegie Endowment for International Peace. With a growing international multi-stakeholder community, the Partnership aims to foster evidence-based policymaking to counter threats within the information environment. Wanless is currently a PhD Researcher at King’s College London exploring how the information environment can be studied in similar ways to the physical environment. She is also a pre-doctoral fellow at Stanford University’s Center for International Security and Cooperation, and was a tech advisor to Aspen Institute’s Commission on Information Disorder. Her work has been featured in Lawfare, The National Interest, Foreign Policy, and CBC.