HOW THE BUSINESS MODEL OF GOOGLE AND FACEBOOK
THREATENS HUMAN RIGHTS
1. EXECUTIVE SUMMARY
The internet has revolutionised our world on a scale not seen since the invention of electricity. Over half of the world’s population now relies on the web to read the news, message a loved one, find a job, or seek answers to an urgent question. It has opened social and economic opportunities at a scale and speed that few imagined fifty years ago.
Recognising this shift, it is now firmly acknowledged that access to the internet is vital to enable the enjoyment of human rights. For more than 4 billion people, the internet has become central to how they communicate, learn, participate in the economy, and organise socially and politically. Yet when these billions participate in life online, most of them rely heavily on the services of just two corporations, which control the primary channels that people use to engage with the internet. These companies provide services so integral that it is difficult to imagine the internet without them. Facebook is the world’s dominant social media company. Combining the users of its social platform, its messaging services WhatsApp and Messenger, and applications such as Instagram, a third of humans on Earth use a Facebook-owned service every day. Facebook sets the terms for much of human connection in the digital age.
A second company, Google, occupies an even larger share of the online world. Search engines are a crucial source of information; Google accounts for around ninety percent of global search engine use. Its browser, Chrome, is the world’s dominant web browser. Its video platform, YouTube, is the world’s second largest search engine as well as the world’s largest video platform. Google’s mobile operating system, Android, underpins the vast majority of the world’s smartphones.
Android’s dominance is particularly important because smartphones have replaced the desktop computer as the primary way people access and use the internet. Smartphones reveal information about us beyond our online browsing habits—such as our physical travel patterns and our location. They often contain thousands of intimate emails and text messages, photographs, contacts, and calendar entries. Google and Facebook have helped to connect the world and provided crucial services to billions. To participate meaningfully in today’s economy and society, and to realise their human rights, people rely on access to the internet—and the tools Google and Facebook offer.
But despite the real value of the services they provide, Google and Facebook’s platforms come at a systemic cost. The companies’ surveillance-based business model forces people to make a Faustian bargain, whereby they are only able to enjoy their human rights online by submitting to a system predicated on human rights abuse. First comes an assault on the right to privacy on an unprecedented scale; then follows a series of knock-on effects that pose a serious risk to a range of other rights, from freedom of expression and opinion to freedom of thought and the right to non-discrimination.
This isn’t the internet people signed up for. When Google and Facebook were first starting out two decades ago, both companies had radically different business models that did not depend on ubiquitous surveillance. The gradual erosion of privacy at the hands of Google and Facebook is a direct result of the companies establishing dominant market power and control over the global “public square”. In Chapter 1, ‘The Business of Surveillance’, this report sets out how the surveillance-based business model works: Google and Facebook offer services to billions of people without asking them to pay a financial fee. Instead, people pay for the services with their intimate personal data. After collecting this data, Google and Facebook use it to analyse people, aggregate them into groups, and make predictions about their interests, characteristics, and ultimately behaviour, primarily so they can use these insights to generate advertising revenue.
This surveillance machinery reaches well beyond the Google search bar or the Facebook platform itself. People are tracked across the web, through the apps on their phones, and in the physical world as well, as they go about their day-to-day affairs.
These two companies collect extensive data on what we search for, where we go, who we talk to, what we say, and what we read; and, through the analysis made possible by computing advances, they have the power to infer what our moods, ethnicities, sexual orientations, political opinions, and vulnerabilities may be. Some of these categories—including characteristics protected under human rights law—are made available to others for the purpose of targeting internet users with advertisements and other information. In Chapter 2, ‘Assault on Privacy’, we set out how this ubiquitous surveillance has undermined the very essence of the right to privacy. Not only does it represent an intrusion into billions of people’s private lives that can never be necessary or proportionate, but the companies have also conditioned access to their services on people “consenting” to the processing and sharing of their personal data for marketing and advertising, directly countering the right to decide when and how our personal data can be shared with others. Finally, the companies’ use of algorithmic systems to create and infer detailed profiles on people interferes with our ability to shape our own identities within a private sphere.
Advertisers were the original beneficiaries of these insights, but once created, the companies’ data vaults served as an irresistible temptation for governments as well. This is for a simple reason: Google and Facebook achieved a degree of data extraction from their billions of users that would have been intolerable had governments carried it out directly. Both companies have stood up to states’ efforts to obtain information on their users; nevertheless, the opportunity to access such data has created a powerful disincentive for governments to regulate corporate surveillance.
The abuse of privacy that is core to Facebook and Google’s surveillance-based business model is starkly demonstrated by the companies’ long history of privacy scandals. Despite the companies’ assurances over their commitment to privacy, it is difficult not to see these numerous privacy infringements as part of the normal functioning of their business, rather than as aberrations. In Chapter 3, ‘Data Analytics at Scale: Human Rights Risks Beyond Privacy’, we look at how Google and Facebook’s platforms rely not only on extracting vast amounts of people’s data, but on drawing further insight and information from that data using sophisticated algorithmic systems. These systems are designed to find the best way to achieve outcomes in the companies’ interests, including finely tuned ad targeting and delivery, and behavioural nudges that keep people engaged on the platforms. As a result, people’s data, once aggregated, boomerangs back on them in a host of unforeseen ways. These algorithmic systems have been shown to have a range of knock-on effects that pose a serious threat to people’s rights, including freedom of expression and opinion, freedom of thought, and the right to equality and non-discrimination. These risks are greatly heightened by the size and reach of Google and Facebook’s platforms, enabling human rights harm at a population scale. Moreover, systems that rely on complex data analytics can be opaque even to computer scientists, let alone the billions of people whose data is being processed.
The Cambridge Analytica scandal, in which data from 87 million people’s Facebook profiles was harvested and used to micro-target and manipulate people for political campaigning purposes, opened the world’s eyes to the capabilities such platforms possess to influence people at scale – and the risk that they could be abused by other actors. However, shocking as it was, the incident was only the tip of the iceberg, stemming from the very same model of data extraction and analysis inherent to both Facebook and Google’s business.
Finally, in Chapter 4, ‘Concentration of Power Obstructs Accountability’, we show how vast data reserves and powerful computational capabilities have made Google and Facebook two of the most valuable and powerful companies in the world today. Google’s market capitalisation is more than twice the GDP of Ireland, where both companies have their European headquarters; Facebook’s is larger by a third. The companies’ business model has helped concentrate their power, including financial clout, political influence, and the ability to shape the digital experience of billions of people, leading to an unprecedented asymmetry of knowledge between the companies and internet users. As scholar Shoshana Zuboff states, “They know everything about us; we know almost nothing about them.” This concentrated power goes hand in hand with the human rights impacts of the business model and has created an accountability gap in which it is difficult for governments to hold the companies to account, or for individuals who are affected to access justice.
Governments have an obligation to protect people from human rights abuses by corporations. But for the past two decades, technology companies have been largely left to self-regulate – in 2013, former Google CEO Eric Schmidt described the online world as “the world’s largest ungoverned space”. However, regulators and national authorities across various jurisdictions have begun to take a more confrontational approach to the concentrated power of Google and Facebook—investigating the companies for competition violations, issuing fines for infringing Europe’s General Data Protection Regulation (GDPR), or introducing new tax regimes for big technology companies.
Businesses have a responsibility to respect human rights in the context of their business operations, which requires them to carry out “human rights due diligence” to identify and address their human rights impacts. Google and Facebook have established policies and processes to address their impacts on privacy and freedom of expression. Yet given that their surveillance-based business model undermines the very essence of the right to privacy and poses a serious risk to a range of other rights, the companies are evidently not taking a holistic approach, nor are they questioning whether their current business models themselves can be compliant with their responsibility to respect human rights. Amnesty International gave both Google and Facebook an opportunity to respond to the findings of this report in advance of publication. Facebook’s letter in response is appended in the annex below. Amnesty International had a conversation with senior Google staff, who subsequently provided information about the company’s relevant policies and practices. Both responses are incorporated throughout the report.
Ultimately, it is now evident that the era of self-regulation in the tech sector is coming to an end: further state-based regulation will be necessary, but it is vital that whatever form future regulation of the technology sector takes, governments follow a human rights-based approach. In the short term, there is an immediate need for stronger enforcement of existing regulation. Governments must take positive steps to reduce the harms of the surveillance-based business model: to adopt digital public policies that have universal access and the enjoyment of human rights at their core, to reduce or eliminate pervasive private surveillance, and to enact reforms, including structural ones, sufficient to restore confidence and trust in the internet.
Index: POL 30/1404/2019, Original language: English