The Fight for the Future of YouTube | The New Yorker
Earlier this year, executives at YouTube began mulling, once again, the problem of online speech. On grounds of freedom of expression and ideological neutrality, the platform has long allowed users to upload videos endorsing noxious ideas, from conspiracy theories to neo-Nazism. Now it wanted to reverse course. “There are no sacred cows,” Susan Wojcicki, the C.E.O. of YouTube, reportedly told her team. Wojcicki had two competing goals: she wanted to avoid accusations of ideological bias while also affirming her company’s values. In the course of the spring, YouTube drafted a new policy that would ban videos trafficking in historical “denialism” (of the Holocaust, 9/11, Sandy Hook) and “supremacist” views (lauding the “white race,” arguing that men were intellectually superior to women). YouTube planned to roll out its new policy as early as June. In May, meanwhile, it started preparing for Pride Month, turning its red logo rainbow-colored and promoting popular L.G.B.T.Q. video producers on Instagram.
Francesca Tripodi, a media scholar at James Madison University, has studied how right-wing conspiracy theorists perpetuate false ideas online. Essentially, they find unfilled rabbit holes and then create content to fill them. “When there is limited or no metadata matching a particular topic,” she told a Senate committee in April, “it is easy to coördinate around keywords to guarantee the kind of information Google will return.” Political provocateurs can take advantage of data vacuums to increase the likelihood that legitimate news clips will be followed by their videos. And, because controversial or outlandish videos tend to be riveting, even for those who dislike them, they can register as “engaging” to a recommendation system, which would surface them more often. The many automated systems within a social platform can be co-opted and made to work at cross purposes.
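Tripodi's "data void" tactic can be sketched in a few lines of code. The example below is a hypothetical simplification, not YouTube's actual search system: a naïve keyword match over video metadata. When mainstream outlets never use a coined phrase, whoever coördinates uploads around that phrase owns the results.

```python
# Hypothetical sketch of a "data void": a naive keyword search over video
# metadata. The videos, titles, and metadata below are invented for
# illustration.

def search(videos, query):
    """Return videos whose metadata contains every query keyword."""
    keywords = query.lower().split()
    return [v for v in videos
            if all(k in v["metadata"].lower() for k in keywords)]

videos = [
    {"title": "Evening news report", "metadata": "election results coverage"},
    {"title": "Fringe upload A", "metadata": "crisis actor false flag hoax"},
    {"title": "Fringe upload B", "metadata": "crisis actor exposed truth"},
]

# Legitimate outlets rarely tag videos with the coined phrase, so the
# coordinated uploads fill the vacuum and dominate the results.
print([v["title"] for v in search(videos, "crisis actor")])
# → ['Fringe upload A', 'Fringe upload B']
```

Nothing here requires gaming a ranking model; the provocateurs simply supply the only content that matches the query.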
Technological solutions are appealing, in part, because they are relatively unobtrusive. Programmers like the idea of solving thorny problems elegantly, behind the scenes. For users, meanwhile, the value of social-media platforms lies partly in their appearance of democratic openness. It’s nice to imagine that the content is made by the people, for the people, and that popularity flows from the grass roots.
In fact, the apparent democratic neutrality of social-media platforms has always been shaped by algorithms and managers. In its early days, YouTube staffers often cultivated popularity by hand, choosing trending videos to highlight on its home page; if the site gave a leg up to a promising YouTuber, that YouTuber’s audience grew. By spotlighting its most appealing users, the platform attracted new ones. It also shaped its identity: by featuring some kinds of content more than others, the company showed YouTubers what kind of videos it was willing to boost. “They had to be super family friendly, not copyright-infringing, and, at the same time, compelling,” Schaffer recalled, of the highlighted videos.
Today, YouTube employs scores of “partner managers,” who actively court and promote celebrities, musicians, and gamers—meeting with individual video producers to answer questions about how they can reach bigger audiences.
Last year, YouTube paid forty-seven ambassadors to produce socially conscious videos and attend workshops. The program’s budget, of around five million dollars—it also helps fund school programs designed to improve students’ critical-thinking skills when they are confronted with emotionally charged videos—is a tiny sum compared to the hundreds of millions that the company reportedly spends on YouTube Originals, its entertainment-production arm. Still, one YouTube representative told me, “We saw hundreds of millions of views on ambassadors’ videos last year—hundreds of thousands of hours of watch time.” Most people encountered the Creators for Change clips as automated advertisements before other videos.
On a channel called AsapScience, Gregory Brown, a former high-school teacher, and his boyfriend, Mitchell Moffit, make animated clips about science that affects their viewers’ everyday lives; their most successful videos address topics such as the science of coffee or masturbation. They used their Creators for Change dollars to produce a video about the scientifically measurable effects of racism, featuring the Black Lives Matter activist DeRay Mckesson. While the average AsapScience video takes a week to make, the video about racism had taken seven or eight months: the level of bad faith and misinformation surrounding the topic, Brown said, demanded extra precision. “You need to explain the study, explain the parameters, and explain the result so that people can’t argue against it,” he said. “And that doesn’t make the video as interesting, and that’s a challenge.” (Toxic content proliferates, in part, because it is comparatively easy and cheap to make; it can shirk the burden of being true.)
One way to make counterspeech more effective is to dampen the speech that it aims to counter. In March, after a video of a white-supremacist mass shooting at a mosque in Christchurch, New Zealand, went viral, Hunter Walk, a former YouTube executive, tweeted that the company should protect “freedom of speech” but not “freedom of reach.” He suggested that YouTube could suppress toxic videos by delisting them as candidates for its recommendation engine—in essence, he wrote, this would “shadowban” them. (Shadow-banning is so-called because a user might not know that his reach has been curtailed, and because the ban effectively pushes undesirable users into the “shadows” of an online space.) Ideally, people who make such shadow-banned videos could grow frustrated by their limited audiences and change their ways; videos, Walk explained, could be shadow-banned if they were linked to by a significant number of far-right Web havens, such as 8chan and Gab. (Walk’s tweets, which are set to auto-delete, have since disappeared.)
Shadow-banning is an age-old moderation tool: the owners of Internet discussion forums have long used it to keep spammers and harassers from bothering other users. On big social-media platforms, however, this kind of moderation doesn’t necessarily focus on individuals; instead, it affects the way that different kinds of content surface algorithmically. YouTube has published a lengthy list of guidelines that its army of raters can use to give some types of content—clips that contain “extreme gore or violence, without a beneficial purpose,” for example, or that advocate hateful ideas expressed in an “emotional,” “polite,” or even “academic-sounding” way—a low rating. YouTube’s A.I. learns from the ratings to make objectionable videos less likely to appear in its automated recommendations. Individual users won’t necessarily know how their videos have been affected. The ambiguities generated by this system have led some to argue that political shadow-banning is taking place. President Trump and congressional Republicans, in particular, are alarmed by the idea that some version of the practice could be widely employed against conservatives. In April, Ted Cruz held a Senate subcommittee hearing called “Stifling Free Speech: Technological Censorship and the Public Discourse.” In his remarks, he threatened the platforms with regulation; he also brought in witnesses who accused them of liberal bias. (YouTube denies that its raters evaluate recommendations along political lines, and most experts agree that there is no evidence for such a bias.)
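The "freedom of reach" mechanism Walk and the rater guidelines describe can be sketched as a filter on the recommendation pipeline. This is a hypothetical simplification, not YouTube's system: the scores, the threshold, and the field names are all invented, and a real platform would learn them from rater labels rather than hard-code them.

```python
# Hypothetical sketch of "freedom of speech, not freedom of reach":
# borderline videos stay hosted and searchable, but are dropped as
# candidates for automated recommendation. The threshold and scores
# below are invented for illustration.

BORDERLINE_THRESHOLD = 0.5  # assumed cutoff, derived from rater labels

def recommend(candidates, n=2):
    """Rank candidates by predicted engagement, excluding borderline ones."""
    eligible = [v for v in candidates
                if v["borderline_score"] < BORDERLINE_THRESHOLD]
    ranked = sorted(eligible, key=lambda v: v["engagement"], reverse=True)
    return [v["title"] for v in ranked[:n]]

candidates = [
    {"title": "Cooking tutorial",   "engagement": 0.6, "borderline_score": 0.1},
    {"title": "Outrage bait",       "engagement": 0.9, "borderline_score": 0.8},
    {"title": "Science explainer",  "engagement": 0.7, "borderline_score": 0.0},
]

print(recommend(candidates))
# → ['Science explainer', 'Cooking tutorial']
```

Note that the most "engaging" video never surfaces, yet it remains on the platform and its uploader sees no notice—which is why the practice is so hard for outsiders to verify, and why the ambiguity feeds accusations of political bias.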
Engineers at YouTube and other companies are hesitant to detail their algorithmic tweaks for many reasons; among them is the fact that obscure algorithms are harder to exploit. But Serge Abiteboul, a computer-science professor who was tasked by the French government to advise legislators on online hate speech, argues that verifiable solutions are preferable to hidden ones. YouTube has claimed that, since tweaking its systems in January, it has reduced the number of views for recommended videos containing borderline content and harmful misinformation by half. Without transparency and oversight, however, it’s impossible for independent observers to confirm that drop. “Any supervision that’s accepted by society would be better than regulation done in an opaque manner, by the platforms, themselves, alone,” Abiteboul said.
The company featured videos it liked, banned others outright, and kept borderline videos off the home page. Still, it allowed some toxic speech to lurk in the corners. “We thought, if you just quarantine the borderline stuff, it doesn’t spill over to the decent people,” he recalled. “And, even if it did, it seemed like there were enough people who would just immediately recognize it was wrong, and it would be O.K.” The events of the past few years have convinced Schaffer that this was an error. The increasing efficiency of the recommendation system drew toxic content into the light in ways that YouTube’s early policymakers hadn’t anticipated. In the end, borderline content changed the tenor and effect of the platform as a whole. “Our underlying premises were flawed,” Schaffer said. “We don’t need YouTube to tell us these people exist. And counterspeech is not a fair burden. Bullshit is infinitely more difficult to combat than it is to spread. YouTube should have course-corrected a long time ago.”
Some experts point out that algorithmic tweaks and counterspeech don’t change the basic structure of YouTube—a structure that encourages the mass uploading of videos from unvetted sources. It’s possible that this structure is fundamentally incompatible with a healthy civic discourse.
There are commercial reasons, it turns out, for fighting hate speech: according to a survey by the Anti-Defamation League, fifty-three per cent of Americans reported experiencing online hate or harassment in 2018—rates of bigoted harassment were highest among people who identified as L.G.B.T.Q.—and, in response, many spent less time online or deleted their apps. A study released last year, by Google and Stanford University, defined toxic speech as a “rude, disrespectful, or unreasonable comment that is likely to make you leave a discussion.” As part of the Creators for Change program, YouTube has drawn up lesson plans for teachers which encourage students to “use video to find your voice and bring people together.” Teen-agers posting videos disputing toxic ideas are engaged users, too.
I asked YouTube’s representatives why they didn’t use the Redirect Method to serve Creators for Change videos to people who search for hate speech. If they valued what their ambassadors had to say, why wouldn’t they disseminate those messages as effectively as possible? A representative explained that YouTube doesn’t want to “pick winners.” I brought that message back to Libby Hemphill, the computer-science professor. “I wish they would recognize that they already do pick winners,” she said. “Algorithms make decisions we teach them to make, even deep-learning algorithms. They should pick different winners on purpose.” Schaffer suggested that YouTube’s insistence on the appearance of neutrality is “a kind of Stockholm syndrome. I think they’re afraid of upsetting their big creators, and it has interfered with their ability to be aggressive about implementing their values.”
Brown, for his part, wanted the platform to choose a point of view. But, he told me, “If they make decisions about who they’re going to prop up in the algorithm, and make it more clear, I think they would lose money. I think they might lose power.” He paused. “That’s a big test for these companies right now. How are they going to go down in history?”