• Meet YouTube’s Hidden Laborers Toiling to Keep Ads Off Hateful Videos
    https://www.wired.com/2017/04/zerochaos-google-ads-quality-raters

    Taken together, the scope of the work and the nuance required in assessing videos show that Google still needs human help in dealing with YouTube’s ad problems. “We have many sources of information, but one of our most important sources is people like you,” Google tells raters in a document describing the purpose of their ad-rating work. But while only machine intelligence can grapple with YouTube’s scale, as company execs and representatives have stressed again and again, such efforts will still need to rely on people until Google’s machines (or anyone else’s) get smart enough to distinguish, say, truly offensive speech from other forms of expression on their own.

    “We have always relied on a combination of technology and human reviews to analyze content that has been flagged to us because understanding context in video can be subjective,” says Chi Hea Cho, a spokesperson for Google. “Recently we added more people to accelerate the reviews. These reviews help train our algorithms so they keep improving over time.”

    #digital_labor #google #advertising #AI

    • They read comment sections to flag abusive banter between users. They check all kinds of websites served by Google’s ad network to ensure they meet the company’s standards of quality. They classify sites by category, such as retail or news, and click links in ads to see if they work. And, as their name suggests, they rate the quality of ads themselves.

      (…) In March, however, in the wake of advertiser boycotts, Google asked raters to set that other work aside in favor of a “high-priority rating project” that would consume their workloads “for the foreseeable future,” according to an email the company sent them. This new project meant focusing almost exclusively on YouTube—checking the content of videos or entire channels against a list of things that advertisers find objectionable. “It’s been a huge change,” says one ad rater.

      Raters say their workload suggests that volume and speed are more of a priority than accuracy. In some cases, they’re asked to review hours-long videos in less than two minutes. On anonymous online forums, raters swap time-saving techniques—for instance, looking up rap video lyrics to scan quickly for profanity, or skipping through a clip in 10-second chunks instead of watching the entire thing. A timer keeps track of how long they spend on each video, and while it is only a suggested deadline, raters say it adds a layer of pressure. “I’m worried if I take too long on too many videos in a row I’ll get fired,” one rater tells WIRED.

      (…) “We won’t always be able to tell you what [each] task is for, but it’s always something we consider important,” the company explains in orientation materials for ad raters. “You won’t often hear about the results of your work. In fact, it sometimes might seem like your work just flows into a black hole … Even though you don’t always see the impact, your work is very important, and many people at Google review it very, very closely.”

      (…) To be sure, not all ad raters share the complaints raised by some of their fellow workers. The $15-per-hour rate is still above most cities’ minimum wages. One ad rater told me he was grateful for the opportunity ZeroChaos gave him. “[ZeroChaos] didn’t care about a criminal background when even McDonald’s turned me down,” the rater said. Multiple raters said they’d been close to homelessness or needing to go on food stamps when this job came along. [But at the same time, raters are not guaranteed enough hours in a given week (a minimum of 10 hours/week, with up to 29 hours/week possible) and are barred from working for another company.]

      (…) But churning through human ad raters may just reflect best practices for making AI smarter. Artificial intelligence researchers and industry experts say a regular rotation of human trainers inputting data is better for training AI. “AI needs many perspectives, especially in areas like offensive content,” says Jana Eggers, CEO of AI startup Nara Logics. Even the Supreme Court could not describe obscenity, she points out, citing the “I know it when I see it” threshold test. “Giving ‘the machine’ more eyes to see is going to be a better result.”