• How the AI industry profits from catastrophe | MIT Technology Review
    https://www.technologyreview.com/2022/04/20/1050392/ai-industry-appen-scale-data-labels

    Appen is among dozens of companies that offer data-labeling services for the AI industry. If you’ve bought groceries on Instacart or looked up an employer on Glassdoor, you’ve benefited from such labeling behind the scenes. Most profit-maximizing algorithms, which underpin e-commerce sites, voice assistants, and self-driving cars, are based on deep learning, an AI technique that relies on scores of labeled examples to expand its capabilities. 

    The insatiable demand has created a need for a broad base of cheap labor to manually tag videos, sort photos, and transcribe audio. The market value of sourcing and coordinating that “ghost work,” as it was memorably dubbed by anthropologist Mary Gray and computational social scientist Siddharth Suri, is projected to reach $13.7 billion by 2030.

    Venezuela’s crisis has been a boon for these companies, which suddenly gained some of the cheapest labor ever available. But for Venezuelans like Fuentes, the rise of this fast-growing new industry in her country has been a mixed blessing. On one hand, it’s been a lifeline for those without any other options. On the other, it’s left them vulnerable to exploitation as corporations have lowered their pay, suspended their accounts, or discontinued programs in an ongoing race to offer increasingly low-cost services to Silicon Valley.

    “There are huge power imbalances,” says Julian Posada, a PhD candidate at the University of Toronto who studies data annotators in Latin America. “Platforms decide how things are done. They make the rules of the game.”

    To a growing chorus of experts, the arrangement echoes a colonial past, when empires exploited the labor of more vulnerable countries, extracted profit from them, and drained them of the resources they needed to grow and develop.

    It was, of all things, the old-school auto giants that caused the data-labeling industry to explode.

    German car manufacturers, like Volkswagen and BMW, were panicked that the Teslas and Ubers of the world threatened to bring down their businesses. So they did what legacy companies do when they encounter fresh-faced competition: they wrote blank checks to keep up.

    The tech innovation of choice was the self-driving car. The auto giants began pouring billions into its development, says Florian Alexander Schmidt, a professor at the University of Applied Sciences HTW Dresden who studies crowdwork platforms, pushing the need for data annotation to new levels.

    Like all AI models built on deep learning, self-driving cars need millions, if not billions, of labeled examples to be taught to “see.” These examples come in the form of hours of video footage: every frame is carefully annotated to identify road markings, vehicles, pedestrians, trees, and trash cans for the car to follow or avoid. But unlike AI models that might categorize clothes or recommend news articles, self-driving cars require the highest levels of annotation accuracy. One too many mislabeled frames can be the difference between life and death.
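
    In concrete terms, the output of that annotation work is a structured label record for each frame. The Python sketch below is purely illustrative (the COCO-like shape and every field name are assumptions, not any vendor's actual schema); it shows the kind of bounding-box record a human annotator might produce for a single frame of footage:

        # Illustrative only: a simplified per-frame label record in the spirit
        # of COCO-style annotations. Real schemas vary by vendor and client.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class BoxLabel:
            category: str   # e.g. "pedestrian", "vehicle", "road_marking", "trash_can"
            x: float        # top-left corner of the box, in pixels
            y: float
            width: float
            height: float

        @dataclass
        class FrameAnnotation:
            video_id: str
            frame_index: int   # every frame of the footage gets its own record
            boxes: List[BoxLabel] = field(default_factory=list)

        # One annotated frame: objects the car must track or avoid.
        frame = FrameAnnotation(
            video_id="dashcam_0042",
            frame_index=1337,
            boxes=[
                BoxLabel("pedestrian", x=412.0, y=220.5, width=38.0, height=96.0),
                BoxLabel("vehicle", x=101.0, y=240.0, width=180.0, height=75.0),
            ],
        )

    Multiply a record like this across every frame in hours of footage, and both the scale of the manual labor and the stakes of a single mislabeled box come into focus.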

    For over a decade, Amazon’s crowdworking platform Mechanical Turk, or MTurk, had reigned supreme. Launched in 2005, it was the de facto way for companies to access low-wage labor willing to do piecemeal work. But MTurk was also a generalist platform: as such, it produced varied results and couldn’t guarantee a baseline of quality.

    For some tasks, Scale first runs client data through its own AI systems to produce preliminary labels before posting the results to Remotasks, where human workers correct the errors. For others, according to company training materials reviewed by MIT Technology Review, the company sends the data straight to the platform. Typically, one layer of human workers takes a first pass at labeling; then another reviews the work. Each worker’s pay is tied to speed and accuracy, pushing them to complete tasks both quickly and carefully.
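
    That workflow amounts to a two-stage review pipeline: an optional machine pre-labeling pass, a human correction pass, and a human review pass, with piece-rate pay scaled by an accuracy score. The Python sketch below is a hypothetical rendering of that pattern; the function names and the pay formula are invented for illustration and do not reflect Scale’s actual systems or rates:

        # Hypothetical sketch of the two-pass workflow described above.
        # Names and numbers are invented; nothing here is Scale's real code.
        from typing import Callable, Dict, List

        Label = Dict[str, object]  # e.g. {"frame": 1337, "category": "pedestrian"}

        def run_task(
            items: List[Label],
            prelabel: Callable[[Label], Label],    # machine pass (skipped for some tasks)
            first_pass: Callable[[Label], Label],  # worker 1 corrects machine errors
            review: Callable[[Label], Label],      # worker 2 reviews worker 1's output
        ) -> List[Label]:
            labels = [prelabel(item) for item in items]
            labels = [first_pass(label) for label in labels]
            return [review(label) for label in labels]

        def effective_hourly_pay(units_done: int, hours: float, accuracy: float,
                                 rate_per_unit: float = 0.01) -> float:
            # Piece rate scaled by an accuracy score: working faster raises
            # hourly pay, and a low accuracy score cuts it directly.
            return units_done * rate_per_unit * accuracy / hours

    Under a piece-rate scheme like this, hourly earnings rise only by completing more units per hour, while a low accuracy score cuts pay directly, which is the pressure the article describes.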

    Initially, Scale sought contractors in the Philippines and Kenya. Both were natural fits, with histories of outsourcing, populations that speak excellent English, and, crucially, low wages. However, around the same time, competitors such as Appen, Hive Micro, and Mighty AI’s Spare5 began to see a dramatic rise in signups from Venezuela, according to Schmidt’s research. By mid-2018, an estimated 200,000 Venezuelans had registered for Hive Micro and Spare5, making up 75% of their respective workforces.

    The group now pools tasks together. Anytime a task appears in one member’s queue, that person copies the task-specific URL and sends it to everyone else. Anyone who clicks it can then claim the task as their own, even if it never showed up in their own queue. The system isn’t perfect. Each task has a limited number of units, such as the number of images that need to be labeled, and those units disappear faster when multiple members claim the same task in parallel. But Fuentes says that so long as she’s clicked the link before it goes away, the platform will let her complete whatever units are left, and Appen will pay. “We all help each other out,” she says.

    The group also keeps track of which client IDs should be avoided. Some clients are particularly harsh in grading task performance, which can cause a devastating account suspension. Nearly every member of the group has experienced at least one, Fuentes says. When it happens, you lose your access not only to new tasks but to any earnings that haven’t been withdrawn.

    The one time it happened to Fuentes, she received an email saying she had completed a task with “dishonest answers.” When she appealed, customer service confirmed it was an administrative error. But it still took months of pleading, using Google Translate to write messages in English, before her account was reinstated, according to communications reviewed by MIT Technology Review. (“We … have several initiatives in place to increase the response time,” Golden says. “The reality is that we have thousands of requests a day and respond based on priority.”)

    Simala Leonard, a computer science student at the University of Nairobi who studies AI and worked several months on Remotasks, says the pay for data annotators is “totally unfair.” Google’s and Tesla’s self-driving-car programs are worth billions, he says, and algorithm developers who work on the technology are rewarded with six-figure salaries.

    In parallel with the rise of platforms like Scale, newer data-labeling companies have sought to establish a higher standard for working conditions. They bill themselves as ethical alternatives, offering stable wages and benefits, good on-the-job training, and opportunities for career growth and promotion.

    But this model still accounts for only a tiny slice of the market. “Maybe it improves the lives of 50 workers,” says Milagros Miceli, a PhD candidate at the Technical University of Berlin who studies two such companies, “but it doesn’t mean that this type of economy as it’s structured works in the long run.”

    Such companies are also constrained by players willing to race to the bottom. To keep their prices competitive, the firms similarly source workers from impoverished and marginalized populations—low-income youth, refugees, people with disabilities—who remain just as vulnerable to exploitation, Miceli says.

    #Artificial_intelligence #Annotation #Tags #Labeling #New_exploitation #Data_colonialism