2020

  • Brazil is sliding into techno-authoritarianism | MIT Technology Review
    https://www.technologyreview.com/2020/08/19/1007094/brazil-bolsonaro-data-privacy-cadastro-base/?truid=a497ecb44646822921c70e7e051f7f1a

    For many years, Latin America’s largest democracy was a leader on data governance. In 1995, it created the Brazilian Internet Steering Committee, a multi-stakeholder body to help the country set principles for internet governance. In 2014, Dilma Rousseff’s government pioneered the Marco Civil (Civil Framework), an internet “bill of rights” lauded by Tim Berners-Lee, the inventor of the World Wide Web. Four years later, Brazil’s congress passed a data protection law, the LGPD, closely modeled on Europe’s GDPR.

    Recently, though, the country has veered down a more authoritarian path. Even before the pandemic, Brazil had begun creating an extensive data-collection and surveillance infrastructure. In October 2019, President Jair Bolsonaro signed a decree compelling all federal bodies to share most of the data they hold on Brazilian citizens, from health records to biometric information, and consolidate it in a vast master database, the Cadastro Base do Cidadão (Citizen’s Basic Register). With no debate or public consultation, the measure took many people by surprise.

    In lowering barriers to the exchange of information, the government says, it hopes to increase the quality and consistency of data it holds. This could—according to the official line—improve public services, cut down on voter fraud, and reduce bureaucracy. In a country with some 210 million people, such a system could speed up the delivery of social welfare and tax benefits, and make public policies more efficient.

    But critics have warned that under Bolsonaro’s far-right leadership, this concentration of data will be used to abuse personal privacy and civil liberties. And the covid-19 pandemic appears to be accelerating the country’s slide toward a surveillance state. Read the full story.

    #Brésil #Surveillance #Vie_privée #Législation

  • Facebook is training robot assistants to hear as well as see
    https://www.technologyreview.com/2020/08/21/1007523/facebook-ai-robot-assistants-hear-and-see

    The company’s AI lab is pushing the boundaries of its virtual simulation platform to train AI agents to carry out tasks like “Get my ringing phone.” In June 2019, Facebook’s AI lab, FAIR, released AI Habitat, a new simulation platform for training AI agents. It allowed agents to explore various realistic virtual environments, like a furnished apartment or cubicle-filled office. The AI could then be ported into a robot, which would gain the smarts to navigate through the real world without (...)

    #Facebook #algorithme #robotique #son #écoutes #surveillance

  • The problems AI has today go back centuries
    https://www.technologyreview.com/2020/07/31/1005824/decolonial-ai-for-everyone

    Algorithmic discrimination and “ghost work” didn’t appear by accident. Understanding their long, troubling history is the first step toward fixing them. In March of 2015, protests broke out at the University of Cape Town in South Africa over the campus statue of British colonialist Cecil Rhodes. Rhodes, a mining magnate who had gifted the land on which the university was built, had committed genocide against Africans and laid the foundations for apartheid. Under the rallying banner of “Rhodes (...)

    #CambridgeAnalytica/Emerdata #DeepMind #algorithme #éthique #racisme #discrimination #GigEconomy (...)

    ##CambridgeAnalytica/Emerdata ##travail

  • The UK exam debacle reminds us that algorithms can’t fix broken systems
    https://www.technologyreview.com/2020/08/20/1007502/uk-exam-algorithm-cant-fix-broken-system

    The problem began when the exam regulator lost sight of the ultimate goal—and pushed for standardization above all else. When the UK first set out to find an alternative to school leaving qualifications, the premise seemed perfectly reasonable. Covid-19 had derailed any opportunity for students to take the exams in person, but the government still wanted a way to assess them for university admission decisions. Chief among its concerns was an issue of fairness. Teachers had already made (...)

    #algorithme #biais #discrimination #enseignement #pauvreté

    ##pauvreté

  • Israeli phone hacking company faces court fight over sales to Hong Kong
    https://www.technologyreview.com/2020/08/25/1007617/israeli-phone-hacking-company-faces-court-fight-over-sales-to-hong

    “The workers inside the company didn’t join to help the Chinese dictatorship,” says one human rights lawyer. Human rights advocates filed a new court petition against the Israeli phone hacking company Cellebrite, urging Israel’s ministry of defense to halt the firm’s exports to Hong Kong, where security forces have been using the technology in crackdowns against dissidents as China takes greater control. In July, police court filings revealed that Cellebrite’s phone hacking technology has (...)

    #Apple #Cellebrite #iPhone #smartphone #activisme #hacking #surveillance #écoutes

  • Podcast: Want consumer privacy? Try China
    https://www.technologyreview.com/2020/08/19/1007425/data-privacy-china-gdpr

    Forget the idea that China doesn’t care about privacy—its citizens will soon have much greater consumer privacy protections than Americans. The narrative in the US that the Chinese don’t care about data privacy is simply misguided. It’s true that the Chinese government has built a sophisticated surveillance apparatus (with the help of Western companies), and continues to spy on its citizenry. But when it comes to what companies can do with people’s information, China is rapidly moving toward a (...)

    #Alibaba #Apple #ByteDance #Cisco #Google #Nokia_Siemens #Nortel_Networks #TikTok #Facebook #WeChat #Weibo #QRcode #smartphone #censure #BHATX #BigData #COVID-19 #GAFAM #santé #surveillance (...)

    ##santé ##[fr]Règlement_Général_sur_la_Protection_des_Données__RGPD_[en]General_Data_Protection_Regulation__GDPR_[nl]General_Data_Protection_Regulation__GDPR_

  • The UK exam debacle reminds us that algorithms can’t fix broken systems | MIT Technology Review
    https://www.technologyreview.com/2020/08/20/1007502/uk-exam-algorithm-cant-fix-broken-system/?truid=a497ecb44646822921c70e7e051f7f1a

    Nearly 40% of students ended up receiving exam scores downgraded from their teachers’ predictions, threatening to cost them their university spots. Analysis of the algorithm also revealed that it had disproportionately hurt students from working-class and disadvantaged communities and inflated the scores of students from private schools. On August 16, hundreds chanted “Fuck the algorithm” in front of the UK’s Department of Education building in London to protest the results. By the next day, Ofqual had reversed its decision. Students will now be awarded either their teacher’s predicted scores or the algorithm’s—whichever is higher.

    The debacle feels like a textbook example of algorithmic discrimination. Those who have since dissected the algorithm have pointed out how predictable it was that things would go awry; it was trained, in part, not just on each student’s past academic performance but also on the past entrance-exam performance of the student’s school. The approach could only have led to punishment of outstanding outliers in favor of a consistent average.
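
    A minimal sketch of that dynamic, with made-up numbers and an invented weighting (Ofqual’s actual model was considerably more complex): if a predicted grade is anchored to the school’s historical results, a strong student at a historically weak school is pulled down toward that school’s average, while an average student at a historically strong school is lifted.

      # Hypothetical illustration of distribution-based standardization.
      # NOT Ofqual's actual model; the weighting and data below are invented.
      def standardized_grade(student_prior, school_history, w_school=0.7):
          """Blend a student's own prior attainment with their school's past results.

          student_prior  -- the student's own prior attainment (0-100 scale)
          school_history -- past exam scores at the student's school
          w_school       -- weight given to the school's history (illustrative)
          """
          school_avg = sum(school_history) / len(school_history)
          return w_school * school_avg + (1 - w_school) * student_prior

      # An outstanding student (90) at a school that historically averages 55:
      print(standardized_grade(90, [52, 58, 60, 50, 55]))   # 65.5, far below their own record

      # An average student (70) at a private school that historically averages 85:
      print(standardized_grade(70, [84, 88, 82, 86, 85]))   # 80.5, above their own record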

    But the root of the problem runs deeper than bad data or poor algorithmic design. The more fundamental errors were made before Ofqual even chose to pursue an algorithm. At bottom, the regulator lost sight of the ultimate goal: to help students transition into university during anxiety-ridden times. In this unprecedented situation, the exam system should have been completely rethought.

    “There was just a spectacular failure of imagination,” says Hye Jung Han, a researcher at Human Rights Watch in the US, who focuses on children’s rights and technology. “They just didn’t question the very premise of so many of their processes even when they should have.”

    The objective completely shaped the way Ofqual went about pursuing the problem. The need for standardization overruled everything else. The regulator then logically chose one of the best standardization tools, a statistical model, for predicting a distribution of entrance-exam scores for 2020 that would match the distribution from 2019.

    Had Ofqual chosen the other objective (helping students transition fairly into university, rather than standardizing grades to match past distributions), things would have gone quite differently. It likely would have scrapped the algorithm and worked with universities to change how the exam grades are weighted in their admissions processes. “If they just looked one step past their immediate problem and looked at what are the purpose of grades—to go to university, to be able to get jobs—they could have flexibly worked with universities and with workplaces to say, ‘Hey, this year grades are going to look different, which means that any important decisions that traditionally were made based off of grades also need to be flexible and need to be changed,’” says Han.

    Ofqual’s failures are not unique. In a report published last week by the Oxford Internet Institute, researchers found that one of the most common traps organizations fall into when implementing algorithms is the belief that they will fix really complex structural issues. These projects “lend themselves to a kind of magical thinking,” says Gina Neff, an associate professor at the institute, who coauthored the report. “Somehow the algorithm will simply wash away any teacher bias, wash away any attempt at cheating or gaming the system.”

    But the truth is, algorithms cannot fix broken systems. They inherit the flaws of the systems in which they’re placed. In this case, the students and their futures ultimately bore the brunt of the harm. “I think it’s the first time that an entire nation has felt the injustice of an algorithm simultaneously,” says mathematician Hannah Fry.

    #Algorithme #Ofqual #Fuck_the_algorithm #Inégalités #Grande_Bretagne #Education

  • Is a successful contact tracing app possible? These countries think so. | MIT Technology Review
    https://www.technologyreview.com/2020/08/10/1006174/covid-contract-tracing-app-germany-ireland-success/?truid=a497ecb44646822921c70e7e051f7f1a

    If contact tracing apps are following Gartner’s famous hype cycle, it’s hard to avoid the conclusion they are now firmly in the “trough of disillusionment.” Initial excitement that they could be a crucial part of the arsenal against covid-19 has given way to fears it could all come to nothing, despite large investments of money and time. Country after country has seen low take-up, and in the case of Norway and the UK, apps were even abandoned.

    The US, meanwhile, is very late to the party. Singapore launched its app, TraceTogether, back in March, and Switzerland became the first country to release an app using Google and Apple’s exposure notification system in May.

    It took until last week—that is, three months later—for Virginia to become the first US state to launch an app using the Apple-Google system. A nationwide app in the United States seems out of the question given the lack of a coordinated federal response, but at least three more states are planning to launch similar services.

    3. Work in the open (or you won’t gain public trust)

    Both Ireland and Germany have made the source code for their apps open for anyone to inspect. “We did that right from the start, so community feedback could go into the code before it went live,” says Thomas Klingbeil, who is responsible for the architecture of the Corona-Warn-App.

    “The stance was that we’d use every tool available, including testing, distancing, masks, but we’d combine it with technology.”
    Peter Lorenz, Germany’s Corona-Warn-App

    Privacy and security concerns loom large for teams building these systems. Germans are particularly savvy about data protection, and developers there were conscious of the example of Norway, which had to suspend use of its app after criticism from its data privacy watchdog. Germany switched from building its own centralized app to one based on the Apple-Google API almost immediately, which proved to be a wise decision. Ireland did the same. And they both designed their apps with privacy in mind from the start, following a principle of “collect as little data as possible.” All of the information gathered by the apps stays on people’s phones rather than being sent to central servers. It is encrypted and automatically deleted after 14 days.
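
    The privacy-first design described here (everything stays on the phone, anonymous identifiers only, deletion after 14 days) can be sketched roughly as follows. This is a simplified illustration of a decentralized exposure-notification scheme, not the actual Apple-Google API, whose key derivation and cryptography are far more involved.

      # Rough sketch of a decentralized exposure-notification scheme.
      # Illustrative only; not the Apple-Google Exposure Notification API.
      import os
      import datetime

      RETENTION_DAYS = 14  # data older than this is deleted, as described above

      class Phone:
          def __init__(self):
              self.my_daily_keys = {}   # date -> random key broadcast that day
              self.observed_keys = {}   # date -> set of keys heard nearby

          def new_day(self, today):
              # Fresh random identifier each day; nothing personal is encoded in it.
              self.my_daily_keys[today] = os.urandom(16)
              # Enforce on-device retention: forget anything older than 14 days.
              cutoff = today - datetime.timedelta(days=RETENTION_DAYS)
              self.my_daily_keys = {d: k for d, k in self.my_daily_keys.items() if d >= cutoff}
              self.observed_keys = {d: s for d, s in self.observed_keys.items() if d >= cutoff}

          def record_contact(self, day, other_key):
              # Stored locally only; never sent to a central server.
              self.observed_keys.setdefault(day, set()).add(other_key)

          def check_exposure(self, published_positive_keys):
              # Users who test positive voluntarily publish their recent keys;
              # matching happens on the device, so the server never learns who met whom.
              return any(k in published_positive_keys
                         for keys in self.observed_keys.values() for k in keys)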

    #Contact_tracing #COVID-19

  • The human cost of a WeChat ban: severing a hundred million ties | MIT Technology Review
    https://www.technologyreview.com/2020/08/13/1006631/wechat-ban-severs-a-hundred-million-ties/?truid=a497ecb44646822921c70e7e051f7f1a

    The US hurting itself

    There’s a reason why WeChat is the only platform still available for communicating with people in China. It’s because the Chinese government banned everything else. First it was Facebook and Google, then Telegram and WhatsApp. “It’s not as if there’s no fault on the Chinese side for this,” Webster says.

    But retaliating in turn is also not the solution. “If you think about what the US is doing, it’s basically learning from China,” says Youyou Zhou, a Chinese national who works as a journalist in the US and relies on WeChat to talk to sources and loved ones. “It’s establishing cyber sovereignty and claiming to protect user data in the US by using political action and legal means to fend off competition. It’s just not what you would expect a liberal and free country would do.”

    Over time, both Webster and Zhou worry that this cleaving will hurt the US. What’s happening in China right now, Webster says, is “legitimately very dark,” including the escalating oppression of Muslim Uyghurs in Xinjiang and the passage of the National Security Law in Hong Kong. But the Trump administration’s actions are against the US’s self-interests, he says. “If we set ourselves up for a new cold war and there’s no ability to monitor actual events in China, I think we could very well miss opportunities to have better outcomes in the long term. Essentially tearing down any connection between the two places is a recipe for enduring conflict.”

    #WeChat #USA-Chine #Fin_du_global_internet #Culture_numérique

  • 8 million people, 14 alerts: why some covid-19 apps are staying silent
    https://www.technologyreview.com/2020/07/10/1005027/8-million-people-14-alerts-why-some-covid-19-apps-are-staying-sile

    Critics have rounded on contact tracing apps in France and Australia for sending out almost no virus notifications. But experts say it’s not a total failure—as long as we learn what went wrong. When France launched its app for digital contact tracing, it looked like a possible breakthrough for the virus-ravaged country. After going live in June, StopCovid was downloaded by 2 million people in a short time, and digital affairs minister Cédric O said that “from the first downloads, the app helps (...)

    #Apple #Google #algorithme #Android #Bluetooth #Corona-Warn-App #COVIDSafe_ #iPhone #smartphone #StopCovid #contactTracing #technologisme #COVID-19 (...)

    ##santé

  • Human rights activists want to use AI to help prove war crimes in court
    https://www.technologyreview.com/2020/06/25/1004466/ai-could-help-human-rights-activists-prove-war-crimes

    It would take years for humans to scour the tens of thousands of hours of footage that document violations in Yemen. With machine learning, it takes just days. In 2015, alarmed by an escalating civil war in Yemen, Saudi Arabia led an air campaign against the country to defeat what it deemed a threatening rise of Shia power. The intervention, launched with eight other largely Sunni Arab states, was meant to last only a few weeks, Saudi officials had said. Nearly five years later, it still (...)

    #algorithme #activisme #criminalité #arme #reconnaissance #Amnesty #GlobalLegalActionNetwork-GLAN_

    ##criminalité

  • Are we making spacecraft too autonomous? | MIT Technology Review
    https://www.technologyreview.com/2020/07/03/1004788/spacecraft-spacefight-autonomous-software-ai/?truid=a497ecb44646822921c70e7e051f7f1a

    Wasn’t the Neil Armstrong syndrome enough for them?

    When SpaceX’s Crew Dragon took NASA astronauts to the ISS near the end of May, the launch brought back a familiar sight. For the first time since the space shuttle was retired, American rockets were launching from American soil to take Americans into space.

    Inside the vehicle, however, things couldn’t have looked more different. Gone was the sprawling dashboard of lights and switches and knobs that once dominated the space shuttle’s interior. All of it was replaced with a futuristic console of multiple large touch screens that cycle through a variety of displays. Behind those screens, the vehicle is run by software that’s designed to get into space and navigate to the space station completely autonomously.

    “Growing up as a pilot, my whole career, having a certain way to control a vehicle—this is certainly different,” Doug Hurley told NASA TV viewers shortly before the SpaceX mission. Instead of calling for a hand on the control stick, navigation is now a series of predetermined inputs. The SpaceX astronauts may still be involved in decision-making at critical junctures, but much of that function has moved out of their hands.

    But overrelying on software and autonomous systems in spaceflight creates new opportunities for problems to arise. That’s especially a concern for many of the space industry’s new contenders, who aren’t necessarily used to the kind of aggressive and comprehensive testing needed to weed out problems in software and are still trying to strike a good balance between automation and manual control.

    Nowadays, a few errors in over one million lines of code could spell the difference between mission success and mission failure. We saw that late last year, when Boeing’s Starliner capsule (the other vehicle NASA is counting on to send American astronauts into space) failed to make it to the ISS because of a glitch in its internal timer. A human pilot could have overridden the glitch that ended up burning Starliner’s thrusters prematurely. NASA administrator Jim Bridenstine remarked soon after Starliner’s problems arose: “Had we had an astronaut on board, we very well may be at the International Space Station right now.”

    But it was later revealed that many other errors in the software had not been caught before launch, including one that could have led to the destruction of the spacecraft. And that was something human crew members could easily have overridden.

    Boeing is certainly no stranger to building and testing spaceflight technologies, so it was a surprise to see the company fail to catch these problems before the Starliner test flight. “Software defects, particularly in complex spacecraft code, are not unexpected,” NASA said when the second glitch was made public. “However, there were numerous instances where the Boeing software quality processes either should have or could have uncovered the defects.” Boeing declined a request for comment.

    Space, however, is a unique environment to test for. The conditions a spacecraft will encounter aren’t easy to emulate on the ground. While an autonomous vehicle can be taken out of the simulator and eased into lighter real-world conditions to refine the software little by little, you can’t really do the same thing for a launch vehicle. Launch, spaceflight, and a return to Earth are actions that either happen or they don’t—there is no “light” version.

    This, says Schreier, is why AI is such a big deal in spaceflight nowadays—you can develop an autonomous system that is capable of anticipating those conditions, rather than requiring the conditions to be learned during a specific simulation. “You couldn’t possibly simulate on your own all the corner cases of the new hardware you’re designing,” he says.

    Raines adds that in contrast to the slower approach NASA takes for testing, private companies are able to move much more rapidly. For some, like SpaceX, this works out well. For others, like Boeing, it can lead to some surprising hiccups.

    Ultimately, “the worst thing you can do is make something fully manual or fully autonomous,” says Nathan Uitenbroek, another NASA engineer working on Orion’s software development. Humans have to be able to intervene if the software is glitching up or if the computer’s memory is destroyed by an unanticipated event (like a blast of cosmic rays). But they also rely on the software to inform them when other problems arise.

    NASA is used to figuring out this balance, and it has redundancy built into its crewed vehicles. The space shuttle operated on multiple computers using the same software, and if one had a problem, the others could take over. A separate computer ran on entirely different software, so it could take over the entire spacecraft if a systemic glitch was affecting the others. Raines and Uitenbroek say the same redundancy is used on Orion, which also includes a layer of automatic function that bypasses the software entirely for critical functions like parachute release.
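
    The redundancy described above (several computers running identical software voted against one another, plus a backup running entirely different code) can be sketched roughly like this. It is an illustration of the general pattern, not NASA’s or SpaceX’s actual flight software.

      # Rough sketch of majority voting across redundant flight computers,
      # with a dissimilar backup. Illustrative only.
      from collections import Counter

      def primary_vote(outputs):
          """Majority-vote the outputs of the primary computers (same software)."""
          command, count = Counter(outputs).most_common(1)[0]
          healthy = count > len(outputs) // 2
          return command, healthy

      def flight_command(primary_outputs, backup_output):
          """Use the voted primary command; if the primaries cannot agree,
          fall back to the backup computer, which runs different software
          and so should not share a systemic bug with the primaries."""
          command, healthy = primary_vote(primary_outputs)
          return command if healthy else backup_output

      # One primary disagrees (say, a radiation upset); the majority still wins.
      print(flight_command(["burn_2s", "burn_2s", "burn_2s", "coast"], "coast"))  # burn_2s

      # Systemic glitch, no majority: the dissimilar backup takes over.
      print(flight_command(["burn_2s", "coast", "abort", "burn_9s"], "coast"))    # coast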

    On the Crew Dragon, there are instances where astronauts can manually initiate abort sequences, and where they can override software on the basis of new inputs. But the design of these vehicles means it’s more difficult now for the human to take complete control. The touch-screen console is still tied to the spacecraft’s software, and you can’t just bypass it entirely when you want to take over the spacecraft, even in an emergency.

    #Espace #Logiciel #Intelligence_artificielle #Sécurité

  • A Caribbean beach could offer a crucial test in the fight to slow climate change | MIT Technology Review
    https://www.technologyreview.com/2020/06/22/1004218/how-green-sand-could-capture-billions-of-tons-of-carbon-dioxide

    Scientists are taking a harder look at using carbon-capturing rocks to counteract climate change, but lots of uncertainties remain.

    In the Caribbean, a #sable_vert (green sand) beach that absorbs #CO2
    https://www.linfodurable.fr/environnement/aux-caraibes-un-plage-de-sable-vert-qui-absorbe-le-co2-18673

    To carry out their experiment, the researchers used a method called “forced weathering” (enhanced weathering). The process lets #olivine turn carbon dioxide into coral or limestone rock, and works mainly through the breakdown of this volcanic mineral on contact with the waves. It is an inexpensive approach, at around 10 dollars per tonne of carbon treated, which the NGO hopes to deploy at large scale, as the founders of Project Vesta explain on their website: “Our vision is to help reverse climate change by turning a trillion tonnes of CO2 into rock.”
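
    The article does not spell out the chemistry; the reaction usually cited for olivine weathering (written here for its magnesium end-member, forsterite) is sketched below, together with the back-of-the-envelope cost implied by the figures above. Both are assumptions layered on the article’s numbers, not claims from Project Vesta.

      % Commonly cited seawater weathering reaction for forsterite olivine
      % (an assumption for illustration; the dissolved bicarbonate is what can
      % later be locked up as carbonate rock).
      \[
      \mathrm{Mg_2SiO_4} + 4\,\mathrm{CO_2} + 4\,\mathrm{H_2O}
        \;\longrightarrow\; 2\,\mathrm{Mg^{2+}} + 4\,\mathrm{HCO_3^{-}} + \mathrm{H_4SiO_4}
      \]
      % Taking the quoted 10 dollars per tonne at face value against the stated
      % trillion-tonne ambition (and ignoring the carbon-vs-CO2 distinction):
      \[
      10\ \$/\mathrm{t} \times 10^{12}\ \mathrm{t} = 10^{13}\ \$ \approx 10\ \text{trillion dollars}
      \]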

    #climat

  • Why tech didn’t save us from covid-19
    https://www.technologyreview.com/2020/06/17/1003312/why-tech-didnt-save-us-from-covid-19

    America’s paralysis reveals a deep and fundamental flaw in how the nation thinks about innovation. Technology has failed the US and much of the rest of the world in its most important role: keeping us alive and healthy. As I write this, more than 380,000 are dead, the global economy is in ruins, and the covid-19 pandemic is still raging. In an age of artificial intelligence, genomic medicine, and self-driving cars, our most effective response to the outbreak has been mass quarantines, a (...)

    #technologisme #COVID-19 #santé

    ##santé

  • A new US bill would ban the police use of facial recognition
    https://www.technologyreview.com/2020/06/26/1004500/a-new-us-bill-would-ban-the-police-use-of-facial-recognition/?truid=e240178e6fc656e71bbee1dbf6ce3de7

    The news: US Democratic lawmakers have introduced a bill that would ban the use of facial recognition technology by federal law enforcement agencies. Specifically, it would make it illegal for any federal agency or official to “acquire, possess, access, or use” biometric surveillance technology in the US. It would also require state and local law enforcement to bring in similar bans in order to receive federal funding. The Facial Recognition and Biometric Technology Moratorium Act was (...)

    #Microsoft #IBM #Amazon #algorithme #CCTV #Rekognition #biométrie #facial #législation (...)

    ##reconnaissance

  • An elegy for cash: the technology we might never replace
    https://www.technologyreview.com/2020/01/03/131029/an-elegy-for-cash-the-technology-we-might-never-replace

    Cash is gradually dying out. Will we ever have a digital alternative that offers the same mix of convenience and freedom? Think about the last time you used cash. How much did you spend? What did you buy, and from whom? Was it a one-time thing, or was it something you buy regularly? Was it legal? If you’d rather keep all that to yourself, you’re in luck. The person in the store (or on the street corner) may remember your face, but as long as you didn’t reveal any identifying (...)

    #Alibaba #Apple #Facebook #WeChat #cryptage #cryptomonnaie #bitcoin #Libra #QRcode #WeChatPay #technologisme #BigData (...)

    ##discrimination

  • The two-year fight to stop Amazon from selling face recognition to the police
    https://www.technologyreview.com/2020/06/12/1003482/amazon-stopped-selling-police-face-recognition-fight/?truid=e240178e6fc656e71bbee1dbf6ce3de7

    This week’s moves from Amazon, Microsoft, and IBM mark a major milestone for researchers and civil rights advocates in a long and ongoing fight over face recognition in law enforcement. In the summer of 2018, nearly 70 civil rights and research organizations wrote a letter to Jeff Bezos demanding that Amazon stop providing face recognition technology to governments. As part of an increased focus on the role that tech companies were playing in enabling the US government’s tracking and (...)

    #Megvii #Microsoft #Ring #IBM #Amazon #Flickr #algorithme #CCTV #Rekognition #sonnette #biométrie #police #racisme #consentement #facial #reconnaissance #sexisme #vidéo-surveillance #BlackLivesMatter #discrimination #scraping #surveillance (...)

    ##ACLU

  • Protest misinformation is riding on the success of pandemic hoaxes | MIT Technology Review
    https://www.technologyreview.com/2020/06/10/1002934/protest-propaganda-is-riding-on-the-success-of-pandemic-hoaxes

    Misinformation about police brutality protests is being spread by the same sources as covid-19 denial. The troubling results suggest what might come next.

    by Joan Donovan
    June 10, 2020

    [Image: Police confront Black Lives Matter protesters in Los Angeles. Photo by Joseph Ngabo on Unsplash.]
    After months spent battling covid-19, the US is now gripped by a different fever. As the video of George Floyd being murdered by Derek Chauvin circulated across social media, the streets around America—and then the world—have filled with protesters. Floyd’s name has become a public symbol of injustice in a spiraling web of interlaced atrocities endured by Black people, including Breonna Taylor, who was shot in her home by police during a misdirected no-knock raid, and Ahmaud Arbery, who was murdered by a group of white vigilantes. 

    Meanwhile, on the digital streets, a battle over the narrative of protest is playing out in separate worlds, where truth and disinformation run parallel. 

    In one version, tens of thousands of protesters are marching to force accountability on the US justice system, shining a light on policing policies that protect white lives and property above anything else—and are being met with the same brutality and indifference they are protesting against. In the other, driven by Donald Trump, US attorney general Bill Barr, and the MAGA coalition, an alternative narrative contends that anti-fascist protesters are traveling by bus and plane to remote cities and towns to wreak havoc. This notion is inspiring roving gangs of mostly white vigilantes to take up arms. 

    These armed activists are demographically very similar to those who spread misinformation and confusion about the pandemic; the same Facebook groups have spread hoaxes about both; it’s the same older Republican base that shares most fake news. 

    The fact that those who accept protest misinformation also rose up to challenge stay-at-home orders through “reopen” rallies is no coincidence: these audiences have been primed by years of political misinformation and then driven to a frenzy by months of pandemic conspiracy theories. The infodemic helped reinforce routes for spreading false stories and rumors; it’s been the perfect breeding ground for misinformation.

    How it happened
    When covid-19 hit like a slow-moving hurricane, most people took shelter and waited for government agencies to create a plan for handling the disease. But as the weeks turned into months, and the US still struggled to provide comprehensive testing, some began to agitate. Small groups, heavily armed with rifles and misinformation, held “reopen” rallies that were controversial for many reasons. They often relied on claims that the pandemic was a hoax perpetrated by the Democratic Party, which was colluding with the billionaire donor class and the World Health Organization. The reopen message was amplified by the anti-vaccination movement, which exploited the desire for attention among online influencers and circulated rampant misinformation suggesting that a potential coronavirus vaccine was part of a conspiracy in which Bill Gates planned to implant microchips in recipients. 

    These rallies did not gain much legitimacy in the eyes of politicians, press, or the public, because they seemed unmoored from the reality of covid-19 itself. 

    But when the Black Lives Matter protests emerged and spread, it opened a new political opportunity to muddy the waters. President Trump laid the foundation by threatening to invade cities with the military after applying massive force in DC as part of a staged television event. The cinema of the state was intended to counter the truly painful images of the preceding week of protests, where footage of the police firing rubber bullets, gas, and flash grenades dominated media coverage of US cities on fire. Rather than acknowledge the pain and anguish of Black people in the US, Trump went on to blame “Antifa” for the unrest. 

    [Screenshot: @Antifa_US was suspended by Twitter, but this screenshot continues to circulate among right-wing groups on Facebook.]
    For many on the left, antifa simply means “anti-fascist.” For many on the right, however, “Antifa” has become a stand-in moniker for the Democratic Party. In 2017, we similarly saw right-wing pundits and commentators try to rebrand their political opponents as the “alt-left,” but that failed to stick. 

    Shortly after Trump’s declaration, several Twitter accounts outed themselves as influence operations bent on calling for violence and collecting information about anti-fascists. Twitter, too, confirmed that an “Antifa” account, running for three years, was tied to a now-defunct white nationalist organization that had helped plan the Unite the Right rally that killed Heather Heyer and injured hundreds more. Yet the “alt-right” and other armed militia groups that planned this gruesome event in Charlottesville have not drawn this level of concern from federal authorities.

    [Screenshot: @OCAntifa posted this before the account was suspended on Twitter for platform manipulation.]
    Disinformation stating that the protests were being inflamed by Antifa quickly traveled up the chain from impostor Twitter accounts and throughout the right-wing media ecosystem, where it still circulates among calls for an armed response. This disinformation, coupled with widespread racism, is why armed groups of white vigilantes are lining the streets in different cities and towns. Simply put, when disinformation mobilizes, it endangers the public.

    What next?
    As researchers of disinformation, we have seen this type of attack play out before. It’s called “source hacking”: a set of tactics where media manipulators mimic the patterns of their opponents, try to obfuscate the sources of their information, and then slowly become more and more dangerous in their rhetoric. Now that Trump says he will designate Antifa a domestic terror group, investigators will have to take a hard look at social-media data to discern who was actually calling for violence online. They will surely unearth this widespread disinformation campaign of far-right agitators.

    That doesn’t mean that every call to action is suspect: all protests are poly-vocal and many tactics and policy issues remain up for discussion, including the age-old debate on reform vs. revolution. But what is miraculous about public protest is how easy it is to perceive and document the demands of protesters on the ground. 

    Moments like this call for careful analysis. Journalists, politicians, and others must not waver in their attention to the ways Black organizers are framing the movement and its demands. As a researcher of disinformation, I am certain there will be attempts to co-opt or divert attention from the movement’s messaging, attack organizers, and stall the progress of this movement. Disinformation campaigns tend to proceed cyclically as media manipulators learn to adapt to new conditions, but the old tactics still work—such as impostor accounts, fake calls to action (like #BaldForBLM), and grifters looking for a quick buck. 

    Crucially, there is an entire universe of civil society organizations working to build this movement for the long haul, and they must learn to counter misinformation on the issues they care about. More than just calling for justice, the Movement for Black Lives and Color of Change are organizing actions to move police resources into community services. Media Justice is doing online trainings under the banner of #defendourmovements, and Reclaim the Block is working to defund the police in Minneapolis. 

    Through it all, one thing remains true: when thousands of people show up to protest in front of the White House, it is not reducible to fringe ideologies or conspiracy theories about invading outside agitators. People are protesting during a pandemic because justice for Black lives can’t wait for a vaccine.

    —Joan Donovan, PhD, is Research Director of the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School.

    #Fake_news #Extrême_droite #Etats_unis

  • How to turn filming the police into the end of police brutality | MIT Technology Review
    https://www.technologyreview.com/2020/06/10/1002913/how-to-end-police-brutality-filming-witnessing-legislation

    Of all the videos that were released after George Floyd’s murder, the one recorded by 17-year-old Darnella Frazier on her phone is the most jarring. It shows Officer Derek Chauvin kneeling on Floyd’s neck as Floyd pleads, “Please, please, please, I can’t breathe,” and it shows Chauvin refusing to budge. A criminal complaint later states that Chauvin pinned Floyd’s neck for 8 minutes and 46 seconds, past the point where Floyd fell unconscious. In the footage, Chauvin lifts his head and locks eyes with Frazier, unmoved—a chilling and devastating image.

    Documentation like this has galvanized millions of people to flood the streets in over 450 protests in the US and hundreds more in dozens of countries around the world. It’s not just this killing, either. Since the protests have broken out, videos capturing hundreds more incidents of police brutality have been uploaded to social media. A mounted officer tramples a woman. Cop cars accelerate into a crowd. Officers shove an elderly man, who bashes his head when he hits the pavement, and walk away as his blood pools on the ground. One supercut of 14 videos, titled “This Is a Police State,” has been viewed nearly 50 million times.

    Once again, footage taken on a smartphone is catalyzing action to end police brutality once and for all. But Frazier’s video also demonstrates the challenge of turning momentum into lasting change. Six years ago, the world watched as Eric Garner uttered the same words—“I can’t breathe”—while NYPD Officer Daniel Pantaleo strangled him in a chokehold. Four years ago, we watched again as Philando Castile, a 15-minute drive from Minneapolis, bled to death after being shot five times by Officer Jeronimo Yanez at a traffic stop. Both incidents also led to mass protests, and yet we’ve found ourselves here again.

    So how do we turn all this footage into something more permanent—not just protests and outrage, but concrete policing reform? The answer involves three phases: first, we must bear witness to these injustices; second, we must legislate at the local, state, and federal levels to dismantle systems that protect the police when they perpetrate such acts; and finally, we should organize community-based “copwatching” programs to hold local police departments accountable.

    I. Witnessing

    For example, during the first half of the 1800s, freed slaves like Frederick Douglass relied on newspapers and the spoken word to paint graphic depictions of bondage and galvanize the formation of abolitionist groups. During the early 1900s, investigative journalist Ida B. Wells carefully tabulated statistics on the pervasiveness of lynching and worked with white photographers to capture gruesome images of these attacks in places she couldn’t go. Then in the mid-1950s, black civil rights leaders like Martin Luther King Jr. strategically attracted broadcast television cameras to capture the brutal scenes of police dogs and water cannons being turned on peaceful demonstrations.

    Witnessing, in other words, played a critical role in shocking the majority-white public and eliciting international attention. Whites and others allied with black Americans until the support for change reached critical mass.

    Today smartphone witnessing serves the same purpose. It uses imagery to prove widespread, systemic abuse and provoke moral outrage. But compared with previous forms of witnessing, smartphones are also more accessible, more prevalent, and—most notably—controlled in many cases by the hands of black witnesses. “That was a real transition,” says Richardson—“from black people who were reliant upon attracting the gaze of mainstream media to us not needing that mainstream middleman and creating the media for ourselves.”

    II. Legislation

    But filming can’t solve everything. The unfortunate reality is that footage of one-off instances of police brutality rarely leads to the conviction of the officers involved. Analysis by Witness suggests that it usually leads, at most, to victims’ being acquitted of false charges, if they are still alive.

    Some of this can be changed with better tactics: Witness has found, for example, that it can be more effective to withhold bystander footage until after the police report is released. That way police don’t have an opportunity to write their report around the evidence, or to justify their actions by claiming that the crucial events happened off screen. This is what the witness Feidin Santana did after the fatal shooting of Walter Scott, which played a crucial role in getting the police officer charged with second-degree murder.

    But then again, this doesn’t always work. The deeper problem is the many layers of entrenched legal protections afforded the police in the US, which limit how effective video evidence can be.

    That’s why smartphone witnessing must be coupled with clear policy changes, says Kayyali. Fortunately, given the broad base of support that has coalesced thanks to smartphone witnessing, passing such legislation has also grown more possible.

    Since Floyd’s death, a coalition of activists from across the political spectrum, described by a federal judge as “perhaps the most diverse amici ever assembled,” has asked the US Supreme Court to revisit qualified immunity.

    III. Copwatching

    So we enter phase three: thinking about how to actually change police behavior. An answer may be found with Andrea Pritchett, who has been documenting local police misconduct in Berkeley, California, for 30 years.

    Pritchett is the founder of Berkeley Copwatch, a community-based, volunteer-led organization that aims to increase local police accountability. Whereas bystander videos rely on the coincidental presence of filmers, Copwatch members monitor police activity through handheld police scanners and coordinate via text groups to show up and record at a given scene.

    Over the decades, Copwatch has documented not just the most severe instances of police violence but also less publicized daily violations, from illegal searches to racial profiling to abuse of unhoused people. Strung together, the videos intimately track the patterns of abuse across the Berkeley police department and in the conduct of specific officers.

    In September of last year, armed with such footage, Copwatch launched a publicity campaign against a particularly abusive officer, Sean Aranas. The group curated a playlist of videos of his misconduct and linked it with a QR code posted on flyers around the community. Within two months of the campaign, the officer retired.

    Pritchett encourages more local organizations to adopt a similar strategy, and Copwatch has launched a toolkit for groups that want to create similar databases. Ultimately, she sees it not just as an information collection mechanism but also as an early warning system. “If communities are documenting—if we can keep up with uploading and tagging the videos properly—then somebody like Chauvin would have been identified long ago,” she says. “Then the community could take action before they kill again.”
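
    A minimal sketch of what such an early-warning tally over properly tagged videos might look like (hypothetical names, fields, and threshold; not Berkeley Copwatch’s actual database or toolkit):

      # Hypothetical early-warning tally over tagged incident videos.
      # Not Berkeley Copwatch's actual system; names and threshold are invented.
      from collections import Counter
      from dataclasses import dataclass

      @dataclass
      class IncidentVideo:
          officer: str     # officer name or badge number tagged by uploaders
          date: str        # ISO date of the incident
          category: str    # e.g. "illegal search", "racial profiling", "use of force"
          url: str         # where the footage is archived

      ALERT_THRESHOLD = 3  # flag an officer once this many incidents are tagged

      def officers_to_flag(videos):
          """Return officers whose tagged incident count reaches the threshold."""
          counts = Counter(v.officer for v in videos)
          return [officer for officer, n in counts.items() if n >= ALERT_THRESHOLD]

      # Example: three separate uploads tagged with the same (fictional) officer.
      archive = [
          IncidentVideo("Officer A. Example", "2019-03-02", "illegal search", "https://example.org/v1"),
          IncidentVideo("Officer A. Example", "2019-06-17", "use of force", "https://example.org/v2"),
          IncidentVideo("Officer B. Example", "2019-07-01", "racial profiling", "https://example.org/v3"),
          IncidentVideo("Officer A. Example", "2019-09-30", "abuse of unhoused person", "https://example.org/v4"),
      ]
      print(officers_to_flag(archive))  # ['Officer A. Example']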

    #Police #Violences_policières #Vidéos #Témoignages

  • Facebook needs 30,000 of its own content moderators, says a new report | MIT Technology Review
    https://www.technologyreview.com/2020/06/08/1002894/facebook-needs-30000-of-its-own-content-moderators-says-a-new-repo

    Imagine if Facebook stopped moderating its site right now. Anyone could post anything they wanted. Experience seems to suggest that it would quite quickly become a hellish environment overrun with spam, bullying, crime, terrorist beheadings, neo-Nazi texts, and images of child sexual abuse. In that scenario, vast swaths of its user base would probably leave, followed by the lucrative advertisers.

    But if moderation is so important, it isn’t treated as such. The overwhelming majority of the 15,000 people who spend all day deciding what can and can’t be on Facebook don’t even work for Facebook. The whole function of content moderation is farmed out to third-party vendors, who employ temporary workers on precarious contracts at over 20 sites worldwide. They have to review hundreds of posts a day, many of which are deeply traumatizing.

    Errors are rife, despite the company’s adoption of AI tools to triage posts according to which require attention. Facebook has itself admitted to a 10% error rate, whether that’s incorrectly flagging posts to be taken down that should be kept up or vice versa. Given that reviewers have to wade through three million posts per day, that equates to 300,000 mistakes daily.

    Some errors can have deadly effects. For example, members of Myanmar’s military used Facebook to incite genocide against the mostly Muslim Rohingya minority in 2016 and 2017. The company later admitted it failed to enforce its own policies banning hate speech and the incitement of violence.

    If we want to improve how moderation is carried out, Facebook needs to bring content moderators in-house, make them full employees, and double their numbers, argues a new report from New York University’s Stern Center for Business and Human Rights.

    “Content moderation is not like other outsourced functions, like cooking or cleaning,” says report author Paul M. Barrett, deputy director of the center. “It is a central function of the business of social media, and that makes it somewhat strange that it’s treated as if it’s peripheral or someone else’s problem.”

    Why is content moderation treated this way by Facebook’s leaders? It comes at least partly down to cost, Barrett says. His recommendations would be very costly for the company to enact—most likely in the tens of millions of dollars (though to put this into perspective, it makes billions of dollars of profit every year). But there’s a second, more complex, reason. “The activity of content moderation just doesn’t fit into Silicon Valley’s self-image. Certain types of activities are very highly valued and glamorized—product innovation, clever marketing, engineering … the nitty-gritty world of content moderation doesn’t fit into that,” he says.

    He thinks it’s time for Facebook to treat moderation as a central part of its business. He says that elevating its status in this way would help avoid the sorts of catastrophic errors made in Myanmar, increase accountability, and better protect employees from harm to their mental health.

    It seems an unavoidable reality that content moderation will always involve being exposed to some horrific material, even if the work is brought in-house. However, there is so much more the company could do to make it easier: screening moderators better to make sure they are truly aware of the risks of the job, for example, and ensuring they have first-rate care and counseling available. Barrett thinks that content moderation could be something all Facebook employees are required to do for at least a year as a sort of “tour of duty” to help them understand the impact of their decisions.

    The report makes eight recommendations for Facebook:

    Stop outsourcing content moderation and raise moderators’ station in the workplace.
    Double the number of moderators to improve the quality of content review.
    Hire someone to oversee content and fact-checking who reports directly to the CEO or COO.
    Further expand moderation in at-risk countries in Asia, Africa, and elsewhere.
    Provide all moderators with top-quality, on-site medical care, including access to psychiatrists.
    Sponsor research into the health risks of content moderation, in particular PTSD.
    Explore narrowly tailored government regulation of harmful content.
    Significantly expand fact-checking to debunk false information.

    The proposals are ambitious, to say the least. When contacted for comment, Facebook would not discuss whether it would consider enacting them. However, a spokesperson said its current approach means “we can quickly adjust the focus of our workforce as needed,” adding that “it gives us the ability to make sure we have the right language expertise—and can quickly hire in different time zones—as new needs arise or when a situation around the world warrants it.”

    But Barrett thinks a recent experiment conducted in response to the coronavirus crisis shows change is possible. Facebook announced that because many of its content moderators were unable to go into company offices, it would shift responsibility to in-house employees for checking certain sensitive categories of content.

    “I find it very telling that in a moment of crisis, Zuckerberg relied on the people he trusts: his full-time employees,” he says. “Maybe that could be seen as the basis for a conversation within Facebook about adjusting the way it views content moderation.”

    #Facebook #Moderation #Travail #Digital_labour #Modérateurs