https://www.technologyreview.com

  • Are we making spacecraft too autonomous? | MIT Technology Review
    https://www.technologyreview.com/2020/07/03/1004788/spacecraft-spacefight-autonomous-software-ai/?truid=a497ecb44646822921c70e7e051f7f1a

    Wasn't the Neil Armstrong syndrome enough for them?

    When SpaceX’s Crew Dragon took NASA astronauts to the ISS near the end of May, the launch brought back a familiar sight. For the first time since the space shuttle was retired, American rockets were launching from American soil to take Americans into space.

    Inside the vehicle, however, things couldn’t have looked more different. Gone was the sprawling dashboard of lights and switches and knobs that once dominated the space shuttle’s interior. All of it was replaced with a futuristic console of multiple large touch screens that cycle through a variety of displays. Behind those screens, the vehicle is run by software that’s designed to get into space and navigate to the space station completely autonomously.

    “Growing up as a pilot, my whole career, having a certain way to control a vehicle—this is certainly different,” Doug Hurley told NASA TV viewers shortly before the SpaceX mission. Instead of calling for a hand on the control stick, navigation is now a series of predetermined inputs. The SpaceX astronauts may still be involved in decision-making at critical junctures, but much of that function has moved out of their hands.

    But overrelying on software and autonomous systems in spaceflight creates new opportunities for problems to arise. That’s especially a concern for many of the space industry’s new contenders, who aren’t necessarily used to the kind of aggressive and comprehensive testing needed to weed out problems in software and are still trying to strike a good balance between automation and manual control.

    Nowadays, a few errors in over one million lines of code could spell the difference between mission success and mission failure. We saw that late last year, when Boeing’s Starliner capsule (the other vehicle NASA is counting on to send American astronauts into space) failed to make it to the ISS because of a glitch in its internal timer. A human pilot could have overridden the glitch that ended up burning Starliner’s thrusters prematurely. NASA administrator Jim Bridenstine remarked soon after Starliner’s problems arose: “Had we had an astronaut on board, we very well may be at the International Space Station right now.”
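
    To make that failure mode concrete, here is a minimal toy model (not Boeing's flight software; the roughly 11-hour offset matches what was reported, while every other number is invented) of how a mission-elapsed timer initialized from the wrong epoch derails time-based sequencing:

    ```python
    # Toy model only: a burn scheduled against Mission Elapsed Time (MET)
    # misfires if MET is initialized from the wrong reference epoch.

    ORBIT_INSERTION_BURN_MET = 31 * 60  # burn planned N minutes after liftoff (illustrative)

    def mission_elapsed_time(now: float, liftoff_epoch: float) -> float:
        """MET is only as trustworthy as the epoch it was initialized from."""
        return now - liftoff_epoch

    def burn_window_open(now: float, liftoff_epoch: float) -> bool:
        return mission_elapsed_time(now, liftoff_epoch) >= ORBIT_INSERTION_BURN_MET

    liftoff = 1_000_000.0               # true liftoff time, in seconds (arbitrary)
    stale_epoch = liftoff - 11 * 3600   # epoch grabbed ~11 hours too early

    now = liftoff + 120  # two minutes into the real flight
    print(burn_window_open(now, liftoff))      # False: burn correctly still pending
    print(burn_window_open(now, stale_epoch))  # True: vehicle thinks the window passed hours ago
    ```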

    But it was later revealed that many other errors in the software had not been caught before launch, including one that could have led to the destruction of the spacecraft. And that was something human crew members could easily have overridden.

    Boeing is certainly no stranger to building and testing spaceflight technologies, so it was a surprise to see the company fail to catch these problems before the Starliner test flight. “Software defects, particularly in complex spacecraft code, are not unexpected,” NASA said when the second glitch was made public. “However, there were numerous instances where the Boeing software quality processes either should have or could have uncovered the defects.” Boeing declined a request for comment.

    Space, however, is a unique environment to test for. The conditions a spacecraft will encounter aren’t easy to emulate on the ground. While an autonomous vehicle can be taken out of the simulator and eased into lighter real-world conditions to refine the software little by little, you can’t really do the same thing for a launch vehicle. Launch, spaceflight, and a return to Earth are actions that either happen or they don’t—there is no “light” version.

    This, says Schreier, is why AI is such a big deal in spaceflight nowadays—you can develop an autonomous system that is capable of anticipating those conditions, rather than requiring the conditions to be learned during a specific simulation. “You couldn’t possibly simulate on your own all the corner cases of the new hardware you’re designing,” he says.

    Raines adds that in contrast to the slower approach NASA takes for testing, private companies are able to move much more rapidly. For some, like SpaceX, this works out well. For others, like Boeing, it can lead to some surprising hiccups.

    Ultimately, “the worst thing you can do is make something fully manual or fully autonomous,” says Nathan Uitenbroek, another NASA engineer working on Orion’s software development. Humans have to be able to intervene if the software is glitching up or if the computer’s memory is destroyed by an unanticipated event (like a blast of cosmic rays). But they also rely on the software to inform them when other problems arise.

    NASA is used to figuring out this balance, and it has redundancy built into its crewed vehicles. The space shuttle operated on multiple computers using the same software, and if one had a problem, the others could take over. A separate computer ran on entirely different software, so it could take over the entire spacecraft if a systemic glitch was affecting the others. Raines and Uitenbroek say the same redundancy is used on Orion, which also includes a layer of automatic function that bypasses the software entirely for critical functions like parachute release.
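
    As a rough sketch of that redundancy pattern (illustrative only: on the shuttle, the crew engaged the backup flight system manually, whereas this toy falls back automatically), majority voting masks a single failed computer, while dissimilar backup software guards against a bug common to all the primaries:

    ```python
    # Sketch: several primary computers run identical software and vote on each
    # command; an independently coded backup takes over when no majority exists
    # (e.g., a systemic software fault affecting every primary the same way).
    from collections import Counter

    def flight_command(primary_outputs: list[str], backup_output: str) -> str:
        command, count = Counter(primary_outputs).most_common(1)[0]
        if count > len(primary_outputs) // 2:
            return command        # majority masks a single bad computer
        return backup_output      # no majority: trust the dissimilar backup

    print(flight_command(["pitch+2", "pitch+2", "pitch+2", "halt"], "pitch+2"))  # pitch+2
    print(flight_command(["pitch-9", "pitch+3", "halt"], "pitch+2"))             # pitch+2 (backup)
    ```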

    On the Crew Dragon, there are instances where astronauts can manually initiate abort sequences, and where they can override software on the basis of new inputs. But the design of these vehicles means it’s more difficult now for the human to take complete control. The touch-screen console is still tied to the spacecraft’s software, and you can’t just bypass it entirely when you want to take over the spacecraft, even in an emergency.

    #Espace #Logiciel #Intelligence_artificielle #Sécurité

  • A Caribbean beach could offer a crucial test in the fight to slow climate change | MIT Technology Review
    https://www.technologyreview.com/2020/06/22/1004218/how-green-sand-could-capture-billions-of-tons-of-carbon-dioxide

    Scientists are taking a harder look at using carbon-capturing rocks to counteract climate change, but lots of uncertainties remain.

    In the Caribbean, a beach of #sable_vert (green sand) that absorbs #CO2
    https://www.linfodurable.fr/environnement/aux-caraibes-un-plage-de-sable-vert-qui-absorbe-le-co2-18673

    To carry out their experiment, the researchers used a method called "enhanced weathering." This process lets #olivine turn carbon dioxide into corals or limestone rock, mainly through the breakdown of this volcanic mineral on contact with the waves. It is an inexpensive solution, at around 10 dollars per ton of carbon processed, which the NGO aims to deploy at large scale, as the founders of Project Vesta explain on their website: "Our vision is to help reverse climate change by turning 1,000 billion tons of CO2 into rocks."

    #climat

  • Why tech didn’t save us from covid-19
    https://www.technologyreview.com/2020/06/17/1003312/why-tech-didnt-save-us-from-covid-19

    America’s paralysis reveals a deep and fundamental flaw in how the nation thinks about innovation. Technology has failed the US and much of the rest of the world in its most important role: keeping us alive and healthy. As I write this, more than 380,000 people are dead, the global economy is in ruins, and the covid-19 pandemic is still raging. In an age of artificial intelligence, genomic medicine, and self-driving cars, our most effective response to the outbreak has been mass quarantines, a (...)

    #technologisme #COVID-19 #santé

  • A new US bill would ban the police use of facial recognition
    https://www.technologyreview.com/2020/06/26/1004500/a-new-us-bill-would-ban-the-police-use-of-facial-recognition/?truid=e240178e6fc656e71bbee1dbf6ce3de7

    The news: US Democratic lawmakers have introduced a bill that would ban the use of facial recognition technology by federal law enforcement agencies. Specifically, it would make it illegal for any federal agency or official to “acquire, possess, access, or use” biometric surveillance technology in the US. It would also require state and local law enforcement to bring in similar bans in order to receive federal funding. The Facial Recognition and Biometric Technology Moratorium Act was (...)

    #Microsoft #IBM #Amazon #algorithme #CCTV #Rekognition #biométrie #facial #législation (...)

    ##reconnaissance

  • An elegy for cash: the technology we might never replace
    https://www.technologyreview.com/2020/01/03/131029/an-elegy-for-cash-the-technology-we-might-never-replace

    Cash is gradually dying out. Will we ever have a digital alternative that offers the same mix of convenience and freedom? Think about the last time you used cash. How much did you spend? What did you buy, and from whom? Was it a one-time thing, or was it something you buy regularly? Was it legal? If you’d rather keep all that to yourself, you’re in luck. The person in the store (or on the street corner) may remember your face, but as long as you didn’t reveal any identifying (...)

    #Alibaba #Apple #Facebook #WeChat #cryptage #cryptomonnaie #bitcoin #Libra #QRcode #WeChatPay #technologisme #BigData (...)

    ##discrimination

  • The two-year fight to stop Amazon from selling face recognition to the police
    https://www.technologyreview.com/2020/06/12/1003482/amazon-stopped-selling-police-face-recognition-fight/?truid=e240178e6fc656e71bbee1dbf6ce3de7

    This week’s moves from Amazon, Microsoft, and IBM mark a major milestone for researchers and civil rights advocates in a long and ongoing fight over face recognition in law enforcement. In the summer of 2018, nearly 70 civil rights and research organizations wrote a letter to Jeff Bezos demanding that Amazon stop providing face recognition technology to governments. As part of an increased focus on the role that tech companies were playing in enabling the US government’s tracking and (...)

    #Megvii #Microsoft #Ring #IBM #Amazon #Flickr #algorithme #CCTV #Rekognition #sonnette #biométrie #police #racisme #consentement #facial #reconnaissance #sexisme #vidéo-surveillance #BlackLivesMatter #discrimination #scraping #surveillance (...)

    ##ACLU

  • Protest misinformation is riding on the success of pandemic hoaxes | MIT Technology Review
    https://www.technologyreview.com/2020/06/10/1002934/protest-propaganda-is-riding-on-the-success-of-pandemic-hoaxes

    Misinformation about police brutality protests is being spread by the same sources as covid-19 denial. The troubling results suggest what might come next.

    by Joan Donovan
    June 10, 2020

    Police confront Black Lives Matter protesters in Los Angeles (photo: Joseph Ngabo on Unsplash)
    After months spent battling covid-19, the US is now gripped by a different fever. As the video of George Floyd being murdered by Derek Chauvin circulated across social media, the streets around America—and then the world—filled with protesters. Floyd’s name has become a public symbol of injustice in a spiraling web of interlaced atrocities endured by Black people, including Breonna Taylor, who was shot in her home by police during a misdirected no-knock raid, and Ahmaud Arbery, who was murdered by a group of white vigilantes.

    Meanwhile, on the digital streets, a battle over the narrative of protest is playing out in separate worlds, where truth and disinformation run parallel. 

    In one version, tens of thousands of protesters are marching to force accountability on the US justice system, shining a light on policing policies that protect white lives and property above anything else—and are being met with the same brutality and indifference they are protesting against. In the other, driven by Donald Trump, US attorney general Bill Barr, and the MAGA coalition, an alternative narrative contends that anti-fascist protesters are traveling by bus and plane to remote cities and towns to wreak havoc. This notion is inspiring roving gangs of mostly white vigilantes to take up arms. 

    These armed activists are demographically very similar to those who spread misinformation and confusion about the pandemic; the same Facebook groups have spread hoaxes about both; it’s the same older Republican base that shares most fake news. 

    The fact that those who accept protest misinformation also rose up to challenge stay-at-home orders through “reopen” rallies is no coincidence: these audiences have been primed by years of political misinformation and then driven to a frenzy by months of pandemic conspiracy theories. The infodemic helped reinforce routes for spreading false stories and rumors; it’s been the perfect breeding ground for misinformation.

    How it happened
    When covid-19 hit like a slow-moving hurricane, most people took shelter and waited for government agencies to create a plan for handling the disease. But as the weeks turned into months, and the US still struggled to provide comprehensive testing, some began to agitate. Small groups, heavily armed with rifles and misinformation, held “reopen” rallies that were controversial for many reasons. They often relied on claims that the pandemic was a hoax perpetrated by the Democratic Party, which was colluding with the billionaire donor class and the World Health Organization. The reopen message was amplified by the anti-vaccination movement, which exploited the desire for attention among online influencers and circulated rampant misinformation suggesting that a potential coronavirus vaccine was part of a conspiracy in which Bill Gates planned to implant microchips in recipients. 

    These rallies did not gain much legitimacy in the eyes of politicians, press, or the public, because they seemed unmoored from the reality of covid-19 itself. 

    But when the Black Lives Matter protests emerged and spread, it opened a new political opportunity to muddy the waters. President Trump laid the foundation by threatening to invade cities with the military after applying massive force in DC as part of a staged television event. The cinema of the state was intended to counter the truly painful images of the preceding week of protests, where footage of the police firing rubber bullets, gas, and flash grenades dominated media coverage of US cities on fire. Rather than acknowledge the pain and anguish of Black people in the US, Trump went on to blame “Antifa” for the unrest. 

    @Antifa_US was suspended by Twitter, but this screenshot continues to circulate among right-wing groups on Facebook.
    For many on the left, antifa simply means “anti-fascist.” For many on the right, however, “Antifa” has become a stand-in moniker for the Democratic Party. In 2017, we similarly saw right-wing pundits and commentators try to rebrand their political opponents as the “alt-left,” but that failed to stick. 

    Shortly after Trump’s declaration, several Twitter accounts outed themselves as influence operations bent on calling for violence and collecting information about anti-fascists. Twitter, too, confirmed that an “Antifa” account, running for three years, was tied to a now-defunct white nationalist organization that had helped plan the Unite the Right rally that killed Heather Heyer and injured hundreds more. Yet the “alt-right” and other armed militia groups that planned this gruesome event in Charlottesville have not drawn this level of concern from federal authorities.

    @OCAntifa posted this before the account was suspended on Twitter for platform manipulation.
    Disinformation stating that the protests were being inflamed by Antifa quickly traveled up the chain from impostor Twitter accounts and throughout the right-wing media ecosystem, where it still circulates among calls for an armed response. This disinformation, coupled with widespread racism, is why armed groups of white vigilantes are lining the streets in different cities and towns. Simply put, when disinformation mobilizes, it endangers the public.

    What next?
    As researchers of disinformation, we have seen this type of attack play out before. It’s called “source hacking”: a set of tactics where media manipulators mimic the patterns of their opponents, try to obfuscate the sources of their information, and then slowly become more and more dangerous in their rhetoric. Now that Trump says he will designate Antifa a domestic terror group, investigators will have to take a hard look at social-media data to discern who was actually calling for violence online. They will surely unearth this widespread disinformation campaign of far-right agitators.

    That doesn’t mean that every call to action is suspect: all protests are poly-vocal and many tactics and policy issues remain up for discussion, including the age-old debate on reform vs. revolution. But what is miraculous about public protest is how easy it is to perceive and document the demands of protesters on the ground. 

    Moments like this call for careful analysis. Journalists, politicians, and others must not waver in their attention to the ways Black organizers are framing the movement and its demands. As a researcher of disinformation, I am certain there will be attempts to co-opt or divert attention from the movement’s messaging, attack organizers, and stall the progress of this movement. Disinformation campaigns tend to proceed cyclically as media manipulators learn to adapt to new conditions, but the old tactics still work—such as impostor accounts, fake calls to action (like #BaldForBLM), and grifters looking for a quick buck. 

    Crucially, there is an entire universe of civil society organizations working to build this movement for the long haul, and they must learn to counter misinformation on the issues they care about. More than just calling for justice, the Movement for Black Lives and Color of Change are organizing actions to move police resources into community services. Media Justice is doing online trainings under the banner of #defendourmovements, and Reclaim the Block is working to defund the police in Minneapolis. 

    Through it all, one thing remains true: when thousands of people show up to protest in front of the White House, it is not reducible to fringe ideologies or conspiracy theories about invading outside agitators. People are protesting during a pandemic because justice for Black lives can’t wait for a vaccine.

    —Joan Donovan, PhD, is Research Director of the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School.

    #Fake_news #Extrême_droite #Etats_unis

  • How to turn filming the police into the end of police brutality | MIT Technology Review
    https://www.technologyreview.com/2020/06/10/1002913/how-to-end-police-brutality-filming-witnessing-legislation

    Of all the videos that were released after George Floyd’s murder, the one recorded by 17-year-old Darnella Frazier on her phone is the most jarring. It shows Officer Derek Chauvin kneeling on Floyd’s neck as Floyd pleads, “Please, please, please, I can’t breathe,” and it shows Chauvin refusing to budge. A criminal complaint later stated that Chauvin pinned Floyd’s neck for 8 minutes and 46 seconds, past the point where Floyd fell unconscious. In the footage, Chauvin lifts his head and locks eyes with Frazier, unmoved—a chilling and devastating image.

    Documentation like this has galvanized millions of people to flood the streets in over 450 protests in the US and hundreds more in dozens of countries around the world. It’s not just this killing, either. Since the protests have broken out, videos capturing hundreds more incidents of police brutality have been uploaded to social media. A mounted officer tramples a woman. Cop cars accelerate into a crowd. Officers shove an elderly man, who bashes his head when he hits the pavement, and walk away as his blood pools on the ground. One supercut of 14 videos, titled “This Is a Police State,” has been viewed nearly 50 million times.

    Once again, footage taken on a smartphone is catalyzing action to end police brutality once and for all. But Frazier’s video also demonstrates the challenge of turning momentum into lasting change. Six years ago, the world watched as Eric Garner uttered the same words—“I can’t breathe”—while NYPD Officer Daniel Pantaleo strangled him in a chokehold. Four years ago, we watched again as Philando Castile, a 15-minute drive from Minneapolis, bled to death after being shot five times by Officer Jeronimo Yanez at a traffic stop. Both incidents also led to mass protests, and yet we’ve found ourselves here again.

    So how do we turn all this footage into something more permanent—not just protests and outrage, but concrete policing reform? The answer involves three phases: first, we must bear witness to these injustices; second, we must legislate at the local, state, and federal levels to dismantle systems that protect the police when they perpetrate such acts; and finally, we should organize community-based “copwatching” programs to hold local police departments accountable.

    I. Witnessing

    During the first half of the 1800s, for example, formerly enslaved people like Frederick Douglass relied on newspapers and the spoken word to paint graphic depictions of bondage and galvanize the formation of abolitionist groups. During the early 1900s, investigative journalist Ida B. Wells carefully tabulated statistics on the pervasiveness of lynching and worked with white photographers to capture gruesome images of these attacks in places she couldn’t go. Then in the mid-1950s, black civil rights leaders like Martin Luther King Jr. strategically attracted broadcast television cameras to capture the brutal scenes of police dogs and water cannons being turned on peaceful demonstrations.

    Witnessing, in other words, played a critical role in shocking the majority-white public and eliciting international attention. Whites and others allied with black Americans until the support for change reached critical mass.

    Today smartphone witnessing serves the same purpose. It uses imagery to prove widespread, systemic abuse and provoke moral outrage. But compared with previous forms of witnessing, smartphones are also more accessible, more prevalent, and—most notably—controlled in many cases by the hands of black witnesses. “That was a real transition,” says Richardson—“from black people who were reliant upon attracting the gaze of mainstream media to us not needing that mainstream middleman and creating the media for ourselves.”

    II. Legislation

    But filming can’t solve everything. The unfortunate reality is that footage of one-off instances of police brutality rarely leads to the conviction of the officers involved. Analysis by Witness suggests that it usually leads, at most, to victims’ being acquitted of false charges, if they are still alive.

    Some of this can be changed with better tactics: Witness has found, for example, that it can be more effective to withhold bystander footage until after the police report is released. That way police don’t have an opportunity to write their report around the evidence and justify their actions by claiming that events happened off camera. This is what the witness Feidin Santana did after the fatal shooting of Walter Scott, and it played a crucial role in getting the police officer charged with second-degree murder.

    But then again, this doesn’t always work. The deeper problem is the many layers of entrenched legal protections afforded the police in the US, which limit how effective video evidence can be.

    That’s why smartphone witnessing must be coupled with clear policy changes, says Kayyali. Fortunately, given the broad base of support that has coalesced thanks to smartphone witnessing, passing such legislation has also grown more possible.

    Since Floyd’s death, a coalition of activists from across the political spectrum, described by a federal judge as “perhaps the most diverse amici ever assembled,” has asked the US Supreme Court to revisit qualified immunity.

    III. Copwatching

    So we enter phase three: thinking about how to actually change police behavior. An answer may be found with Andrea Pritchett, who has been documenting local police misconduct in Berkeley, California, for 30 years.

    Pritchett is the founder of Berkeley Copwatch, a community-based, volunteer-led organization that aims to increase local police accountability. Whereas bystander videos rely on the coincidental presence of filmers, Copwatch members monitor police activity through handheld police scanners and coordinate via text groups to show up and record at a given scene.

    Over the decades, Copwatch has documented not just the most severe instances of police violence but also less publicized daily violations, from illegal searches to racial profiling to abuse of unhoused people. Strung together, the videos intimately track the patterns of abuse across the Berkeley police department and in the conduct of specific officers.

    In September of last year, armed with such footage, Copwatch launched a publicity campaign against a particularly abusive officer, Sean Aranas. The group curated a playlist of videos of his misconduct and linked it with a QR code posted on flyers around the community. Within two months of the campaign, the officer retired.

    Pritchett encourages more local organizations to adopt a similar strategy, and Copwatch has launched a toolkit for groups that want to create similar databases. Ultimately, she sees it not just as an information collection mechanism but also as an early warning system. “If communities are documenting—if we can keep up with uploading and tagging the videos properly—then somebody like Chauvin would have been identified long ago,” she says. “Then the community could take action before they kill again.”

    #Police #Violences_policières #Vidéos #Témoignages

  • Facebook needs 30,000 of its own content moderators, says a new report | MIT Technology Review
    https://www.technologyreview.com/2020/06/08/1002894/facebook-needs-30000-of-its-own-content-moderators-says-a-new-repo

    Imagine if Facebook stopped moderating its site right now. Anyone could post anything they wanted. Experience seems to suggest that it would quite quickly become a hellish environment overrun with spam, bullying, crime, terrorist beheadings, neo-Nazi texts, and images of child sexual abuse. In that scenario, vast swaths of its user base would probably leave, followed by the lucrative advertisers.

    But even though moderation is so important, it isn’t treated as such. The overwhelming majority of the 15,000 people who spend all day deciding what can and can’t be on Facebook don’t even work for Facebook. The whole function of content moderation is farmed out to third-party vendors, who employ temporary workers on precarious contracts at over 20 sites worldwide. They have to review hundreds of posts a day, many of which are deeply traumatizing. Errors are rife, despite the company’s adoption of AI tools to triage posts according to which ones require attention. Facebook has itself admitted to a 10% error rate, whether that means incorrectly taking down posts that should be kept up or vice versa. Given that reviewers have to wade through three million posts per day, that equates to 300,000 mistakes daily.

    Some errors can have deadly effects. For example, members of Myanmar’s military used Facebook to incite genocide against the mostly Muslim Rohingya minority in 2016 and 2017. The company later admitted it failed to enforce its own policies banning hate speech and the incitement of violence.

    If we want to improve how moderation is carried out, Facebook needs to bring content moderators in-house, make them full employees, and double their numbers, argues a new report from New York University’s Stern Center for Business and Human Rights.

    “Content moderation is not like other outsourced functions, like cooking or cleaning,” says report author Paul M. Barrett, deputy director of the center. “It is a central function of the business of social media, and that makes it somewhat strange that it’s treated as if it’s peripheral or someone else’s problem.”

    Why is content moderation treated this way by Facebook’s leaders? It comes at least partly down to cost, Barrett says. His recommendations would be very costly for the company to enact—most likely in the tens of millions of dollars (though to put this into perspective, it makes billions of dollars of profit every year). But there’s a second, more complex, reason. “The activity of content moderation just doesn’t fit into Silicon Valley’s self-image. Certain types of activities are very highly valued and glamorized—product innovation, clever marketing, engineering … the nitty-gritty world of content moderation doesn’t fit into that,” he says.

    He thinks it’s time for Facebook to treat moderation as a central part of its business. He says that elevating its status in this way would help avoid the sorts of catastrophic errors made in Myanmar, increase accountability, and better protect employees from harm to their mental health.

    It seems an unavoidable reality that content moderation will always involve being exposed to some horrific material, even if the work is brought in-house. However, there is so much more the company could do to make it easier: screening moderators better to make sure they are truly aware of the risks of the job, for example, and ensuring they have first-rate care and counseling available. Barrett thinks that content moderation could be something all Facebook employees are required to do for at least a year as a sort of “tour of duty” to help them understand the impact of their decisions.

    The report makes eight recommendations for Facebook:

    Stop outsourcing content moderation and raise moderators’ station in the workplace.
    Double the number of moderators to improve the quality of content review.
    Hire someone to oversee content and fact-checking who reports directly to the CEO or COO.
    Further expand moderation in at-risk countries in Asia, Africa, and elsewhere.
    Provide all moderators with top-quality, on-site medical care, including access to psychiatrists.
    Sponsor research into the health risks of content moderation, in particular PTSD.
    Explore narrowly tailored government regulation of harmful content.
    Significantly expand fact-checking to debunk false information.

    The proposals are ambitious, to say the least. When contacted for comment, Facebook would not discuss whether it would consider enacting them. However, a spokesperson said its current approach means “we can quickly adjust the focus of our workforce as needed,” adding that “it gives us the ability to make sure we have the right language expertise—and can quickly hire in different time zones—as new needs arise or when a situation around the world warrants it.”

    But Barrett thinks a recent experiment conducted in response to the coronavirus crisis shows change is possible. Facebook announced that because many of its content moderators were unable to go into company offices, it would shift responsibility to in-house employees for checking certain sensitive categories of content.

    “I find it very telling that in a moment of crisis, Zuckerberg relied on the people he trusts: his full-time employees,” he says. “Maybe that could be seen as the basis for a conversation within Facebook about adjusting the way it views content moderation.”

    #Facebook #Moderation #Travail #Digital_labour #Modérateurs

  • How Google Docs became the social media of the resistance | MIT Technology Review
    https://www.technologyreview.com/2020/06/06/1002546/google-docs-social-media-resistance

    In the week after George Floyd’s murder, hundreds of thousands of people joined protests across the US and around the globe, demanding education, attention, and justice. But one of the key tools for organizing these protests is a surprising one: it’s not encrypted, doesn’t rely on signing in to a social network, and wasn’t even designed for this purpose. It’s Google Docs.

    In just the last week, Google Docs has emerged as a way to share everything from lists of books on racism to templates for letters to family members and representatives to lists of funds and resources that are accepting donations. Shared Google Docs that anyone can view and anyone can edit, anonymously, have become a valuable tool for grassroots organizing during both the coronavirus pandemic and the police brutality protests sweeping the US. It’s not the first time. In fact, activists and campaigners have been using the word processing software for years as a more efficient and accessible protest tool than either Facebook or Twitter.

    It wasn’t until the 2016 elections, when misinformation campaigns were rampant, that the software came into its own as a political tool. Melissa Zimdars, an assistant professor of communication at Merrimack College, used it to create a 34-page document titled “False, Misleading, Clickbait-y, and/or Satirical ‘News’ Sources.”

    Zimdars inspired a slew of political Google Docs, written by academics as ad hoc ways of campaigning for Democrats for the 2018 midterm elections. By the time the election passed, Google Docs were also being used to protest immigration bans and advance the #MeToo movement.

    Now, in the wake of George Floyd’s murder on Memorial Day weekend, communities are using the software to organize. One of the most popular Google Docs to emerge in the past week is “Resources for Accountability and Actions for Black Lives,” which features clear steps people can take to support victims of police brutality. It is organized by Carlisa Johnson, a 28-year-old graduate journalism student at Georgia State University.

    Indigo said accessibility and live editing were the primary advantages of a Google Doc over social media: “It’s important to me that the people on the ground can access these materials, especially those seeking legal counsel, jail support, and bail support. This is a medium that everyone I’ve organized with uses and many others use.”

    “What’s special about a Google Doc versus a newsfeed is its persistence and editability,” says Clay Shirky, the vice provost for educational technology at New York University. In 2008, Shirky wrote Here Comes Everybody: The Power of Organizing Without Organizations, detailing how the internet and social media helped shape modern protest movements.

    Shirky says that while social media has been great for publicizing movements, it’s far less efficient at creating stable shelves of information that a person can return to. What makes Google Docs especially attractive is that they are at once dynamic and static, he says. They’re editable and can be viewed simultaneously on countless screens, but they are easily shareable via tweet or post.

    “People want a persistent artifact,” Shirky says. “If you are in an action-oriented network, you need an artifact to coordinate with those outside of the conversation and the platform you’re using, so you can actually go outside of the feed and do something.”

    It helps that Google Docs are fairly straightforward to access and simple to use. But anonymity is an important advantage over Twitter or Facebook. Users who click on a publicly shareable link are assigned an animal avatar, hiding their identity. “No one can put you on blast on Google Docs,” says Shirky. “Google Docs allows for a wider breadth of participation for people who are not looking to get into a high-stakes political argument in front of millions of people.”

    An interesting passage on anonymity in Google Docs. We know Google has the means to track us... but the point here is not whether Google knows who is writing; it is that no one has to put up with trolls. A situation worth pondering.

    For both Johnson and Indigo, the overall experience of creating Google Docs has been a surprisingly positive one; Indigo does receive the occasional “nasty DM” but shrugs it off. At any given moment, anywhere between 70 and 90 people are in Johnson’s and Indigo’s documents, and both spend significant time editing and fact-checking them.

    Shirky says it’s a common misconception that protesters are seeking privacy from the state. “Most of them are concerned with activism, not privacy,” he says. In fact, Johnson says that for her and other activists, the goal is to disseminate as much information as accurately as possible.

    “Google Docs lets me put it in one place and across social-media platforms,” she says. “Reach is what’s important at this time. A Facebook post can only go so far. An Instagram post can only go so far. But this? This is accessible. Nothing else is as immediate.”

    #Google_doc #Activisme #Document_numérique

  • Of course technology perpetuates racism. It was designed that way. | MIT Technology Review
    https://www.technologyreview.com/2020/06/03/1002589/technology-perpetuates-racism-by-design-simulmatics-charlton-mcilw

    We often call on technology to help solve problems. But when society defines, frames, and represents people of color as “the problem,” those solutions often do more harm than good. We’ve designed facial recognition technologies that target criminal suspects on the basis of skin color. We’ve trained automated risk profiling systems that disproportionately identify Latinx people as illegal immigrants. We’ve devised credit scoring algorithms that disproportionately identify black people as risks and prevent them from buying homes, getting loans, or finding jobs.

    So the question we have to confront is whether we will continue to design and deploy tools that serve the interests of racism and white supremacy,

    Of course, it’s not a new question at all.

    As part of a DARPA project aimed at turning the tide of the Vietnam War, Pool’s company had been hard at work preparing a massive propaganda and psychological campaign against the Vietcong. President Johnson was eager to deploy Simulmatics’s behavioral influence technology to quell the nation’s domestic threat, not just its foreign enemies. Under the guise of what they called a “media study,” Simulmatics built a team for what amounted to a large-scale surveillance campaign in the “riot-affected areas” that captured the nation’s attention that summer of 1967.

    Three-member teams went into areas where riots had taken place that summer. They identified and interviewed strategically important black people. They followed up to identify and interview other black residents, in every venue from barbershops to churches. They asked residents what they thought about the news media’s coverage of the “riots.” But they collected data on so much more, too: how people moved in and around the city during the unrest, who they talked to before and during, and how they prepared for the aftermath. They collected data on toll booth usage, gas station sales, and bus routes. They gained entry to these communities under the pretense of trying to understand how news media supposedly inflamed “riots.” But Johnson and the nation’s political leaders were trying to solve a problem. They aimed to use the information that Simulmatics collected to trace information flow during protests, identify influencers, and decapitate the protests’ leadership.

    They didn’t accomplish this directly. They did not murder people, put people in jail, or secretly “disappear” them.

    But by the end of the 1960s, this kind of information had helped create what came to be known as “criminal justice information systems.” They proliferated through the decades, laying the foundation for racial profiling, predictive policing, and racially targeted surveillance. They left behind a legacy that includes millions of black and brown women and men incarcerated.

    #Racisme #Intelligence_artificielle #capitalisme_surveillance #surveillance

  • First the trade war, then the pandemic. Now Chinese manufacturers are turning inward. | MIT Technology Review
    https://www.technologyreview.com/2020/06/03/1002573/pandemic-us-china-trade-war-impact-on-manufacturers

    Then, before his business had fully recovered, covid-19 ripped through the world. Exports tanked, saddling Zhu with a stream of order cancellations worth an estimated $4 to $5 million. Domestic sales also suffered as physical stores shuttered under pandemic control restrictions. “The impact could’ve been huge,” he says. “My factory is really big; I have so many workers to support.”

    But Zhu fortunately had another sales channel. In 2018, Pinduoduo, an e-commerce giant targeted at consumers in China’s smaller cities, launched an initiative to connect manufacturers with the domestic market. Under a so-called “consumer-to-manufacturer,” or C2M, model, the platform began using its massive pools of data and AI algorithms to help Chinese manufacturers predict consumer preferences and develop brands specifically for a domestic audience.

    Pinduoduo told manufacturers not only how to customize their products—down to the wash of a jean or the length of a sock. It also advised them on how to redesign their packaging, how to set their prices, and how to market their goods online. In this way, manufacturers could improve the efficiency of their production, which in turn made the products cheaper for consumers. And platforms could monetize new users with advertising. This helped the platform and manufacturers alike tap into a rapidly growing middle-class consumer base. Whereas upper-class consumers care more about international brands, this newer wave of consumers cares more about quality products at lower prices.

    When the pandemic hit, Pinduoduo quickly expanded its initiative. It added new incentives for affected manufacturers to join its platform, welcoming them to adopt its live-streaming service (link in Chinese) and holding promotional sales events.

    As China’s access to international markets has grown more unreliable—with a possible trade fight renewal looming on the horizon—the country has increasingly sought to ramp up domestic consumption in an effort to stave off a greater economic recession.

    “The problem is China is losing [overseas] demand,” says Derek Scissors, a resident scholar at the American Enterprise Institute, where he researches trade policy and US-China relations. “You want to replace it with Chinese demand.”

    As well as Pinduoduo, other Chinese e-commerce giants, including Alibaba-owned Taobao and JD, are now offering C2M services. Since the start of this year, all three have set new goals for expanding their C2M initiatives. Pinduoduo, which helped launch 106 manufacturer-owned brands in 2019, aims to establish 1,000 more. It also signed a strategic partnership in April with the government of Dongguan, where Zhu’s factory is based, one of China’s largest manufacturing hubs.

    As the partnerships have produced promising results, manufacturers have also doubled down on their domestic brand strategies. Chen Zhuoyue, the owner of a toy manufacturing company based in Chenghai, Guangdong, joined JD’s C2M program in 2018. After JD helped him customize his products and develop a new pricing strategy, the platform quickly grew to account for 50% of his domestic sales. When the pandemic hit and his exports sharply declined from 30% to less than 5% of his revenue, he took it as a sign to open up two new JD stores and launch more domestic brands.

    It’s not that Chen will stop working with foreign brands. “As a businessman, I’m always thinking about how to expand into more markets,” he says. If exports were to return to normal and his long-term foreign collaborators came knocking on his door, he would gladly continue fulfilling their orders. At the same time, now that he’s launched his own brand, he sees it as an important source of growth and stability. “My plan is to expand our domestic presence,” he says. “This year I want to increase our investment in this area.”

    It’s not clear whether domestic markets alone will be able to compensate for China’s variable access to international markets over the long run. On one hand, the country’s middle class has rapidly increased their spending power and is expected to grow to a market size of 1,008 billion RMB ($141 billion) by 2022, according to iResearch. On the other, even before the pandemic, the manufacturing industry was already struggling with too much supply, says Scissors, and it relied on the US and other overseas markets to “dump their manufacturing excess,” he says. As a result, he’s unconvinced that a new model like C2M would resolve such deeply entrenched macroeconomic issues. If anything, he sees C2M instead as a savvy push from e-commerce giants to grow their own profits.

    #Chine #Commerce_électronique #C2M #Consumer-to-manufacturer #Marché_interieur #Mondialisation

  • This startup is using AI to give workers a “productivity score” | MIT Technology Review
    https://www.technologyreview.com/2020/06/04/1002671/startup-ai-workers-productivity-score-bias-machine-learning-busine

    To think some people are naive enough to believe that social credit is a purely Chinese thing... Surveilling and scoring workers is the new model of international capitalism, in China as everywhere else, whether you work from home or on company premises. And it is moving fast, very fast...

    In the last few months, millions of people around the world stopped going into offices and started doing their jobs from home. These workers may be out of sight of managers, but they are not out of mind. The upheaval has been accompanied by a reported spike in the use of surveillance software that lets employers track what their employees are doing and how long they spend doing it.

    Companies have asked remote workers to install a whole range of such tools. Hubstaff is software that records users’ keyboard strokes, mouse movements, and the websites that they visit. Time Doctor goes further, taking videos of users’ screens. It can also take a picture via webcam every 10 minutes to check that employees are at their computer. And Isaak, a tool made by UK firm Status Today, monitors interactions between employees to identify who collaborates more, combining this data with information from personnel files to identify individuals who are “change-makers.”

    Now, one firm wants to take things even further. It is developing machine-learning software to measure how quickly employees complete different tasks and suggest ways to speed them up. The tool also gives each person a productivity score, which managers can use to identify those employees who are most worth retaining—and those who are not.

    How you feel about this will depend on how you view the covenant between employer and employee. Is it okay to be spied on by people because they pay you? Do you owe it to your employer to be as productive as possible, above all else?

    Critics argue that workplace surveillance undermines trust and damages morale. Workers’ rights groups say that such systems should only be installed after consulting employees. “It can create a massive power imbalance between workers and the management,” says Cori Crider, a UK-based lawyer and cofounder of Foxglove, a nonprofit legal firm that works to stop governments and big companies from misusing technology. “And the workers have less ability to hold management to account.”

    Whatever your views, this kind of software is here to stay—in part because remote work is normalizing it. “I think workplace monitoring is going to become mainstream,” says Tommy Weir, CEO of Enaible, the startup based in Boston that is developing the new monitoring software. “In the next six to 12 months it will become so pervasive it disappears.”

    Weir thinks most tools on the market don’t go far enough. “Imagine you’re managing somebody and you could stand and watch them all day long, and give them recommendations on how to do their job better,” says Weir. “That’s what we’re trying to do. That’s what we’ve built.”

    Why the sudden uptick in interest? “Bosses have been seeking to wring every last drop of productivity and labor out of their workers since before computers,” says Crider. “But the granularity of the surveillance now available is like nothing we’ve ever seen.”

    It’s no surprise that this level of detail is attractive to employers, especially those looking to keep tabs on a newly remote workforce. But Enaible’s software, which it calls the AI Productivity Platform, goes beyond tracking things like email, Slack, Zoom, or web searches. None of that shows a full picture of what a worker is doing, says Weir⁠—it’s just checking if you are working or not.

    Once set up, the software runs in the background all the time, monitoring whatever data trail a company can provide for each of its employees. Using an algorithm called Trigger-Task-Time, the system learns the typical workflow for different workers: what triggers, such as an email or a phone call, lead to what tasks and how long those tasks take to complete.

    Once it has learned a typical pattern of behavior for an employee, the software gives that person a “productivity score” between 0 and 100. The AI is agnostic to tasks, says Weir. In theory, workers across a company can still be compared by their scores even if they do different jobs. A productivity score also reflects how your work increases or decreases the productivity of other people on your team. There are obvious limitations to this approach. The system works best with employees who do a lot of repetitive tasks in places like call centers or customer service departments rather than those in more complex or creative roles.
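
    Enaible has not published how Trigger-Task-Time works, but the description above suggests something like the following sketch, in which every name and the scoring formula are guesses for illustration: learn a per-person baseline duration for each trigger-task pair, then score new completions against that baseline.

    ```python
    # Hypothetical sketch of a "Trigger-Task-Time"-style scorer; Enaible's
    # actual algorithm is unpublished and surely differs.
    from collections import defaultdict
    from statistics import mean

    class TriggerTaskTime:
        def __init__(self):
            self.durations = defaultdict(list)  # (trigger, task) -> observed minutes

        def observe(self, trigger: str, task: str, minutes: float) -> None:
            self.durations[(trigger, task)].append(minutes)

        def score(self, trigger: str, task: str, minutes: float) -> float:
            """0-100: 100 at or below the personal baseline, decaying when slower."""
            baseline = mean(self.durations[(trigger, task)])
            return 100.0 if minutes <= baseline else max(0.0, 100.0 * baseline / minutes)

    ttt = TriggerTaskTime()
    for m in (28, 32, 30):                          # past email->reply tasks took ~30 minutes
        ttt.observe("support_email", "reply", m)
    print(ttt.score("support_email", "reply", 45))  # slower than baseline -> ~66.7
    ```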

    But the idea is that managers can use these scores to see how their employees are getting on, rewarding them if they get quicker at doing their job or checking in with them if performance slips. To help them, Enaible’s software also includes an algorithm called Leadership Recommender, which identifies specific points in an employee’s workflow that could be made more efficient.

    #Travail #Surveillance #Droit_travail #Crédit_social #Productivity_score

  • Nearly 40% of Icelanders are using a covid app—and it hasn’t helped much
    https://www.technologyreview.com/2020/05/11/1001541/iceland-rakning-c19-covid-contact-tracing

    The country has the highest penetration of any automated contact tracing app in the world, but one senior figure says it “wasn’t a game changer.” When Iceland got its first case of covid-19 on February 28, an entire apparatus sprang into action. The country had already been testing some people at high risk of catching the virus, thanks to deCODE genetics, a local biotech company. Once the arrival of the disease was confirmed, it began rapidly rolling out public testing on a much wider scale. (...)

    #Apple #Google #algorithme #Bluetooth #smartphone #GPS #contactTracing #technologisme #consentement #COVID-19 (...)

    ##santé

  • India is forcing people to use its covid app, unlike any other democracy
    https://www.technologyreview.com/2020/05/07/1001360/india-aarogya-setu-covid-app-mandatory

    Millions of Indians have no choice but to download the country’s tracking technology if they want to keep their jobs or avoid reprisals. The world has never seen anything quite like Aarogya Setu. Two months ago, India’s app for coronavirus contact tracing didn’t exist; now it has nearly 100 million users. Prime Minister Narendra Modi boosted it on release by urging every one of the country’s 1.3 billion people to download it, and the result was that within two weeks of launch it became the (...)

    #algorithme #AarogyaSetu_ #Bluetooth #smartphone #GPS #contactTracing #géolocalisation #technologisme #consentement #BigData #COVID-19 (...)

    ##santé

  • Podcast: Who watches the pandemic watchers? We do
    https://www.technologyreview.com/2020/05/20/1001927/podcast-who-watches-the-pandemic-watchers-we-do/?truid=e240178e6fc656e71bbee1dbf6ce3de7

    No sooner had the stay-at-home orders come down than mobile app developers around the world began to imagine how our smartphones could make it safer for everyone to venture back out. Dozens of countries and a handful of US states are now urging citizens to download government-blessed apps that use GPS-based location tracking, the Bluetooth wireless standard, or a combination of both to alert us when we’ve crossed paths with an infected individual—information that could tell us when we need to (...)
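
    For a sense of how the Bluetooth variant of these apps works, here is a deliberately simplified sketch (the real Apple/Google exposure notification protocol derives its rotating identifiers cryptographically and matches them on-device; this shows only the matching idea): phones broadcast short-lived random tokens, log the tokens they hear nearby, and later compare that log against tokens published by users who test positive.

    ```python
    # Simplified contact-tracing sketch: random rotating tokens plus a local
    # set intersection. Real protocols add cryptographic derivation, timing,
    # and signal-strength filtering.
    import secrets

    def new_token() -> str:
        return secrets.token_hex(16)  # rotated every few minutes; not linkable to a person

    alice_broadcast = {new_token() for _ in range(5)}   # tokens Alice's phone sent out
    bob_heard = set(list(alice_broadcast)[:2])          # Bob's phone logged two of them
    bob_heard.add(new_token())                          # plus a stranger's token

    # Alice tests positive and uploads her tokens; Bob checks locally.
    def exposed(heard: set[str], infected: set[str]) -> bool:
        return bool(heard & infected)

    print(exposed(bob_heard, alice_broadcast))  # True -> Bob gets an exposure alert
    ```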

    #Apple #Google #algorithme #Bluetooth #smartphone #GPS #contactTracing #géolocalisation #BigData #COVID-19 #santé #ACLU (...)

    ##santé ##technologisme

  • Why contact tracing may be a mess in America
    https://www.technologyreview.com/2020/05/16/1001787/why-contact-tracing-may-be-a-mess-in-america/?truid=e240178e6fc656e71bbee1dbf6ce3de7

    High caseloads, low testing, and American attitudes toward government authority could pose serious challenges for successful efforts to track and contain coronavirus cases. Technology can certainly supplement human contact tracing. Smartphone apps that flag when someone may have been in close contact with an infected person helped China, which required citizens in many cities to download the software, to flatten the curve of its outbreak. Similarly, South Korean officials have made use of (...)

    #MIT #Bluetooth #smartphone #GPS #contactTracing #géolocalisation #consentement #BigData #COVID-19 #santé (...)

    ##santé ##technologisme

  • Our weird behavior during the pandemic is messing with AI models
    https://www.technologyreview.com/2020/05/11/1001563/covid-pandemic-broken-ai-machine-learning-amazon-retail-fraud-huma

    Machine-learning models trained on normal behavior are showing cracks—forcing humans to step in to set them straight. In the week of April 12-18, the top 10 search terms on Amazon.com were: toilet paper, face mask, hand sanitizer, paper towels, Lysol spray, Clorox wipes, mask, Lysol, masks for germ protection, and N95 mask. People weren’t just searching, they were buying too—and in bulk. The majority of people looking for (...)

    #Amazon #algorithme #technologisme #fraude #consommation #COVID-19 #profiling #santé (...)

    ##santé ##bug

  • The secret to why some people get so sick from covid could lie in their genes | MIT Technology Review
    https://www.technologyreview.com/2020/05/13/1001653/23andme-looks-for-covid-19-genetic-clues

    23andMe, the company of Sergey Brin’s ex-wife, heavily promoted by Google, has decided to seize this windfall to bolster its number one position in genetic tracing. For the common good, obviously. Genomic capitalism in all its splendor.

    Some people die from covid-19, and others who are infected don’t even show symptoms. But scientists still don’t know why.

    Now consumer genomics company 23andMe is going to offer free genetic tests to 10,000 people who’ve been hospitalized with the disease, hoping to turn up genetic factors that could point to an answer.

    While it’s known that older people and those with health conditions such as diabetes are most at risk, there could be hidden genetic reasons why some young, previously healthy people are also dying.

    23andMe operates a large gene database with more than 8 million customers, many of whom have agreed to let their data be used for research. The company has previously used consumer data to power searches for the genetic roots of insomnia, homosexuality, and other traits.

    My, what a lot of genomic claims:

    Scientists hope to find a gene that strongly influences, or even determines, how badly people are affected by the coronavirus. There are well-known examples of such genetic effects on other diseases: for example, sickle-cell genes confer resistance to malaria, and variants of other genes are known to protect people from HIV or from norovirus, an intestinal germ.

    The unscientific and unethical study on homosexuality apparently wasn’t enough for them. These people really are the vultures of the new world.

    #23andMe #Capitalisme_génomique

  • Facebook’s AI is still largely baffled by covid misinformation | MIT Technology Review
    https://www.technologyreview.com/2020/05/12/1001633/ai-is-still-largely-baffled-by-covid-misinformation

    Well, well: AI apparently isn’t up to the job of content moderation. It takes humans to understand humanity. What a miraculous discovery. We really are in the 21st century, I suppose.

    The news: In its latest Community Standards Enforcement Report, released today, Facebook detailed the updates it has made to its AI systems for detecting hate speech and disinformation. The tech giant says 88.8% of all the hate speech it removed this quarter was detected by AI, up from 80.2% in the previous quarter. The AI can remove content automatically if the system has high confidence that it is hate speech, but most is still checked by a human being first.
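
    The confidence split described here amounts to a simple triage rule. A schematic sketch, with invented thresholds rather than Facebook’s actual values:

    ```python
    # Schematic triage: auto-remove only at very high classifier confidence;
    # send the uncertain middle band to human reviewers. Thresholds are made up.
    AUTO_REMOVE_THRESHOLD = 0.98
    REVIEW_THRESHOLD = 0.60

    def triage(hate_speech_probability: float) -> str:
        if hate_speech_probability >= AUTO_REMOVE_THRESHOLD:
            return "remove_automatically"
        if hate_speech_probability >= REVIEW_THRESHOLD:
            return "queue_for_human_review"
        return "leave_up"

    for p in (0.99, 0.75, 0.10):
        print(p, triage(p))
    ```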

    Behind the scenes: The improvement is largely driven by two updates to Facebook’s AI systems. First, the company is now using massive natural-language models that can better decipher the nuance and meaning of a post. These models build on advances in AI research within the last two years that allow neural networks to be trained on language without any human supervision, getting rid of the bottleneck caused by manual data curation.

    The second update is that Facebook’s systems can now analyze content that consists of images and text combined, such as hateful memes. AI is still limited in its ability to interpret such mixed-media content, but Facebook has also released a new data set of hateful memes and launched a competition to help crowdsource better algorithms for detecting them.

    Covid lies: Despite these updates, however, AI hasn’t played as big a role in handling the surge of coronavirus misinformation, such as conspiracy theories about the virus’s origin and fake news of cures. Facebook has instead relied primarily on human reviewers at over 60 partner fact-checking organizations. Only once a person has flagged something, such as an image with a misleading headline, do AI systems take over to search for identical or similar items and automatically add warning labels or take them down. The team hasn’t yet been able to train a machine-learning model to find new instances of disinformation itself. “Building a novel classifier for something that understands content it’s never seen before takes time and a lot of data,” Mike Schroepfer, Facebook’s CTO, said on a press call.
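
    That flag-then-fan-out step (a human flags one item, then automated systems hunt for identical or near-identical copies) can be approximated with basic similarity matching. A toy sketch using word 3-gram overlap; Facebook’s production systems are of course far more sophisticated:

    ```python
    # Toy near-duplicate detector: compare word 3-grams of a human-flagged
    # headline against new posts and reuse the warning label on close matches.
    import re

    def shingles(text: str, n: int = 3) -> set[tuple[str, ...]]:
        words = re.findall(r"[a-z0-9]+", text.lower())
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def jaccard(a: set, b: set) -> float:
        return len(a & b) / len(a | b) if a | b else 0.0

    flagged = "Miracle cure stops the virus in 24 hours, doctors say"
    candidate = "Doctors say a miracle cure stops the virus in 24 hours!"
    print(round(jaccard(shingles(flagged), shingles(candidate)), 2))  # ~0.55: close enough to relabel
    ```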

    Why it matters: The challenge reveals the limitations of AI-based content moderation. Such systems can detect content similar to what they’ve seen before, but they founder when new kinds of misinformation appear. In recent years, Facebook has invested heavily in developing AI systems that can adapt more quickly, but the problem is not just the company’s: it remains one of the biggest research challenges in the field.

    #Intelligence_artificielle #Facebook #Modération