• What Happened to Urban Dictionary? | WIRED

    In time, however, the site began to espouse the worst of the internet—Urban Dictionary became something much uglier than perhaps what Peckham set out to create. It transformed into a harbor for hate speech. By allowing anyone to post definitions (users can vote their favorites up or down), Peckham opened the door for the most insidious among us. Racism, homophobia, xenophobia, and sexism currently serve as the basis for some of the most popular definitions on the site. In fact, one of the site’s definitions of sexism reads: “a way of life like welfare for black people. now stop bitching and get back to the kitchen.” Under Lady Gaga, one top entry describes her as the embodiment of “a very bad joke played on all of us by Tim Burton.” For LeBron James, it reads: “To bail out on your team when times get tough.”

    When I first discovered Urban Dictionary around 2004, I considered it a public good. The internet still carried an air of innocence then; the lion’s share of people who roamed chat forums and posted on LiveJournal had yet to adopt the mob instincts of cancel culture; Twitter was years away from warping our consumption habits and Facebook was only a fraction of the giant it is today. I was relatively new to what the internet could offer—its infinite landscapes dazzled my curious teenage mind—and found a strange solace in Urban Dictionary.

    My understanding of it hewed to a simple logic. Here was a place where words and phrases that friends, cousins, neighbors, and people I knew used with regularity found resonance and meaning. Before Urban Dictionary, I’d never seen words like hella or jawn defined anywhere other than in conversation. That they were afforded a kind of linguistic reverence was what awed me, what drew me in.

    Urban Dictionary’s abandonment of the traditional dictionary’s gatekeeping edict afforded it a rebel spirit. Early on, the beauty of the site was its deep insistence on showing how slang is socialized based on a range of factors: community, school, work. How we casually convey meaning is a direct reflection of our geography, our networks, our worldviews. At its best, Urban Dictionary crystallized that proficiency. Slang is often understood as a less serious form of literacy, as deficient or lacking. Urban Dictionary said otherwise. It let the cultivators of the most forward-looking expressions of language speak for themselves. It believed in the splendor of slang that was deemed unceremonious and paltry.

    But if the radiant array of terminology uploaded to the site was initially meant to function as a possibility of human speech, it is now mostly a repository of vile language. In its current form, Urban Dictionary is a cauldron of explanatory excess and raw prejudice. “The problem for Peckham’s bottom line is that derogatory content—not the organic evolution of language in the internet era—may be the site’s primary appeal,” Clio Chang wrote in The New Republic in 2017, as the site was taking on its present identity.

    Luckily, like language, the internet is stubbornly resistant to stasis. It is constantly reconfiguring and building anew. Today, other digital portals—Twitter, Instagram, gossip blogs like Bossip and The Shade Room, even group texts on our smartphones—function as better incubators of language than Urban Dictionary. Consider how Bossip’s headline mastery functions as a direct extension of black style—which is to say the site embraces, head on, the syntax and niche vernacular of a small community of people. The endeavor is both an acknowledgement of and a lifeline to a facet of black identity.

    That’s not to say Urban Dictionary is vacant of any good, but its utility, as a window into different communities and local subcultures, as a tool that extends sharp and luminous insight, has been obscured by darker intentions. What began as a joke is no longer funny. Even those who frequent the site understand what it’s eroded into. The top definition for Urban Dictionary reads: “Supposed to [b]e a user-inputed dictionary for words. However, has become a mindless forum of jokes, view-points, sex, and basically anything but the real definition of a word.” Where Oxford and Merriam-Webster erected walls around language, essentially controlling what words and expressions society deemed acceptable, Urban Dictionary, in its genesis, helped to democratize that process. Only the republic eventually ate itself.

    #Urban_dictionnary #Langage #Evolution_internet #Culture_numérique

  • AI-Generated Text Is the Scariest Deepfake of All | WIRED

    When pundits and researchers tried to guess what sort of manipulation campaigns might threaten the 2018 and 2020 elections, misleading AI-generated videos often topped the list. Though the tech was still emerging, its potential for abuse was so alarming that tech companies and academic labs prioritized working on, and funding, methods of detection. Social platforms developed special policies for posts containing “synthetic and manipulated media,” in hopes of striking the right balance between preserving free expression and deterring viral lies. But now, with about three months to go until November 3, that wave of deepfaked moving images seems never to have broken. Instead, another form of AI-generated media is making headlines, one that is harder to detect and yet much more likely to become a pervasive force on the internet: deepfake text.

    Last month brought the introduction of GPT-3, the next frontier of generative writing: an AI that can produce shockingly human-sounding (if at times surreal) sentences. As its output becomes ever more difficult to distinguish from text produced by humans, one can imagine a future in which the vast majority of the written content we see on the internet is produced by machines. If this were to happen, how would it change the way we react to the content that surrounds us?

    This wouldn’t be the first such media inflection point where our sense of what’s real shifted all at once. When Photoshop, After Effects, and other image-editing and CGI tools began to emerge three decades ago, the transformative potential of these tools for artistic endeavors—as well as their impact on our perception of the world—was immediately recognized. “Adobe Photoshop is easily the most life-changing program in publishing history,” declared a Macworld article from 2000, announcing the launch of Photoshop 6.0. “Today, fine artists add finishing touches by Photoshopping their artwork, and pornographers would have nothing to offer except reality if they didn’t Photoshop every one of their graphics.”

    We came to accept that technology for what it was and developed a healthy skepticism. Very few people today believe that an airbrushed magazine cover shows the model as they really are. (In fact, it’s often un-Photoshopped content that attracts public attention.) And yet, we don’t fully disbelieve such photos, either: While there are occasional heated debates about the impact of normalizing airbrushing—or more relevant today, filtering—we still trust that photos show a real person captured at a specific moment in time. We understand that each picture is rooted in reality.

    Generated media, such as deepfaked video or GPT-3 output, is different. If used maliciously, there is no unaltered original, no raw material that could be produced as a basis for comparison or evidence for a fact-check. In the early 2000s, it was easy to dissect pre-vs-post photos of celebrities and discuss whether the latter created unrealistic ideals of perfection. In 2020, we confront increasingly plausible celebrity face-swaps on porn, and clips in which world leaders say things they’ve never said before. We will have to adjust, and adapt, to a new level of unreality. Even social media platforms recognize this distinction; their deepfake moderation policies distinguish between media content that is synthetic and that which is merely “modified.”

    To moderate deepfaked content, though, you have to know it’s there. Out of all the forms that now exist, video may turn out to be the easiest to detect. Videos created by AI often have digital tells where the output falls into the uncanny valley: “soft biometrics” such as a person’s facial movements are off; an earring or some teeth are poorly rendered; or a person’s heartbeat, detectable through subtle shifts in coloring, is not present. Many of these giveaways can be overcome with software tweaks. In 2018’s deepfake videos, for instance, the subjects’ blinking was often wrong; but shortly after this discovery was published, the issue was fixed. Generated audio can be more subtle—no visuals, so fewer opportunities for mistakes—but promising research efforts are underway to suss those out as well. The war between fakers and authenticators will continue in perpetuity.

    Perhaps most important, the public is increasingly aware of the technology. In fact, that knowledge may ultimately pose a different kind of risk, related to and yet distinct from the generated audio and videos themselves: Politicians will now be able to dismiss real, scandalous videos as artificial constructs simply by saying, “That’s a deepfake!” In one early example of this, from late 2017, the US president’s more passionate online surrogates suggested (long after the election) that the leaked Access Hollywood “grab ’em” tape could have been generated by a synthetic-voice product named Adobe Voco.

    But synthetic text—particularly of the kind that’s now being produced—presents a more challenging frontier. It will be easy to generate in high volume, and with fewer tells to enable detection. Rather than being deployed at sensitive moments in order to create a mini scandal or an October Surprise, as might be the case for synthetic video or audio, textfakes could instead be used in bulk, to stitch a blanket of pervasive lies. As anyone who has followed a heated Twitter hashtag can attest, activists and marketers alike recognize the value of dominating what’s known as “share of voice”: Seeing a lot of people express the same point of view, often at the same time or in the same place, can convince observers that everyone feels a certain way, regardless of whether the people speaking are truly representative—or even real. In psychology, this is called the majority illusion. As the time and effort required to produce commentary drops, it will be possible to produce vast quantities of AI-generated content on any topic imaginable. Indeed, it’s possible that we’ll soon have algorithms reading the web, forming “opinions,” and then publishing their own responses. This boundless corpus of new content and comments, largely manufactured by machines, might then be processed by other machines, leading to a feedback loop that would significantly alter our information ecosystem.

    Right now, it’s possible to detect repetitive or recycled comments that use the same snippets of text in order to flood a comment section, game a Twitter hashtag, or persuade audiences via Facebook posts. This tactic has been observed in a range of past manipulation campaigns, including those targeting US government calls for public comment on topics such as payday lending and the FCC’s network-neutrality policy. A Wall Street Journal analysis of some of these cases spotted hundreds of thousands of suspicious contributions, identified as such because they contained repeated, long sentences that were unlikely to have been composed spontaneously by different people. If these comments had been generated independently—by an AI, for instance—these manipulation campaigns would have been much harder to smoke out.
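    The duplicate-spotting described above (flagging long, identical sentences that recur across many submissions) can be sketched in a few lines of Python. This is a simplified illustration, not the Journal’s actual methodology; the `flag_repeated_sentences` helper and its thresholds are invented for the example:

```python
import re
from collections import Counter

def flag_repeated_sentences(comments, min_words=10, min_repeats=3):
    """Count identical long sentences across a corpus of comments.

    A long sentence repeated verbatim by many 'different' people is
    unlikely to have been composed spontaneously, so flag it."""
    counts = Counter()
    for text in comments:
        for sentence in re.split(r"[.!?]+", text):
            # Normalize case and whitespace before comparing.
            normalized = " ".join(sentence.lower().split())
            if len(normalized.split()) >= min_words:
                counts[normalized] += 1
    return {s: n for s, n in counts.items() if n >= min_repeats}

# Toy corpus: four submissions share one long template sentence.
template = ("The proposed rule would impose severe burdens on consumers "
            "and small businesses across the country.")
corpus = [template + " I oppose it."] * 4 + ["I support the rule.", "No comment."]
flagged = flag_repeated_sentences(corpus)
print(len(flagged))  # 1: only the shared template sentence is flagged
```

    A real analysis would also need to catch near-duplicates (via shingling or similarity hashing, say), and the article’s point is that independently generated AI text would defeat this kind of exact-match screen entirely.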

    In the future, deepfake videos and audiofakes may well be used to create distinct, sensational moments that commandeer a press cycle, or to distract from some other, more organic scandal. But undetectable textfakes—masked as regular chatter on Twitter, Facebook, Reddit, and the like—have the potential to be far more subtle, far more prevalent, and far more sinister. The ability to manufacture a majority opinion, or create a fake-commenter arms race—with minimal potential for detection—would enable sophisticated, extensive influence campaigns. Pervasive generated text has the potential to warp our social communication ecosystem: algorithmically generated content receives algorithmically generated responses, which feeds into algorithmically mediated curation systems that surface information based on engagement.

    Our trust in each other is fragmenting, and polarization is increasingly prevalent. As synthetic media of all types—text, video, photo, and audio—increases in prevalence, and as detection becomes more of a challenge, we will find it increasingly difficult to trust the content that we see. It may not be so simple to adapt, as we did to Photoshop, by using social pressure to moderate the extent of these tools’ use and accepting that the media surrounding us is not quite as it seems. This time around, we’ll also have to learn to be much more critical consumers of online content, evaluating the substance on its merits rather than its prevalence.

    Renee DiResta (@noUpside) is an Ideas contributor for WIRED, writing about discourse and the internet. She studies narrative manipulation as the technical research manager at Stanford Internet Observatory, and is a Mozilla fellow on media, misinformation and trust. In past lives she has been director of research at New Knowledge, on the founding team of supply chain logistics startup Haven, a venture capitalist at OATV, and a trader at Jane Street.

  • #Covid-19 Vaccines With ‘Minor Side Effects’ Could Still Be Pretty Bad | WIRED

    The article faults mainstream journalists for showing little curiosity about pharmaceutical companies’ statements regarding their #vaccin, and for merely repeating the soothing lines those companies dole out, even though these statements are chiefly intended to drive up the #cotation of their #actions on the #bourse, and their main effect is to confirm the preconceptions of anti-#vaccins activists.

    The press release for Monday’s publication of results from the Oxford vaccine trials described an increased frequency of “minor side effects” among participants. A look at the actual paper, though, reveals this to be a marketing spin that has since been parroted in media reports. (The phrases “minor side effects” or “only minor side effects” appeared in writeups from The New York Times, The Wall Street Journal and Reuters, among other outlets.) Yes, mild reactions were far more common than worse ones. But moderate or severe harms—defined as being bad enough to interfere with daily life or needing medical care—were common too. Around one-third of people vaccinated with the Covid-19 vaccine without acetaminophen experienced moderate or severe chills, fatigue, headache, malaise, and/or feverishness. Close to 10 percent had a fever of at least 100.4 degrees, and just over one-fourth developed moderate or severe muscle aches. That’s a lot, in a young and healthy group of people—and the acetaminophen didn’t help much for most of those problems. The paper’s authors designated the vaccine as “acceptable” and “tolerated,” but we don’t yet know how acceptable this will be to most people. If journalists don’t start asking tougher questions, this will become the perfect setup for anti-vaccine messaging: Here’s what they forgot to tell you about the risks …

    There is another red flag. Clinical trials for other Covid-19 vaccines have placebo groups, where participants receive saline injections. Only one of the Oxford vaccine trials is taking this approach, however; the others instead compare the experimental treatment to an injected meningococcal vaccine. There can be good reasons to do this: Non-placebo injections may mimic telltale signs that you’ve received an active vaccine, such as a skin reaction, making the trial more truly “blind.” But their use also opens the door to doubt-sowing claims that any harms of the new vaccine are getting buried among the harms already caused by the control-group, “old” vaccines.

    Coverage of the Moderna vaccine reflects a different kind of pharma spin: the drip-feeding of selective data via press release. On May 18, Moderna put out some patchy, positive findings on interim outcomes from their first-in-human trial. The company followed that up with a stock offering—and company executives sold off nearly $30 million in shares into the feeding frenzy their press release created.

    With last week’s paper from Moderna, results from that same group of people finally had their formal publication. At the same time, the group registered a 30,000-person phase III clinical trial, specifying a pair of 100-microgram injections of the Covid-19 vaccine. According to the press release from May, there were no serious adverse events for the people in that particular dosage group. But last week’s paper shows the full results: By the time they’d had two doses, every single one was showing signs of headaches, chills, or fatigue, and for at least 80 percent, this could have been enough to interfere with their normal activities. A participant who had a severe reaction to a particularly high dose has talked in detail about how bad it was: If reactions even half as bad as this were to be common for some of these vaccines, they will be hard sells once they reach the community—and there could be a lot of people who are reluctant to get the second injection.

    #big_pharma #pharma #manipulations #MSM

  • Privacy Isn’t a Right You Can Click Away

    Senator Sherrod Brown wants to drastically scale back the permitted uses of your personal data—and ban facial recognition outright. Be honest—have you ever read a privacy disclosure? Even once? Facebook’s data privacy policy is more than 4,000 words. It contains dozens of links to hundreds of pages of complex terms and agreements. Even if you had the time to read it, you’d need a law degree and a data science background to understand which rights you’re signing away and what frightening (...)

    #Clearview #algorithme #CCTV #biométrie #conditions #consentement #données #facial #législation #reconnaissance #BigData #activisme #biais (...)


  • A thread written by gregggonsalves: "So, there is something strange going on in America, I keep getting asked “can we reopen schools,” [...]"

    It’s not rocket science. If your house is on fire, you DO NOT GO BACK IN THE HOUSE. 6/

    So. 1. Get your local epidemic under control. 2. Then ensure that schools and universities can test everyone 2x a week at least. 3. Make sure they can trace and isolate anyone infected and their close contacts. 4. Make sure everyone has adequate PPE... 7/

    5. Make sure social distancing can take place anywhere people are going to be together (classrooms, cafeterias, bathrooms). 6. Make sure surfaces are cleaned frequently, before/after activities. 8/

  • SUVs Are Worse for the Climate Than You Ever Imagined | WIRED

    It turns out that vehicles like mine—known as sport utility vehicles, or SUVs—are even worse for the climate than I had imagined. And I imagined they were pretty bad.
    A massive carbon footprint

    According to a summary analysis of a report by the International Energy Agency that was released on November 13, SUVs are the second-biggest cause of the rise in global carbon dioxide emissions during the past decade. Only the power sector is a bigger contributor.

    The analysis, which surprised even its own authors, found a dramatic shift toward SUVs. In 2010, one in five vehicles sold was an SUV; today it’s two in five. “As a result, there are now over 200 million SUVs around the world, up from about 35 million in 2010,” the agency reports.

    #automobile #voiture #pollution #climat

  • A New Map Shows the Inescapable Creep of Surveillance

    The Atlas of Surveillance shows which tech law enforcement agencies across the country have acquired. It’s a sobering look at the present-day panopticon. Over 1,300 partnerships with Ring. Hundreds of facial recognition systems. Dozens of cell-site simulator devices. The surveillance apparatus in the United States takes all kinds of forms in all kinds of places—a huge number of which populate a new map called the Atlas of Surveillance. A collaboration between the Electronic Frontier (...)

    #Ring #Amazon #algorithme #CCTV #drone #cartographie #vidéo-surveillance #surveillance #EFF

  • Body Cameras Haven’t Stopped Police Brutality. Here’s Why

    Amid worldwide protests over racism and police violence, lawmakers are once again turning to the devices as a tool for reform. After Michael Brown was killed by a police officer in Ferguson, Missouri, igniting the national Black Lives Matter movement, everyone from then president Barack Obama to members of Brown’s family embraced a relatively new solution for reform: Equip officers with body cameras. If police knew their every action was being recorded, the reasoning went, they would more (...)

    #CCTV #police #vidéo-surveillance #violence #surveillance

  • Plastic Rain Is the New Acid Rain | WIRED

    Writing today in the journal Science, researchers report a startling discovery: After collecting rainwater and air samples for 14 months, they calculated that over 1,000 metric tons of microplastic particles fall into 11 protected areas in the western US each year. That’s the equivalent of over 120 million plastic water bottles. “We just did that for the area of protected areas in the West, which is only 6 percent of the total US area,” says lead author Janice Brahney, an environmental scientist at Utah State University. “The number was just so large, it’s shocking.”
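    The bottle comparison above is easy to sanity-check with back-of-the-envelope arithmetic; the roughly 8-gram single-use PET bottle mass it implies is an inference, not a figure stated in the article:

```python
# 1,000 metric tons of microplastic vs. 120 million water bottles:
# what per-bottle mass does the equivalence assume?
microplastic_grams = 1_000 * 1_000_000  # 1,000 metric tons, in grams
bottle_count = 120_000_000
grams_per_bottle = microplastic_grams / bottle_count
print(round(grams_per_bottle, 1))  # 8.3, consistent with a light PET bottle
```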

    It further confirms an increasingly hellish scenario: Microplastics are blowing all over the world, landing in supposedly pure habitats, like the Arctic and the remote French Pyrenees. They’re flowing into the oceans via wastewater and tainting deep-sea ecosystems, and they’re even ejecting out of the water and blowing onto land in sea breezes. And now in the American West, and presumably across the rest of the world given that these are fundamental atmospheric processes, they are falling in the form of plastic rain—the new acid rain.

    #plastique #déchets_plastiques #pollution

  • Anonymous Stole and Leaked a Megatrove of Police Documents

    It’s been the better part of a decade since the hacktivist group Anonymous rampaged across the internet, stealing and leaking millions of secret files from dozens of US organizations. Now, amid the global protests following the killing of George Floyd, Anonymous is back—and it’s returned with a dump of hundreds of gigabytes of law enforcement files and internal communications. On Friday of last week, the Juneteenth holiday, a leak-focused activist group known as Distributed Denial of Secrets (...)

    #FBI #activisme #police #données #Anonymous #BigData #BlueLeaks #hacking

    • On Friday of last week, the Juneteenth holiday, a leak-focused activist group known as Distributed Denial of Secrets published a 269-gigabyte collection of police data that includes emails, audio, video, and intelligence documents, with more than a million files in total. DDOSecrets founder Emma Best tells WIRED that the hacked files came from Anonymous—or at least a source self-representing as part of that group, given that under Anonymous’ loose, leaderless structure anyone can declare themselves a member. Over the weekend, supporters of DDOSecrets, Anonymous, and protesters worldwide began digging through the files to pull out frank internal memos about police efforts to track the activities of protesters. The documents also reveal how law enforcement has described groups like the antifascist movement Antifa.

      “It’s the largest published hack of American law enforcement agencies,” Emma Best, cofounder of DDOSecrets, wrote in a series of text messages. “It provides the closest inside look at the state, local, and federal agencies tasked with protecting the public, including [the] government response to COVID and the BLM protests.”
      The Hack

      The massive internal data trove that DDOSecrets published was originally taken from a web development firm called Netsential, according to a law enforcement memo obtained by Krebs on Security. That memo, issued by the National Fusion Center Association, says that much of the data belonged to law enforcement “fusion centers” across the US that act as information-sharing hubs for federal, state, and local agencies. Netsential did not immediately respond to a request for comment.

      Best declined to comment on whether the information was taken from Netsential, but noted that “some Twitter users accurately pointed out that a lot of the data corresponded to Netsential systems.” As for their source, Best would say only that the person self-represented as “capital A Anonymous,” but added cryptically that “people may wind up seeing a familiar name down the line.”

      #fuite #transparence

  • Airbnb Quietly Fired Hundreds of Contract Workers. I’m One of Them

    While the company touted generous severance packages for its terminated employees, it offered unequal help to its shadow workforce. In April, with travel halted worldwide and revenue plunging, the cofounders of Airbnb raised $2 billion in debt and equity financing. Two weeks later, I was laid off. For 13 months, I worked full-time as a contract copywriter on a social impact initiative at Airbnb—and before that, for four months on a marketing project for the company. My office life (...)

    #Airbnb #discrimination #GigEconomy #licenciement #travail

  • A Council of Citizens Should Regulate Algorithms

    Are machine-learning algorithms biased, wrong, and racist? Let citizens decide. Essentially rule-based structures for making decisions, machine-learning algorithms play an increasingly large role in our lives. They suggest what we should read and watch, whom we should date, and whether or not we are detained while awaiting trial. Their promise is huge–they can better detect cancers. But they can also discriminate based on the color of our skin or the zip code we live in. Despite their (...)

    #algorithme #éthique #racisme #discrimination

  • How I Sold My Company to Twitter, Went to Facebook, and Screwed My Co-Founders | WIRED

    In October 2010, a mother in Florida had shaken her baby to death, as the baby would interrupt her FarmVille games with crying. A mother destroyed with her own hands what she’d been programmed over aeons to love, just to keep on responding to Facebook notifications triggered by some idiot game. Products that cause mothers to murder their infants in order to use them more, assuming they’re legal, simply cannot fail in the world. Facebook was legalized crack, and at Internet scale.

    #Facebook #internet #addictions

  • On Instagram, Black Squares Overtook Activist Hashtags | WIRED

    The posts had completely overtaken the #blacklivesmatter hashtag, “flooding out all of the resources that have been there for the last few years,” says Williams. “It’s really frustrating to have carved out this area of the internet where we can gather and then all of a sudden we see pages and pages and pages of black squares that don’t guide anyone to resources.” Around 1 am on the West Coast, Williams tweeted about it. “Do not post black squares with the hashtag #BlackLivesMatter. You’re [unintentionally] quite literally erasing the space organizers have been using to share resources. Stop it. Stop.”

    Social media has played a critical role in organizing against racism and police brutality in the US. Online, anyone can start a social movement; platforms like Twitter and Instagram have made it possible to broadcast messages to massive audiences and coordinate support across cities. Before the mainstream media reported on the shooting of Michael Brown in 2014, on-the-ground reports had already spread throughout Twitter. The police shooting of Philando Castile in 2016 was brought to light as soon as his girlfriend, Diamond Reynolds, broadcast a video to Facebook Live. The #blacklivesmatter hashtag itself originated with a Facebook post by Alicia Garza in 2013, after George Zimmerman was acquitted of fatally shooting Trayvon Martin.

    But the same megaphone that can amplify messages can also distort them. As recent protests have spread across American cities following the death of George Floyd, who died in police custody in Minneapolis, organizers have worked tirelessly to share images and information across social media, urging followers to take action. Now, activists say that all those black squares have drowned out the information that matters.

    Soon, though, the idea spread beyond the music industry. Kylie Jenner posted a black square to her Instagram feed. So did Fenty Beauty, Rihanna’s makeup brand, along with an announcement that the brand would not be conducting business on June 2. “This is not a day off. This is a day to reflect and find ways to make real change,” the company said in an Instagram post. Then it introduced a new hashtag: “This is a day to #pullup.”

    By Tuesday morning, thousands of people had begun garnishing their posts with the #blackoutday and #blacklivesmatter hashtags. Thousands of others used #blackouttuesday, or added it to their posts retrospectively, so as to avoid detracting from the information posted to #blacklivesmatter. Still, many have criticized the act of posting the black squares at all. “My Instagram feed this morning is just a wall of white people posting black screens,” the writer Jeanna Kadlec tweeted. “like ... that isn’t muting yourself, babe, that’s actually kind of the opposite!”

    Some activists have wondered if tagging the black square posts with #blacklivesmatter began as a coordinated effort to silence them, which other people failed to recognize when they jumped on the bandwagon. (As of Tuesday afternoon, WIRED has not independently confirmed the existence of any coordinated campaigns.)

    Williams, who noticed the flood of black squares as early as 1 am on Tuesday, also raised suspicions. “For it to jump from #theshowmustbepaused to #blackoutday to #blacklivesmatter is very, very odd to me,” they say. Whether or not the posts were coordinated or entirely spontaneous, “it’s clear to organizers and activists that this fucked us up,” says Williams. “Five or six years of work, all those resources, all that work and documentation—and now we have millions of black squares?”

    #Censure #Instagram #BlackLivesMatter #Memes #Culture_numérique

  • Walmart Employees Are Out to Show Its Anti-Shoplifting AI Doesn’t Work

    The retailer denies there is any widespread issue with the software, but a group expressed frustration—and public health concerns. In January, my coworker received a peculiar email. The message, which she forwarded to me, was from a handful of corporate Walmart employees calling themselves the “Concerned Home Office Associates.” (Walmart’s headquarters in Bentonville, Arkansas, is often referred to as the Home Office.) While it’s not unusual for journalists to receive anonymous tips, they don’t (...)

    #bug #Walmart #Amazon #algorithme #CCTV #consommation #supermarché #vidéo-surveillance #surveillance (...)


  • Facebook unveils the first names on its independent oversight board (#surveillance)

    #Facebook and the Folly of Self-Regulation | WIRED

    This board is also stacked with a disproportionate number of Americans who tend to view these issues through American legal history and conflicts. The original 20 includes five Americans, none of whom have any deep knowledge of how social media operate around the world.

    In contrast, the board has only one member from India—the country with more Facebook users than any other. India is home to more than 22 major languages and 700 dialects. The majority-Hindu nation has more Muslim citizens than any other country except Indonesia, along with millions of Buddhists, Christians, Jews, and Bahai. Facebook and #WhatsApp have been deployed by violent Hindu nationalists (aligned closely with the ruling BJP of Prime Minister Narendra Modi, the most popular politician on Facebook) to terrorize Muslims, Christians, journalists, scholars, and anyone who criticizes the central government’s efforts to make India a brutal, nationalistic theocracy.

    Ultimately, this board will influence none of the things that make Facebook Facebook: global scale (2.5 billion users in more than 100 languages), targeted ads (enabled by surveillance), and algorithmic amplification of some content rather than other content. The problem with Facebook is not that a photograph came down that one time. The problem with Facebook is Facebook.

  • Stewart Brand Is 81—and He Doesn’t Want to Go on a Ventilator | WIRED

    A moving article on intubation, and on a living person’s choice about whether to accept that treatment. By Steven Levy (the chronicler of Hackers) about Stewart Brand.

    Brand is a legendary writer and thinker, the founder of the Whole Earth Catalog and cofounder of the Long Now Foundation. He is also 81, and his tweet was a way of opening a conversation on a subject that was impossible for him to avoid during the Covid-19 pandemic: When is it time to say no to treatment?

    This end-of-life question didn’t arrive with the new coronavirus. For people who are older or have serious medical conditions, the possibility of having to make frightening health decisions in an emergency always lurks in the back of the mind. Covid-19 drives those dark thoughts to the foreground. While the virus is still a mystery in many ways, experts have been consistent on at least one point: It hits older people and those with preexisting medical conditions the hardest. And one of the worst complications—acute respiratory distress syndrome (ARDS)—can come on suddenly, rapidly accelerating to the point where treatment dictates admission to an intensive care unit.

    Brand now was posing a question: Should you just not go there? That’s when he opened it up to Twitter. “The main thing I’m looking for is data,” he wrote. “Anecdotes. Statistics. Video. INFORMATION … The stuff that good decisions are made of.”

    #Intubation #COVID-19 #Stewart_Brand #Steven_Levy

  • How to Set Your Facebook, Twitter, and Instagram to Control Who Sees What | WIRED

    12 rules to best protect your privacy on social media (though not *from* social media).

    Social media can bring us together, and even distract us sometimes from our troubles—but it also can expose us to scammers, hackers, and...less than pleasant experiences.

    Don’t panic though: you can keep the balance towards the positive with just a few common-sense steps, and we have some of the most vital ones below. When it comes to staying safe on Facebook, Instagram and Twitter, a lot of it is common sense, with a sprinkling of extra awareness.

    #Médias_sociaux #Vie_privée

  • Inside the Early Days of China’s Coronavirus Coverup | WIRED

    Seasoned journalists in China often say “Cover China as if you were covering Snapchat”—in other words, screenshot everything, under the assumption that any given story could be deleted soon. For the past two and a half months, I’ve been trying to screenshot every news article, social media post, and blog post that seems relevant to the coronavirus. In total, I’ve collected nearly 100 censored online posts: 40 published by major news organizations, and close to 60 by ordinary social media users like Yue. Beyond that, the number of Weibo posts censored and WeChat accounts suspended is virtually uncountable. (Despite numerous attempts, Weibo and WeChat could not be reached for comment.)

    Taken together, these deleted posts offer a submerged account of the early days of a global pandemic, and they indicate the contours of what Beijing didn’t want Chinese people to hear or see. Two main kinds of content were targeted for deletion by censors: Journalistic investigations of how the epidemic first started and was kept under wraps in late 2019 and live accounts of the mayhem and suffering inside Wuhan in the early days of the city’s lockdown, as its medical system buckled under the world’s first hammerstrike of patients.

    It’s not hard to see how these censored posts contradicted the state’s preferred narrative. Judging from these vanished accounts, the regime’s coverup of the initial outbreak certainly did not help buy the world time, but instead apparently incubated what some have described as a humanitarian disaster in Wuhan and Hubei Province, which in turn may have set the stage for the global spread of the virus. And the state’s apparent reluctance to show scenes of mass suffering and disorder cruelly starved Chinese citizens of vital information when it mattered most.

    On January 20, 2020, Zhong Nanshan, a prominent Chinese infectious disease expert, essentially raised the curtain on China’s official response to the coronavirus outbreak when he confirmed on state television that the pathogen could be transmitted from human to human. Zhong was, in many ways, an ideal spokesperson for the government’s effort; he had become famous for being a medical truth-teller during the 2003 SARS outbreak.

    Immediately following Zhong’s announcement, the Chinese government allowed major news organizations into Wuhan, giving them a surprising amount of leeway to report on the situation there. In another press conference on January 21, Zhong praised the government’s transparency. Two days after that, the government shut down virtually all transportation into and out of Wuhan, later extending the lockdown to other cities.

    The sequence of events had all the appearances of a strategic rollout: Zhong’s January 20 TV appearance marked the symbolic beginning of the crisis, to which the government responded swiftly, decisively, and openly.

    But shortly after opening the information floodgates, the state abruptly closed them again—particularly as news articles began to indicate a far messier account of the government’s response to the disease. “The last couple of weeks were the most open Weibo has ever been and [offered] the most freedom many media organizations have ever enjoyed,” one Chinese Weibo user wrote on February 2. “But it looks like this has come to an end.”

    On February 5, a Chinese magazine called China Newsweek published an interview with a doctor in Wuhan, who said that physicians were told by hospital heads not to share any information at the beginning of the outbreak. At the time, he said, the only thing that doctors could do was to urge patients to wear masks.

    Various frontline reports that were later censored supported this doctor’s descriptions: “Doctors were not allowed to wear isolation gowns because that might stoke fears,” said a doctor interviewed by the weekly publication Freezing Point. The interview was later deleted.

    By January, according to Caixin, a gene sequencing laboratory in Guangzhou had discovered that the novel virus in Wuhan shared a high degree of similarity with the virus that caused the SARS outbreak in 2003; but, according to an anonymous source, Hubei’s health commission promptly demanded that the lab suspend all testing and destroy all samples. On January 6, according to the deleted Caixin article, China’s National Center for Disease Control and Prevention initiated an “internal second-degree emergency response”—but did not alert the public. Caixin’s investigation disappeared from the Chinese internet only hours after it was published.

    Among journalists and social critics in China, the 404 error code, which announces that the content on a webpage is no longer available, has become a badge of honor. “At this point, if you haven’t had a 404 under your belt, can you even call yourself a journalist?” a Chinese reporter, who requested anonymity, jokingly asked me.

    However, the crackdown on reports out of Wuhan was even more aggressive against ordinary users of social media.

    On January 24, a resident posted that nurses at a Hubei province hospital were running low on masks and protective goggles. Soon after that post was removed, another internet user reposted it and commented: “Sina employees—I’m begging you to stop deleting accounts. Weibo is an effective way to offer help. Only when we are aware of what frontline people need can we help them.”

    Only minutes later, the post was taken down. The user’s account has since vanished.

    But the real war between China’s censors and its social media users began on February 7.

    That day, a Wuhan doctor named Li Wenliang—a whistleblower who had raised alarms about the virus in late December, only to be reprimanded for “spreading rumors”—died of Covid-19.

    Within hours, his death sparked a spectacular outpouring of collective grief on Chinese social media—an outpouring that was promptly snuffed out, post by post, minute by minute. With that, grief turned to wrath, and posts demanding freedom of speech erupted across China’s social media platforms as the night went on.

    A number of posts directly challenged the party’s handling of Li’s whistleblowing and the government’s relentless suppression of the freedom of speech in China. Some Chinese social media users started to post references to the 2019 Hong Kong protests, uploading clips of “Do You Hear the People Sing?” from Les Misérables, which became a protest anthem during last year’s mass demonstrations. Even more daringly, some posted photos from the 1989 Tiananmen Square protest and massacre, one of the most taboo subjects in China.

    One image that resurfaced showed a banner from the 1989 protest that reads: “We shall not let those murderers stand tall so they will block our wind of freedom from blowing.”

    The censors frantically kept pace. In the span of a quarter hour, from 23:16 to around 23:30, over 20 million searches for information on the death of Li Wenliang were winnowed down to fewer than 2 million, according to the Hong Kong-based outlet The Initium. The #DrLiWenLiangDied topic was dragged from number 3 on the trending topics list to number 7 within roughly the same time period.

    Since the night of February 7, whole publications have fallen to the scythe. On January 27, an opinion blog called Dajia published an article titled “50 Days into the Outbreak, The Entire Nation is Bearing the Consequence of the Death of the Media.” By February 19, the entire site was shut down, never to resurface.

    On March 10, an article about another medical whistleblower in Wuhan—another potential Li—was published and then swiftly wiped off the internet, which began yet another vast cat-and-mouse game between censors and Chinese social media users. The story, published by People, profiled a doctor, who, as she put it, had “handed out the whistle” by alerting other physicians about the emergence of a SARS-like virus in late December. The article reported that she had been scolded by hospital management for not keeping the information a secret.

    Soon after it was deleted, Chinese social media users started to recreate the article in every way imaginable: They translated it into over 10 languages; transcribed the piece in Morse code; wrote it out in ancient Chinese script; incorporated its content into a scannable QR code; and even rewrote it in Klingon—all in an effort to evade the censorship machine. All of these efforts were eradicated from the internet.

    But it’s unlikely that the masses of people who watched posts being expunged from the internet will forget how they were governed in the pandemic. On March 17, I picked up my phone, opened my Weibo account, and typed out the following sentence: “You are waiting for their apology, and they are waiting for your appreciation.” The post promptly earned me a 404 badge.

    Shawn Yuan is a Beijing-based freelance journalist and photographer. He travels between the Middle East and China to report on human rights and politics.

    #Chine #Censure #Médias_sociaux #Journalisme

  • How Well Can Algorithms Recognize Your Masked Face? | WIRED

    Facial-recognition algorithms from Los Angeles startup TrueFace are good enough that the US Air Force uses them to speed security checks at base entrances. But CEO Shaun Moore says he’s facing a new question: How good is TrueFace’s technology when people are wearing face masks?

    “It’s something we don’t know yet because it’s not been deployed in that environment,” Moore says. His engineers are testing their technology on masked faces and are hurriedly gathering images of masked faces to tune their machine-learning algorithms for pandemic times.

    Some vendors and users of facial recognition say the technology works well enough on masked faces. “We can identify a person wearing a balaclava, or a medical mask and a hat covering the forehead,” says Artem Kuharenko, founder of NtechLab, a Russian company whose technology is deployed on 150,000 cameras in Moscow. He says that the company has experience with face masks through contracts in Southeast Asia, where masks are worn to curb colds and flu. US Customs and Border Protection, which uses facial recognition on travelers boarding international flights at US airports, says its technology can identify masked faces.

    But Anil Jain, a professor at Michigan State University who works on facial recognition and biometrics, says such claims can’t be easily verified. “Companies can quote internal numbers, but we don’t have a trusted database or evaluation to check that yet,” he says. “There’s no third-party validation.”

    Early in March, China’s SenseTime, which became the world’s most valuable AI startup in part through providing face recognition to companies and government agencies, said it had upgraded its product for controlling access to buildings and workplaces to work with face masks. The software attends to facial features left uncovered, such as eyes, eyebrows, and the bridge of the nose, a spokesperson said. The US restricted sales to SenseTime and other Chinese AI companies last year for allegedly supplying technology used to oppress Uighur Muslims in China’s northwest.

    Reports from China of the systems’ effectiveness with masks are mixed. One Beijing resident told WIRED she appreciated the convenience of not having to remove her mask to use Alipay, China’s leading mobile payments network, which has updated its facial-recognition system. But Daniel Sun, a Gartner analyst also in Beijing, says he has had to step out of crowds to pull down his mask to use facial recognition for payments. Still, he believes facial recognition will continue to grow in usage, perhaps helped by interest in more hygienic, touch-free transactions. “I don’t think Covid-19 will stop the increase in usage of this technology in China,” Sun says.