• #Calais: #maraudes (outreach patrols) and #ratonnades (racist attacks)

    Recent images of #violences_policières (police violence) recall those seen in Calais during the dismantling of the "jungle" and, more recently still, in operations against the refugees who remain there.

    On 15 January 2018, during Emmanuel Macron's visit to Calais, two refugee-aid associations filed a complaint against persons unknown for "destruction and #dégradation_de_biens" (damage to property). Forced to sleep outdoors, the refugees suffer police violence that associations have been denouncing for months: a 16-year-old Eritrean lost an eye during a police operation...

    A repentant nationalist, volunteers running on empty, exhausted refugees: they bear witness to a story that keeps desperately repeating itself, each time graver and harsher.

    Patricia, a volunteer on Salam's outreach patrols, speaks of her incomprehension at the #haine (hatred) that some Calais residents express toward these refugees; criticism that sometimes comes from people who themselves face great hardship and another form of #misère (poverty).

    Romuald had initially found his place in an "anti-migrant" association frequented by far-right circles.

    "Managing immigration, closing the #frontières (borders), I'm for that. But between that and tear-gassing a guy because his skin is a different colour, there's a world of difference." Romuald, today a volunteer with the #association_Salam.

    He left the group, at odds with its radicalism. Some time later, Patricia encouraged him to come along on a patrol, then to join the Salam association, of which he is now one of the active members.

    "For someone from Calais to make a living, he'd better invest in barbed wire or fencing. Everything here is fenced off, in every corner." Romuald

    Youssef, for his part, is a member of #Utopia_56, an association helping the migrants of Calais. He describes the #dispersions (break-ups of camps), the #gaz_lacrymogènes (tear gas) and the police violence.

    "We're not equipped to wage war. They're the ones with the weapons."

    #asile #migrations #réfugiés #démantèlement #destruction #campement #audio #podcast #SDF #logement #hébergement #sans-abri #haine_des_réfugiés #extrême_droite #solidarité #violence #Salam #anti-migrants #islamophobie #fake_news #anti-musulmans #témoignage #distribution_de_repas


    At minute 25'10, the testimony of a migrant, Abeba from Ethiopia:
    "I am 'Dublined' (subject to the Dublin Regulation): I am the slave of my #empreintes (fingerprints)."

    ping @isskein @karine4

  • The National-Security Case for Fixing Social Media | The New Yorker

    On Wednesday, July 15th, shortly after 3 P.M., the Twitter accounts of Barack Obama, Joe Biden, Jeff Bezos, Bill Gates, Elon Musk, Warren Buffett, Michael Bloomberg, Kanye West, and other politicians and celebrities began behaving strangely. More or less simultaneously, they advised their followers—around two hundred and fifty million people, in total—to send Bitcoin contributions to mysterious addresses. Twitter’s engineers were surprised and baffled; there was no indication that the company’s network had been breached, and yet the tweets were clearly unauthorized. They had no choice but to switch off around a hundred and fifty thousand verified accounts, held by notable people and institutions, until the problem could be identified and fixed. Many government agencies have come to rely on Twitter for public-service messages; among the disabled accounts was the National Weather Service, which found that it couldn’t send tweets to warn of a tornado in central Illinois. A few days later, a seventeen-year-old hacker from Florida, who enjoyed breaking into social-media accounts for fun and occasional profit, was arrested as the mastermind of the hack. The F.B.I. is currently investigating his sixteen-year-old sidekick.

    In its narrowest sense, this immense security breach, orchestrated by teen-agers, underscores the vulnerability of Twitter and other social-media platforms. More broadly, it’s a telling sign of the times. We’ve entered a world in which our national well-being depends not just on the government but also on the private companies through which we lead our digital lives. It’s easy to imagine what big-time criminals, foreign adversaries, or power-grabbing politicians could have done with the access the teen-agers secured. In 2013, the stock market briefly plunged after a tweet sent from the hacked account of the Associated Press reported that President Barack Obama had been injured in an explosion at the White House; earlier this year, hundreds of armed, self-proclaimed militiamen converged on Gettysburg, Pennsylvania, after a single Facebook page promoted the fake story that Antifa protesters planned to burn American flags there.

    When we think of national security, we imagine concrete threats—Iranian gunboats, say, or North Korean missiles. We spend a lot of money preparing to meet those kinds of dangers. And yet it’s online disinformation that, right now, poses an ongoing threat to our country; it’s already damaging our political system and undermining our public health. For the most part, we stand defenseless. We worry that regulating the flow of online information might violate the principle of free speech. Because foreign disinformation played a role in the election of our current President, it has become a partisan issue, and so our politicians are paralyzed. We enjoy the products made by the tech companies, and so are reluctant to regulate their industry; we’re also uncertain whether there’s anything we can do about the problem—maybe the price of being online is fake news. The result is a peculiar mixture of apprehension and inaction. We live with the constant threat of disinformation and foreign meddling. In the uneasy days after a divisive Presidential election, we feel electricity in the air and wait for lightning to strike.

    In recent years, we’ve learned a lot about what makes a disinformation campaign effective. Disinformation works best when it’s consistent with an audience’s preconceptions; a fake story that’s dismissed as incredible by one person can appear quite plausible to another who’s predisposed to believe in it. It’s for this reason that, while foreign governments may be capable of more concerted campaigns, American disinformers are especially dangerous: they have their fingers on the pulse of our social and political divisions.

    As cyber wrongdoing has piled up, however, it has shifted the balance of responsibility between government and the private sector. The federal government used to be solely responsible for what the Constitution calls our “common defense.” Yet as private companies amass more data about us, and serve increasingly as the main forum for civic and business life, their weaknesses become more consequential. Even in the heyday of General Motors, a mishap at that company was unlikely to affect our national well-being. Today, a hack at Google, Facebook, Microsoft, Visa, or any of a number of tech companies could derail everyday life, or even compromise public safety, in fundamental ways.

    Because of the very structure of the Internet, no Western nation has yet found a way to stop, or even deter, malicious foreign cyber activity. It’s almost always impossible to know quickly and with certainty if a foreign government is behind a disinformation campaign, ransomware implant, or data theft; with attribution uncertain, the government’s hands are tied. China and other authoritarian governments have solved this problem by monitoring every online user and blocking content they dislike; that approach is unthinkable here. In fact, any regulation meant to thwart online disinformation risks seeming like a step down the road to authoritarianism or a threat to freedom of speech. For good reason, we don’t like the idea of anyone in the private sector controlling what we read, see, and hear. But allowing companies to profit from manipulating what we view online, without regard for its truthfulness or the consequences of its viral dissemination, is also problematic. It seems as though we are hemmed in on all sides, by our enemies, our technologies, our principles, and the law—that we have no choice but to learn to live with disinformation, and with the slow erosion of our public life.

    We might have more maneuvering room than we think. The very fact that the disinformation crisis has so many elements—legal, technological, and social—means that we have multiple tools with which to address it. We can tackle the problem in parts, and make progress. An improvement here, an improvement there. We can’t cure this chronic disease, but we can manage it.

    Online, the regulation of speech is governed by Section 230 of the Communications Decency Act—a law, enacted in 1996, that was designed to allow the nascent Internet to flourish without legal entanglements. The statute gives every Internet provider or user a shield against liability for the posting or transmission of user-generated wrongful content. As Anna Wiener wrote earlier this year, Section 230 was well-intentioned at the time of its adoption, when all Internet companies were underdogs. But today that is no longer true, and analysts and politicians on both the right and the left are beginning to think, for different reasons, that the law could be usefully amended.

    Technological progress is possible, too, and there are signs that, after years of resistance, social-media platforms are finally taking meaningful action. In recent months, Facebook, Twitter, and other platforms have become more aggressive about removing accounts that appear inauthentic, or that promote violence or lawbreaking; they have also moved faster to block accounts that spread disinformation about the coronavirus or voting, or that advance abhorrent political views, such as Holocaust denial. The next logical step is to decrease the power of virality. In 2019, after a series of lynchings in India was organized through the chat program WhatsApp, Facebook limited the mass forwarding of texts on that platform; a couple of months ago, it implemented similar changes in the Messenger app embedded in Facebook itself. As false reports of ballot fraud became increasingly elaborate in the days before and after Election Day, the major social media platforms did what would have been unthinkable a year ago, labelling as misleading messages from the President of the United States. Twitter made it slightly more difficult to forward tweets containing disinformation; an alert now warns the user about retweeting content that’s been flagged as untruthful. Additional changes of this kind, combined with more transparency about the algorithms they use to curate content, could make a meaningful difference in how disinformation spreads online. Congress is considering requiring such transparency.
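    The forwarding limits described above amount to a small gate on message fan-out. As a rough illustration (the thresholds echo WhatsApp's publicly reported limits, but the code itself is a hypothetical sketch, not any platform's implementation):

```python
# Hedged sketch of a virality throttle: ordinary messages can be forwarded
# to several chats at once, while "highly forwarded" messages (ones that
# have already travelled through many hops) can go to only one chat at a
# time. All names and thresholds here are illustrative assumptions.

FORWARD_FANOUT_LIMIT = 5   # max chats per forward for an ordinary message
HIGHLY_FORWARDED_HOPS = 5  # hop count after which a message is throttled

def allowed_fanout(forward_hops: int) -> int:
    """How many chats one forwarding action may reach."""
    if forward_hops >= HIGHLY_FORWARDED_HOPS:
        return 1  # viral chains slow to one chat at a time
    return FORWARD_FANOUT_LIMIT

def forward(message: dict, target_chats: list) -> list:
    """Forward a message, truncating the targets to the allowed fan-out."""
    limit = allowed_fanout(message.get("forward_hops", 0))
    delivered = target_chats[:limit]
    # each forwarding action deepens the chain
    message["forward_hops"] = message.get("forward_hops", 0) + 1
    return delivered
```

    The point of such a design is not to block any message, only to add friction: a chain letter that once fanned out five-fold per hop now spreads linearly once it becomes "highly forwarded."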

    #Désinformation #Fake_news #Propositions_légales #Propositions_techniques #Médias_sociaux

  • Mundus vult decipi ergo decipiatur

    "Hold-Up": "Debunking this documentary would take hours, days of work"

    All the traditional levers are pulled in this documentary, and that is worrying, because everyone will be able to see their own prejudices in it, to project onto it their own anxieties and suspicions. At a moment when our democracy is especially fragile, this kind of production can find an echo, a resonance, among many people who are worried, discontented or frustrated with the current situation.

    So, in fact, Hold-Up is a bit like the I Ching: a kind of oracle that can never be caught out.

    Watched the thing for ten minutes: an avalanche of testimonies and so-called analyses, each more biased than the last. It plays almost entirely on emotion; no distance or critical perspective is offered. A "documentary" that captures the era we live in: to fight #désinformation, add more disinformation. 100% garbage ==> straight to the bin.

  • Rassurer sur l’IVG, c’est important - Le Moment Meurice

    #misogynie #grossesse_forcée #ivg #fake_news
    #catholicisme #impunité
    Abortion: the Académie de médecine opposes extending the legal limit from 12 to 14 weeks

    Extension of the legal time limit for abortion passed at first reading in the Assembly

    On Thursday 8 October, at first reading, MPs voted 86 to 59 in favour of extending from 12 to 14 weeks the legal time limit within which women may have a voluntary termination of pregnancy (IVG). The demand, which came from associations and was debated in the Assemblée nationale on 7 October, divides practitioners. Politically, the bill is supported by many MPs from the majority, though Matignon is more circumspect.

    The two-week extension comes from a bill tabled by the Ecologie Démocratie Solidarité (EDS) group and was examined during that group of former "marcheurs"' parliamentary "niche" (a day reserved for bills put forward by an opposition group). Because of a shortage of practitioners and the gradual closure of abortion centres, several weeks often elapse between the first appointment and the procedure. Each year, between 3,000 and 4,000 women past the legal limit are thought to go abroad for an abortion, according to a parliamentary report published in 2000.

    Sponsored by MP Albane Gaillot (ex-LRM, Val-de-Marne), the bill would also allow midwives to perform surgical abortions up to the 10th week of pregnancy and remove the conscience clause specific to abortion for doctors, two long-standing demands of feminist associations intended to guarantee "equal access to abortion" across the whole country. "This is not a party's bill but a bill for women's rights," one that commands "consensus," Ms Gaillot argued.

    Boos and applause

    The atmosphere in the Palais Bourbon on Thursday was electric. Boos, jeers and, conversely, rounds of applause: the debates between the bill's supporters and its opponents, from the right and from ex-"marcheurs" such as Joachim Son-Forget and Agnès Thill, revived the ghosts of the debates over the Veil law, adopted forty-five years ago.

    From left to right, nearly all MPs invoked the memory of Simone Veil, who died in 2017 and who carried through the law decriminalising voluntary termination of pregnancy, in defence of their positions. The health minister, Olivier Véran, had from the outset described the subject as "sensitive."

    Before giving way to his colleague Brigitte Bourguignon, the minister chose to proceed cautiously on ground he considers too treacherous to be debated during a parliamentary "niche", a view widely shared on the right.

    The whole of the left backs the bill. It was in fact from the La France insoumise (LFI) group that the most spirited pleas in its favour came, notably from Jean-Luc Mélenchon and from Clémentine Autain, who recounted having had an abortion herself and running up against precisely this question of the "time limit."

    On the right, opponents fought over every article, criticising provisions they consider to "unbalance" the Veil law, as Jean-Christophe Lagarde (UDI) put it. In their sights: the removal of the two-day reflection period for confirming an abortion after a psychosocial interview, and above all the removal of the conscience clause specific to abortion for doctors and midwives, a clause that keeps abortion "in a status apart" even though "it is a medical act like any other," Ms Gaillot countered.

    For the government, the balancing act is delicate. Olivier Véran stressed that it was essential to wait for the opinion of the National Consultative Ethics Committee (CCNE), to which the government referred the matter on Tuesday, "in order to do complete, thorough work" and inform the debates.

    The committee is due to deliver its opinion in the course of November, probably before the bill reaches the Senate.


  • Outsmarting mental shortcuts - with comedian Louis T! | Agence Science-Presse

    As part of the project Covid-19: Dépister la désinfo / Track the facts, here is a new series of five short videos on mental shortcuts with comedian Louis T.

    #vidéos #EMI #fake_news

  • Trump on QAnon Followers: ’These Are People That Love Our Country’ - The New York Times

    WASHINGTON — President Trump on Wednesday offered encouragement to proponents of QAnon, a viral conspiracy theory that has gained a widespread following among people who believe the president is secretly battling a criminal band of sex traffickers, and suggested that its proponents were patriots upset with unrest in Democratic cities.

    “I’ve heard these are people that love our country,” Mr. Trump said during a White House news conference ostensibly about the coronavirus. “So I don’t know really anything about it other than they do supposedly like me.”

    “Is that supposed to be a bad thing or a good thing?” the president said lightly, responding to a reporter who asked if he could support that theory. “If I can help save the world from problems, I am willing to do it. I’m willing to put myself out there.”

    Mr. Trump’s cavalier response was a remarkable public expression of support for conspiracy theorists who have operated in the darkest corners of the internet and have at times been charged with domestic terrorism and planned kidnapping.

    “QAnon conspiracy theorists spread disinformation and foster a climate of extremism and paranoia, which in some cases has led to violence. Condemning this movement should not be difficult,” said Jonathan A. Greenblatt, the chief executive of the Anti-Defamation League. “It’s downright dangerous when a leader not only refuses to do so, but also wonders whether what they are doing is ‘a good thing.’”

    QAnon is a larger and many-tentacled version of the Pizzagate conspiracy theory, which falsely claimed that Hillary Clinton was operating a child sex-trafficking ring out of the basement of a Washington, D.C., pizza restaurant. In December 2016, a man who said he was on the hunt for proof of child abuse was arrested after firing a rifle inside the restaurant.

    QAnon supporters often flood social media pages with memes and YouTube videos that target well-known figures — like Mrs. Clinton and her husband, former President Bill Clinton, and the actor Tom Hanks — with unfounded claims about their links to child abuse. Lately, activists have used anti-child-trafficking hashtags as a recruitment tool.

    “It’s not just a conspiracy theory, this is a domestic extremist movement,” said Travis View, a host of “QAnon Anonymous,” a podcast that seeks to explain the movement. Mr. View said that Twitter and Facebook pages exploded with comments from gleeful followers after Mr. Trump’s comments.

    Mr. View pointed out that the president answered the question by supporting the central premise of the QAnon theory — that he is battling a cabal of left-wing pedophiles — rather than addressing the lack of evidence behind the movement.

    In recent weeks, platforms including Twitter and Facebook have rushed to dismantle a mushrooming number of QAnon-related accounts and fan pages, a move that people who study the movement say is too little and too late. On Wednesday, after a record amount of QAnon-related growth on the site, Facebook said it removed 790 QAnon groups and was restricting another 1,950 groups, 440 pages and more than 10,000 Instagram accounts.

    On Facebook alone, activity on some of the largest QAnon groups rose 200 to 300 percent in the past six months, according to data gathered by The New York Times.

    “We have seen growing movements that, while not directly organizing violence, have celebrated violent acts, shown that they have weapons and suggest they will use them, or have individual followers with patterns of violent behavior,” Facebook said in a statement, adding that it would also block QAnon hashtags like #digitalarmy and #thestorm.

    But the movement made the jump from social media long ago: With dozens of QAnon supporters running this year for Congress — including several who have won Republican primaries in Oregon and Georgia — QAnon is knocking on the door of mainstream politics, and has done so with the president’s help.

    For his part, the president has often reposted QAnon-centric content into his Twitter feed. And QAnon followers have long interpreted messages from Dan Scavino, the White House director of social media, as promoting tongue-in-cheek symbols associated with the movement.

    “I’m not surprised at all by his reaction, and I don’t think QAnon conspirators are surprised either. It’s terrifying,” Vanessa Bouché, an associate professor of political science at Texas Christian University, said in an interview. “In a democratic society, we make decisions based on information. And if people are believing these lies, then we’re in a very dangerous position.”

    #Qanon #Trump #Fake_news #Culture_numérique #Mèmes #Extrême_droite

  • Facebook funnelling readers towards Covid misinformation - study | Technology | The Guardian

    Facebook had promised to crack down on conspiracy theories and inaccurate news early in the pandemic. But as its executives promised accountability, its algorithm appears to have fuelled traffic to a network of sites sharing dangerous false news, campaign group Avaaz has found.

    False medical information can be deadly; researchers led by Bangladesh’s International Centre for Diarrhoeal Disease Research, writing in The American Journal of Tropical Medicine and Hygiene, have directly linked a single piece of coronavirus misinformation to 800 deaths.

    Pages from the top 10 sites peddling inaccurate information and conspiracy theories about health received almost four times as many views on Facebook as the top 10 reputable sites for health information, Avaaz warned in a report.

    “This suggests that just when citizens needed credible health information the most, and while Facebook was trying to proactively raise the profile of authoritative health institutions on the platform, its algorithm was potentially undermining these efforts,” the report said.

    A relatively small but influential network is responsible for driving huge amounts of traffic to health misinformation sites. Avaaz identified 42 “super-spreader” sites that had 28m followers generating an estimated 800m views.

    A single article, which falsely claimed that the American Medical Association was encouraging doctors and hospitals to over-estimate deaths from Covid-19, was seen 160m times.

    This vast collective reach suggested that Facebook’s own internal systems are not capable of protecting users from misinformation about health, even at a critical time when the company has promised to keep users “safe and informed”.

    “Avaaz’s latest research is yet another damning indictment of Facebook’s capacity to amplify false or misleading health information during the pandemic,” said British MP Damian Collins, who led a parliamentary investigation into disinformation.

    “The majority of this dangerous content is still on Facebook with no warning or context whatsoever … The time for [Facebook CEO, Mark] Zuckerberg to act is now. He must clean up his platform and help stop this harmful infodemic.”

    Some of the false claims were directly harmful: one, suggesting that pure alcohol could kill the virus, has been linked to 800 deaths, as well as 60 people going blind after drinking methanol as a cure. “In India, 12 people, including five children, became sick after drinking liquor made from toxic seed Datura (ummetta plant in local parlance) as a cure to coronavirus disease,” the paper says. “The victims reportedly watched a video on social media that Datura seeds give immunity against Covid-19.”

    Beyond the specifically dangerous falsehoods, much misinformation is merely useless, but can contribute to the spread of coronavirus, as with one South Korean church which came to believe that spraying salt water could combat the virus.

    “They put the nozzle of the spray bottle inside the mouth of a follower who was later confirmed as a patient before they did likewise for other followers as well, without disinfecting the sprayer,” an official later said. More than 100 followers were infected as a result.

    Among Facebook’s tactics for fighting disinformation on the platform has been giving independent fact-checkers the ability to put warning labels on items they consider untrue.

    Zuckerberg has said fake news would be marginalised by the algorithm, which determines what content viewers see. “Posts that are rated as false are demoted and lose on average 80% of their future views,” he wrote in 2018.
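    A demotion like the one Zuckerberg describes can be pictured as a multiplier on a post's feed-ranking score. The sketch below is a minimal illustration under that assumption (the 0.2 factor mirrors the "lose ~80% of future views" figure; none of this is Facebook's actual ranking code):

```python
# Hedged sketch: content rated false by fact-checkers keeps its base
# relevance score but is multiplied down, so it surfaces far less often.
# The factor and the post structure are illustrative assumptions.

DEMOTION_FACTOR = 0.2  # retain ~20% of reach => roughly 80% fewer views

def ranking_score(base_score: float, rated_false: bool) -> float:
    """Feed score after applying the fact-check demotion, if any."""
    return base_score * DEMOTION_FACTOR if rated_false else base_score

posts = [
    {"id": "a", "base": 0.9, "rated_false": True},   # viral but flagged
    {"id": "b", "base": 0.5, "rated_false": False},  # ordinary post
]
ranked = sorted(
    posts,
    key=lambda p: ranking_score(p["base"], p["rated_false"]),
    reverse=True,
)
# the flagged post now ranks below the ordinary one despite a higher base score
```

    The design choice worth noting is that demotion is probabilistic rather than absolute: the flagged post is still visible if a user seeks it out, which is why labelling and demotion are usually paired rather than used alone.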

    But Avaaz found that huge amounts of disinformation slip through Facebook’s verification system, despite having been flagged by fact-checking organisations.

    They analysed nearly 200 pieces of health misinformation which were shared on the site after being identified as problematic. Fewer than one in five carried a warning label, with the vast majority – 84% – slipping through controls after they were translated into other languages, or republished in whole or part.

    “These findings point to a gap in Facebook’s ability to detect clones and variations of fact-checked content – especially across multiple languages – and to apply warning labels to them,” the report said.
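    The capability the report finds missing, matching lightly edited "clones" of already fact-checked claims, is essentially near-duplicate text detection. A toy sketch using word shingles and Jaccard overlap (a real system would additionally need multilingual embeddings to catch translations, which this sketch deliberately does not attempt; all names and the threshold are illustrative):

```python
# Hedged sketch of clone detection for fact-checked claims: normalize
# text into overlapping word n-grams ("shingles") and compare set
# overlap. Punctuation and casing changes no longer defeat the match.

import re

def shingles(text: str, n: int = 3) -> set:
    """Lowercased word n-grams of the text."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Set overlap in [0, 1]; 0.0 when either set is empty."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def is_clone(candidate: str, fact_checked: str, threshold: float = 0.5) -> bool:
    """Does the candidate look like a variant of a fact-checked claim?"""
    return jaccard(shingles(candidate), shingles(fact_checked)) >= threshold

checked = "the american medical association told doctors to overestimate covid deaths"
clone = "The American Medical Association told doctors to overestimate COVID deaths!!"
unrelated = "drinking water is good for you in the summer heat"
```

    Here `is_clone(clone, checked)` matches while `is_clone(unrelated, checked)` does not; a warning label applied to the original claim could then be propagated to the clone automatically.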

    Two simple steps could hugely reduce the reach of misinformation. The first would be proactively correcting misinformation that users saw before it was labelled as false, by putting prominent corrections in their feeds.

    Recent research has found corrections like these can halve belief in incorrect reporting, Avaaz said. The other step would be to improve the detection and monitoring of translated and cloned material, so that Zuckerberg’s promise to starve the sites of their audiences is actually made good.

    A Facebook spokesperson said: “We share Avaaz’s goal of limiting misinformation, but their findings don’t reflect the steps we’ve taken to keep it from spreading on our services. Thanks to our global network of fact-checkers, from April to June, we applied warning labels to 98m pieces of Covid-19 misinformation and removed 7m pieces of content that could lead to imminent harm. We’ve directed over 2bn people to resources from health authorities and when someone tries to share a link about Covid-19, we show them a pop-up to connect them with credible health information.”

    #Facebook #Fake_news #Désinformation #Infodemics #Promesses #Culture_de_l_excuse #Médias_sociaux

  • Kamala Harris and Disinformation: Debunking 3 Viral Falsehoods - The New York Times

    As Joseph R. Biden Jr. announced that he had selected Senator Kamala Harris of California as his vice-presidential running mate, internet trolls got to work.

    Since then, false and misleading information about Ms. Harris has spiked online and on TV. The activity has jumped from two dozen mentions per hour during a recent week to over 3,200 per hour in the last few days, according to the media insights company Zignal Labs, which analyzed global television broadcasts and social media.

    Much of that rise is fueled by fervent supporters of President Trump and adherents of the extremist conspiracy movement QAnon, as well as by the far left, according to a New York Times analysis of the most widespread falsehoods about Ms. Harris. On Thursday, Mr. Trump himself encouraged one of the most persistent falsehoods, a racist conspiracy theory that Ms. Harris is not eligible for the vice presidency or presidency because her parents were immigrants.

    “Sadly, this wave of misinformation was predictable and inevitable,” said Melissa Ryan, chief executive of Card Strategies, a consulting firm that researches disinformation.

    Many of the narratives are inaccurate accusations that first surged last year during Ms. Harris’s campaign to become the Democratic presidential nominee. Here are three false rumors about Ms. Harris that continue circulating widely online.

    #Fake_News #Kamala_Harris #Politique_USA

  • A new Trump campaign ad depicting a police officer being attacked by protesters is actually a 2014 photo of pro-democracy protests in Ukraine

    However, the image the Trump campaign used is not from the US – or from this year. It was uploaded on Wikimedia Commons, Wikipedia’s public-domain media archive, in 2014 with the label “a police officer attacked by protesters during clashes in Ukraine, Kyiv. Events of February 18, 2014.”

    The photographer, Mstyslav Chernov, confirmed to Business Insider that this was his photo from Ukraine in 2014.

    “Photography has always been used to manipulate public opinion. And with the rise of social media and the rise of populism, this is happening even more,” he said.


    A source close to Facebook told Business Insider that the company does not plan to remove the ad.

    #désinformation #fake_news #Trump #Facebook

  • With social media, Zimbabwean youth fight pandemic ’infodemic’

    JOHANNESBURG/BULAWAYO, Zimbabwe, July 23 (Thomson Reuters Foundation) - Drinking alcohol will kill the coronavirus. It is OK to share face masks. Africans cannot get COVID-19. The pandemic is not even real.

    These are some of the coronavirus myths that a team of 20 Zimbabwean youth have been busting online since the country’s lockdown began in late March, using social media and radio shows to reach an estimated 100,000 people to date.

    “There is a common saying that ’ignorance is bliss’. Well, in this instance, ignorance is not bliss, if anything ignorance is death,” said Bridget Mutsinze, 25, a volunteer based in the capital, Harare.

    Although its caseload is relatively low compared with the rest of the continent, Zimbabwe is experiencing an uptick in coronavirus infections, with more than 1,800 cases and at least 26 deaths, according to a tally by Johns Hopkins University.

    To stem the spread of the disease, Zimbabwean youth working with development charity Voluntary Service Overseas (VSO) have taken to Twitter, WhatsApp, Facebook and radio to comb through online comments, identify and correct COVID-19 misinformation.

    The spread of coronavirus misinformation has been a global issue, with the World Health Organization describing it as an “infodemic”.

    While tech giants WhatsApp and Facebook have teamed up with African governments to tackle fake news through interactive bots, adverts and push notifications, VSO volunteers are leading the battle within their communities.

    Across the continent, 86% of Africans aged 18-24 own a smartphone and nearly 90% use it for social media, according to a survey by the South African-based Ichikowitz Family Foundation.

    VSO volunteers are tapping into the informal conversations taking place on these platforms.

    “If we do not get facts out there, people will continue to live as they wish and the number of people who get the virus will continue to spread,” Mutsinze told the Thomson Reuters Foundation.

    #Désinformation #Fake_News #COVID-19 #Zimbabwe

  • A TikTok Twist on ‘PizzaGate’ - The New York Times

    One of social media’s early conspiracy theories is back, but remade in creatively horrible ways.

    “PizzaGate,” the baseless notion that a Washington pizza parlor was the center of a child sex-abuse ring, a theory that led to a shooting in 2016, is catching on again with younger people on TikTok and other online hangouts, my colleagues Cecilia Kang and Sheera Frenkel wrote.

    I talked to Sheera about how young people have tweaked this conspiracy and how internet sites help spread false ideas. (And, yes, our names are pronounced the same but spelled differently.)

    Shira: How has this false conspiracy changed in four years?

    Sheera: Younger people on TikTok have made PizzaGate more relatable for them. So a conspiracy that centered on Hillary Clinton and other politicians a few years ago now instead ropes in celebrities like Justin Bieber. Everyone is at home, bored and online more than usual. When I talked to teens who were spreading these conspiracy videos, many of them said it seemed like fun.

    If it’s for “fun,” is this version of the PizzaGate conspiracy harmless?

    It’s not. We’ve seen over and over that some people can get so far into conspiracies that they take them seriously and commit real-world harm. And for people who are survivors of sexual abuse, it can be painful to see people talking about it all over social media.

    Have the internet companies gotten better at stopping false conspiracies like this?

    They have, but people who want to spread conspiracies are figuring out workarounds. Facebook banned the PizzaGate hashtag, for example, but the hashtag is not banned on Instagram, even though it’s owned by Facebook. People also migrated to private groups where Facebook has less visibility into what’s going on.

    Tech companies’ automated recommendation systems also can suck people further into false ideas. I recently tried to join Facebook QAnon conspiracy groups, and Facebook immediately recommended I join PizzaGate groups, too. On TikTok, what you see is largely decided by computer recommendations. So I watched one video about PizzaGate, and the next videos I saw in the app were all about PizzaGate.

    TikTok is a relatively new place where conspiracies can spread. What is it doing to address this?

    TikTok is not proactively going out and looking for videos with potentially false and dangerous ideas and removing them. There were more than 80 million views of TikTok videos with PizzaGate-related hashtags.

    The New York Times reached out to TikTok about the videos, pointing out their spike. After we sent our email, TikTok removed many of the videos and seemed to limit their spread. Facebook and Twitter often do this, too — they frequently remove content only after journalists reach out and point it out.

    Do you worry that writing about baseless conspiracies gives them more oxygen?

    We worry about that all the time, and spend as much time debating whether to write about false conspiracies and misinformation as we do writing about them.

    We watch for ones that reach a critical mass; we don’t want to be the place where people first find out about conspiracies. When a major news organization writes about a conspiracy — even to debunk it — people who want to believe it will twist it to appear to validate their views.

    But to ignore them completely could also be dangerous.

    #Pizzagate #complotisme #fake_news #TikTok

  • Protest misinformation is riding on the success of pandemic hoaxes | MIT Technology Review

    Misinformation about police brutality protests is being spread by the same sources as covid-19 denial. The troubling results suggest what might come next.

    by Joan Donovan
    June 10, 2020

    Police confront Black Lives Matter protesters in Los Angeles
    After months spent battling covid-19, the US is now gripped by a different fever. As the video of George Floyd being murdered by Derek Chauvin circulated across social media, the streets of America—and then of the world—filled with protesters. Floyd’s name has become a public symbol of injustice in a spiraling web of interlaced atrocities endured by Black people, including Breonna Taylor, who was shot in her home by police during a misdirected no-knock raid, and Ahmaud Arbery, who was murdered by a group of white vigilantes.

    Meanwhile, on the digital streets, a battle over the narrative of protest is playing out in separate worlds, where truth and disinformation run parallel. 


    In one version, tens of thousands of protesters are marching to force accountability on the US justice system, shining a light on policing policies that protect white lives and property above anything else—and are being met with the same brutality and indifference they are protesting against. In the other, driven by Donald Trump, US attorney general Bill Barr, and the MAGA coalition, an alternative narrative contends that anti-fascist protesters are traveling by bus and plane to remote cities and towns to wreak havoc. This notion is inspiring roving gangs of mostly white vigilantes to take up arms. 

    These armed activists are demographically very similar to those who spread misinformation and confusion about the pandemic; the same Facebook groups have spread hoaxes about both; it’s the same older Republican base that shares most fake news. 

    The fact that those who accept protest misinformation also rose up to challenge stay-at-home orders through “reopen” rallies is no coincidence: these audiences have been primed by years of political misinformation and then driven to a frenzy by months of pandemic conspiracy theories. The infodemic helped reinforce routes for spreading false stories and rumors; it’s been the perfect breeding ground for misinformation.

    How it happened
    When covid-19 hit like a slow-moving hurricane, most people took shelter and waited for government agencies to create a plan for handling the disease. But as the weeks turned into months, and the US still struggled to provide comprehensive testing, some began to agitate. Small groups, heavily armed with rifles and misinformation, held “reopen” rallies that were controversial for many reasons. They often relied on claims that the pandemic was a hoax perpetrated by the Democratic Party, which was colluding with the billionaire donor class and the World Health Organization. The reopen message was amplified by the anti-vaccination movement, which exploited the desire for attention among online influencers and circulated rampant misinformation suggesting that a potential coronavirus vaccine was part of a conspiracy in which Bill Gates planned to implant microchips in recipients. 

    These rallies did not gain much legitimacy in the eyes of politicians, press, or the public, because they seemed unmoored from the reality of covid-19 itself. 

    But when the Black Lives Matter protests emerged and spread, it opened a new political opportunity to muddy the waters. President Trump laid the foundation by threatening to invade cities with the military after applying massive force in DC as part of a staged television event. The cinema of the state was intended to counter the truly painful images of the preceding week of protests, where footage of the police firing rubber bullets, gas, and flash grenades dominated media coverage of US cities on fire. Rather than acknowledge the pain and anguish of Black people in the US, Trump went on to blame “Antifa” for the unrest. 

    @Antifa_US was suspended by Twitter, but this screenshot continues to circulate among right wing groups on Facebook.
    For many on the left, antifa simply means “anti-fascist.” For many on the right, however, “Antifa” has become a stand-in moniker for the Democratic Party. In 2017, we similarly saw right-wing pundits and commentators try to rebrand their political opponents as the “alt-left,” but that failed to stick. 

    Shortly after Trump’s declaration, several Twitter accounts outed themselves as influence operations bent on calling for violence and collecting information about anti-fascists. Twitter, too, confirmed that an “Antifa” account, running for three years, was tied to a now-defunct white nationalist organization that had helped plan the Unite the Right rally that killed Heather Heyer and injured hundreds more. Yet the “alt-right” and other armed militia groups that planned this gruesome event in Charlottesville have not drawn this level of concern from federal authorities.

    @OCAntifa Posted this before the account was suspended on Twitter for platform manipulation.
    Disinformation stating that the protests were being inflamed by Antifa quickly traveled up the chain from impostor Twitter accounts and throughout the right-wing media ecosystem, where it still circulates among calls for an armed response. This disinformation, coupled with widespread racism, is why armed groups of white vigilantes are lining the streets in different cities and towns. Simply put, when disinformation mobilizes, it endangers the public.

    What next?
    As researchers of disinformation, we have seen this type of attack play out before. It’s called “source hacking”: a set of tactics where media manipulators mimic the patterns of their opponents, try to obfuscate the sources of their information, and then slowly become more and more dangerous in their rhetoric. Now that Trump says he will designate Antifa a domestic terror group, investigators will have to take a hard look at social-media data to discern who was actually calling for violence online. They will surely unearth this widespread disinformation campaign of far-right agitators.

    That doesn’t mean that every call to action is suspect: all protests are poly-vocal and many tactics and policy issues remain up for discussion, including the age-old debate on reform vs. revolution. But what is miraculous about public protest is how easy it is to perceive and document the demands of protesters on the ground. 

    Moments like this call for careful analysis. Journalists, politicians, and others must not waver in their attention to the ways Black organizers are framing the movement and its demands. As a researcher of disinformation, I am certain there will be attempts to co-opt or divert attention from the movement’s messaging, attack organizers, and stall the progress of this movement. Disinformation campaigns tend to proceed cyclically as media manipulators learn to adapt to new conditions, but the old tactics still work—such as impostor accounts, fake calls to action (like #BaldForBLM), and grifters looking for a quick buck. 

    Crucially, there is an entire universe of civil society organizations working to build this movement for the long haul, and they must learn to counter misinformation on the issues they care about. More than just calling for justice, the Movement for Black Lives and Color of Change are organizing actions to move police resources into community services. Media Justice is doing online trainings under the banner of #defendourmovements, and Reclaim the Block is working to defund the police in Minneapolis. 

    Through it all, one thing remains true: when thousands of people show up to protest in front of the White House, it is not reducible to fringe ideologies or conspiracy theories about invading outside agitators. People are protesting during a pandemic because justice for Black lives can’t wait for a vaccine.

    —Joan Donovan, PhD, is Research Director of the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School.

    #Fake_news #Extrême_droite #Etats_unis

  • How Twitter Botched Its Fact-Check of Trump’s Lies – Mother Jones

    Facing widespread condemnation for not removing President Trump’s tweets falsely accusing MSNBC host Joe Scarborough of murder, Twitter finally took action. On Wednesday, the company slapped disclaimer links onto two of Trump’s tweets, the first time it has pushed back on the misinformation that regularly flows from the president’s account.

    But the tweets in question had nothing to do with the debunked conspiracy theories surrounding Scarborough and his late congressional aide, who died in 2001 after a fall caused by an undiagnosed heart condition. Instead, the ignominious honor belonged to Trump’s false claims that mail-in voting would lead to rampant voter fraud.

    The move drew more questions than praise. Why not simply remove the tweets pushing a vile murder conspiracy, as the widower of Scarborough’s late staffer pleaded in a letter to Twitter CEO Jack Dorsey? Even top Republicans, who have remained silent about Trump’s smears against Scarborough, would have been unlikely to object to the removal of accusations so clearly false and defamatory. Why instead wade into a more politically divisive territory such as mail-in voting practices?

    #Twitter #Trump #Fake_news

  • How covid-19 conspiracy theorists are exploiting YouTube culture | MIT Technology Review

    Covid-19 conspiracy theorists are still getting millions of views on YouTube, even as the platform cracks down on health misinformation.

    The answer was obvious to Kennedy, one of many anti-vaccination leaders trying to make themselves as visible as possible during the covid-19 pandemic. “I’d love to talk to your audience,” he replied.

    Kennedy told Bet-David that he believes his own social-media accounts have been unfairly censored; making an appearance on someone else’s popular platform is the next best thing. Bet-David framed the interview as an “exclusive,” enticingly titled “Robert Kennedy Jr. Destroys Big Pharma, Fauci & Pro-Vaccine Movement.” In two days, the video passed half a million views.

    As of Wednesday, advertisements through YouTube’s ad service were playing before the videos, and Bet-David’s merchandise was for sale in a panel below the video’s description. Two other interviews, in which anti-vaccine figures aired several debunked claims about coronavirus and vaccines (largely unchallenged by Bet-David), were also showing ads. Bet-David said in an interview that YouTube had limited ads on all three videos, meaning they can generate revenue, but not as much as they would if they were fully monetized.

    We asked YouTube for comment on all three videos on Tuesday afternoon. By Thursday morning, one of the three (an interview with anti-vaccine conspiracy theorist Judy Mikovits) had been deleted for violating YouTube’s medical misinformation policies. Before it was deleted, the video had more than 1 million views.

    YouTube said that the other two videos were borderline, meaning that YouTube decided they didn’t violate rules, but would no longer be recommended or show up prominently in search results.

    I asked Bet-David whether he felt any responsibility over airing these views on his channel—particularly potentially harmful claims by his guests, urging viewers to ignore public health recommendations.

    “I do not,” he said. “I am responsible for what comes out of my mouth. I’m not responsible for what comes out of your mouth.”

    For him, that lack of responsibility extends to misinformation that could be harmful to his audience. He is just giving people what they are asking for. That, in turn, drives attention, which allows him to make money from ads, merchandise, speaking gigs, and workshops. “It’s up to the audience to make the decision for themselves,” he says. Besides, he thinks he’s done interviewing anti-vaccine activists for now. He’s trying to book some “big name” interviews of what he termed “pro-vaccine” experts.

    #YouTube #Complotisme #Vaccins #Médias_sociaux #Fake_news

  • Fake news 101: A guide to help sniff out the truth - CSMonitor.com

    What is misinformation vs. disinformation?

    Misinformation is information that is misleading or wrong, but not intentionally. It includes everything from a factoid your friend reposted on Facebook to assertions made by officials or, yes, even journalists.

    Disinformation is more deliberate and is distributed with the intent to confuse, disturb, or provoke. It also includes plausible information shared through devious means, such as a fake Twitter account; done en masse, this can create a skewed impression of popular opinion. A particularly deceptive form of disinformation is the “deepfake” video, in which imperceptible alterations to the footage make it appear that someone said or did something that he or she never said or did.

    Be particularly on guard against misinformation and disinformation during crises, which provide fertile ground for exploiting fear, anger, and other emotions.

    #fake_news #infox #Fausses_information #Désinformation

  • Covid hoaxes are using a loophole to stay alive—even after being deleted | MIT Technology Review

    Pandemic conspiracy theorists are using the Wayback Machine to promote ’zombie content’ that avoids content moderators and fact-checkers.

    by Joan Donovan
    April 30, 2020

    Since the onset of the pandemic, the Technology and Social Change Research Project at Harvard Kennedy’s Shorenstein Center, where I am the director, has been investigating how misinformation, scams, and conspiracies about covid-19 circulate online. If fraudsters are now using the virus to dupe unsuspecting individuals, we thought, then our research on misinformation should focus on understanding the new tactics of these media manipulators. What we found was a disconcerting explosion in “zombie content.”

    While the original page failed to spread fake news, the version of the page saved on the Internet Archive’s Wayback Machine absolutely flourished on Facebook. With 649,000 interactions and 118,000 shares, the engagement on the Wayback Machine’s link was much larger than legitimate press outlets. Facebook has since placed a fact-check label over the link to the Wayback Machine link too, but it had already been seen a huge number of times.

    There are several explanations for this hidden virality. Some people use the Internet Archive to evade blocking of banned domains in their home country, but it is not simply about censorship. Others are seeking to get around fact-checking and algorithmic demotion of content.

    When looking for more evidence of hidden virality, we searched for “web.archive.org” across platforms. Unsurprisingly, Medium posts that were taken down for spreading health misinformation have found new life through Wayback Machine links. One deleted Medium story, “Covid-19 had us all fooled, but now we might have finally found its secret,” violated Medium’s policies on misleading health information. Before Medium’s takedown, the original post amassed 6,000 interactions and 1,200 shares on Facebook, but the archived version is vastly more popular—1.6 million interactions, 310,000 shares, and still climbing. This zombie content outperforms most mainstream media news stories, and yet it exists only as an archived record.
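    The search described above, scanning text from different platforms for “web.archive.org” links, can be sketched with a small regex scanner. This is a hypothetical illustration, not the Shorenstein team's actual tooling; it relies only on the public shape of Wayback Machine URLs, which embed a snapshot timestamp followed by the original URL, so each match also reveals which deleted page is being resurrected.

```python
import re

# Wayback Machine snapshot links follow the pattern:
#   https://web.archive.org/web/<timestamp>/<original-url>
# The optional [a-z_]+ suffix covers modifier flags like "if_" that
# sometimes follow the timestamp.
WAYBACK_RE = re.compile(
    r"https?://web\.archive\.org/web/(\d{4,14})(?:[a-z_]+)?/(\S+)"
)

def find_zombie_links(text):
    """Return (timestamp, original_url) pairs for each archived link found."""
    return [(m.group(1), m.group(2)) for m in WAYBACK_RE.finditer(text)]

sample = (
    "Shared again: https://web.archive.org/web/20200401000000/"
    "https://example.com/deleted-covid-post"
)
print(find_zombie_links(sample))
# → [('20200401000000', 'https://example.com/deleted-covid-post')]
```

    Run over a scrape of public posts, a scanner like this surfaces exactly the pattern the article describes: engagement flowing to an archived copy of a page whose original has already been deleted or labeled.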

    Perhaps the most alarming element to a researcher like me is that these harmful conspiracies permeate private pages and groups on Facebook. This means researchers have access to less than 2% of the interaction data, and that health misinformation circulates in spaces where journalists, independent researchers, and public health advocates cannot assess or counterbalance these false claims with facts. Crucially, if it weren’t for the Internet Archive’s records we would not be able to do this research on deleted content in the first place, but these use cases suggest that the Internet Archive will soon have to address how its service can be adapted to deal with disinformation.

    Hidden virality is growing in places where WhatsApp is popular because it’s easy to forward misinformation through encrypted channels and evade content moderation. But when hidden virality happens on Facebook with health misinformation, it is particularly disconcerting. More than 50% of Americans rely on Facebook for their news, and still, after many years of concern and complaint, researchers have a very limited window into the data. This means it’s nearly impossible to ethically investigate how dangerous health misinformation is shared on private pages and groups.

    All of this is a threat for public health in a different way than political or news misinformation, because people do quickly change their behaviors based on medical recommendations.

    #Fake_news #Viralité #Internet_archive #zombie_content #Joan_Donovan

  • Bambou | le blog Pro de la MIOP

    For those who could not attend the study day of 28 November, organized by Médiat Rhône-Alpes, here is the full video recording, split into two parts:

    Part 1 includes the talks by:
    – Hervé Le Crosnier, introduction
    – Grégoire Borst, « The developing brain faced with fake news »
    – Julien Giry, « What responses to conspiracy theories and conspiracism? »
    – Gérald Bronner, « A pedagogical revolution against the democracy of the gullible »

    Part 2:

    – Pierre Haski, « Fake news, disinformation, freedom to inform: the new challenge »

    – Rose-Marie Farinella, « Hoax-detection workshops in primary school »

    – Anne Cécile Hivernât, « Media and information literacy (EMI) workshops at the BML »


  • Dé·mis·information: should we rethink fact-checking? - journalism.design


    Contrary to popular belief, fake news may not have the impact that is somewhat too readily attributed to it. A study arrives at just the right moment to shake sleepy minds awake and to question the usefulness of one of the pillars of journalism.

    Of course, this kind of study will immediately be decried by information professionals as a whole, who see fact-checking as one of the pillars of the trade. Yet the study published in the latest Science Advances gives food for thought. While there is no shortage of studies on false information online, they mostly restrict their investigation to specific platforms (Twitter, Facebook, WhatsApp) and consider only those ecosystems, neglecting a broader approach, more anchored in reality and in everyone’s experience, that considers the available media landscape as a whole. This broader study concludes that the impact of fake news on the population remains minimal, because its initial reach is so small. What other questions does it raise?

    #journalisme #information #infox #fake_news

  • Opinion | What We Pretend to Know About the Coronavirus Could Kill Us - The New York Times

    A fascinating article on the stakes of false information, and on the gap in tempo between reflection and science on one side and the tools of information on the other. False information is built on the multiplicity of available data. By adding figures and curves, fake news takes on a “reality effect” that makes it credible: an old literary technique widely exploited by science fiction since Jules Verne.

    (addendum: I just found a French version at https://teles-relay.com/2020/04/03/opinion-ce-que-nous-pretendons-savoir-sur-le-coronavirus-pourrait-nous-)

    Other than a vaccine or an extra 500,000 ventilators, tests and hospital beds, reliable information is the best weapon we have against Covid-19. It allows us to act uniformly and decisively to flatten the curve. In an ideal pandemic scenario, sound information is produced by experts and travels quickly to the public.

    But we seem to be living in a nightmare scenario. The coronavirus emerged in the middle of a golden age for media manipulation. And it is stealthy, resilient and confounding to experts. It moves far faster than scientists can study it. What seems to be true today may be wrong tomorrow. Uncertainty abounds. And an array of dangerous misinformation, disinformation and flawed amateur analysis fills the void.

    On Friday, President Trump announced that the Centers for Disease Control and Prevention had changed the recommendation on masks to say that all Americans should use “non-medical, cloth” ones. “You can do it. You don’t have to do it. I’m choosing not to do it,” Mr. Trump said. “It’s only a recommendation.”

    But the reversal may prove costly for the World Health Organization’s and the C.D.C.’s credibility. As Zeynep Tufekci, a University of North Carolina professor, wrote in a Times Op-Ed weeks ago, a lack of transparency up front created its own information crisis. “What should the authorities have said?” she asked. “The full painful truth.”

    The fear and uncertainty around the coronavirus is, of course, fertile ground for extremists and hucksters. Alex Jones of Infowars is pushing a conspiracy theory that the virus is an American-made biological weapon and is directing viewers to purchase any number of overpriced vitamin products from his stores. People who believe the myth that 5G wireless signals are harmful to health have falsely linked the technology to Covid-19.

    The anti-vaccination movement is also capitalizing on the pandemic. The New York Times used the analytics tool CrowdTangle to survey 48 prominent anti-vax Instagram accounts and found that video views spiked from 200,000 in February to more than two million in March, just as the pandemic took off globally. Another Times analysis of anti-vax accounts showed a surge in followers during the last week of March. In private groups on Facebook, junk science and unproven treatment claims proliferate.

    But you don’t have to be a science denier to end up seduced by bad information. A pandemic makes us all excellent targets for misinformation. No one has natural immunity to this coronavirus, leaving us all threatened and looking for information to make sense of the world. Unfortunately, the pace of scientific discovery doesn’t match the speed of our information ecosystems. As Wired reported in March, researchers are moving faster than ever to understand the virus — so fast that it may be compromising some of the rigor.

    But much of the pernicious false news about the coronavirus operates on the margins of believability — real facts and charts cobbled together to formulate a dangerous, wrongheaded conclusion or news reports that combine a majority of factually accurate reporting with a touch of unproven conjecture.

    The phenomenon is common enough that it already has its own name: armchair epidemiology, which Slate described as “convincing but flawed epidemiological analyses.” The prime example is a Medium blog post titled “Covid-19 — Evidence Over Hysteria” by Aaron Ginn, a Silicon Valley product manager and “growth hacker” who argued against the severity of the virus and condemned the mainstream media for hyping it.

    Without a deeper knowledge of epidemiology or evolutionary biology, it would have been easy to be seduced by Mr. Ginn’s piece. This, according to Dr. Bergstrom, is what makes armchair epidemiology so harmful. Posts like Mr. Ginn’s “deplete the critical resource you need to manage the pandemic, which is trust,” he told me. “When people are getting conflicting messages, it makes it very hard for state and local authorities to generate the political will to take strong actions downstream.”

    It’s this type of misinformation on the margins that’s most insidious. “I am seeing this playbook more and more,” Dr. Bergstrom said. “Secondhand data showing a crisis narrative that feels just a bit too well crafted. Mixing the truth with the plausible and the plausible with that which seems plausibly true in a week.” Dr. Bergstrom argues that the advances in available data make it easier than ever for junk-science peddlers to appear legitimate.

    This hybrid of true and false information is a challenge for social media platforms. Covid-19 and the immediate threat to public health means that networks like Facebook, Twitter and YouTube have been unusually decisive about taking down misinformation. “In a case of a pandemic like this, when we are seeing posts that are urging people not to get treatment,” Facebook’s chief executive, Mark Zuckerberg, said recently, “that’s a completely different class of content versus the back-and-forth of what candidates may say about each other.”

    Facebook took down a video of Mr. Bolsonaro when it became clear he was using the platform to spread unproven claims that chloroquine was an effective cure for the coronavirus. Similarly, Twitter temporarily locked the account of Rudolph Giuliani, a former mayor of New York City and Mr. Trump’s personal lawyer, for violating Twitter’s rules on Covid-19 misinformation with regard to hydroxychloroquine treatments. Depending on how you feel about technology companies, this is either heartening progress or proof that the companies could have been doing far more to tamp down misinformation over the past five years.

    The platforms are slightly more prepared than they once were to counter public-health myths, having changed their policies around medical misinformation after measles outbreaks in 2019. “With measles there was a lot of available authoritative information about measles,” Ms. DiResta told me. “The difference with coronavirus is that until months ago, nobody had seen this virus before.”

    “The really big question that haunts me is, ‘When do we return to reality?’” Mr. Pomerantsev mused over the phone from his own quarantine. “Or is it that in this partisan age absolutely everything is chopped, cut and edited to fit a different view? I’m waiting for society to finally hit up against a shared reality, like diving into the bottom of swimming pool. Instead we just go deeper.”

    #Fake_news #Culture_numérique #Trolls #Coronavirus

  • Fake Off & revisionism

    Coronavirus: beware of these photos of health workers’ demonstrations, which are ten years old

    FAKE OFF In the context of the coronavirus epidemic, beware of these misleading photos of health workers’ demonstrations, which date back to 2010

    Several photos of health workers’ demonstrations being repressed by the police have been circulating on social media for the past few days. “That was back when they were demanding the resources to provide better care!! Who was right????” writes the author of one of these viral posts, suggesting that the recent mobilization of health workers demanding additional resources, before the coronavirus crisis, was scorned by the executive and the police.

    It’s pretty rich, this article explaining that the executive and the police repressing health workers was ten years ago! As if the only photos available dated from the Sarkozy years, and as if we didn’t have exactly the same ones with more hands and eyes torn off by the police, and as if the cops hadn’t fired tear gas right into the Salpêtrière hospital last May 1st.

    #révisionnisme #fake_news_au_carré

    • A good reminder that there will be quite a few people in the dock, and not only En Marche supporters.

  • Coronavirus. Nantes is not immune to fake news

    « "L’Armée est déjà sur place" ». Au soir de l’annonce présidentielle, le 17 mars, sur le confinement de la population, les réseaux sociaux se sont enflammés. Photo à l’appui d’un convoi de véhicules militaires, d’aucuns affirmaient sur Snapchat que l’armée avait déjà pris position à Nantes.

    En toute bonne foi, ce jeune Nantais a relayé auprès de son père le screen (capture d’écran) de ladite photo sur laquelle était mentionné : « "Le périphérique de Beaujoire. Rentrez chez vous ça démarre". Ni une ni deux, le père a repoussé l’information via Messenger. « "L’Armée est déjà sur place" ».
    « Je l’ai screenée sur le compte d’un ami qui l’avait lui-même screenée sur Snapchat »

    Sauf que les véhicules militaires en question, photographiés à travers le pare-brise d’une voiture ne sont pas des véhicules de l’Armée française. Et si la photo a bien été réalisée sur une voie rapide ou une route à quatre voies, rien ne permet de localiser précisément le lieu et encore moins de dire qu’il s’agît du périphérique nantais. L’information était fausse, comme l’ont confirmé les autorités militaires. Le jeune Nantais explique l’avoir « "screenée" » sur Snapchat. « "C’est un ami qui l’avait lui-même screenée sur le compte d’un ami qui lui-même l’avait déjà screenée chez un autre…" ». Le circuit classique des fake news et de la rumeur.

    But social media is not the only vector of false news. Word of mouth is also running at full tilt. On Friday, in all seriousness, a resident of the north of the department told us that a 28-year-old nurse had died of coronavirus at the Nantes university hospital. He had heard it at the tobacconist's from a very persuasive woman, who had the information from a cousin whose best friend knows the husband of a nursing assistant… There too, false information.
    « "La d" "iffusion d’informations fausses pour alimenter une certaine vie sociale a toujours existé"

    « "La d" "iffusion d’informations fausses pour alimenter une certaine vie sociale a toujours existé. C’est la même logique qui prévaut avec les réseaux sociaux, parfois avec la volonté de nuire ou de faire pression" », constate Olivier Ertzscheid, professeur d’information et de communication à l’Université de Nantes. « "Les réseaux offrent cependant un champ de développement plus rapide et plus large qui donne cette chaîne de viralité" ». Pour Olivier Ertzscheid, la banalisation des outils de trucage, que l’on trouve facilement sur Internet, favorise également le phénomène. « "N’importe qui peut récupérer une image sur Internet et y ajouter ce qu’il veut !" ».

    "Beware of what feeds our own anxieties"

    Even so, in his view social media remains a vital source of information. "Fake news is only the tip of the iceberg. Most of the information found on the networks is verified and validated." So what are the right reflexes for sorting things out on the networks? "Beware of what feeds our own anxieties and our convictions. If we read something that goes our way, we tend to verify less before clicking share." Olivier Ertzscheid also warns about context: "Old articles keep reappearing on the networks, notably on Facebook. So you always have to look at the source in its fullest context: where it comes from, its date…"

    #Fake_news #Coronavirus