• Facebook funnelling readers towards Covid misinformation - study | Technology | The Guardian
    https://www.theguardian.com/technology/2020/aug/19/facebook-funnelling-readers-towards-covid-misinformation-study
    https://i.guim.co.uk/img/media/905ac886c6dc0f5a3d40eb514637a8cdf0255873/0_5_4703_2822/master/4703.jpg?width=1200&height=630&quality=85&auto=format&fit=crop&overlay-ali

    Facebook had promised to crack down on conspiracy theories and inaccurate news early in the pandemic. But as its executives promised accountability, its algorithm appears to have fuelled traffic to a network of sites sharing dangerous false news, campaign group Avaaz has found.

    False medical information can be deadly; researchers led by Bangladesh’s International Centre for Diarrhoeal Disease Research, writing in The American Journal of Tropical Medicine and Hygiene, have directly linked a single piece of coronavirus misinformation to 800 deaths.

    Pages from the top 10 sites peddling inaccurate information and conspiracy theories about health received almost four times as many views on Facebook as the top 10 reputable sites for health information, Avaaz warned in a report.

    “This suggests that just when citizens needed credible health information the most, and while Facebook was trying to proactively raise the profile of authoritative health institutions on the platform, its algorithm was potentially undermining these efforts,” the report said.

    A relatively small but influential network is responsible for driving huge amounts of traffic to health misinformation sites. Avaaz identified 42 “super-spreader” sites that had 28m followers generating an estimated 800m views.

    A single article, which falsely claimed that the American Medical Association was encouraging doctors and hospitals to over-estimate deaths from Covid-19, was seen 160m times.

    This vast collective reach suggests that Facebook’s own internal systems are not capable of protecting users from misinformation about health, even at a critical time when the company has promised to keep users “safe and informed”.

    “Avaaz’s latest research is yet another damning indictment of Facebook’s capacity to amplify false or misleading health information during the pandemic,” said British MP Damian Collins, who led a parliamentary investigation into disinformation.

    “The majority of this dangerous content is still on Facebook with no warning or context whatsoever … The time for [Facebook CEO, Mark] Zuckerberg to act is now. He must clean up his platform and help stop this harmful infodemic.”

    Some of the false claims were directly harmful: one, suggesting that pure alcohol could kill the virus, has been linked to 800 deaths, as well as 60 people going blind after drinking methanol as a cure. “In India, 12 people, including five children, became sick after drinking liquor made from toxic seed Datura (ummetta plant in local parlance) as a cure to coronavirus disease,” the paper says. “The victims reportedly watched a video on social media that Datura seeds give immunity against Covid-19.”

    Beyond the specifically dangerous falsehoods, much misinformation is merely useless, but can contribute to the spread of coronavirus, as with one South Korean church which came to believe that spraying salt water could combat the virus.

    “They put the nozzle of the spray bottle inside the mouth of a follower who was later confirmed as a patient before they did likewise for other followers as well, without disinfecting the sprayer,” an official later said. More than 100 followers were infected as a result.

    Among Facebook’s tactics for fighting disinformation on the platform has been giving independent fact-checkers the ability to put warning labels on items they consider untrue.

    Zuckerberg has said fake news would be marginalised by the algorithm, which determines what content viewers see. “Posts that are rated as false are demoted and lose on average 80% of their future views,” he wrote in 2018.

    But Avaaz found that huge amounts of disinformation slip through Facebook’s verification system, despite having been flagged by fact-checking organisations.

    They analysed nearly 200 pieces of health misinformation which were shared on the site after being identified as problematic. Fewer than one in five carried a warning label, with the vast majority – 84% – slipping through controls after they were translated into other languages, or republished in whole or part.

    “These findings point to a gap in Facebook’s ability to detect clones and variations of fact-checked content – especially across multiple languages – and to apply warning labels to them,” the report said.

    Two simple steps could hugely reduce the reach of misinformation. The first would be proactively correcting misinformation that was seen before it was labelled as false, by putting prominent corrections in users’ feeds.

    Recent research has found corrections like these can halve belief in incorrect reporting, Avaaz said. The other step would be to improve the detection and monitoring of translated and cloned material, so that Zuckerberg’s promise to starve the sites of their audiences is actually made good.
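
    Neither Avaaz nor Facebook describes the matching machinery behind those warning labels, but the idea of catching clones and variations of an already debunked claim can be sketched as a simple fuzzy-matching pass. The claim list, function name and threshold below are hypothetical illustrations rather than anyone's actual system, and matching "across multiple languages" would additionally require translation or multilingual text embeddings before comparison.

      # Illustrative sketch only: flag posts that closely resemble a claim a
      # fact-checker has already rated false. Claims, names and threshold are
      # hypothetical; cross-language matching would need translation first.
      from difflib import SequenceMatcher

      DEBUNKED_CLAIMS = [
          "the american medical association is encouraging doctors to over-estimate deaths from covid-19",
          "drinking pure alcohol can kill the coronavirus",
      ]

      def matches_debunked_claim(post_text, threshold=0.75):
          """Return the debunked claim the post most resembles, or None."""
          text = post_text.lower().strip()
          best_claim, best_score = None, 0.0
          for claim in DEBUNKED_CLAIMS:
              score = SequenceMatcher(None, text, claim).ratio()
              if score > best_score:
                  best_claim, best_score = claim, score
          return best_claim if best_score >= threshold else None

      print(matches_debunked_claim("Drinking pure alcohol can kill the coronavirus!!"))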

    A Facebook spokesperson said: “We share Avaaz’s goal of limiting misinformation, but their findings don’t reflect the steps we’ve taken to keep it from spreading on our services. Thanks to our global network of fact-checkers, from April to June, we applied warning labels to 98m pieces of Covid-19 misinformation and removed 7m pieces of content that could lead to imminent harm. We’ve directed over 2bn people to resources from health authorities and when someone tries to share a link about Covid-19, we show them a pop-up to connect them with credible health information.”

    #Facebook #Fake_news #Désinformation #Infodemics #Promesses #Culture_de_l_excuse #Médias_sociaux

  • Kamala Harris and Disinformation: Debunking 3 Viral Falsehoods - The New York Times
    https://www.nytimes.com/2020/08/14/technology/kamala-harris-disinformation.html

    As Joseph R. Biden Jr. announced that he had selected Senator Kamala Harris of California as his vice-presidential running mate, internet trolls got to work.

    Since then, false and misleading information about Ms. Harris has spiked online and on TV. The activity has jumped from two dozen mentions per hour during a recent week to over 3,200 per hour in the last few days, according to the media insights company Zignal Labs, which analyzed global television broadcasts and social media.

    Much of that rise is fueled by fervent supporters of President Trump and adherents of the extremist conspiracy movement QAnon, as well as by the far left, according to a New York Times analysis of the most widespread falsehoods about Ms. Harris. On Thursday, Mr. Trump himself encouraged one of the most persistent falsehoods, a racist conspiracy theory that Ms. Harris is not eligible for the vice presidency or presidency because her parents were immigrants.

    “Sadly, this wave of misinformation was predictable and inevitable,” said Melissa Ryan, chief executive of Card Strategies, a consulting firm that researches disinformation.

    Many of the narratives are inaccurate accusations that first surged last year during Ms. Harris’s campaign to become the Democratic presidential nominee. Here are three false rumors about Ms. Harris that continue circulating widely online.

    #Fake_News #Kamala_Harris #Politique_USA

  • Randonautica: What Is It and Are the Stories Real? - The New York Times
    https://www.nytimes.com/2020/07/31/style/randonautica-app.html

    That is the gamble one takes with Randonautica, which claims to channel users’ “intentions” to produce nearby coordinates for exploration. Think: The law of attraction meets geocaching.

    Randonautica makes a few asks of users — “What would you like to get?” “Choose your entropy source” — before prompting them to “focus on your intent” while it fetches coordinates. This process relies on location settings and a random number generator, which, despite what the company says, cannot be directly affected by human thoughts.

    Since its release, Randonautica has been downloaded 10.8 million times from the App Store and Google Play, according to the research firm Sensor Tower. After a few months of rapid growth, much of it propelled by TikTok, its downloads have started to taper off, according to data from the analytics firm App Annie.

    In an interview in July, Mr. Lengfelder described Randonautica as “a multimedia storytelling platform” that encourages “performance art.” He said the overwhelming response has not surprised him.

    “I kind of figured it was inevitable,” he said. “Because basically what it is is like a machine that creates memes and legends, and it kind of virally propagates on its own.”

    So How Does It Work?

    On first use, Randonautica offers a brief intro and some tips (“Always Randonaut with a charged phone,” “Never trespass”) before prompting you to share your location.

    Then it will ask you to choose which type of point you would like it to generate (the differences between which only matter if you believe the app can read your thoughts) before fetching coordinates from a random number generator. The user can then open that location in Google Maps to begin their journey.
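
    The app's code is not public, but the mechanical part described here (a point near the user produced by an ordinary random number generator) is easy to sketch. The function below is a hypothetical illustration, not Randonautica's implementation, and, as noted above, nothing in it can be influenced by the user's thoughts.

      # Hypothetical sketch of "coordinates from a random number generator":
      # pick a point uniformly at random within a given radius of the user.
      import math
      import random

      def random_nearby_point(lat, lon, radius_m=3000):
          """Return (lat, lon) of a uniformly random point within radius_m metres."""
          r = radius_m * math.sqrt(random.random())   # sqrt keeps the density uniform over the disc
          theta = random.uniform(0.0, 2.0 * math.pi)
          dlat = (r * math.cos(theta)) / 111_320.0    # ~metres per degree of latitude
          dlon = (r * math.sin(theta)) / (111_320.0 * math.cos(math.radians(lat)))
          return lat + dlat, lon + dlon

      print(random_nearby_point(48.8566, 2.3522))     # e.g. a point near central Paris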

    Randonautica throws big words like “quantum” and “entropy” around a lot. Its creators believe that quantum random numbers are more likely to be influenced by human consciousness than non-quantum random numbers. This hypothesis is part of a theory Mr. Lengfelder refers to as “mind-machine interaction,” or M.M.I.: It posits that when you focus on your intent, you are influencing the numbers.

    “Basically if you’re looking for any kind of peer-reviewed, scientific consensus, that does not exist yet in the literature,” Mr. Lengfelder said in a TikTok video in June, speaking about the theory. Instead, he pointed to the work of Dean Radin, a prominent figure in the pseudoscientific field of parapsychology, and the Princeton Engineering Anomalies Research (PEAR) program, which has cited Dr. Radin’s research, as evidence.

    Randonautica claims that a 1998 PEAR experiment supported the idea that people can control random number generation with their thoughts. That study was published in the Journal of Scientific Exploration, which includes work about the paranormal, spirit possessions, poltergeists and questions about Shakespeare’s authorship. In the study, PEAR’s researchers wrote that the experiment was far from conclusive.

    “It looks like they saw some kind of correlation, but they admit that it was weak and it needed to have further research associated with it,” said Casey Schwarz, an experimental physicist and assistant professor at Ursinus College who reviewed Randonautica’s claims for this article. She said she did not know of any quantum system that could be influenced by human thoughts.

    Mr. Lengfelder dismissed such criticisms, stating that the app was not created to prove a hypothesis. “I would say it’s not some kind of academic science work,” he said. “We’re more like inventors than academic scientists.”

    An update coming in August will feature improved graphics and, Mr. Lengfelder said, a custom random number generator that would have a higher “rate of entropy.” “So technically our M.M.I. effects should be higher,” he said. Of course, as noted above, M.M.I. is a theory that is not supported by science.

    Daniel J. Rogers, a physicist who has worked with quantum random number generators, called Randonautica’s M.M.I. theory “completely absurd.”

    “There is no quantum physics here,” said Dr. Rogers, a founder of the Global Disinformation Index. “This is just people using big science words to sound magical. There is no actual science here.”

    ‘Do Not Go Randonauting’

    Randonauting became popular partly because of reverse psychology; young people approach it with a sense of foreboding. “Do not go randonauting” has become a popular title for videos.

    Some adults have expressed concerns about the app’s lack of safety precautions for children. Though Randonautica’s terms of use specify that anyone who is a minor must obtain parental consent to use the app, such consent is collected by email, making it easy for young users to bypass.

    Know and Tell, a child protection education program with the Granite State Children’s Alliance in New Hampshire, has posted on Instagram telling parents to keep young people off the app, or at least supervise their use.

    “It was very apparent that these were young teenagers that were going to undisclosed areas in the middle of the night,” said Jana El-Sayed, the outreach project manager for the Granite State Children’s Alliance. She described these circumstances as “a perpetrator’s dream.”

    #Randonautica #Fake_science #Culture_numérique #Mèmes

  • A new Trump campaign ad depicting a police officer being attacked by protesters is actually a 2014 photo of pro-democracy protests in Ukraine
    https://www.businessinsider.fr/us/trump-campaign-ad-police-officer-attacked-2014-ukraine-protests-2020

    However, the image the Trump campaign used is not from the US – or from this year. It was uploaded on Wikimedia Commons, Wikipedia’s public-domain media archive, in 2014 with the label “a police officer attacked by protesters during clashes in Ukraine, Kyiv. Events of February 18, 2014.”

    The photographer, Mstyslav Chernov, confirmed to Business Insider that this was his photo from Ukraine in 2014.

    “Photography has always been used to manipulate public opinion. And with the rise of social media and the rise of populism, this is happening even more,” he said.

    https://i.insider.com/5f180d0e5af6cc5ba8284103

    A source close to Facebook told Business Insider that the company does not plan to remove the ad.

    #désinformation #fake_news #Trump #Facebook

  • With social media, Zimbabwean youth fight pandemic ‘infodemic’
    https://news.trust.org/item/20200723041330-fqvs7
    https://d8zcwdvc14g2e.cloudfront.net/contentAsset/image/8adbe014-7451-4891-805c-0a8f4050302a/image/byInode/1/filter/Resize,Jpeg/jpeg_q/70/resize_w/1100

    JOHANNESBURG/BULAWAYO, Zimbabwe, July 23 (Thomson Reuters Foundation) - Drinking alcohol will kill the coronavirus. It is OK to share face masks. Africans cannot get COVID-19. The pandemic is not even real.

    These are some of the coronavirus myths that a team of 20 Zimbabwean youth have been busting online since the country’s lockdown began in late March, using social media and radio shows to reach an estimated 100,000 people to date.

    “There is a common saying that ’ignorance is bliss’. Well, in this instance, ignorance is not bliss, if anything ignorance is death,” said Bridget Mutsinze, 25, a volunteer based in the capital, Harare.

    Although its numbers are relatively low compared to the rest of the continent, Zimbabwe is experiencing an uptick in coronavirus infections, with more than 1,800 cases and at least 26 deaths, according to a tally by Johns Hopkins University.

    To stem the spread of the disease, Zimbabwean youth working with development charity Voluntary Service Overseas (VSO) have taken to Twitter, WhatsApp, Facebook and radio to comb through online comments, identify and correct COVID-19 misinformation.

    The spread of coronavirus misinformation has been a global issue, with the World Health Organization describing it as an “infodemic”.

    While tech giants WhatsApp and Facebook have teamed up with African governments to tackle fake news through interactive bots, adverts and push notifications, VSO volunteers are leading the battle within their communities.

    Across the continent, 86% of Africans aged 18-24 own a smartphone and nearly 90% use it for social media, according to a survey by the South African-based Ichikowitz Family Foundation.

    VSO volunteers are tapping into the informal conversations taking place on these platforms.

    “If we do not get facts out there, people will continue to live as they wish and the number of people who get the virus will continue to spread,” Mutsinze told the Thomson Reuters Foundation.

    #Désinformation #Fake_News #COVID-19 #Zimbabwe

  • Bruno Le Maire on France Inter:
    https://twitter.com/franceinter/status/1283660166475505664

    https://video.twimg.com/amplify_video/1283660102948589570/vid/1280x720/yTWbKNc_wWZVZPwg.mp4


    "We are one of the European countries where people work the shortest hours, where the overall volume of work (#travail) is not sufficient to strengthen the prosperity of the generations to come" #réforme #retraites #le69inter @LN_Roussel44

    Reply from the Économistes Atterrés:
    https://twitter.com/atterres/status/1283764454228926465

    No, no and no, still no! Against the #FakeNews, a few figures

    ⤵️⤵️⤵️

    On working time in 2018 (source: Eurostat):

    – France: 37.3 hours of work per week
    – European average: 37.1h
    – Great Britain: 36.5h
    – Germany: 34.9h

    And above all, the French "peculiarity" is our productivity (the efficiency of the workforce):

    – Reference: EU-28 productivity = base 100
    – France: 114.8 (nearly 15 points higher!)
    – Germany: 106.3
    – United Kingdom: 100.2
    – Italy: 107
    – Netherlands: 111

    The French are therefore among the most productive workers in Europe (8th out of 28). That is, in fact, the French "problem"... As @TheEconomist puts it:

    "The French could take Fridays off and would still produce more in a week than the British"

    If we were as "unproductive" as the Germans, with the same GDP we would have (using 2018 figures) 2.8 million more jobs (3.4 million if we were like the British)
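
    The thread does not publish the inputs behind its 2.8 million figure, but the underlying arithmetic (hold GDP fixed, divide it by a lower output per worker, compare the implied headcount with today's) can be sketched. The employment figure below is an approximate, illustrative input, so the result only roughly approaches the thread's number.

      # Illustrative arithmetic only: with GDP held constant, lower productivity
      # per worker implies more workers are needed to produce it.

      def extra_jobs(current_employment, productivity_index_own, productivity_index_other):
          """Jobs implied at the other country's productivity level, minus current jobs."""
          required_jobs = current_employment * productivity_index_own / productivity_index_other
          return required_jobs - current_employment

      FRENCH_EMPLOYMENT_2018 = 27_000_000   # rough order of magnitude, for illustration only
      print(extra_jobs(FRENCH_EMPLOYMENT_2018, 114.8, 106.3))   # ~2.2 million at German productivity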

    To go further:
    – Why working time in France is not a handicap
    https://www.alternatives-economiques.fr/temps-de-travail-france-nest-un-handicap/00089165
    – No, French employees do not work less than their neighbours
    https://www.lemonde.fr/les-decodeurs/article/2019/04/29/non-les-salaries-francais-ne-travaillent-pas-moins-que-leurs-voisins_5456229

  • A TikTok Twist on ‘PizzaGate’ - The New York Times
    https://www.nytimes.com/2020/06/29/technology/pizzagate-tiktok.html

    One of social media’s early conspiracy theories is back, but remade in creatively horrible ways.

    “PizzaGate,” the baseless notion that a Washington pizza parlor was the center of a child sex abuse ring, which led to a shooting in 2016, is catching on again with younger people on TikTok and other online hangouts, my colleagues Cecilia Kang and Sheera Frenkel wrote.

    I talked to Sheera about how young people have tweaked this conspiracy and how internet sites help spread false ideas. (And, yes, our names are pronounced the same but spelled differently.)

    Shira: How has this false conspiracy changed in four years?

    Sheera: Younger people on TikTok have made PizzaGate more relatable for them. So a conspiracy that centered on Hillary Clinton and other politicians a few years ago now instead ropes in celebrities like Justin Bieber. Everyone is at home, bored and online more than usual. When I talked to teens who were spreading these conspiracy videos, many of them said it seemed like fun.

    If it’s for “fun,” is this version of the PizzaGate conspiracy harmless?

    It’s not. We’ve seen over and over that some people can get so far into conspiracies that they take them seriously and commit real-world harm. And for people who are survivors of sexual abuse, it can be painful to see people talking about it all over social media.

    Have the internet companies gotten better at stopping false conspiracies like this?

    They have, but people who want to spread conspiracies are figuring out workarounds. Facebook banned the PizzaGate hashtag, for example, but the hashtag is not banned on Instagram, even though it’s owned by Facebook. People also migrated to private groups where Facebook has less visibility into what’s going on.

    Tech companies’ automated recommendation systems also can suck people further into false ideas. I recently tried to join Facebook QAnon conspiracy groups, and Facebook immediately recommended I join PizzaGate groups, too. On TikTok, what you see is largely decided by computer recommendations. So I watched one video about PizzaGate, and the next videos I saw in the app were all about PizzaGate.

    TikTok is a relatively new place where conspiracies can spread. What is it doing to address this?

    TikTok is not proactively going out and looking for videos with potentially false and dangerous ideas and removing them. There were more than 80 million views of TikTok videos with PizzaGate-related hashtags.

    The New York Times reached out to TikTok about the videos, pointing out their spike. After we sent our email, TikTok removed many of the videos and seemed to limit their spread. Facebook and Twitter often do this, too — they frequently remove content only after journalists reach out and point it out.

    Do you worry that writing about baseless conspiracies gives them more oxygen?

    We worry about that all the time, and spend as much time debating whether to write about false conspiracies and misinformation as we do writing about them.

    We watch for ones that reach a critical mass; we don’t want to be the place where people first find out about conspiracies. When a major news organization writes about a conspiracy — even to debunk it — people who want to believe it will twist it to appear to validate their views.

    But to ignore them completely could also be dangerous.

    #Pizzagate #complotisme #fake_news #TikTok

  • Why older people share more fake news
    https://www.usine-digitale.fr/article/pourquoi-les-personnes-agees-partagent-plus-de-fake-news.N979141

    Global conspiracy, lab-made virus or flat Earth... On the internet, fake news is often shared by older users. A Harvard study looks into why seniors are quicker than young people to spread false information on social networks.

    #fakenews #internet #Réseaux_sociaux #vieillesse

  • What do the humanities and social sciences say about the place of the irrational in contemporary societies? #irrationnel #sciences #fakenews #complotisme #altersciences

    https://sms.hypotheses.org/25099

    The irrational can be defined succinctly as a mode of thought that lies outside the domain of reason or opposes it. This mode of thought has been booming for several years. Indeed, not a day goes by without discovering a challenge to a vaccine, without reading an article, watching a documentary or hearing a radio programme denouncing the supposed ravages of scientific rationality.

    Some essayists have even made it their stock in trade, extolling the merits of alternative medicine and transcendental meditation, claiming to have discovered the causes of autism, or finding their raison d'être in exposing secret international plots. It was therefore tempting to call on the humanities and social sciences to study these phenomena in their political, religious, scientific and cultural registers and to bring their mechanisms to light (...)

  • Protest misinformation is riding on the success of pandemic hoaxes | MIT Technology Review
    https://www.technologyreview.com/2020/06/10/1002934/protest-propaganda-is-riding-on-the-success-of-pandemic-hoaxes

    Misinformation about police brutality protests is being spread by the same sources as covid-19 denial. The troubling results suggest what might come next.

    by Joan Donovan
    June 10, 2020

    Photo caption: Police confront Black Lives Matter protesters in Los Angeles (Joseph Ngabo on Unsplash)
    After months spent battling covid-19, the US is now gripped by a different fever. As the video of George Floyd being murdered by Derek Chauvin circulated across social media, the streets around America—and then the world—filled with protesters. Floyd’s name has become a public symbol of injustice in a spiraling web of interlaced atrocities endured by Black people, including Breonna Taylor, who was shot in her home by police during a misdirected no-knock raid, and Ahmaud Arbery, who was murdered by a group of white vigilantes.

    Meanwhile, on the digital streets, a battle over the narrative of protest is playing out in separate worlds, where truth and disinformation run parallel. 

    In one version, tens of thousands of protesters are marching to force accountability on the US justice system, shining a light on policing policies that protect white lives and property above anything else—and are being met with the same brutality and indifference they are protesting against. In the other, driven by Donald Trump, US attorney general Bill Barr, and the MAGA coalition, an alternative narrative contends that anti-fascist protesters are traveling by bus and plane to remote cities and towns to wreak havoc. This notion is inspiring roving gangs of mostly white vigilantes to take up arms. 

    These armed activists are demographically very similar to those who spread misinformation and confusion about the pandemic; the same Facebook groups have spread hoaxes about both; it’s the same older Republican base that shares most fake news. 

    The fact that those who accept protest misinformation also rose up to challenge stay-at-home orders through “reopen” rallies is no coincidence: these audiences have been primed by years of political misinformation and then driven to a frenzy by months of pandemic conspiracy theories. The infodemic helped reinforce routes for spreading false stories and rumors; it’s been the perfect breeding ground for misinformation.

    How it happened
    When covid-19 hit like a slow-moving hurricane, most people took shelter and waited for government agencies to create a plan for handling the disease. But as the weeks turned into months, and the US still struggled to provide comprehensive testing, some began to agitate. Small groups, heavily armed with rifles and misinformation, held “reopen” rallies that were controversial for many reasons. They often relied on claims that the pandemic was a hoax perpetrated by the Democratic Party, which was colluding with the billionaire donor class and the World Health Organization. The reopen message was amplified by the anti-vaccination movement, which exploited the desire for attention among online influencers and circulated rampant misinformation suggesting that a potential coronavirus vaccine was part of a conspiracy in which Bill Gates planned to implant microchips in recipients. 

    These rallies did not gain much legitimacy in the eyes of politicians, press, or the public, because they seemed unmoored from the reality of covid-19 itself. 

    But when the Black Lives Matter protests emerged and spread, it opened a new political opportunity to muddy the waters. President Trump laid the foundation by threatening to invade cities with the military after applying massive force in DC as part of a staged television event. The cinema of the state was intended to counter the truly painful images of the preceding week of protests, where footage of the police firing rubber bullets, gas, and flash grenades dominated media coverage of US cities on fire. Rather than acknowledge the pain and anguish of Black people in the US, Trump went on to blame “Antifa” for the unrest. 

    @Antifa_US was suspended by Twitter, but this screenshot continues to circulate among right-wing groups on Facebook.

    For many on the left, antifa simply means “anti-fascist.” For many on the right, however, “Antifa” has become a stand-in moniker for the Democratic Party. In 2017, we similarly saw right-wing pundits and commentators try to rebrand their political opponents as the “alt-left,” but that failed to stick.

    Shortly after Trump’s declaration, several Twitter accounts outed themselves as influence operations bent on calling for violence and collecting information about anti-fascists. Twitter, too, confirmed that an “Antifa” account, running for three years, was tied to a now-defunct white nationalist organization that had helped plan the Unite the Right rally that killed Heather Heyer and injured hundreds more. Yet the “alt-right” and other armed militia groups that planned this gruesome event in Charlottesville have not drawn this level of concern from federal authorities.

    @OCAntifa posted this before the account was suspended on Twitter for platform manipulation.

    Disinformation stating that the protests were being inflamed by Antifa quickly traveled up the chain from impostor Twitter accounts and throughout the right-wing media ecosystem, where it still circulates among calls for an armed response. This disinformation, coupled with widespread racism, is why armed groups of white vigilantes are lining the streets in different cities and towns. Simply put, when disinformation mobilizes, it endangers the public.

    What next?
    As researchers of disinformation, we have seen this type of attack play out before. It’s called “source hacking”: a set of tactics where media manipulators mimic the patterns of their opponents, try to obfuscate the sources of their information, and then slowly become more and more dangerous in their rhetoric. Now that Trump says he will designate Antifa a domestic terror group, investigators will have to take a hard look at social-media data to discern who was actually calling for violence online. They will surely unearth this widespread disinformation campaign of far-right agitators.

    That doesn’t mean that every call to action is suspect: all protests are poly-vocal and many tactics and policy issues remain up for discussion, including the age-old debate on reform vs. revolution. But what is miraculous about public protest is how easy it is to perceive and document the demands of protesters on the ground. 

    Moments like this call for careful analysis. Journalists, politicians, and others must not waver in their attention to the ways Black organizers are framing the movement and its demands. As a researcher of disinformation, I am certain there will be attempts to co-opt or divert attention from the movement’s messaging, attack organizers, and stall the progress of this movement. Disinformation campaigns tend to proceed cyclically as media manipulators learn to adapt to new conditions, but the old tactics still work—such as impostor accounts, fake calls to action (like #BaldForBLM), and grifters looking for a quick buck. 

    Crucially, there is an entire universe of civil society organizations working to build this movement for the long haul, and they must learn to counter misinformation on the issues they care about. More than just calling for justice, the Movement for Black Lives and Color of Change are organizing actions to move police resources into community services. Media Justice is doing online trainings under the banner of #defendourmovements, and Reclaim the Block is working to defund the police in Minneapolis. 

    Through it all, one thing remains true: when thousands of people show up to protest in front of the White House, it is not reducible to fringe ideologies or conspiracy theories about invading outside agitators. People are protesting during a pandemic because justice for Black lives can’t wait for a vaccine.

    —Joan Donovan, PhD, is Research Director of the Shorenstein Center on Media, Politics and Public Policy at the Harvard Kennedy School.

    #Fake_news #Extrême_droite #Etats_unis

  • Didier Raoult, the General Boulanger of medicine - Libération
    https://www.liberation.fr/france/2020/06/01/didier-raoult-general-boulanger-de-la-medecine_1789960

    And that is how, in a country with no public health culture, where science is an opinion like any other, an authoritarian, temperamental hospital baron, the very symbol of a mandarin system, became a figure of populist counterculture. Working in his favour, he benefits from the utter poverty of the government's handling of the crisis and from an out-of-touch communication strategy that defies common sense. The minimisation of the pandemic risk, the ministerial condescension towards the Chinese and Italian experiences, the repeated lies about the uselessness of masks or tests: all of this left the door open to a man of extreme vanity, who plays both insider and outsider, touting his big impact factor and his ability to blow apart the usual methodological codes.

    Paralysed by his self-assurance, unable to counter him with anything other than a scientific council that does what it is told (as when the first round of the municipal elections went ahead), the country's leaders remain silent. And while Raoult and his team carry out human experimentation outside the rules, publishing threadbare studies in friendly journals and dressing up interventional studies as observational ones, the health agencies distinguish themselves by their embarrassed silence.

    Not until 23 May, three months after Didier Raoult's first pronouncements, did a former vice-president of the marketing-authorisation commission at the French medicines agency, Professor Jean-François Bergmann, courageously step forward (sarcasm) to announce in Le Parisien: "We can say it loud and clear, Professor Raoult is wrong!" OK boomer, welcome to the resistance fighters of 1946... But why not react earlier to the countless scientific violations the Raoult team has been guilty of? The magic answer (sarcasm, how you grip us): "We kept quiet, with elegance."

    The long list of politicians who have endorsed Didier Raoult's theses hardly needs drawing up. Invoking the urgency of the situation, Bruno Retailleau, president of the LR group in the Senate, hammered on 22 March: "What do we risk anyway? People are dying." Christian Estrosi, LR mayor of Nice, declared on 23 March: "We don't have time to test on mice for six months." Marine Le Pen, president of the Rassemblement National, declared on 30 March: "I think every general practitioner should immediately be allowed to prescribe it." And, never to be outdone, Nicolas Dupont-Aignan, MP for Essonne and president of the Debout la France party, insisted on 6 April: "Every day lost is a crime! I say it. I will say it again." At least they have the honesty not to delete their tweets, unlike Ségolène Royal, who protested on 23 March: "This is urgent! Why these incomprehensible bureaucratic hesitations?"

    What is fascinating here, in these politicians supposed to legislate on public affairs, is the absence of any medical competence, of any scientific culture. Let one mandarin trumpet louder than the others and they fall into step behind him, as Michel Onfray would go on to do.

    #Didier_Raoult #Fake_science #Chloroquine

  • In Tunisia, a chatbot to counter the coronavirus
    https://www.lemonde.fr/afrique/article/2020/06/02/en-tunisie-un-chatbot-pour-contrer-le-coronavirus_6041546_3212.html

    Tunisia's emergency medical service (SAMU) was saturated with calls from mid-March to the end of April, with peaks of 30,000 in a single day, according to Samir Abdelmoumen, an emergency physician and member of the health ministry's national commission for the fight against the coronavirus: "In a month and a half, we received practically the equivalent of seventeen years' worth of calls. At first, people were calling to ask for information or to report a neighbour who had come back from abroad and was leaving their home. Then we had more specific calls related to the disease."

    #Covid-19 #migrant #migration #diaspora #Tunisie #chatbot #information #fakenews #étranger #maladie #santé

  • Conspiracism was born as a joke, but it's time to start understanding it - Wired

    Born in the 1960s as little more than a prank with Operation Mindfuck, modern conspiracism has gone mainstream. And beyond its excesses, the conspiratorial imagination often tries to describe realities that escape us (and it is good that it does)

    The birth of conspiracism as we know it today can be traced back to the late 1960s in America, and more precisely to a counterculture operation called Operation Mindfuck. At the time, the United States was awash in hippie and situationist movements that were trying to change the world by overturning the established order and the dominant narrative.

    The people who laid the foundations, more or less unwittingly, of the phenomenon that only a few decades later would convince 12 million people that the world is ruled by humanoid reptiles were Greg Hill and Kerry Thornley. Drawing on the cult of Eris, the Greek goddess of chaos, they founded Discordianism: a religion built on the idea that order was only an illusion, a projection of the human mind, and that underneath everything lay chaos.

    If organised religion was the opium of the people, Hill and Thornley reasoned, then their disorganised religion would be the marijuana of the lunatic fringe. They decided to collect their thinking in a book, Principia Discordia, published in 1963.

    Operation Mindfuck grew out of Discordianism, and its goal was to bring people to such a state of disorientation and confusion that their rigid beliefs would collapse, leading to a kind of enlightenment. In practice it didn't quite work that way, not least given the tendency of many Discordians to slide into paranoid schizophrenia. At least at the beginning, though, Discordianism was meant to be a joke. But the longer the joke went on, the more it took on the contours of a religion, and the harder it became to behave as if it were just a joke.

    During the 1970s the concept of chaos was not only embraced in the most disparate places, it was also theorised mathematically by university physics and maths departments (see, for example, the butterfly effect). The conspiracy theories created by Hill and Thornley took root far more than their creators could ever have imagined. In particular, they caught on among some editors at Playboy, who began devoting ample space to the theories of Hill, Thornley and the Discordians in general. Between interviews, bunnies and centrefolds, the magazine began hosting anonymous letters about the Kennedy assassination. Some claimed it was the CIA, others the Mafia, some blamed Fidel Castro, and still others the anti-Castro forces. The two editors of the time, Robert Shea and Robert Anton Wilson, already seduced by Discordian theories and in contact with Hill and Thornley, got so into it that a few years later they wrote Illuminatus!, a trilogy of novels about the eternal conflict between the Discordians and the Illuminati, a Masonic sect created with the goal of taking over the world. The books were published in 1975 and have been in print ever since.

    What separates the Principia Discordia from the Illuminati trilogy is the personification of the conflict. Whereas in the first book Hill and Thornley had limited themselves to theorising, in the form of a religion, the chaos they believed underpinned the world, in the trilogy Shea and Wilson go a step further: the Illuminati become the champions of chaos, a dark force that wants to conquer the world. That is the essential difference between the Principia Discordia and the Illuminatus trilogy. In Hill and Thornley's book there are no good guys and bad guys, just paradox and contradiction. In the trilogy, through the conflict between Discordians and Illuminati, paradox and contradiction take the shape of an us versus them.

    The reasoning behind a conspiracy is always the same, run backwards. Given certain premises, you look for any evidence or conjecture that can validate the thesis. It is a vicious circle that is impossible to break, in which a series of coincidences linked together by insignificant events is reinterpreted as supporting evidence. The Discordians called it synchronicity, picking up the concept Jung had already theorised at the start of the century: meaningful coincidences that could not be explained by a simple chain of cause and effect. In other words, magical thinking.

    There is a conspiracy theory for everything. Take the most absurd one: in a 2013 survey, 4 percent of respondents (a figure that, extrapolated to the entire US population, amounts to 12 million people) said they believed that "shape-shifting reptiles control our world, taking on human form and seizing political power to manipulate our societies". A further 7 percent simply said they were not yet absolutely sure.

    The psychologist Rob Brotherton recounts this in his book Suspicious Minds: Why We Believe Conspiracy Theories. But even if the reptilian theory doesn't capture the full picture, we shouldn't make the mistake of believing that conspiracy theories are a niche matter, of interest only to a small paranoid fringe of humanity made up of "depressed, marginalised middle-aged men, far-from-stupid outsiders with an extravagant mania for research". Nor is it a matter of poor versus rich, or of the uneducated versus graduates. Brotherton's main argument is that all of us possess a conspiracist mentality to some degree, because it is rooted in our heads and depends on how our brain works.

    The book recounts several examples and experiments that describe in detail the various quirks and psychological shortcuts our brain takes. One of them presented a class of students with two figures (not reproduced in this excerpt).

    Unsurprisingly, almost all the students spotted the sailing boat hidden in the first figure, but they often reported seeing images even where there was in fact nothing but randomly arranged dots and lines, as in the image on the right. Fascinating as it may be, there is no connection between the dots other than the ones depicting the boat. The connection is mental: we join the dots we prefer, to tell ourselves the story that sounds most familiar. That is how our brain works; it is called heuristics.

    Some conspiracy theories are absurd, even grotesque, it's true: but this is how we reason, and in that sense history has taught us that questioning what we know about the world is often not only healthy, it is also a way of revealing the complexity of reality. Because if conspiracism, in its best forms (think of William Burroughs or Alan Moore), has taught us one thing, it is that beyond the caricatures running from the Elders of Zion to the Kalergi plan, the conspiratorial imagination has also tried to describe a reality that, for one reason or another, often escapes us. In that sense the conspiracy is no longer the easiest narrative for explaining reality but rather the most complicated one. If anyone keeps things simple, it is the debunker who explains that things are exactly as you have been told, full stop.

    If there is a country where conspiracy has become history, it is ours (Italy). And in this sense, a paradigm of truth founded on the debunker/conspiracist distinction follows a dangerous logic, because it sets in motion a mechanism whereby everything outside the official, truthful version is baseless paranoid dissent. What if we then found ourselves criticising, a priori, the heuristics of critical thinking as a spontaneous cognitive process, dismissing it as reptilian nonsense and the like? What we refuse to accept about a certain way of reasoning (besides considering it dangerous) is that, over the years, conspiracy theories have become the expression of a shared and complex sentiment that creates identity.

    In his acceptance speech to the Anti-Defamation League last November, the actor Sacha Baron Cohen said it loud and clear. In his view, the age of reason is over, the scientific consensus is delegitimised, and certain demagogues are appealing to our worst instincts to feed conspiracy theories that have by now become mainstream. The way the English actor puts it, this is an attractive narrative: the big tech companies are destroying democracies. But perhaps it is too simple an explanation. To be sure, conspiracy theories are a market like any other, with supply and demand. And the hand that moves it is money.

    Before going any further, though, a clarification is needed. The concept of propaganda has changed over time. Historically, propaganda meant getting people to believe things: now it means getting them to question what they believe. It was propaganda when Woodrow Wilson's administration persuaded Americans to support US involvement in the First World War. That propaganda told the same story through so many media channels at once that there seemed to be only one story: the need to join the conflict, for the good of the planet. Today, however, the main goal of government propaganda seems to be to undermine our trust in everything.

    In this sense it is true that social network platforms have helped decentralise what used to be the power of the mainstream media, and consequently that they have fed non-traditional communication channels, such as pseudoscientific ones. Yet reducing the problem to this means ignoring that what drives the proliferation of false news on social media is not a supply problem but a demand problem. The idea that social media created the conditions for our resentment, our paranoia and our hatred of the powers that be hardly holds up. Algorithmic radicalisation, that is, the theory that we are all trapped in an information bubble in which news we dislike or disagree with is automatically filtered out, leaving us unable to change our minds, has in any case already been called into question.

    Facebook, Twitter and the other social networks understood before anyone else that to capture people's attention you had to exploit the power of community to create identity. The polarisation they generated is one of the reasons so many say the system is rigged. Yet as Richard Fletcher, Senior Research Fellow at the Reuters Institute for the Study of Journalism at the University of Oxford, writes, "focusing on filter bubbles can cause us to misunderstand the mechanisms at play, and it may also distract us from somewhat more pressing problems".

    Worrying about whether a given statement is true or not misses the point: the point is to persuade the other team to change its demands, to convince it that it would be better off with different aspirations. It is a political project. And so it is right that it be treated as one; not least because elevating big tech to the role of scapegoat means reducing everything to yet another conspiracy.

    https://www.wired.it/attualita/media/2020/06/01/complottismo-storia-operazione-mindfuck

    #complot #fakenews #wired

  • A mysterious company’s coronavirus papers in top medical journals may be unraveling | Science | AAAS
    https://www.sciencemag.org/news/2020/06/mysterious-company-s-coronavirus-papers-top-medical-journals-may-be-unra

    On its face, it was a major finding: Antimalarial drugs touted by the White House as possible COVID-19 treatments looked to be not just ineffective, but downright deadly. A study published on 22 May in The Lancet used hospital records procured by a little-known data analytics company called Surgisphere to conclude that coronavirus patients taking chloroquine or hydroxychloroquine were more likely to show an irregular heart rhythm—a known side effect thought to be rare—and were more likely to die in the hospital.

    Within days, some large randomized trials of the drugs—the type that might prove or disprove the retrospective study’s analysis—screeched to a halt. Solidarity, the World Health Organization’s (WHO’s) megatrial of potential COVID-19 treatments, paused recruitment into its hydroxychloroquine arm, for example.

    But just as quickly, the Lancet results have begun to unravel—and Surgisphere, which provided patient data for two other high-profile COVID-19 papers, has come under withering online scrutiny from researchers and amateur sleuths. They have pointed out many red flags in the Lancet paper, including the astonishing number of patients involved and details about their demographics and prescribed dosing that seem implausible. “It began to stretch and stretch and stretch credulity,” says Nicholas White, a malaria researcher at Mahidol University in Bangkok.

    Today, The Lancet issued an Expression of Concern (EOC) saying “important scientific questions have been raised about data” in the paper and noting that “an independent audit of the provenance and validity of the data has been commissioned by the authors not affiliated with Surgisphere and is ongoing, with results expected very shortly.”

    Hours earlier, The New England Journal of Medicine (NEJM) issued its own EOC about a second study using Surgisphere data, published on 1 May. The paper reported that taking certain blood pressure drugs including angiotensin-converting enzyme (ACE) inhibitors didn’t appear to increase the risk of death among COVID-19 patients, as some researchers had suggested. (Several studies analyzing other groups of COVID-19 patients support the NEJM results.) “Recently, substantive concerns have been raised about the quality of the information in that database,” an NEJM statement noted. “We have asked the authors to provide evidence that the data are reliable.”

    Surgisphere’s sparse online presence—the website doesn’t list any of its partner hospitals by name or identify its scientific advisory board, for example—has prompted intense skepticism. Physician and entrepreneur James Todaro of the investment fund Blocktown Capital wondered in a blog post why Surgisphere’s enormous database doesn’t appear to have been used in peer-reviewed research studies until May. Another post, from data scientist Peter Ellis of the management consulting firm Nous Group, questioned how LinkedIn could list only five Surgisphere employees—all but the company’s founder, Sapan Desai, apparently lacking a scientific or medical background—if the company really provides software to hundreds of hospitals to coordinate the collection of sensitive data from electronic health records. (This morning, the number of employees on LinkedIn had dropped to three.) And Carlos Chaccour wonders how such a tiny company was able to reach data-sharing agreements with hundreds of hospitals around the world that use many different languages and data recording systems, while adhering to the rules of 46 different countries on research ethics and data protection.

    The controversy has been an unfortunate distraction, adds Miguel Hernán. “If you do something as inflammatory as this without a solid foundation, you are going to make a lot of people waste time trying to understand what is going on.”

    Chaccour says both NEJM and The Lancet should have scrutinized the provenance of Surgisphere’s data more closely before publishing the studies. “Here we are in the middle of a pandemic with hundreds of thousands of deaths, and the two most prestigious medical journals have failed us,” he says.

    #Chloroquine #Revues_médicales #Surgisphere #Données_médicales #Fake_science

  • How Twitter Botched Its Fact-Check of Trump’s Lies – Mother Jones
    https://www.motherjones.com/politics/2020/05/how-twitter-botched-its-fact-check-of-trumps-lies

    Facing widespread condemnation for not removing President Trump’s tweets falsely accusing MSNBC host Joe Scarborough of murder, Twitter finally took action. On Wednesday, the company slapped disclaimer links onto two of Trump’s tweets, the first time it has pushed back on the misinformation that regularly flows from the president’s account.

    But the tweets in question had nothing to do with the debunked conspiracy theories surrounding Scarborough and his late congressional aide, who died in 2001 after a fall caused by an undiagnosed heart condition. Instead, the ignominious honor belonged to Trump’s false claims that mail-in voting would lead to rampant voter fraud.

    The move drew more questions than praise. Why not simply remove the tweets pushing a vile murder conspiracy, as the widower of Scarborough’s late staffer pleaded in a letter to Twitter CEO Jack Dorsey? Even top Republicans, who have remained silent about Trump’s smears against Scarborough, would have been unlikely to object to the removal of accusations so clearly false and defamatory. Why instead wade into a more politically divisive territory such as mail-in voting practices?

    #Twitter #Trump #Fake_news

  • How covid-19 conspiracy theorists are exploiting YouTube culture | MIT Technology Review
    https://www.technologyreview.com/2020/05/07/1001252/youtube-covid-conspiracy-theories/?truid=a497ecb44646822921c70e7e051f7f1a

    Covid-19 conspiracy theorists are still getting millions of views on YouTube, even as the platform cracks down on health misinformation.

    The answer was obvious to Kennedy, one of many anti-vaccination leaders trying to make themselves as visible as possible during the covid-19 pandemic. “I’d love to talk to your audience,” he replied.

    Kennedy told Bet-David that he believes his own social-media accounts have been unfairly censored; making an appearance on someone else’s popular platform is the next best thing. Bet-David framed the interview as an “exclusive,” enticingly titled “Robert Kennedy Jr. Destroys Big Pharma, Fauci & Pro-Vaccine Movement.” In two days, the video passed half a million views.

    As of Wednesday, advertisements through YouTube’s ad service were playing before the videos, and Bet-David’s merchandise was for sale in a panel below the video’s description. Two other interviews, in which anti-vaccine figures aired several debunked claims about coronavirus and vaccines (largely unchallenged by Bet-David), were also showing ads. Bet-David said in an interview that YouTube had limited ads on all three videos, meaning they can generate revenue, but not as much as they would if they were fully monetized.

    We asked YouTube for comment on all three videos on Tuesday afternoon. By Thursday morning, one of the three (an interview with anti-vaccine conspiracy theorist Judy Mikovits) had been deleted for violating YouTube’s medical misinformation policies. Before it was deleted, the video had more than 1 million views.

    YouTube said that the other two videos were borderline, meaning that YouTube decided they didn’t violate rules, but would no longer be recommended or show up prominently in search results.

    I asked Bet-David whether he felt any responsibility over airing these views on his channel—particularly potentially harmful claims by his guests, urging viewers to ignore public health recommendations.

    “I do not,” he said. “I am responsible for what comes out of my mouth. I’m not responsible for what comes out of your mouth.”

    For him, that lack of responsibility extends to misinformation that could be harmful to his audience. He is just giving people what they are asking for. That, in turn, drives attention, which allows him to make money from ads, merchandise, speaking gigs, and workshops. “It’s up to the audience to make the decision for themselves,” he says. Besides, he thinks he’s done interviewing anti-vaccine activists for now. He’s trying to book some “big name” interviews of what he termed “pro-vaccine” experts.

    #YouTube #Complotisme #Vaccins #Médias_sociaux #Fake_news

  • Fake news 101: A guide to help sniff out the truth - CSMonitor.com
    https://www.csmonitor.com/USA/Society/2020/0430/Fake-news-101-A-guide-to-help-sniff-out-the-truth

    What is misinformation vs. disinformation?

    Misinformation is information that is misleading or wrong, but not intentionally. It includes everything from a factoid your friend reposted on Facebook to assertions made by officials or, yes, even journalists.

    Disinformation is more deliberate and is distributed with the intent to confuse, disturb, or provoke. It also includes plausible information shared through devious means, such as a fake Twitter account; done en masse, this can create a skewed impression of popular opinion. A particularly deceptive form of disinformation is the “deepfake” video, in which imperceptible alterations to the footage make it appear that someone said or did something they never said or did.

    Be particularly on guard against misinformation and disinformation during crises, which provide fertile ground for exploiting fear, anger, and other emotions.

    #fake_news #infox #Fausses_information #Désinformation

  • "Settlement of migrants in Serbia" and the corona virus: How the epidemic affects the spread of false news and anti-migrant attitudes" [Google Translate] 

    „Naseljavanje migranata u Srbiji" i korona virus: Kako epidemija utiče na širenje lažnih vesti i antimigrantskih stavova

    https://www.bbc.com/serbian/lat/srbija-52524776

    #Covid-19 #Migration #Migrant #Serbie #Xenophobie #Incident #Obrenovac #Fakenews

  • #France: #Castaner fears a rise in communal separatism (#communautarisme)

    Interior minister Christophe Castaner said on Thursday that he feared a rise in communal separatism in France on the back of the #coronavirus epidemic and the lockdown (#confinement) measures, which he said could provoke a withdrawal into separate communities (#repli_communautaire).

    “I fear the risk of communal separatism, and that such separatism may grow,” he said during a videoconference hearing before the parliamentary mission on the impact, management and consequences of the coronavirus epidemic.

    “The organised strengthening of community ties at a time when a society is in doubt is something every country has had to face,” he added. “It is an issue that can lead to people turning inward and to communities withdrawing into themselves; it is a matter of concern (#préoccupation) that we are monitoring and analysing in order to prepare for the end of lockdown, when the time comes.”

    Christophe Castaner also mentioned, without giving further details, “far-right and far-left networks” that are very active “on social media” and calling on their followers “to prepare a number of actions they would like to carry out once the lockdown period ends”.

    https://fr.reuters.com/article/idFRKCN21R1EB


    https://cache.media.eduscol.education.fr/file/Reprise_deconfinement_Mai2020/69/5/Fiche-Replis-communautaires_1280695.pdf
    --> careful not to criticise the “government measures” in front of your children, because they may then talk about it at school and... there you go:

    some pupils’ questions and reactions may be abrupt and marked by hostility (#hostilité) and distrust (#défiance): radical questioning of our society and of republican values, mistrust of scientific discourse, revolt against government measures, etc.

    #risque #repli_communautariste #communautarisme #déconfinement #ultra_droite #ultra_gauche #extrême_droite #extrême_gauche #mesures_gouvernementales #fake-news #école #valeurs_républicaines #idéaux_républicains #France #radicalisation #complotisme #idées_radicales #mots #vocabulaire #terminologie #communauté #universalisme #intégration #cohésion_sociale #lien_social #identité #lien_positif #vigilance #peur #religion #vengeance #apocalypse #antagonismes #confusion #autorité_scientifique #science #signalement #indivisibilité_de_la_République #unicité_du_peuple_français #égalité_hommes_femmes #laïcité #esprit_critique #complotisme #socialisation_positive
    #géographie_culturelle

    ping @cede @karine4

    via @isskein

  • Corona Chroniques, #Jour49 - davduf.net
    http://www.davduf.net/corona-chroniques-jour49

    Elsewhere, another front is taking shape: that of who speaks, and who names. After four days of saying very little, the journalists’ societies of the main newspapers are finally reacting to the government’s odd idea, floated mid-week: the labelling of “safe and verified sources of information” in this #Covid19 period which “favours the spread of #fakenews” (as announced by #Sibeth_Ndaye, herself a great purveyor of the stuff, government spokesperson and author of the unforgettable “I take responsibility for lying to protect the President of the Republic”). An odd, corrosive and desperate collection of merit badges on the official #Matignon website: that a government should be cornered enough to call the press’s fact-checking services to the rescue says a great deal. About the government, first of all, but also about the media world. For decades, in the big Anglo-Saxon newsrooms, the essential exercise of fact checking consisted of having one’s own reporting verified by others, before it drifted onto the hunting ground of social-media rumours, shop-window fact-checkers for a journalism of apparent neutrality, less and less inclined to step into the arena and content to recount only part of its adventures, to the point that some, like the philosopher #Alain_Cambier, speak of an effective but insufficient expedient.

    In Le Figaro, #Arnaud_Benedetti states: “Escalation of commitment [is when] a structure can no longer halt the machinery of denial it has set in motion. Its survival then becomes indexed to the perpetuation of that denial. That is no longer the legal-rational state but a pathological form of the state. The US federal administration knew as early as 1965 that it had lost the war in Vietnam, but it preferred to lie to its public. It is much the same with the mask shortage, which has not finished fracturing the executive’s reputation and devaluing its word.”

    Tomorrow, and the day after, this terrain of speech taken up, the way one takes up a position (prone shooter or sniper; reporter or copyist; narrator or falsifier), will probably be more devastating than ever, and #Debord more spectacular than Before (“In the world that is really upside down, the true is a moment of the false.”). And it is During, while everyone sharpens their weapons (hence the voracity to read everything, all the time, in our locked-down Sundays and every other day), that our arsenal for After is being assembled: with banners toward the street and notebooks toward one’s own; with plenty of informal committees and fleeting graffiti (today, sublime, seen on Twitter: ambient drone, this state of emergency).

  • Coronavirus: the government website against “infox” (fake news) irritates the media
    https://www.lemonde.fr/economie/article/2020/05/03/coronavirus-le-site-du-gouvernement-contre-les-infox-irrite-les-medias_60385

    It was a tweet from Sibeth Ndiaye that lit the fuse. “The #COVID19 crisis favours the spread of #fakenews,” the government spokesperson wrote on the social network on Thursday 30 April. “More than ever, it is necessary to rely on safe and verified sources of information. That is why the @gouvernementFR website now offers a dedicated space.”

    One click on “Désinfox coronavirus” and you do indeed scroll through a feed of articles drawn from the media’s fact-checking sections: Les Décodeurs for Le Monde, CheckNews for Libération, AFP Factuel for Agence France-Presse, Fake Off for 20 Minutes, and Vrai ou fake for FranceTVInfo.
    […]
    The initiative is not to the taste of the newsrooms concerned, which discover that they are taking part, more or less without their informed consent, in a section entitled “S’informer sur la désinformation” (“Getting informed about disinformation”) created for the government website at the suggestion of the SIG (the government information service, which reports to Matignon). “Le Monde was not consulted beforehand, and it goes without saying that we would have refused this kind of approach,” tweeted Luc Bronner, director of the newsroom.

    “Distrust and suspicion”
    Put online on 23 April, the section was quietly thriving before Sibeth Ndiaye promoted it and thereby set off the controversy. “These articles published across our various outlets found themselves used, instrumentalised, on a platform called gouvernement.fr,” says a scandalised Vincent Giret, director of franceinfo. “Our most precious asset is our independence,” agrees Paul Quinio, deputy editorial director of Libération. “This kind of operation can only introduce distrust and suspicion into relations between the press and the political world.” “Our approach started from a laudable intention,” is the defence offered in government circles.

  • Covid hoaxes are using a loophole to stay alive—even after being deleted | MIT Technology Review
    https://www.technologyreview.com/2020/04/30/1000881/covid-hoaxes-zombie-content-wayback-machine-disinformation

    Pandemic conspiracy theorists are using the Wayback Machine to promote “zombie content” that avoids content moderators and fact-checkers.

    by Joan Donovan
    April 30, 2020

    Since the onset of the pandemic, the Technology and Social Change Research Project at the Harvard Kennedy School’s Shorenstein Center, where I am the director, has been investigating how misinformation, scams, and conspiracies about covid-19 circulate online. If fraudsters are now using the virus to dupe unsuspecting individuals, we thought, then our research on misinformation should focus on understanding the new tactics of these media manipulators. What we found was a disconcerting explosion in “zombie content.”

    While the original page failed to spread fake news, the version of the page saved on the Internet Archive’s Wayback Machine absolutely flourished on Facebook. With 649,000 interactions and 118,000 shares, engagement with the Wayback Machine link was far larger than that of legitimate press outlets. Facebook has since placed a fact-check label over the Wayback Machine link as well, but it had already been seen a huge number of times.

    There are several explanations for this hidden virality. Some people use the Internet Archive to evade blocking of banned domains in their home country, but it is not simply about censorship. Others are seeking to get around fact-checking and algorithmic demotion of content.

    When looking for more evidence of hidden virality, we searched for “web.archive.org” across platforms. Unsurprisingly, Medium posts that were taken down for spreading health misinformation have found new life through Wayback Machine links. One deleted Medium story, “Covid-19 had us all fooled, but now we might have finally found its secret,” violated Medium’s policies on misleading health information. Before Medium’s takedown, the original post amassed 6,000 interactions and 1,200 shares on Facebook, but the archived version is vastly more popular: 1.6 million interactions, 310,000 shares, and still climbing. This zombie content performs better than most mainstream media news stories, and yet it exists only as an archived record.
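
    To make that search method concrete, here is a minimal sketch, under stated assumptions, of how one might flag Wayback Machine links in a batch of collected posts and recover the archived target address. The regex, sample data, and field names are illustrative assumptions, not the Shorenstein team’s actual tooling; a real analysis would draw the links and engagement counts from a platform export (for example a CrowdTangle CSV) rather than a hard-coded list.

    import re

    # Illustrative only: detect posts whose URL is a Wayback Machine snapshot
    # and pull out the original (archived) address. Data below are hypothetical.
    WAYBACK_RE = re.compile(
        r"https?://web\.archive\.org/web/\d{4,14}[a-z_]*/(?P<target>https?://\S+)"
    )

    posts = [
        {"url": "https://web.archive.org/web/20200421000000/https://example-blog.com/covid-secret",
         "shares": 310_000},
        {"url": "https://example-news.org/fact-check", "shares": 1_200},
    ]

    for post in posts:
        match = WAYBACK_RE.search(post["url"])
        if match:
            # The snapshot points at a page that may have been deleted or demoted
            # at its original address -- a candidate piece of "zombie content".
            print(f"archived copy of {match.group('target')} shared {post['shares']:,} times")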

    Perhaps the most alarming element to a researcher like me is that these harmful conspiracies permeate private pages and groups on Facebook. This means researchers have access to less than 2% of the interaction data, and that health misinformation circulates in spaces where journalists, independent researchers, and public health advocates cannot assess or counterbalance these false claims with facts. Crucially, if it weren’t for the Internet Archive’s records we would not be able to do this research on deleted content in the first place, but these use cases suggest that the Internet Archive will soon have to address how its service can be adapted to deal with disinformation.

    Hidden virality is growing in places where WhatsApp is popular, because it is easy to forward misinformation through encrypted channels and evade content moderation. But when hidden virality happens on Facebook with health misinformation, it is particularly disconcerting. More than 50% of Americans rely on Facebook for their news, and still, after many years of concern and complaint, researchers have a very limited window into the data. This means it is nearly impossible to ethically investigate how dangerous health misinformation is shared on private pages and groups.

    All of this threatens public health in a different way than political or news misinformation does, because people do quickly change their behaviors on the basis of medical recommendations.

    #Fake_news #Viralité #Internet_archive #zombie_content #Joan_Donovan

  • Coronavirus and fake news: the 10 most popular myths - Geek Junior
    https://www.geekjunior.fr/coronavirus-fake-news-mythes-plus-populaires-35409

    When a major crisis strikes, fake news is never far behind… Here is a list, compiled by the NewsGuard website, of the 10 most popular myths about Covid-19.

    #fakenews #EMI

  • Raising pupils’ awareness of fake news during lockdown – Prof & Doc – site of the school librarians (documentalistes) of the académie de Besançon
    http://documentation.ac-besancon.fr/sensibiliser-les-eleves-aux-fake-news-en-periode-de-confin

    Understand what fake news is and how it spreads.
    Develop critical thinking about information, and strategies to adopt for analysing a piece of information.
    Know the tools that can be used to verify a piece of information.

    #EMI #fakenews