  • "Soon, speed limits on Facebook or Twitter" (Le Monde)
    https://www.lemonde.fr/idees/article/2020/11/26/bientot-des-limitations-de-vitesse-sur-facebook-ou-twitter_6061168_3232.html

    Column. A social network that slows down its users and the spread of content on its platform is not a common sight. Yet during the American presidential election, Twitter added a screen inviting users to read an article before sharing it with their followers. Users were also encouraged to add a comment rather than simply retweeting passively. In the same spirit, Facebook members who wanted to share election-related content first saw a message directing them to a hub of reliable resources about the vote.

    The issue even touches the business model of social networks, which is built on targeted advertising and therefore on virality, since virality increases interactions with content. Limiting it would cut Facebook's or Twitter's revenue in the short term but could be a long-term bet. This debate might also favour the emergence of new models: public, non-profit, or even subscription-based.

    #Médias_sociaux #Viralité #Régulation #Editorialisation

  • The National-Security Case for Fixing Social Media | The New Yorker
    https://www.newyorker.com/tech/annals-of-technology/the-national-security-case-for-fixing-social-media

    On Wednesday, July 15th, shortly after 3 P.M., the Twitter accounts of Barack Obama, Joe Biden, Jeff Bezos, Bill Gates, Elon Musk, Warren Buffett, Michael Bloomberg, Kanye West, and other politicians and celebrities began behaving strangely. More or less simultaneously, they advised their followers—around two hundred and fifty million people, in total—to send Bitcoin contributions to mysterious addresses. Twitter’s engineers were surprised and baffled; there was no indication that the company’s network had been breached, and yet the tweets were clearly unauthorized. They had no choice but to switch off around a hundred and fifty thousand verified accounts, held by notable people and institutions, until the problem could be identified and fixed. Many government agencies have come to rely on Twitter for public-service messages; among the disabled accounts was the National Weather Service, which found that it couldn’t send tweets to warn of a tornado in central Illinois. A few days later, a seventeen-year-old hacker from Florida, who enjoyed breaking into social-media accounts for fun and occasional profit, was arrested as the mastermind of the hack. The F.B.I. is currently investigating his sixteen-year-old sidekick.

    In its narrowest sense, this immense security breach, orchestrated by teen-agers, underscores the vulnerability of Twitter and other social-media platforms. More broadly, it’s a telling sign of the times. We’ve entered a world in which our national well-being depends not just on the government but also on the private companies through which we lead our digital lives. It’s easy to imagine what big-time criminals, foreign adversaries, or power-grabbing politicians could have done with the access the teen-agers secured. In 2013, the stock market briefly plunged after a tweet sent from the hacked account of the Associated Press reported that President Barack Obama had been injured in an explosion at the White House; earlier this year, hundreds of armed, self-proclaimed militiamen converged on Gettysburg, Pennsylvania, after a single Facebook page promoted the fake story that Antifa protesters planned to burn American flags there.

    When we think of national security, we imagine concrete threats—Iranian gunboats, say, or North Korean missiles. We spend a lot of money preparing to meet those kinds of dangers. And yet it’s online disinformation that, right now, poses an ongoing threat to our country; it’s already damaging our political system and undermining our public health. For the most part, we stand defenseless. We worry that regulating the flow of online information might violate the principle of free speech. Because foreign disinformation played a role in the election of our current President, it has become a partisan issue, and so our politicians are paralyzed. We enjoy the products made by the tech companies, and so are reluctant to regulate their industry; we’re also uncertain whether there’s anything we can do about the problem—maybe the price of being online is fake news. The result is a peculiar mixture of apprehension and inaction. We live with the constant threat of disinformation and foreign meddling. In the uneasy days after a divisive Presidential election, we feel electricity in the air and wait for lightning to strike.

    In recent years, we’ve learned a lot about what makes a disinformation campaign effective. Disinformation works best when it’s consistent with an audience’s preconceptions; a fake story that’s dismissed as incredible by one person can appear quite plausible to another who’s predisposed to believe in it. It’s for this reason that, while foreign governments may be capable of more concerted campaigns, American disinformers are especially dangerous: they have their fingers on the pulse of our social and political divisions.

    As cyber wrongdoing has piled up, however, it has shifted the balance of responsibility between government and the private sector. The federal government used to be solely responsible for what the Constitution calls our “common defense.” Yet as private companies amass more data about us, and serve increasingly as the main forum for civic and business life, their weaknesses become more consequential. Even in the heyday of General Motors, a mishap at that company was unlikely to affect our national well-being. Today, a hack at Google, Facebook, Microsoft, Visa, or any of a number of tech companies could derail everyday life, or even compromise public safety, in fundamental ways.

    Because of the very structure of the Internet, no Western nation has yet found a way to stop, or even deter, malicious foreign cyber activity. It’s almost always impossible to know quickly and with certainty if a foreign government is behind a disinformation campaign, ransomware implant, or data theft; with attribution uncertain, the government’s hands are tied. China and other authoritarian governments have solved this problem by monitoring every online user and blocking content they dislike; that approach is unthinkable here. In fact, any regulation meant to thwart online disinformation risks seeming like a step down the road to authoritarianism or a threat to freedom of speech. For good reason, we don’t like the idea of anyone in the private sector controlling what we read, see, and hear. But allowing companies to profit from manipulating what we view online, without regard for its truthfulness or the consequences of its viral dissemination, is also problematic. It seems as though we are hemmed in on all sides, by our enemies, our technologies, our principles, and the law—that we have no choice but to learn to live with disinformation, and with the slow erosion of our public life.

    We might have more maneuvering room than we think. The very fact that the disinformation crisis has so many elements—legal, technological, and social—means that we have multiple tools with which to address it. We can tackle the problem in parts, and make progress. An improvement here, an improvement there. We can’t cure this chronic disease, but we can manage it.

    Online, the regulation of speech is governed by Section 230 of the Communications Decency Act—a law, enacted in 1996, that was designed to allow the nascent Internet to flourish without legal entanglements. The statute gives every Internet provider or user a shield against liability for the posting or transmission of user-generated wrongful content. As Anna Wiener wrote earlier this year, Section 230 was well-intentioned at the time of its adoption, when all Internet companies were underdogs. But today that is no longer true, and analysts and politicians on both the right and the left are beginning to think, for different reasons, that the law could be usefully amended.

    Technological progress is possible, too, and there are signs that, after years of resistance, social-media platforms are finally taking meaningful action. In recent months, Facebook, Twitter, and other platforms have become more aggressive about removing accounts that appear inauthentic, or that promote violence or lawbreaking; they have also moved faster to block accounts that spread disinformation about the coronavirus or voting, or that advance abhorrent political views, such as Holocaust denial. The next logical step is to decrease the power of virality. In 2019, after a series of lynchings in India was organized through the chat program WhatsApp, Facebook limited the mass forwarding of texts on that platform; a couple of months ago, it implemented similar changes in the Messenger app embedded in Facebook itself. As false reports of ballot fraud became increasingly elaborate in the days before and after Election Day, the major social media platforms did what would have been unthinkable a year ago, labelling as misleading messages from the President of the United States. Twitter made it slightly more difficult to forward tweets containing disinformation; an alert now warns the user about retweeting content that’s been flagged as untruthful. Additional changes of this kind, combined with more transparency about the algorithms they use to curate content, could make a meaningful difference in how disinformation spreads online. Congress is considering requiring such transparency.

    #Désinformation #Fake_news #Propositions_légales #Propositions_techniques #Médias_sociaux

  • Dave Grohl’s Epic Drum Battle With 10-Year-Old Nandi Bushell - The New York Times
    https://www.nytimes.com/2020/11/09/arts/music/dave-grohl-nandi-bushell-drums.html

    Funny: I was reading and watching the videos... and I thought, "Now there's a lovely story, the kind I like to tell in my classes. There isn't only the surveillance side of the force to look at, but also these feel-good stories that show social media at its best." I was pleased to see that the New York Times reached the same conclusion. So there will be drums in my upcoming classes!!!

    That said, he experienced it like any piece of content — you watch it, you enjoy it, you pass it on and then move on. But toward the end of the summer, another one of Bushell’s videos made its way to Grohl via a flood of texts from friends around the world. This time, Bushell had prefaced her cover of the 1997 Foo Fighters song “Everlong” with a direct challenge to a drum-off. The rules of a drum-off aren’t formally sanctioned by any governing body, but Bushell’s exhilarated facial expressions and mastery of the song’s breakneck pace meant Grohl was in for a battle, should he choose to accept.

    In a separate video interview, Bushell offered a very simple reason for why she decided to call out Grohl: “He’s a drummer, ’cause he drummed in quite a few bands, so why not?” Bushell is 10 years old, and the clarity of her logic — her favorite word might be “epic” — was blessedly refreshing. Grohl is her favorite drummer, and when asked why, she answered, “He thrashes the kit really hard, which I like.”

    Despite his full docket, and after enough peer pressure, Grohl rose to the challenge with a performance of “Dead End Friends” by Them Crooked Vultures, one of those many bands he’s played in over the years. “At first I thought, ‘I’m not going to hit her with something too complicated, because I want this to be fun,’” he said. “I’m not a technical drummer; I am a backyard keg-party, garage jam-band drummer, and that’s the way it is.”

    Nonetheless, Bushell volleyed back another astute and overjoyed performance in two days. Grohl conceded defeat, and since then the two have continued playing music for each other. He recorded an original song about Bushell (sample lyric: “She got the power/She got the soul/Gonna save the world with her rock ’n’ roll”); Bushell returned the favor with her own song, “Rock and Grohl.” Cumulatively, the videos have attracted millions of views across YouTube and Twitter, making it a truly rare uncomplicated feel-good story from the last few months.

    #Médias_sociaux #Nandi_Bushell #Dave_Grohl #Batterie #Battle #Feel_good #Culture_numérique

  • Social networks: virality, a major (and neglected) front in the fight against online hate (20 Minutes)
    https://www.20minutes.fr/high-tech/2901107-20201106-reseaux-sociaux-viralite-enjeu-majeur-delaisse-lutte-cont

    A business model called into question

    While content moderation has over the years become a whole branch of activity for Facebook, Twitter and the like, virality mechanisms long escaped any form of regulation. "It is hard to act on these features because they are part of the internal engineering of social networks," explains Olivier Ertzscheid, a researcher in information and communication sciences at the Université de Nantes and author of Le monde selon Zuckerberg. "And if the platforms don't change them, it's because they have an interest in keeping them. It is a fact: hateful or controversial content generates more interactions between users, hence more time spent on the platform, and more advertising revenue. This produces interactions in an inflationary way, and that is what feeds these platforms' economy."

    Public authorities are fully aware of this economic stake. "The way social networks operate rests on virality and on the attention economy. Structurally, these platforms will build their activity on aggressive content," notes LREM MP Laetitia Avia, sponsor of a bill against online hate that was largely struck down last June by the Conseil Constitutionnel. "And there is groundwork to do to change that. Our aim is to move toward an evolution of this business model."

    But the ability of a single state to change the business model of platforms with billions of users worldwide seems limited. In that respect, the European initiative led by Thierry Breton could shift the balance of power, say those close to Cédric O, the junior minister for digital affairs: "If we want to be effective, we need new European legislation. Some social networks have a massive footprint on our democracies through these virality mechanisms. France is strongly engaged and is pushing for an ambitious text to regulate these players and hold them accountable, a wish that seems shared by the relevant European commissioners."
    A recent awakening

    Shaken by controversies and episodic protest movements among their users, the platforms have gradually begun to change. On WhatsApp, owned by Facebook, several measures have been taken to curb the spread of messages. Since January 2019, a piece of content can be forwarded to no more than five conversations at a time. According to Facebook, this measure led to a 25% drop in the number of forwarded messages. Since October 20th, in the context of the American presidential election, Twitter has systematically prompted its users to comment on the messages and content they want to share before doing so.
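    The forwarding cap described above is easy to picture in code. The sketch below is illustrative only (the function and exception names are invented, and WhatsApp's real implementation is not public); it shows the general shape of a hard limit on how many conversations one forward action may reach.

```python
MAX_CONVERSATIONS = 5  # the cap reported in the article (January 2019 policy)

class ForwardLimitError(Exception):
    """Raised when a forward action targets too many conversations at once."""

def forward_message(message_id, conversation_ids):
    """Forward a message to several conversations, enforcing the cap.

    Rejecting the whole action when it exceeds MAX_CONVERSATIONS forces
    the user to split mass forwards into smaller, slower steps.
    Returns the list of (message, conversation) deliveries performed.
    """
    if len(conversation_ids) > MAX_CONVERSATIONS:
        raise ForwardLimitError(
            f"can forward to at most {MAX_CONVERSATIONS} conversations at once"
        )
    return [(message_id, conv) for conv in conversation_ids]
```

    The design choice worth noting is that the limit applies per action, not per day: it does not censor content, it only adds friction to its propagation.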

    Measures that go in the right direction, according to Olivier Ertzscheid: "Anything that can slow down the instinctive character of sharing, that puts editorial time back into the content we publish (whether warning messages, or a cap on the number of retweets or likes) can help fight the polarization and radicalization of online discourse."

    #Médias_sociaux #Modération #Régulation #Olivier_Ertzscheid

  • Social networks: "Breaking the viral chains of contamination" | Arrêt sur images | With Olivier Ertzscheid
    https://www.arretsurimages.net/emissions/arret-sur-images/haine-en-ligne-casser-les-chaines-de-contamination-virales

    Since the assassination of Samuel Paty, they have provided a target for all the advocates of a more tightly framed freedom of expression. "They" are Facebook, Twitter, YouTube and the others, accused of being too lax toward illegal or controversial content, from terrorism to the Covid-19 epidemic. How can the chains of viral contamination be broken? We host Olivier Ertzscheid, a researcher and lecturer in information and communication sciences, and Pacôme Thiellement, essayist and video maker, to try to understand how social networks encourage the spread of hateful or conspiratorial speech for commercial ends.

    #Olivier_Ertzscheid #Médias_sociaux #Communication_virale

  • Assassination of Samuel Paty: "We must find a way to limit impulsive sharing on social networks" | Interview with Olivier Ertzscheid
    https://www.telerama.fr/idees/assassinat-de-samuel-paty-il-faut-trouver-un-moyen-de-limiter-les-partages-

    A specialist in information and communication sciences, Olivier Ertzscheid points to politicians' inability to grasp the stakes of digital exchanges, and to social networks' structural propensity to spread hate. He calls for broad reflection, from the designers to the regulators and users of online communication tools.

    Eight days. That is the time that elapsed between the lighting of the fuse and the explosion of the charge. Between the virtual and the real. Between the video of a parent publicly naming Samuel Paty, his daughter's history and geography teacher, and the latter's barbaric assassination, beheaded near his school in Conflans-Sainte-Honorine by an individual who, two weeks earlier, did not even know he existed.

    During those eight days, up to their appalling culmination, the teacher was the target of a denigration campaign whose intensity was multiplied a hundredfold by social networks, in public and in private, in messages and on repeat. With the help of Facebook, WhatsApp or Snapchat, the public vindictiveness even crossed geographical borders and reached Algeria. We knew nothing about it. Or learned too late. What should have been done during those eight days, as short as they were interminable?

    In this conflagration, the problem is not anonymity, already singled out by certain helpless politicians, but the crowd and its digital viaticum. Like a distillation of the saddest passions of our time, this attack reminds us that reason always travels more slowly than noise, and raises again, in the worst possible conditions, a question as vital as it is inextricable: how do we make our online conversations livable? How do we ensure that, at the very least, no one dies from them?

    For Olivier Ertzscheid, a lecturer in information and communication sciences at the Université de Nantes and author of Le monde selon Zuckerberg (C&F Éditions, 2020), the solution is written slowly: against the omnipotent urgency of the platforms, we must manage to impose a little friction and a certain slowness.

    In such circumstances, why are social networks systematically blamed?
    There is a visibility effect on social networks, particularly in the timeframe of a terrorist attack: what people say about the event online is immediately observable. Through a lazy intellectual shortcut, it is easy to tell oneself the fable that social networks alone are responsible. That is how the Avia law against hate speech gets hastily remobilized, even though it was struck down by the Conseil constitutionnel and would do more harm than good.

    How do you view the statements of certain politicians, notably Xavier Bertrand, who are seizing this moment to demand the end of online anonymity?
    First, it must be said that invoking anonymity is completely beside the point with regard to what happened in Conflans-Sainte-Honorine. Apart from the Twitter account that published the photo of Samuel Paty's severed head, every actor in the chain is identified. In truth, this Pavlovian reflex reveals two things: a political inability to understand what is at play in digital spaces, and the temptation of a liberticidal discourse, the desire for a takeover, for hyper-surveillance. In this respect, some of our politicians have visibly not moved past the Sarkozy-era syndrome that described the Internet as a "Far West" to be civilized.

    Still, it is hard to wave away the social networks' share of responsibility. Doesn't the problem lie in their architecture?
    The purpose of these spaces is to generate an attention rent, to be an advertising receptacle that triggers acts of purchase, whether of products or of opinions. I now regard social networks as "publicidal" spaces ["publicidaires", a coinage blending advertising and killing], meaning they kill any possibility of a discourse that is not hateful. As long as we do not change this model, economic and political, we will solve nothing. Go back twenty years: when Sergey Brin and Larry Page, the founders of Google, created PageRank, their search engine's algorithm, they justified their approach by arguing that the other engines, unlike theirs, were biased and dangerous for democracy. Look where that got us...

    What should have been done during the eight days between the posting of the parent's first video and the assassination of Samuel Paty?
    In an ideal world, where the platforms cared about producing regulated spaces of discourse, we could have put friction mechanisms in place to break the chains of contamination. The parent's video that set off this infernal spiral was shared by big, widely followed accounts. Not necessarily because they agreed with its substance, but because it was easy. It costs nothing cognitively and pays off a great deal socially.

    These shares are lost in a fog of intentionality. Take the example of the Pantin mosque: it first circulated the father's video and then, a few hours after the attack, finally deleted it, expressing its regrets and calling for support rallies. The time of editorialization has completely disappeared; no one knows any longer why a given piece of content is relayed. Yet social networks put communities that do not get along in close proximity. And sometimes this mechanism of hysterization triggers an impulse in someone.

    Should hate speech be pushed back into the private sphere, at the risk of fostering a balkanization, a fragmentation, that would prevent us from spotting the first sparks?
    Facebook and the like have chosen to foreground private sharing in what is called "dark social" [shadow traffic, made up of information exchanged outside the public sphere]. The platforms deliberately capture moments of surface agitation in order to feed private conversations.

    There is a key to understanding the problem of the hierarchy between public and interpersonal spaces: interface design. As Lawrence Lessig [a Harvard law professor and theorist of the free Internet] put it in 1999, "code is law", and those who build it cannot dispense with deep ethical reflection. It is no accident that so many Silicon Valley engineers today regret their creations, whether the Like button or infinite scrolling.

    As we can see, this debate pits the urgency of reaction against a salutary slowness. How can the two be reconciled?
    In a democracy of two billion inhabitants like Facebook, it is not normal that there are no stops, no slowdowns. I am not saying we should ration the number of pieces of content one can relay daily on social networks, or institute a points-based licence, but we must find a way to limit impulsive sharing. In my classes, for example, I make my students recreate hypertext links so that they make an intellectual effort, however small, before sending them. That said, we cannot blame the user alone.

    We are in a space-time that pulls us out of our lucid posture, and no one watches themselves using social networks, any more than we watch ourselves watching a film at the cinema. We must work on a cycle that runs from platform design, the first lever, through regulation, to education. Provided we do not expect everything from design (user experience, or UX design), from politics, or from the individual user, we can get there.

    #Olivier_Ertzscheid #Samuel_Paty #Médias_sociaux

  • US presidential election: "I fear we are pushing ourselves to the brink of a civil war because of Facebook"
    https://information.tv5monde.com/info/presidentielle-americaine-j-ai-peur-que-nous-allions-au-bord-d

    Tim Kendall, a former Facebook executive, testified before members of Congress on September 27, 2020. Mark Zuckerberg will be heard in turn on October 28. The question posed by the senators is whether Facebook creates division in the population, to the point of pushing toward a potential civil war. Explanations, and an interview with Dominique Boullier, sociologist and specialist in the attention economy.

    Tim Kendall: "Social media is tearing us apart"
    Tim Kendall worked for Facebook from 2006 to 2010 as head of "monetization" of the network. He was thus a key actor in building the advertising incentive machinery, the famous algorithms and interfaces that lock each member of the network inside an "information bubble". These systems reinforce opinions, alter emotions and push everyone to stay connected as long as possible, to take sides, to get outraged, or to believe in radical theories.

    Kendall thus testified on September 27 (full video in English on the site, Tim Kendall at 48:05) to address politicians' concerns about the outpourings of hate and aggression and the political polarization, unique in recent American history, that have raged on social networks since the start of the election campaign. The hearing's title was unambiguous: "Mainstreaming extremism: social media's role in radicalizing America."

    At the hearing, the former senior Facebook executive drew a parallel between the methods of the tobacco industry and those Facebook exploits to make people dependent: "Tobacco companies added ammonia to cigarettes to increase the speed with which nicotine reached the brain. Facebook's ability to deliver incendiary content to the right person, at the right time, in the right way; that is their ammonia."

    But Kendall also warned of the deleterious effects these tools were generating in society: "The social media services that I and others have built over the past 15 years have served to tear people apart with alarming speed and intensity. At the very least, we have eroded our collective understanding; at worst, I fear we are pushing ourselves to the brink of a civil war."

    In one of his interventions, Tristan Harris ends up drawing a terrible conclusion: "We have moved from a tools-based technology environment to an addiction- and manipulation-based technology environment. Social media isn't a tool waiting to be used. It has its own goals, and it has its own means of pursuing them, by using your psychology against you."
    Society's implosion through social networks?

    A possible implosion of society, through a blind confrontation between segments of populations manipulated by algorithms, is thus one of the outcomes Tim Kendall and Tristan Harris seriously envisage at the close of the documentary broadcast on Netflix, as in their testimony before the US Congress. This possibility of implosion is taken very seriously by most observers and specialists of social networks, who believe it is probably already under way and fear it will spread across the world. The network is global, the algorithms are the same everywhere on the planet, and their effects are similar...

    TV5MONDE: Twitter and Facebook, for example, generate similar effects on their users, with rising aggression and hateful, polarized speech, with no nuance and often partisan. How do you explain it?

    D.B.: Polarization is not tied to any intrinsic nastiness of those posting; it is tied to these platforms' valorization of certain expressions for the sake of reactivity. These networks were positioned that way. The algorithmic side is in play, but there is also the interface-design side, which valorizes your reaction and makes reacting easier for you. A simple example, tied to captology, with Twitter: if you are forced to take an element of a tweet, create a new one, paste your element into it, and only then repost, you have a stretch of time and several activities that slow your reaction down and thereby introduce a tiny bit of reflection and prioritization. Instead, all of that is eliminated and the retweet button is created.
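    Boullier's captology example amounts to replacing one-click amplification with a share flow that demands a small act of production first. A minimal sketch, with invented names and an arbitrary three-word threshold, not any platform's actual API:

```python
MIN_COMMENT_WORDS = 3  # arbitrary friction threshold for this sketch

def quote_share(original_post, comment):
    """Build a quote-share, refusing empty one-click reposts.

    The small cost of writing a comment reintroduces the reflection and
    prioritization that a bare retweet button eliminates: the user must
    produce something before amplifying.
    """
    if len(comment.split()) < MIN_COMMENT_WORDS:
        raise ValueError("write a short comment before sharing")
    # Quoted content is appended below the user's own words.
    return f"{comment}\n> {original_post}"
```

    The point is not the threshold itself but the extra step: any value greater than zero converts a reflex into a decision.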

    I worked for a long time in the 1990s on digital interfaces, to improve them and let users appropriate them, and today we realize this has been turned around: the new interfaces have made it easier for the platforms to appropriate users' behaviour. It is a perversion of the scientific work on one hand and of its purpose on the other. In the end, the principle that was retained is "don't worry about anything, let us guide you". In reality, you are being led toward the cognitive posture that is most profitable for the company, because that is the one that will raise engagement rates for the advertising platforms.

    We have forgotten the Facebook apéros of the years before 2010, with a horizontality, a conviviality that has disappeared. This model has drifted largely because of its economic motivations through advertising placements. It could still favour those aspects, and it does so a little with Facebook groups, which let people organize their activities and coordinate, as was the case during the Arab Spring. What must be torn down is the business model, because the design is made to serve that business model. If instead we organized a space of the same kind but built to facilitate encounters, allow moderation and valorize those who contribute enriched content, it would be totally different. These are different purposes.

    TV5MONDE: Are there solutions to prevent this radicalization via the platforms, and the possible implosion of hyper-connected societies?

    D.B.: We have to break what are called the contagion chains of these networks. Of course we must keep these tools of expression; we cannot take them away from people. Users will express themselves in radical terms and say stupid things; there will be fake news and hate speech, agreed. But it simply must not spread at the speed at which it currently spreads. So we are obliged to put in place mechanisms that slow that propagation down and force people to prioritize. We have to put an end to the reflexes, the instant reactions by retweet, share, like, and so on. We could say, for example, "you are entitled to 10 tweets and retweets per day, or per 24 hours," then eventually a single one, and likewise for Facebook posts, shares, and likes.

    At that point, individuals will be forced to choose. And when people choose, and people are not complete idiots, they drop what is most useless.
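    Boullier's proposed quota ("10 tweets and retweets per 24 hours") is, in engineering terms, a per-user sliding-window rate limit. As a minimal illustrative sketch only (the class name, limits, and API below are hypothetical, not any platform's actual implementation):

```python
import time
from collections import defaultdict, deque


class DailyQuota:
    """Sliding 24-hour quota: each user may perform at most
    `limit` actions (tweet, retweet, share, like) per window."""

    def __init__(self, limit=10, window=24 * 3600):
        self.limit = limit
        self.window = window
        self.events = defaultdict(deque)  # user -> timestamps of recent actions

    def allow(self, user, now=None):
        """Return True and record the action if the user is under quota."""
        now = time.time() if now is None else now
        q = self.events[user]
        # Drop actions that have aged out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) < self.limit:
            q.append(now)
            return True
        return False  # quota exhausted: the user must choose, not just react
```

    The design choice matters: a sliding window cannot be gamed by bursting at a day boundary, which is presumably the point of a friction mechanism meant to force prioritization rather than merely cap volume.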

    #Facebook #Médias_sociaux #Démocratie #Dominique_Boullier

  • #MEGHAN_MURPHY: If we can't all get along, let's at least cut the online vitriol.
    https://tradfem.wordpress.com/2020/10/07/si-nous-ne-pouvons-pas-toutes-nous-entendre-debarrassons-nous-au-

    I agree. And it is not only on the internet. Infighting and division are nothing new to feminism, and neither are control issues, political differences, gossip, and jealousy. But the behaviors that plague political movements and human beings have been amplified and made more toxic by social media. It is very easy to tweet something in anger, out of a desire for an ego boost or a hit of dopamine; easy to carry on online, in ways we never would in real life, about other people's behavior and activities, untethered from reality. Social media breeds division, drama, polarization, and hyperbole. As if we needed any more help getting there.

    We are all victims of this drift; it is practically unavoidable. If the film The Social Dilemma taught us one thing, it is that social media is designed for exactly this: these companies and apps aim to draw us in and keep us on Twitter, Instagram, Facebook, Tinder... (pick your poison). They want us coming back again and again, in search of engagement and validation. It is no surprise that today many of us choose what to post online not according to how interesting, productive, necessary, or even truthful it is, but according to how likely it is to rack up likes, followers, and retweets. We are addicted, and the pull of the particular form of validation we find online seems too strong to pass up for the sake of class, tact, strategy, or ethics. We have come to seek, above all else, those temporary rushes of attention, flattery, or shit-stirring, and it is clear that this has made a mess within feminism.


    Translation: #Tradfem
    Version originale : https://www.feministcurrent.com/2020/10/05/if-we-cant-all-get-along-lets-at-least-cut-the-online-vitriol
    #médias_sociaux #division_féministe #féminisme_radical #misogynie #pollution_des_médias #alliance_féministe

  • Forget TikTok. China’s Powerhouse App Is WeChat. - The New York Times
    https://www.nytimes.com/2020/09/04/technology/wechat-china-united-states.html

    Ms. Li said. “It felt like if I only watched Chinese media, all of my thoughts would be different.”

    Ms. Li had little choice but to take the bad with the good. Built to be everything for everyone, WeChat is indispensable.

    For most Chinese people in China, WeChat is a sort of all-in-one app: a way to swap stories, talk to old classmates, pay bills, coordinate with co-workers, post envy-inducing vacation photos, buy stuff and get news. For the millions of members of China’s diaspora, it is the bridge that links them to the trappings of home, from family chatter to food photos.

    Woven through it all is the ever more muscular surveillance and propaganda of the Chinese Communist Party. As WeChat has become ubiquitous, it has become a powerful tool of social control, a way for Chinese authorities to guide and police what people say, whom they talk to and what they read.

    As a cornerstone of China’s surveillance state, WeChat is now considered a national security threat in the United States. The Trump administration has proposed banning WeChat outright, along with the Chinese short video app TikTok. Overnight, two of China’s biggest internet innovations became a new front in the sprawling tech standoff between China and the United States.

    While the two apps are lumped in the same category by the Trump administration, they represent two distinct approaches to the Great Firewall that blocks Chinese access to foreign websites.

    The hipper, better-known TikTok was designed for the wild world outside of China’s cloistering censorship; it exists only beyond China’s borders. By hiving off an independent app to win over global users, TikTok’s owner, ByteDance, created the best bet any Chinese start-up has had to compete with the internet giants in the West. The separation of TikTok from its cousin apps in China, along with deep popularity, has fed corporate campaigns in the United States to save it, even as Beijing potentially upended any deals by labeling its core technology a national security priority.

    Though WeChat has different rules for users inside and outside of China, it remains a single, unified social network spanning China’s Great Firewall. In that sense, it has helped bring Chinese censorship to the world. A ban would cut dead millions of conversations between family and friends, a reason one group has filed a lawsuit to block the Trump administration’s efforts. It would also be an easy victory for American policymakers seeking to push back against China’s techno-authoritarian overreach.

    WeChat started out as a simple copycat. Its parent, the Chinese internet giant Tencent, had built an enormous user base on a chat app designed for personal computers. But a new generation of mobile chat apps threatened to upset its hold over the way young Chinese talked to one another.

    The visionary Tencent engineer Allen Zhang fired off a message to the company founder, Pony Ma, concerned that they weren’t keeping up. The missive led to a new mandate, and Mr. Zhang fashioned a digital Swiss Army knife that became a necessity for daily life in China. WeChat piggybacked on the popularity of the other online platforms run by Tencent, combining payments, e-commerce and social media into a single service.

    It became a hit, eventually eclipsing the apps that inspired WeChat. And Tencent, which made billions in profits from the online games piped into its disparate platforms, now had a way to make money off nearly every aspect of a person’s digital identity — by serving ads, selling stuff, processing payments and facilitating services like food delivery.

    While the Chinese government could use any chat app, WeChat has advantages. Police know well its surveillance capabilities. Within China most accounts are linked to the real identity of users.

    Ms. Li was late to the WeChat party. Away in Toronto when it exploded in popularity, she joined only in 2013, after her sister’s repeated urging.

    It opened up a new world for her. Not in China, but in Canada.

    She found people nearby similar to her. Many of her Chinese friends were on it. They found restaurants nearly as good as those at home and explored the city together. One public account set up by a Chinese immigrant organized activities. It kindled more than a few romances. “It was incredibly fun to be on WeChat,” she recalled.

    Now the app reminds her of jail. During questioning, police told her that a surveillance system, which they called Skynet, flagged the link she shared. Sharing a name with the A.I. from the Terminator movies, Skynet is a real-life techno-policing system, one of several Beijing has spent billions to create.

    Wary of falling into automated traps, Ms. Li now writes with typos. Instead of referring directly to police, she uses a pun she invented, calling them golden forks. She no longer shares links from news sites outside of WeChat and holds back her inclination to talk politics.

    Still, to be free she would have to delete WeChat, and she can’t do that. As the coronavirus crisis struck China, her family used it to coordinate food orders during lockdowns. She also needs a local government health code featured on the app to use public transport or enter stores.

    “I want to switch to other chat apps, but there’s no way,” she said.

    “If there were a real alternative I would change, but WeChat is terrible because there is no alternative. It’s too closely tied to life. For shopping, paying, for work, you have to use it,” she said. “If you jump to another app, then you are alone.”

    #WeChat #Chine #Surveillance #Médias_sociaux

  • Facebook funnelling readers towards Covid misinformation - study | Technology | The Guardian
    https://www.theguardian.com/technology/2020/aug/19/facebook-funnelling-readers-towards-covid-misinformation-study
    https://i.guim.co.uk/img/media/905ac886c6dc0f5a3d40eb514637a8cdf0255873/0_5_4703_2822/master/4703.jpg?width=1200&height=630&quality=85&auto=format&fit=crop&overlay-ali

    Facebook had promised to crack down on conspiracy theories and inaccurate news early in the pandemic. But as its executives promised accountability, its algorithm appears to have fuelled traffic to a network of sites sharing dangerous false news, campaign group Avaaz has found.

    False medical information can be deadly; researchers led by Bangladesh’s International Centre for Diarrhoeal Disease Research, writing in The American Journal of Tropical Medicine and Hygiene, have directly linked a single piece of coronavirus misinformation to 800 deaths.

    Pages from the top 10 sites peddling inaccurate information and conspiracy theories about health received almost four times as many views on Facebook as the top 10 reputable sites for health information, Avaaz warned in a report.

    “This suggests that just when citizens needed credible health information the most, and while Facebook was trying to proactively raise the profile of authoritative health institutions on the platform, its algorithm was potentially undermining these efforts,” the report said.

    A relatively small but influential network is responsible for driving huge amounts of traffic to health misinformation sites. Avaaz identified 42 “super-spreader” sites that had 28m followers generating an estimated 800m views.

    A single article, which falsely claimed that the American Medical Association was encouraging doctors and hospitals to over-estimate deaths from Covid-19, was seen 160m times.

    This vast collective reach suggested that Facebook’s own internal systems are not capable of protecting users from misinformation about health, even at a critical time when the company has promised to keep users “safe and informed”.

    “Avaaz’s latest research is yet another damning indictment of Facebook’s capacity to amplify false or misleading health information during the pandemic,” said British MP Damian Collins, who led a parliamentary investigation into disinformation.

    “The majority of this dangerous content is still on Facebook with no warning or context whatsoever … The time for [Facebook CEO, Mark] Zuckerberg to act is now. He must clean up his platform and help stop this harmful infodemic.”

    Some of the false claims were directly harmful: one, suggesting that pure alcohol could kill the virus, has been linked to 800 deaths, as well as 60 people going blind after drinking methanol as a cure. “In India, 12 people, including five children, became sick after drinking liquor made from toxic seed Datura (ummetta plant in local parlance) as a cure to coronavirus disease,” the paper says. “The victims reportedly watched a video on social media that Datura seeds give immunity against Covid-19.”

    Beyond the specifically dangerous falsehoods, much misinformation is merely useless, but can contribute to the spread of coronavirus, as with one South Korean church which came to believe that spraying salt water could combat the virus.

    “They put the nozzle of the spray bottle inside the mouth of a follower who was later confirmed as a patient before they did likewise for other followers as well, without disinfecting the sprayer,” an official later said. More than 100 followers were infected as a result.

    Among Facebook’s tactics for fighting disinformation on the platform has been giving independent fact-checkers the ability to put warning labels on items they consider untrue.

    Zuckerberg has said fake news would be marginalised by the algorithm, which determines what content viewers see. “Posts that are rated as false are demoted and lose on average 80% of their future views,” he wrote in 2018.

    But Avaaz found that huge amounts of disinformation slip through Facebook’s verification system, despite having been flagged by fact-checking organisations.

    They analysed nearly 200 pieces of health misinformation which were shared on the site after being identified as problematic. Fewer than one in five carried a warning label, with the vast majority – 84% – slipping through controls after they were translated into other languages, or republished in whole or part.

    “These findings point to a gap in Facebook’s ability to detect clones and variations of fact-checked content – especially across multiple languages – and to apply warning labels to them,” the report said.

    Two simple steps could hugely reduce the reach of misinformation. The first would be proactively correcting misinformation that users saw before it was labelled as false, by putting prominent corrections in their feeds.

    Recent research has found corrections like these can halve belief in incorrect reporting, Avaaz said. The other step would be to improve the detection and monitoring of translated and cloned material, so that Zuckerberg’s promise to starve the sites of their audiences is actually made good.

    A Facebook spokesperson said: “We share Avaaz’s goal of limiting misinformation, but their findings don’t reflect the steps we’ve taken to keep it from spreading on our services. Thanks to our global network of fact-checkers, from April to June, we applied warning labels to 98m pieces of Covid-19 misinformation and removed 7m pieces of content that could lead to imminent harm. We’ve directed over 2bn people to resources from health authorities and when someone tries to share a link about Covid-19, we show them a pop-up to connect them with credible health information.”

    #Facebook #Fake_news #Désinformation #Infodemics #Promesses #Culture_de_l_excuse #Médias_sociaux

  • The Second Act of Social-Media Activism | The New Yorker
    https://www.newyorker.com/culture/cultural-comment/the-second-act-of-social-media-activism

    Un article passionnant qui part des analyses de Zeynep Tufekci pour les reconsidérer à partir des mouvements plus récents.

    Some of this story may seem familiar. In “Twitter and Tear Gas: The Power and Fragility of Networked Protest,” from 2017, the sociologist Zeynep Tufekci examined how a “digitally networked public sphere” had come to shape social movements. Tufekci drew on her own experience of the 2011 Arab uprisings, whose early mobilization of social media set the stage for the protests at Gezi Park, in Istanbul, the Occupy action, in New York City, and the Black Lives Matter movement, in Ferguson. For Tufekci, the use of the Internet linked these various, decentralized uprisings and distinguished them from predecessors such as the nineteen-sixties civil-rights movement. Whereas “older movements had to build their organizing capacity first,” Tufekci argued, “modern networked movements can scale up quickly and take care of all sorts of logistical tasks without building any substantial organizational capacity before the first protest or march.”

    The speed afforded by such protest is, however, as much its peril as its promise. After a swift expansion, spontaneous movements are often prone to what Tufekci calls “tactical freezes.” Because they are often leaderless, and can lack “both the culture and the infrastructure for making collective decisions,” they are left with little room to adjust strategies or negotiate demands. At a more fundamental level, social media’s corporate infrastructure makes such movements vulnerable to coöptation and censorship. Tufekci is clear-eyed about these pitfalls, even as she rejects the broader criticisms of “slacktivism” laid out, for example, by Evgeny Morozov’s “The Net Delusion,” from 2011.

    “Twitter and Tear Gas” remains trenchant about how social media can and cannot enact reform. But movements change, as does technology. Since Tufekci’s book was published, social media has helped represent—and, in some cases, helped organize—the Arab Spring 2.0, France’s “Yellow Vest” movement, Puerto Rico’s RickyLeaks, the 2019 Iranian protests, the Hong Kong protests, and what we might call the B.L.M. uprising of 2020. This last event, still ongoing, has evinced a scale, creativity, and endurance that challenges those skeptical of the Internet’s ability to mediate a movement. As Tufekci notes in her book, the real-world effects of Occupy, the Women’s March, and even Ferguson-era B.L.M. were often underwhelming. By contrast, since George Floyd’s death, cities have cut billions of dollars from police budgets; school districts have severed ties with police; multiple police-reform-and-accountability bills have been introduced in Congress; and cities like Minneapolis have vowed to defund policing. Plenty of work remains, but the link between activism, the Internet, and material action seems to have deepened. What’s changed?

    The current uprisings slot neatly into Tufekci’s story, with one exception. As the flurry of digital activism continues, there is no sense that this movement is unclear about its aims—abolition—or that it might collapse under a tactical freeze. Instead, the many protest guides, syllabi, Webinars, and the like have made clear both the objectives of abolition and the digital savvy of abolitionists. It is a message so legible that even Fox News grasped it with relative ease. Rachel Kuo, an organizer and scholar of digital activism, told me that this clarity has been shaped partly by organizers who increasingly rely on “a combination of digital platforms, whether that’s Google Drive, Signal, Messenger, Slack, or other combinations of software, for collaboration, information storage, resource access, and daily communications.” The public tends to focus, understandably, on the profusion of hashtags and sleek graphics, but Kuo stressed that it was this “back end” work—an inventory of knowledge, a stronger sense of alliance—that has allowed digital activism to “reflect broader concerns and visions around community safety, accessibility, and accountability.” The uprisings might have unfolded organically, but what has sustained them is precisely what many prior networked protests lacked: preëxisting organizations with specific demands for a better world.

    What’s distinct about the current movement is not just the clarity of its messaging, but its ability to convey that message through so much noise. On June 2nd, the music industry launched #BlackoutTuesday, an action against police brutality that involved, among other things, Instagram and Facebook users posting plain black boxes to their accounts. The posts often included the hashtag #BlackLivesMatter; almost immediately, social-media users were inundated with even more posts, which explained why using that hashtag drowned out crucial information about events and resources with a sea of mute boxes. For Meredith Clark, a media-studies professor at the University of Virginia, the response illustrated how the B.L.M. movement had honed its ability to stick to a program, and to correct those who deployed that program naïvely. In 2014, many people had only a thin sense of how a hashtag could organize actions or establish circles of care. Today, “people understand what it means to use a hashtag,” Clark told me. They use “their own social media in a certain way to essentially quiet background noise” and “allow those voices that need to connect with each other the space to do so.” The #BlackoutTuesday affair exemplified an increasing awareness of how digital tactics have material consequences.

    These networks suggest that digital activism has entered a second act, in which the tools of the Internet have been increasingly integrated into the hard-won structure of older movements. Though, as networked protest grows in scale and popularity, it still risks being hijacked by the mainstream. Any urgent circulation of information—the same memes filtering through your Instagram stories, the same looping images retweeted into your timeline—can be numbing, and any shift in the Overton window means that hegemony drifts with it.

    In “Twitter and Tear Gas,” Tufekci wrote, “The Black Lives Matter movement is young, and how it will develop further capacities remains to be seen.” The movement is older now. It has developed its tactics, its messaging, its reach—but perhaps its most striking new capacity is a sharper recognition of social media’s limits. “This movement has mastered what social media is good for,” Deva Woodly, a professor of politics at the New School, told me. “And that’s basically the meme: it’s the headline.” Those memes, Woodly said, help “codify the message” that leads to broader, deeper conversations offline, which, in turn, build on a long history of radical pedagogy. As more and more of us join those conversations, prompted by the words and images we see on our screens, it’s clear that the revolution will not be tweeted—at least, not entirely.

    #Activisme_connecté #Black_lives_matter #Zeynep_Tufekci #Mèmes #Hashtag_movments #Médias_sociaux

  • The Civic Hijinks of K-pop’s Super Fans - Data & Society: Points
    https://points.datasociety.net/the-civic-hijinks-of-k-pops-super-fans-ae2e66e28c6

    K-pop fandoms, normally known for their dedication to South Korean music “idols,” made headlines this past month, between their social media manipulation to “defuse racist hashtags” and amplify the circulation of “petitions and fundraisers” for victims during the Black Lives Matter (BLM) movement, and their apparent foiling of Trump’s recent political rally in Tulsa, Oklahoma. The social media manipulation strategies of K-pop fandoms have been so impactful that hashtag trends such as #BanKpopAccounts have accused them of ruining user experiences and called to ban them. But some recent coverage on the power and sway that K-pop fans have over social media information ecologies has presented (unwittingly) truncated histories, (parochially) centered American K-pop fans, and cast these fan activities as somehow novel or even surprising.

    Yet, the opposite is true.

    K-pop fans, many of whom have mastered the power of social media manipulation and (mis)information via their intensely intimate relationships with their beloved idols, have a long history of utilizing their platforms in the service of social justice. It is absolutely necessary that the recent BLM activism of K-pop fans be historicized within this broader, global narrative, and that K-pop fans be recognized as more than just “bandwagoners” jumping at a media movement to simply “promote their faves.”

    South Korean entertainment companies recognized early on the transformative potentials of the internet, from the late-1990s uses of first generation Social Networking Site Cyworld to present-day mobilizations of social media. The K-pop industry played an influential role in the development of digital fandom, deploying social media services such as Twitter, Instagram, and the live-streaming app VLive to provide fans opportunities to interact directly with idols. For instance, it is routine for idols to interact with fans in live broadcast countdowns upon the release of each new song, just as it’s common for agencies to release poster and video teasers/trailers on Twitter and Instagram in the lead up to a ‘comeback’ or new release. Such intense social media interactions in turn boosted the strong sense of intimacy between idols and their fans, as well as allowed fans to regularly commune with each other in digital spaces. As a result, K-pop fans formed “tribes” who strategically draw upon the affordances of social media to promote their favorite idols on the world stage, allowing K-pop to go global.

    For instance, K-pop fans often facilitate ‘bulk pre-orders’ to increase album sales; host mass ‘streaming parties’ on YouTube, Spotify, and Shazam to increase music chart impact in a move known as “chart jacking”; plan “coordinated hashtag campaigns” on Twitter to signal boost their favorite group; or “keyword stuff” search terms on Twitter to alter SEO results and clear or bury bad press. Fans are also concerned over the wellbeing of idols, closely monitoring their personal safety and petitioning for agencies to take action, calling for fair representation in promotional material, and demanding for choreographies to be modified for the health of idols.

    However, idol support initiatives have also culminated in elaborate schemes, such as the BLACKPINK Starbucks hoax of April 2019: A rumour claimed that streaming any song from BLACKPINK would earn listeners a free drink from Starbucks through a digital voucher claimed via Twitter direct messaging or by showing “receipts” to the barista in the form of screen grabs of the streaming. Various Starbucks social media managers had their hands full clarifying this misinformation.

    K-pop fans have always been political

    K-pop fans deploy their networks and social media clout to consistently raise awareness of charitable causes, sharing resources across the globe to make the world a better place. K-pop fan activism within the BLM movement emerges from this broader history.

    Fans have mobilized support networks in the service of social justice as acts of cybervigilantism, with many clubs hosting charity events in honor of idols that are tied to these broader support projects. The recent Australian bushfires in January 2020 saw dozens of fandoms join forces to raise relief funds, with some even adopting wildlife in the name of their favorite idol. Fans of BTS alone have reportedly engaged with over 600 charity projects around the globe addressing a variety of issues. In fact, charity work is so essential to K-pop fandom that an app exists in South Korea where fans can record the amount of donations made on behalf of an idol group to develop a “charity angel” ranking.


    Social media campaigns have also regularly been hosted by K-pop fans seeking to hold K-pop stars and the industry accountable. As an expression of their strong support for idols, fans consistently call on K-pop groups to do better when they perceive that they have slipped up. For instance, fans were vocal in calling out racially insensitive performances such as when fans pressured girl group MAMAMOO to apologize for performing in blackface during a concert in 2017. Agencies, media outlets, and fandoms have also been called out for colorism and photo-editing idols’ images to preference fairer, whiter skin.


    Likewise, Black K-pop fans regularly express frustration at the persistent appropriation of Black culture and hip-hop fashion within the K-pop industry, for instance the persistent appropriation of braids, cornrows, and dreadlocks in K-pop styling. Recently, fans voiced dissatisfaction with BTS’s J-Hope, who was criticized for appropriating dreadlocks in the music video of the song “Chicken Noodle Soup ft. Becky G.” Indeed, the activism of K-pop fans within the BLM movement is situated within broader social media debates surrounding anti-blackness within the K-pop fandom itself.

    Apart from racism, several other K-pop fan initiatives focus on combating misogyny and abuse, in light of the rise of ‘molka’ or spycam incidents that prey on women and digital sex crimes (like the April 2020 Nth Room scandal) in South Korea. Considering the fact that young women make up a significant demographic in K-pop fandom, it is unsurprising that fans’ activism has evolved to also address discrimination against women around the world.
    K-pop fandom as subversive frivolity

    K-pop consumption is not an apolitical act and its fans are not disengaged or obsessive teenagers seeking to troll the world due to their sense of millennial ennui. Rather, K-pop fans in South Korea, Asia, and beyond are critical consumers who deliberately and explicitly act to address social justice concerns by harnessing their high visibility and strong community on social media networks. As noted by The Korea Herald reporter Hyunsu Yim, “the largely female, diverse & LGBT makeup” of K-pop fandoms are primed to push back against the “male dominant/less diverse/more right-wing” online discourses through their social media activism.

    The vernacular social media manipulation expertise of these fans has been honed since K-pop’s humble beginnings on websites and forums, where their fan activity is often cast as playful and feminized activity; but it is exactly this underestimation and under-valuation of K-pop fan networks, knowledge, and labor that has allowed millions of K-pop fandoms to evade sociocultural surveillance, optimize platforms’ algorithmic radars, and spread their messages far and wide in acts of subversive frivolity.

    Whether it is to persuade you to stream a song or to protest against social injustice, you can be sure that K-pop fandoms are always ready to mobilize, fueled by ferocious fan dedication, and remain extremely social media savvy.

    Dr. Crystal Abidin is Senior Research Fellow & ARC DECRA Fellow in Internet Studies at Curtin University (Perth, Australia). Learn more at wishcrys.com.

    Dr. Thomas Baudinette is Lecturer in International Studies, Department of International Studies: Languages and Cultures, Macquarie University (Sydney, Australia). Learn more at thomasbaudinette.wordpress.com.

    #K-pop #Culture_participative #Médias_sociaux #Politique

  • An exploration of the “Raoultsphère” on Facebook
    https://www.lemonde.fr/les-decodeurs/article/2020/07/03/une-exploration-de-la-raoultsphere-sur-facebook_6045017_4355770.html

    On the social network, groups supporting the Marseille infectious-disease specialist have drawn more than a million users since March. Who are the supporters of the promoter of hydroxychloroquine? A map of a large-scale and complex social phenomenon.

    #Facebook #Médias_sociaux #Didier_Raoult #Complotisme

  • The fake 👁👄👁 app that set Twitter abuzz supports Black Lives Matter
    https://www.ladn.eu/tech-a-suivre/emojis-fausse-appli-twitter

    A group of young American tech workers created a buzz around a fake app called 👁👄👁. A viral joke that turned into a political message, all within 48 hours.

    A mouth flanked by two wide-open eyes. That was all it took to intrigue the tech community for a few days. On Thursday, June 25 and Friday, June 26, several thousand people shared on Twitter the emoji combination 👁👄👁 followed by the phrase “It Is What It Is.”

    This strange trend was started by the site https://👁👄👁.fm and its associated Twitter account, @itiseyemoutheye, where the curious were invited to submit their email address, add 👁👄👁 to their Twitter name, and share the site’s URL on the network.

    A joke mocking ultra-exclusive apps...

    On Friday, June 26, a statement was finally published on https://👁👄👁.fm giving the full story: there is no app, and there never will be. The team behind the site, which describes itself as a group of young tech professionals, initially just wanted to have some fun by riffing on a TikTok meme. The idea was also to mock the tech world’s FOMO culture (the fear of missing out) and the artificial hype around certain invitation-only apps, such as the social network Clubhouse, reserved for a handful of Silicon Valley insiders. The prank recalls the one pulled by Oobah Butler a few years ago: the food critic managed to get a fake restaurant ranked number 1 on TripAdvisor.
    ... and standing up for the Black community

    But our emoji story doesn’t end there. The 👁👄👁.fm team decided to use the hype around their project for a good cause. People interested in 👁👄👁 are invited to donate to three organizations that support the Black community: the Loveland Foundation Therapy Fund, The Okra Project, and The Innocence Project. The team says $200,000 has already been raised. The site now sells merchandise featuring the emoji, with proceeds going to support the Black Lives Matter movement.

    #TikTok #Memes #Politique #Médias_sociaux

  • Reddit, Acting Against Hate Speech, Bans ‘The_Donald’ Subreddit - The New York Times
    https://www.nytimes.com/2020/06/29/technology/reddit-hate-speech.html

    SAN FRANCISCO — Reddit, one of the largest social networking and message board websites, on Monday banned its biggest community devoted to President Trump as part of an overhaul of its hate speech policies.

    The community or “subreddit,” called “The_Donald,” is home to more than 790,000 users who post memes, viral videos and supportive messages about Mr. Trump. Reddit executives said the group, which has been highly influential in cultivating and stoking Mr. Trump’s online base, had consistently broken its rules by allowing people to target and harass others with hate speech.

    “Reddit is a place for community and belonging, not for attacking people,” Steve Huffman, the company’s chief executive, said in a call with reporters. “‘The_Donald’ has been in violation of that.”

    Reddit said it was also banning roughly 2,000 other communities from across the political spectrum, including one devoted to the leftist podcasting group “Chapo Trap House,” which has about 160,000 regular users. The vast majority of the forums that are being banned are inactive.

    “The_Donald,” which has been a digital foundation for Mr. Trump’s supporters, is by far the most active and prominent community that Reddit decided to act against. For years, many of the most viral Trump memes that broke through to Facebook, Twitter and elsewhere could be traced back to “The_Donald.” One video, “The Trump Effect,” originated on “The_Donald” in mid-2016 before bubbling up to Mr. Trump, who tweeted it to his 83 million followers.

    Social media sites are facing a reckoning over the types of content they host and their responsibilities to moderate and police that content. While Facebook, Twitter, YouTube, Reddit and others originally positioned themselves as neutral sites that simply hosted people’s posts and videos, users are now pushing them to take steps against hateful, abusive and false speech on their platforms.

    Some of the sites have recently become more proactive in dealing with these issues. Twitter started adding labels last month to some of Mr. Trump’s tweets to refute their accuracy or call them out for glorifying violence. Snap also said it would stop promoting Mr. Trump’s Snapchat account after determining that his public comments off the site could incite violence.

    On Monday, the streaming website Twitch suspended Mr. Trump’s account for violating its policies against hateful conduct. Mr. Trump’s channel had rebroadcast one of his campaign rallies from 2015, in which he denigrated Mexicans and immigrants, among other streams. Twitch removed the videos from the president’s account.

    YouTube also said on Monday that it was barring six channels for violating its policies. They included those of two prominent white supremacists, David Duke and Richard Spencer, and American Renaissance, a white supremacist publication. Stefan Molyneux, a podcaster and internet commentator who had amassed a large audience on YouTube for his videos about philosophy and far-right politics, was also kicked off the site.

    Facebook, the world’s largest social network, has said it refuses to be an arbiter of content. The company said it would allow all speech from political leaders to remain on its platform, even if the posts were untruthful or problematic, because such content was newsworthy and in the public’s interest to read.

    Facebook has since come under increasing fire for its stance. Over the past few weeks, many large advertisers, including Coca-Cola, Verizon, Levi Strauss and Unilever, have said they plan to pause advertising on the social network because they were unhappy with its handling of hate speech and misinformation.

    Reddit, which was founded 15 years ago and has more than 430 million regular users, has long been one corner of the internet that was willing to host all kinds of communities. No subject — whether it was video games or makeup or power-washing driveways — was too small to discuss. People could simply sign up, browse the site anonymously and participate in any of the 130,000 active subreddits.

    Yet that freewheeling position led to many issues of toxic speech and objectionable content across the site, for which Reddit has consistently faced criticism. In the past, the company hosted forums that promoted racism against black people and openly sexualized underage children, all in the name of free speech.

    Mr. Huffman said users on “The_Donald” had frequently violated its first updated rule: “Remember the human.”

    Reddit executives said the site remained a place that they hoped could be a forum for civil political discourse in the future, as long as users played by its rules.

    “There’s a home on Reddit for conservatives, there’s a home on Reddit for liberals,” said Benjamin Lee, Reddit’s general counsel. “There’s a home on Reddit for Donald Trump.”

    #Reddit #Médias_sociaux #Politique

  • Reddit’s rules of conduct
    human_reddiquette - reddit.com
    https://www.reddit.com/wiki/human_reddiquette

    Reddiquette is an informal expression of the values of many redditors, as written by redditors themselves. Please abide by it the best you can. This is a shortened version that mainly focuses on civil discourse.
    Please do
    Remember the human. When you communicate online, all you see is a computer screen. When talking to someone you might want to ask yourself “Would I say it to the person’s face?” or “Would I get jumped if I said this to a buddy?”
    Adhere to the same standards of behavior online that you follow in real life.
    Read the rules of a community before making a submission. These are usually found in the sidebar.
    Moderate/Vote based on quality, not opinion. Well written and interesting content can be worthwhile, even if you disagree with it.
    Consider posting constructive criticism / an explanation when you downvote something, and do so carefully and tactfully.
    Use an “Innocent until proven guilty” mentality. Unless there is obvious proof that a submission is fake, or is whoring karma, please don’t say it is. It ruins the experience for not only you, but the millions of people that browse reddit every day.
    Please do not
    Post someone’s personal information, or post links to personal information. This includes links to public Facebook pages and screenshots of Facebook pages with the names still legible. We all get outraged by the ignorant things people say and do online, but witch hunts and vigilantism hurt innocent people too often, and such posts or comments will be removed. Users posting personal info are subject to an immediate account deletion. If you see a user posting personal info, please contact the admins. Additionally, on pages such as Facebook, where personal information is often displayed, please mask the personal information and personal photographs using a blur function, erase function, or simply block it out with color. When personal information is relevant to the post (i.e. comment wars) please use color blocking for the personal information to indicate whose comment is whose.
    Do not repost deleted/removed information. Remember that comment someone just deleted because it had personal information in it or was a picture of gore? Resist the urge to repost it. It doesn’t matter what the content was. If it was deleted/removed, it should stay deleted/removed.
    Be intentionally rude at all. By choosing not to be rude, you increase the overall civility of the community and make it better for all of us.
    Conduct personal attacks on other commenters. Ad hominem and other distracting attacks do not add anything to the conversation.
    Start a flame war. Just report and “walk away”. If you really feel you have to confront them, leave a polite message with a quote or link to the rules, and no more.
    Insult others. Insults do not contribute to a rational discussion. Constructive Criticism, however, is appropriate and encouraged.
    Troll. Trolling does not contribute to the conversation.

    #Reddit #Comportement #Médias_sociaux

  • OnlyFans, the paid Instagram that could revolutionize the porn industry
    https://www.ladn.eu/media-mutants/reseaux-sociaux/onlyfans-instagram-payant-revolutionner-industrie-porno

    With sign-ups rising 75% during the month of March (an estimated 35 million registered users) and more than 105 million tweets on the subject, OnlyFans is THE winning social platform of the coronavirus crisis. This paid Instagram has established itself as a major new player on the web, and more specifically in the porn industry, with one goal in its sights: a fairer economic model for women.
    Crowdfunding for porn

    Created in 2016 by a discreet technology company, Fenix International Limited, OnlyFans was originally meant to compete with other crowdfunding services such as Patreon or Tipeee. The principle is in fact much the same: once registered on the platform, users can pick a content creator and subscribe to her feed for a fee of 5 to 20 dollars a month. But unlike on conventional social networks, on OnlyFans it is possible to post nude photos or pornographic videos.

    OnlyFans has thus naturally become a platform centered on that practice, even if other themes exist at the margins. “You could say that nudity or porn makes up 85% of the content,” explains Jean-Baptiste Bourgeois, strategic planner at We Are Social. “But you also find yoga coaches, dancers, and performers, notably in striptease. In any case, you have to understand that 95% of the content creators are women and that their audience is 95% male.”
    When Beyoncé anoints OnlyFans

    Until 2020, the platform remained relatively under the radar. The explosion came in March 2020, driven in particular by lockdown, and several factors explain the phenomenon. Porn film shoots were suddenly banned, and many performers in the industry took refuge on the network to secure an income. Another event also cemented OnlyFans’ popularity: in the track “Savage Remix,” released on March 16, the singer Beyoncé mentions the network in her verse, “On that Demon Time, she might start a OnlyFans.” “Demon Time” is itself a phenomenon that began with the Covid-19 crisis and the closure of strip clubs in major American cities: many performers joined forces to offer erotic dances in Instagram live videos, partnering with the app Cash App to get paid. “Before Beyoncé, OnlyFans was a niche network,” Jean-Baptiste Bourgeois continues. “Thanks to her, it became a cool platform.”

    From that moment on, the hashtag #OnlyFans took off.

    #Médias_sociaux #Pornographie #Only_Fans

  • Twitter relaunches account verification
    https://www.rtl.fr/actu/futur/twitter-relance-la-certification-des-comptes-7800587025

    Twitter is going to relaunch its “verified accounts” feature. The social network has confirmed that it is working on an overhaul of its account-verification system, and obtaining the blue badge will become more transparent.

    It was an engineer, Jane Manchun Wong, who spotted a new “Request verification” option in the settings of the Twitter app, in the “Personal information” section. The information was confirmed by the social network a little later.

    Twitter thus seems to be moving toward “individual verification.” The overhaul should come with a guide explaining to users how to apply and how the process works. For now, the network is not accepting any further requests and confirms that the program is on hold.

    The little blue badge has always sparked debate. Ostensibly reserved for public figures and accounts of public interest, verification was suspended by Twitter after Jason Kessler, the activist behind the Unite the Right rally in Charlottesville, tweeted comments about the death of Heather Heyer, who was killed in the violence at that demonstration.

    Verification was meant to authenticate identity & voice but it is interpreted as an endorsement or an indicator of importance. We recognize that we have created this confusion and need to resolve it. We have paused all general verifications while we work and will report back soon
    — Twitter Support (@TwitterSupport) November 9, 2017

    In early 2020, verification badges were granted to public-health officials to prove the authenticity of their accounts amid the coronavirus pandemic.

    #Twitter #Certification #Médias_sociaux

  • The Case for Social Media Mobs - The Atlantic
    https://www.theatlantic.com/technology/archive/2020/05/case-social-media-mobs/612202

    by Zeynep Tufekci

    There is no doubt that social-media fury can go wrong. In one infamous instance, a young woman made a joke to her small circle on Twitter, just before boarding a plane to South Africa, about white people not getting AIDS. The joke was either racist or making fun of racism depending on your interpretation, but Twitter didn’t wait to find out. By the time the woman had landed, her name was trending worldwide, and she’d been fired from her job.

    Throngs on social media violate fundamental notions of fairness and due process: People may be targeted because of a misunderstanding or an out-of-context video. The punishment online mobs can mete out is often disproportionate. Being attacked and ridiculed by perhaps millions of people whom you have never met, and against whom you have no defenses, can be devastating and lead to real trauma.

    The vagaries of human nature and the scale and algorithms of social-media platforms fuel case after case of people finding themselves in the midst of such whirlwinds, but sometimes these mobs perform an important function. Sometimes the social-media mob isn’t just justified or understandable, but necessary because little else is available to protect the real victims. Such is the case with Amy Cooper, the woman now famous for making a false police report claiming that an African American man was threatening her life, when in fact he had merely asked her to leash her dog in Central Park, where he was bird-watching.

    Deterrence is an important focus here, because the consequences of these fake cries can be dire. Black Americans have suffered a range of fates when police arrive thinking they’re dangerous from the outset, whether it’s needless arrest or being killed on the spot, like 12-year-old Tamir Rice, whom a police officer shot within two seconds of getting out of his (still not fully stopped) patrol car. Just this week, a black man in Minneapolis, George Floyd, was choked to death by a police officer who pressed his knee on Floyd’s neck for more than seven minutes while Floyd repeatedly said, “I can’t breathe,” and bystanders begged the officer to stop, to no avail.

    Amy Cooper’s case is remarkably straightforward. We don’t need to read her mind or speculate about her motives. She tells us exactly what they are. The minute-long video of the encounter, filmed by the bird-watcher, Christian Cooper (no relation), starts with Amy Cooper walking up to and lunging at him. He steps back, saying, “Please don’t come close to me.” She lunges at him again and demands that he stop recording, and he steps back again. Amy Cooper then looks at him, takes out her phone, and matter-of-factly tells him, “I’m going to call the cops, and I’m going to tell them there’s an African American man threatening my life.” Christian Cooper surely knows his own race and did not need a reminder. Her statement was meant as a deliberate threat.

    But life doesn’t end there. Amy Cooper’s 911 call was realistic enough that an NYPD unit showed up to what they thought was a “possible assault.” A tall black man suspected of assault, perhaps holding a shiny black object—bird-watching binoculars—may not even have had the two seconds Tamir Rice had. Thankfully, Christian Cooper had left by then, otherwise it might have been his name, not hers, that became a hashtag.

    During the Arab Spring and its aftermath, which I studied in the field as a scholar, in places such as Tahrir Square in Cairo and Taksim Gezi Park in Istanbul, I witnessed numerous examples of social-media fury as protesters’ only tool of deterrence against wrongdoing by the powerful. Does it work? Not always, but sometimes there’s nothing else. For example, in the years before millions took to Egypt’s streets in 2011, many videos of police torturing victims surfaced and went viral online, provoking anger. Online comments may have had no teeth against the Egyptian police in such a repressive state, but they made an important statement, the only statement available to the otherwise voiceless, powerless masses. Sometimes the social-media mob is the voice of the unheard, and sometimes it’s the only one they have.

    What Amy Cooper did was swatting-adjacent in intent, execution, and possible consequences: calling 911 to make a false report of being in danger as a way to target someone. As a result of the publicity, she was fired from her job as a vice president at an investment firm, and she “voluntarily” surrendered her dog to the shelter she had adopted him from. I’m sure it’s a difficult time for her, but is it enough of a deterrent to future Amy Coopers? Absent a prosecution, I’m not so sure. And NYPD officials have already told us that they are “not going to pursue” any charges against her, that they have “bigger fish to fry,” and that the district attorney “would never prosecute that.”

    If protecting black people’s lives from blatant false reports that may endanger them is not big enough fish to fry, what is? Social-media rage is not an unalloyed good. It has its excesses. But until there is sufficient lawful deterrence for this particular crime, I’m not ready to condemn this mob or this fury.

    #Zeynep_Tufekci #Swatting #Media_mob #Racisme #Médias_sociaux

  • Trump, Twitter, and the failed politics of appeasement

    https://link.wired.com/view/5cec29ba24c17c4c6465ed0bc6h9l.wnj/55e32496

    by Steven Levy

    Lately, my pandemic reading has included Munich, a historical novel by Robert Harris about the tragic 1938 attempt by UK prime minister Neville Chamberlain to appease Adolf Hitler, hoping to stave off a world war that the Führer was hell-bent on triggering. Chamberlain’s efforts (which Harris portrays sympathetically) were doomed.

    That reading now has an odd resonance with current events. For years, Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey have donned kid gloves to handle complaints of conservative bias from Donald Trump, other Republicans, and far-right wingnuts. Despite this appeasement, the executives are now facing a Trump executive order that will potentially impose government controls on what users can and cannot say on their platforms.

    Specifically, Trump is attempting to unilaterally reinterpret the meaning of Section 230, the provision of the Telecommunications Act of 1996 that gives the platforms the ability to police user-created content on their sites for safety and security without bearing legal responsibility for anything those billions of people might say. His order explicitly echoes his claim, a bogus one, that the platforms are using the 1996 provision to censor conservatives. The order purports to give the government the power to strip companies of their protection under Section 230. Trump also wants to use something called the “Tech Bias Reporting Tool” to examine platforms for political bias and report offenders to the DOJ and FTC for possible action. It’s a bold move that would create government monitors to make sure Facebook, Twitter, and the rest give conservative speech more than its due. (One hopes that if this does come to pass, the courts will overturn the effort because, well, the Constitution.)

    The longstanding claim that the platforms censor conservative speech is ridiculous. Facebook and Twitter remove content that violates community standards by spreading harmful misinformation or hate speech. A lot of that comes from elements of the right wing. Yeah, those standards aren’t perfect, and those platforms make mistakes in executing them, but there’s never been any evidence of an algorithmic bias. But instead of vigorously defending themselves, the leaders of the platforms keep assuring politicians that they take those gripes very seriously.

    Trump himself gets a pass when it comes to moderation because what a president says is newsworthy. That’s a defendable stance, but as he increasingly violates standards and norms, his posts have become a firehose of toxicity. In 2017, Dorsey told me, “I think it’s really important that we maintain open channels to our leaders, whether we like what they’re saying or not, because I don’t know of another way to hold them accountable.” He also implied that newsworthiness might have to be balanced with community standards. That was many tweets ago, and it wasn’t until this week that Twitter provided a fact-check to a Trump tweet that told falsehoods about voting by mail. (Still, Twitter left standing a Trump tweet spreading a bogus charge that former congressperson Joe Scarborough once killed an aide.)

    Zuckerberg has given Trump and other conservatives an even wider berth, beginning with his 2015 decision to leave up Trump’s anti-Muslim post that seemingly violated the company’s hate speech policy. During the 2016 election, Facebook did not remove false news stories from make-believe publications, even though it was clear that such information overwhelmingly benefited Trump. Despite this, the right kept complaining of bias, with Republicans blasting Zuckerberg in his April 2018 appearance in Congress. Zuckerberg knew full well that there was no statistical basis for the charge. But when I asked him about that soon after, his response was shockingly timid. “That depth of concern that there might be some political bias really struck me,” he said. “I was like, ‘Wow, we need to make sure we bring in independent, outside folks to help us do an audit and give us advice on making sure our systems are not biased in ways that we don’t understand.’”

    Later, Facebook commissioned a study led by former Republican senator Jon Kyl, which offered no data to back up any charge of systematic bias. Instead of insisting that this put an end to the complaints, Facebook made some general adjustments to its policies that gave the anecdotal gripes in the report more credibility than they warranted. Appeasement!

    Look, I get it—who wants to take on the president and the ruling party, especially when regulation is in the air? But instead of avoiding conflict, Facebook and Twitter leaders should have been emphasizing that they have just as much right to set their own standards as television stations, newspapers, and other corporations. Despite the fact that they are popular enough to be considered a “public square,” they are still private businesses, and the government has no business determining what legal speech can and cannot occur there. That is the essence of the First Amendment. But even as Mark Zuckerberg goes on about how he values free expression—as he was doing on television the same day Trump issued his order—he still refrains from demanding that the government respect Facebook’s own right to free speech.

    To be sure, Trump is wading—no, make that belly-flopping—into a controversy over internet speech that is already fraught with intractable problems. The very act of giving bullhorns to billions is both a boon and a menace. Even with the purest intentions—and obviously those growth-oriented platforms are not pure—figuring out how to deal with it involves multiple shades of gray. But the current threat comes in clear black and white: the president of the United States is attempting a takeover of internet speech and asserting a federal privilege to topple truth itself.

    Munich has failed. It’s time for the internet moguls to stop acting like Chamberlain—and start channeling Churchill.

    #Trump #Twitter #Médias_sociaux #Régulation

    • But instead of avoiding conflict, Facebook and Twitter leaders should have been emphasizing that they have just as much right to set their own standards as television stations, newspapers, and other corporations. Despite the fact that they are popular enough to be considered a “public square,” they are still private businesses, and the government has no business determining what legal speech can and cannot occur there.

      Precisely not: it is one or the other. Television stations and newspapers are responsible for what they publish. The platforms are means of communication, and are therefore shielded from liability for content published by third parties.

      Hence Chemla’s position is worth recalling: either the platforms are neutral carriers, and can therefore claim editorial non-liability, or they intervene in what is published, which makes them publishers, responsible for the content.

    • Yes, and that is what makes them a “public square.” That is the whole complexity of the matter, because at the same time they are precisely not “public”: they are guided (their algorithms are written to...) by their own interests.
      I am noting viewpoints here, which are not necessarily my own ;-) I am stockpiling information for the day I find the courage to write.

  • Trump’s Attacks on Twitter Are Part of a Plot to Keep Social Media in His Pocket – Mother Jones
    https://www.motherjones.com/2020-elections/2020/05/donald-trump-attacks-twitter

    If you’re a Twitter user, by now you’ve probably seen the news. After years of complaints about President Donald Trump broadcasting falsehoods over the platform, the company finally took a small step to mitigate his misinformation. On Tuesday, the social media giant appended a “get the facts” link to two Trump tweets in which he claimed that mail-in ballots would result in fraudulent election outcomes.

    The link led to a page with several bullet points that refute the president—who, to be clear, was not telling the truth—along with links to reputable news stories providing context and correct information.

    Since 2016, the social media companies have taken some steps to rein in the worst behavior on their services, including setting up guardrails specifically related to disinformation around voting. Facebook banned false information and suppressive content about elections in ads. Twitter rolled out its election integrity policy in January. Meanwhile, Trump, his campaign, and Republican lawmakers have engaged in a campaign to keep those guardrails off. Part of this pressure campaign is backed by the threat of regulation: the Justice Department under Attorney General Bill Barr is overseeing antitrust investigations into major social media companies, and Republican lawmakers, claiming conservative censorship without evidence, have proposed regulations.

    And if such policies are ever acted upon, as they were Tuesday, the claims of bias will resonate, because Trump has already created a context in which his supporters believe social media is working against them. I wrote about how this strategy has been deployed in my recent profile of Brad Parscale, Trump’s campaign manager:

    These tweets do exactly what the campaign has prepared to do in its battle with social media companies: Trump accuses them of bias, threatens regulation, and then goes ahead and repeats the false claim he was aiming to spread, in this case that voting by mail will delegitimize the election. He reminds Twitter of his power over the platform, then dares it to once again fact-check his false claim. How will Twitter respond?

    We already know how the company and its peers have responded to this exact treatment over the last several years. Caught between civil rights groups pushing to get voter suppression and hate off the platforms, and Trump and his crew on the right pushing to let repugnant content stay up, the platforms have largely catered to Trump’s concerns.

    Facebook, for example, now allows politicians to lie in their posts and ads. On Tuesday, the Wall Street Journal reported that Facebook had internally determined that its algorithms increased polarization and radicalization but chose to do nothing, largely because of pressure from Republicans. And this Tuesday, the same day Twitter finally put its “get the facts” tag on Trump’s tweets, it refused to take down his tweets accusing the MSNBC host Joe Scarborough of involvement in the 2001 death of an employee. The president continued making the claim on Twitter on Wednesday.

    Facebook, in particular, has repeatedly shown itself to be more interested in pleasing conservatives than cracking down on extremists using its platform. While the company’s own content policies ban hate groups, for example, just last week, a report by the Tech Transparency Project found 153 white supremacist groups’ pages on Facebook. (Many were removed after the report’s publication.)

    With Trump reportedly growing more worried about his re-election prospects and Election Day less than six months away, his campaign is expected to unleash its war chest. That will include massive spending, particularly on Facebook, where he’ll seek to connect with his audience without being filtered by the platform. His attack on Twitter is just the latest chapter in a years-long campaign to work the refs to make sure the social media giants feel they have no other option.

    #Trump #Twitter #Médias_sociaux #Politique

  • How to Set Your Facebook, Twitter, and Instagram to Control Who Sees What | WIRED
    https://www.wired.com/story/lock-down-social-media-privacy-security-facebook-twitter

    12 rules for protecting your privacy as well as possible on social media (but not “from” social media).

    Social media can bring us together, and even distract us sometimes from our troubles—but it also can expose us to scammers, hackers, and...less than pleasant experiences.

    Don’t panic though: you can keep the balance towards the positive with just a few common-sense steps, and we have some of the most vital ones below. When it comes to staying safe on Facebook, Instagram and Twitter, a lot of it is common sense, with a sprinkling of extra awareness.

    #Médias_sociaux #Vie_privée

  • How covid-19 conspiracy theorists are exploiting YouTube culture | MIT Technology Review
    https://www.technologyreview.com/2020/05/07/1001252/youtube-covid-conspiracy-theories/

    Covid-19 conspiracy theorists are still getting millions of views on YouTube, even as the platform cracks down on health misinformation.

    The answer was obvious to Kennedy, one of many anti-vaccination leaders trying to make themselves as visible as possible during the covid-19 pandemic. “I’d love to talk to your audience,” he replied.

    Kennedy told Bet-David that he believes his own social-media accounts have been unfairly censored; making an appearance on someone else’s popular platform is the next best thing. Bet-David framed the interview as an “exclusive,” enticingly titled “Robert Kennedy Jr. Destroys Big Pharma, Fauci & Pro-Vaccine Movement.” In two days, the video passed half a million views.

    As of Wednesday, advertisements through YouTube’s ad service were playing before the videos, and Bet-David’s merchandise was for sale in a panel below the video’s description. Two other interviews, in which anti-vaccine figures aired several debunked claims about coronavirus and vaccines (largely unchallenged by Bet-David), were also showing ads. Bet-David said in an interview that YouTube had limited ads on all three videos, meaning they can generate revenue, but not as much as they would if they were fully monetized.

    We asked YouTube for comment on all three videos on Tuesday afternoon. By Thursday morning, one of the three (an interview with anti-vaccine conspiracy theorist Judy Mikovits) had been deleted for violating YouTube’s medical misinformation policies. Before it was deleted, the video had more than 1 million views.

    YouTube said that the other two videos were borderline, meaning that YouTube decided they didn’t violate rules, but would no longer be recommended or show up prominently in search results.

    I asked Bet-David whether he felt any responsibility over airing these views on his channel—particularly potentially harmful claims by his guests, urging viewers to ignore public health recommendations.

    “I do not,” he said. “I am responsible for what comes out of my mouth. I’m not responsible for what comes out of your mouth.”

    For him, that lack of responsibility extends to misinformation that could be harmful to his audience. He is just giving people what they are asking for. That, in turn, drives attention, which allows him to make money from ads, merchandise, speaking gigs, and workshops. “It’s up to the audience to make the decision for themselves,” he says. Besides, he thinks he’s done interviewing anti-vaccine activists for now. He’s trying to book some “big name” interviews of what he termed “pro-vaccine” experts.

    #YouTube #Complotisme #Vaccins #Médias_sociaux #Fake_news

  • Inside the Early Days of China’s Coronavirus Coverup | WIRED
    https://www.wired.com/story/inside-the-early-days-of-chinas-coronavirus-coverup

    Seasoned journalists in China often say “Cover China as if you were covering Snapchat”—in other words, screenshot everything, under the assumption that any given story could be deleted soon. For the past two and a half months, I’ve been trying to screenshot every news article, social media post, and blog post that seems relevant to the coronavirus. In total, I’ve collected nearly 100 censored online posts: 40 published by major news organizations, and close to 60 by ordinary social media users like Yue. In total, the number of Weibo posts censored and WeChat accounts suspended would be virtually uncountable. (Despite numerous attempts, Weibo and WeChat could not be reached for comment.)

    Taken together, these deleted posts offer a submerged account of the early days of a global pandemic, and they indicate the contours of what Beijing didn’t want Chinese people to hear or see. Two main kinds of content were targeted for deletion by censors: journalistic investigations of how the epidemic first started and was kept under wraps in late 2019, and live accounts of the mayhem and suffering inside Wuhan in the early days of the city’s lockdown, as its medical system buckled under the world’s first hammerstrike of patients.

    It’s not hard to see how these censored posts contradicted the state’s preferred narrative. Judging from these vanished accounts, the regime’s coverup of the initial outbreak certainly did not help buy the world time, but instead apparently incubated what some have described as a humanitarian disaster in Wuhan and Hubei Province, which in turn may have set the stage for the global spread of the virus. And the state’s apparent reluctance to show scenes of mass suffering and disorder cruelly starved Chinese citizens of vital information when it mattered most.

    On January 20, 2020, Zhong Nanshan, a prominent Chinese infectious disease expert, essentially raised the curtain on China’s official response to the coronavirus outbreak when he confirmed on state television that the pathogen could be transmitted from human to human. Zhong was, in many ways, an ideal spokesperson for the government’s effort; he had become famous for being a medical truth-teller during the 2003 SARS outbreak.

    Immediately following Zhong’s announcement, the Chinese government allowed major news organizations into Wuhan, giving them a surprising amount of leeway to report on the situation there. In another press conference on January 21, Zhong praised the government’s transparency. Two days after that, the government shut down virtually all transportation into and out of Wuhan, later extending the lockdown to other cities.

    The sequence of events had all the appearances of a strategic rollout: Zhong’s January 20 TV appearance marked the symbolic beginning of the crisis, to which the government responded swiftly, decisively, and openly.

    But shortly after opening the information floodgates, the state abruptly closed them again—particularly as news articles began to indicate a far messier account of the government’s response to the disease. “The last couple of weeks were the most open Weibo has ever been and [offered] the most freedom many media organizations have ever enjoyed,” one Chinese Weibo user wrote on February 2. “But it looks like this has come to an end.”

    On February 5, a Chinese magazine called China Newsweek published an interview with a doctor in Wuhan, who said that physicians were told by hospital heads not to share any information at the beginning of the outbreak. At the time, he said, the only thing that doctors could do was to urge patients to wear masks.

    Various frontline reports that were later censored supported this doctor’s descriptions: “Doctors were not allowed to wear isolation gowns because that might stoke fears,” said a doctor interviewed by the weekly publication Freezing Point. The interview was later deleted.

    By January, according to Caixin, a gene sequencing laboratory in Guangzhou had discovered that the novel virus in Wuhan shared a high degree of similarity with the virus that caused the SARS outbreak in 2003; but, according to an anonymous source, Hubei’s health commission promptly demanded that the lab suspend all testing and destroy all samples. On January 6, according to the deleted Caixin article, China’s National Center for Disease Control and Prevention initiated an “internal second-degree emergency response”—but did not alert the public. Caixin’s investigation disappeared from the Chinese internet only hours after it was published.

    Among journalists and social critics in China, the 404 error code, which announces that the content on a webpage is no longer available, has become a badge of honor. “At this point, if you haven’t had a 404 under your belt, can you even call yourself a journalist?” a Chinese reporter, who requested anonymity, jokingly asked me.

    However, the crackdown on reports out of Wuhan was even more aggressive against ordinary users of social media.

    On January 24, a resident posted that nurses at a Hubei province hospital were running low on masks and protective goggles. Soon after that post was removed, another internet user reposted it and commented: “Sina employees—I’m begging you to stop deleting accounts. Weibo is an effective way to offer help. Only when we are aware of what frontline people need can we help them.”

    Only minutes later, the post was taken down. The user’s account has since vanished.

    But the real war between China’s censors and its social media users began on February 7.

    That day, a Wuhan doctor named Li Wenliang—a whistleblower who had raised alarms about the virus in late December, only to be reprimanded for “spreading rumors”—died of Covid-19.

    Within hours, his death sparked a spectacular outpouring of collective grief on Chinese social media—an outpouring that was promptly snuffed out, post by post, minute by minute. With that, grief turned to wrath, and posts demanding freedom of speech erupted across China’s social media platforms as the night went on.

    A number of posts directly challenged the party’s handling of Li’s whistleblowing and the government’s relentless suppression of the freedom of speech in China. Some Chinese social media users started to post references to the 2019 Hong Kong protests, uploading clips of “Do You Hear the People Sing?” from Les Misérables, which became a protest anthem during last year’s mass demonstrations. Even more daringly, some posted photos from the 1989 Tiananmen Square protest and massacre, one of the most taboo subjects in China.

    One image that resurfaced showed a banner from the 1989 protest that reads: “We shall not let those murderers stand tall so they will block our wind of freedom from blowing.”

    The censors frantically kept pace. In the span of a quarter hour, from 23:16 to around 23:30, over 20 million searches for information on the death of Li Wenliang were winnowed down to fewer than 2 million, according to The Initium, a Hong Kong-based outlet. The #DrLiWenLiangDied topic was dragged from number 3 on the trending topics list down to number 7 within roughly the same period.

    Since the night of February 7, whole publications have fallen to the scythe. On January 27, an opinion blog called Dajia published an article titled “50 Days into the Outbreak, The Entire Nation is Bearing the Consequence of the Death of the Media.” By February 19, the entire site was shut down, never to resurface.

    On March 10, an article about another medical whistleblower in Wuhan—another potential Li—was published and then swiftly wiped off the internet, which began yet another vast cat-and-mouse game between censors and Chinese social media users. The story, published by People, profiled a doctor, who, as she put it, had “handed out the whistle” by alerting other physicians about the emergence of a SARS-like virus in late December. The article reported that she had been scolded by hospital management for not keeping the information a secret.

    Soon after it was deleted, Chinese social media users started to recreate the article in every way imaginable: They translated it into over 10 languages; transcribed the piece in Morse code; wrote it out in ancient Chinese script; incorporated its content into a scannable QR code; and even rewrote it in Klingon—all in an effort to evade the censorship machine. All of these efforts were eradicated from the internet.

    But it’s unlikely that the masses of people who watched posts being expunged from the internet will forget how they were governed in the pandemic. On March 17, I picked up my phone, opened my Weibo account, and typed out the following sentence: “You are waiting for their apology, and they are waiting for your appreciation.” The post promptly earned me a 404 badge.

    Shawn Yuan is a Beijing-based freelance journalist and photographer. He travels between the Middle East and China to report on human rights and political issues.

    #Chine #Censure #Médias_sociaux #Journalisme

  • Pourquoi les soignants ont rejoint les ados sur Tik Tok
    https://www.lefigaro.fr/actualite-france/pourquoi-les-soignants-ont-rejoint-les-ados-sur-tik-tok-20200430

    #jerestechezmoi … and I film myself set to music. For more than a month, the Tik Tok app has been a window of escape during this period of lockdown. And health-care workers have joined the teens there to entertain themselves.

    “I had stopped using it,” explains Alexandra, a 16-year-old high-school student. “But with the lockdown, all my friends signed back up. It keeps us busy.” It has to be said that since its launch in 2016, the app has mainly won over the youngest users, who film themselves reproducing choreographies and lip-synced songs. But the former Musical.ly has seen its audience broaden, totaling 365 million downloads since January 1 on Apple’s App Store and Google Play, according to Sensor Tower’s quarterly report.

    Sarah, a 29-year-old history researcher, admits she “signed up because there were hours to kill”; she acknowledges spending up to three or four hours at a stretch on certain weekends with her roommates, reproducing the dances and other challenges proposed by the platform’s other users. Among these challenges, the hashtag “#jerestechezmoi”, launched by the app’s developers for France, already totals nearly 332 million views in just a few weeks.

    This massive influx of new users comes as no surprise to Anne Cordier, a professor and researcher in information and communication sciences at the Université de Rouen: “Mainstream social networks are almost always adopted first by young people. That is the logic of any new cultural practice: the youngest seize on it, creating a fashion effect, and then, in a second phase, older users join in.” In her view, the phenomenon is comparable to the evolution of other social networks, such as Facebook or Snapchat. She does, however, note a shift in the discourse: “Before the lockdown, some social networks like Tik Tok were rather demonized. Today, all anyone talks about is the wonderful opportunity these platforms offer to create connections!”

    A runaway success, though it is still hard to say whether it will continue after the lockdown. “I’m going to keep using it,” Alexandra tells us; she has even signed her mother up on the app so she can “see what she does on it.” Still, “one thing is sure, I’ll have a lot less time.” The researcher Anne Cordier wants to believe the impact will be lasting: “The way some people look at social networks and their use is going to change,” she stresses. “The fact that they have been used as a tool for sharing, for positive intergenerational fun, means it won’t be possible to act as if nothing had happened.”

    #Tik_Tok #Médias_sociaux #Confinement