• Forget TikTok. China’s Powerhouse App Is WeChat. - The New York Times
    https://www.nytimes.com/2020/09/04/technology/wechat-china-united-states.html

    “It felt like if I only watched Chinese media, all of my thoughts would be different,” Ms. Li said.

    Ms. Li had little choice but to take the bad with the good. Built to be everything for everyone, WeChat is indispensable.

    For most Chinese people in China, WeChat is a sort of all-in-one app: a way to swap stories, talk to old classmates, pay bills, coordinate with co-workers, post envy-inducing vacation photos, buy stuff and get news. For the millions of members of China’s diaspora, it is the bridge that links them to the trappings of home, from family chatter to food photos.

    Woven through it all is the ever more muscular surveillance and propaganda of the Chinese Communist Party. As WeChat has become ubiquitous, it has become a powerful tool of social control, a way for Chinese authorities to guide and police what people say, whom they talk to and what they read.

    As a cornerstone of China’s surveillance state, WeChat is now considered a national security threat in the United States. The Trump administration has proposed banning WeChat outright, along with the Chinese short video app TikTok. Overnight, two of China’s biggest internet innovations became a new front in the sprawling tech standoff between China and the United States.

    While the two apps are lumped in the same category by the Trump administration, they represent two distinct approaches to the Great Firewall that blocks Chinese access to foreign websites.

    The hipper, better-known TikTok was designed for the wild world outside of China’s cloistering censorship; it exists only beyond China’s borders. By hiving off an independent app to win over global users, TikTok’s owner, ByteDance, created the best bet any Chinese start-up has had to compete with the internet giants in the West. The separation of TikTok from its cousin apps in China, along with its deep popularity, has fed corporate campaigns in the United States to save it, even as Beijing potentially upended any deals by labeling its core technology a national security priority.

    Though WeChat has different rules for users inside and outside of China, it remains a single, unified social network spanning China’s Great Firewall. In that sense, it has helped bring Chinese censorship to the world. A ban would cut off millions of conversations between family and friends, one reason one group has filed a lawsuit to block the Trump administration’s efforts. It would also be an easy victory for American policymakers seeking to push back against China’s techno-authoritarian overreach.

    WeChat started out as a simple copycat. Its parent, the Chinese internet giant Tencent, had built an enormous user base on a chat app designed for personal computers. But a new generation of mobile chat apps threatened to upset its hold over the way young Chinese talked to one another.

    The visionary Tencent engineer Allen Zhang fired off a message to the company founder, Pony Ma, concerned that they weren’t keeping up. The missive led to a new mandate, and Mr. Zhang fashioned a digital Swiss Army knife that became a necessity for daily life in China. WeChat piggybacked on the popularity of the other online platforms run by Tencent, combining payments, e-commerce and social media into a single service.

    It became a hit, eventually eclipsing the apps that inspired WeChat. And Tencent, which made billions in profits from the online games piped into its disparate platforms, now had a way to make money off nearly every aspect of a person’s digital identity — by serving ads, selling stuff, processing payments and facilitating services like food delivery.

    While the Chinese government could use any chat app, WeChat has advantages. Police know well its surveillance capabilities. Within China most accounts are linked to the real identity of users.

    Ms. Li was late to the WeChat party. Away in Toronto when it exploded in popularity, she joined only in 2013, after her sister’s repeated urging.

    It opened up a new world for her. Not in China, but in Canada.

    She found people nearby similar to her. Many of her Chinese friends were on it. They found restaurants nearly as good as those at home and explored the city together. One public account set up by a Chinese immigrant organized activities. It kindled more than a few romances. “It was incredibly fun to be on WeChat,” she recalled.

    Now the app reminds her of jail. During questioning, police told her that a surveillance system, which they called Skynet, flagged the link she shared. Sharing a name with the A.I. from the Terminator movies, Skynet is a real-life techno-policing system, one of several Beijing has spent billions to create.

    Wary of falling into automated traps, Ms. Li now writes with typos. Instead of referring directly to police, she uses a pun she invented, calling them golden forks. She no longer shares links from news sites outside of WeChat and holds back her inclination to talk politics.

    Still, to be free she would have to delete WeChat, and she can’t do that. As the coronavirus crisis struck China, her family used it to coordinate food orders during lockdowns. She also needs a local government health code featured on the app to use public transport or enter stores.

    “I want to switch to other chat apps, but there’s no way,” she said.

    “If there were a real alternative I would change, but WeChat is terrible because there is no alternative. It’s too closely tied to life. For shopping, paying, for work, you have to use it,” she said. “If you jump to another app, then you are alone.”

    #WeChat #Chine #Surveillance #Médias_sociaux

  • Facebook funnelling readers towards Covid misinformation - study | Technology | The Guardian
    https://www.theguardian.com/technology/2020/aug/19/facebook-funnelling-readers-towards-covid-misinformation-study

    Facebook had promised to crack down on conspiracy theories and inaccurate news early in the pandemic. But as its executives promised accountability, its algorithm appears to have fuelled traffic to a network of sites sharing dangerous false news, campaign group Avaaz has found.

    False medical information can be deadly; researchers led by Bangladesh’s International Centre for Diarrhoeal Disease Research, writing in The American Journal of Tropical Medicine and Hygiene, have directly linked a single piece of coronavirus misinformation to 800 deaths.

    Pages from the top 10 sites peddling inaccurate information and conspiracy theories about health received almost four times as many views on Facebook as the top 10 reputable sites for health information, Avaaz warned in a report.

    “This suggests that just when citizens needed credible health information the most, and while Facebook was trying to proactively raise the profile of authoritative health institutions on the platform, its algorithm was potentially undermining these efforts,” the report said.

    A relatively small but influential network is responsible for driving huge amounts of traffic to health misinformation sites. Avaaz identified 42 “super-spreader” sites that had 28m followers generating an estimated 800m views.

    A single article, which falsely claimed that the American Medical Association was encouraging doctors and hospitals to over-estimate deaths from Covid-19, was seen 160m times.

    This vast collective reach suggests that Facebook’s own internal systems are not capable of protecting users from misinformation about health, even at a critical time when the company has promised to keep users “safe and informed”.

    “Avaaz’s latest research is yet another damning indictment of Facebook’s capacity to amplify false or misleading health information during the pandemic,” said British MP Damian Collins, who led a parliamentary investigation into disinformation.

    “The majority of this dangerous content is still on Facebook with no warning or context whatsoever … The time for [Facebook CEO, Mark] Zuckerberg to act is now. He must clean up his platform and help stop this harmful infodemic.”

    Some of the false claims were directly harmful: one, suggesting that pure alcohol could kill the virus, has been linked to 800 deaths, as well as 60 people going blind after drinking methanol as a cure. “In India, 12 people, including five children, became sick after drinking liquor made from toxic seed Datura (ummetta plant in local parlance) as a cure to coronavirus disease,” the paper says. “The victims reportedly watched a video on social media that Datura seeds give immunity against Covid-19.”

    Beyond the specifically dangerous falsehoods, much misinformation is merely useless, but can contribute to the spread of coronavirus, as with one South Korean church which came to believe that spraying salt water could combat the virus.

    “They put the nozzle of the spray bottle inside the mouth of a follower who was later confirmed as a patient before they did likewise for other followers as well, without disinfecting the sprayer,” an official later said. More than 100 followers were infected as a result.

    Among Facebook’s tactics for fighting disinformation on the platform has been giving independent fact-checkers the ability to put warning labels on items they consider untrue.

    Zuckerberg has said fake news would be marginalised by the algorithm, which determines what content viewers see. “Posts that are rated as false are demoted and lose on average 80% of their future views,” he wrote in 2018.

    But Avaaz found that huge amounts of disinformation slip through Facebook’s verification system, despite having been flagged up by fact-checking organisations.

    They analysed nearly 200 pieces of health misinformation which were shared on the site after being identified as problematic. Fewer than one in five carried a warning label, with the vast majority – 84% – slipping through controls after they were translated into other languages, or republished in whole or part.

    “These findings point to a gap in Facebook’s ability to detect clones and variations of fact-checked content – especially across multiple languages – and to apply warning labels to them,” the report said.

    Two simple steps could hugely reduce the reach of misinformation. The first would be proactively correcting misinformation that was seen before it was labelled as false, by putting prominent corrections in users’ feeds.

    Recent research has found corrections like these can halve belief in incorrect reporting, Avaaz said. The other step would be to improve the detection and monitoring of translated and cloned material, so that Zuckerberg’s promise to starve the sites of their audiences is actually made good.

    A Facebook spokesperson said: “We share Avaaz’s goal of limiting misinformation, but their findings don’t reflect the steps we’ve taken to keep it from spreading on our services. Thanks to our global network of fact-checkers, from April to June, we applied warning labels to 98m pieces of Covid-19 misinformation and removed 7m pieces of content that could lead to imminent harm. We’ve directed over 2bn people to resources from health authorities and when someone tries to share a link about Covid-19, we show them a pop-up to connect them with credible health information.”

    #Facebook #Fake_news #Désinformation #Infodemics #Promesses #Culture_de_l_excuse #Médias_sociaux

  • The Second Act of Social-Media Activism | The New Yorker
    https://www.newyorker.com/culture/cultural-comment/the-second-act-of-social-media-activism

    A fascinating article that starts from Zeynep Tufekci’s analyses and reconsiders them in light of more recent movements.

    Some of this story may seem familiar. In “Twitter and Tear Gas: The Power and Fragility of Networked Protest,” from 2017, the sociologist Zeynep Tufekci examined how a “digitally networked public sphere” had come to shape social movements. Tufekci drew on her own experience of the 2011 Arab uprisings, whose early mobilization of social media set the stage for the protests at Gezi Park, in Istanbul, the Occupy action, in New York City, and the Black Lives Matter movement, in Ferguson. For Tufekci, the use of the Internet linked these various, decentralized uprisings and distinguished them from predecessors such as the nineteen-sixties civil-rights movement. Whereas “older movements had to build their organizing capacity first,” Tufekci argued, “modern networked movements can scale up quickly and take care of all sorts of logistical tasks without building any substantial organizational capacity before the first protest or march.”

    The speed afforded by such protest is, however, as much its peril as its promise. After a swift expansion, spontaneous movements are often prone to what Tufekci calls “tactical freezes.” Because they are often leaderless, and can lack “both the culture and the infrastructure for making collective decisions,” they are left with little room to adjust strategies or negotiate demands. At a more fundamental level, social media’s corporate infrastructure makes such movements vulnerable to coöptation and censorship. Tufekci is clear-eyed about these pitfalls, even as she rejects the broader criticisms of “slacktivism” laid out, for example, by Evgeny Morozov’s “The Net Delusion,” from 2011.

    “Twitter and Tear Gas” remains trenchant about how social media can and cannot enact reform. But movements change, as does technology. Since Tufekci’s book was published, social media has helped represent—and, in some cases, helped organize—the Arab Spring 2.0, France’s “Yellow Vest” movement, Puerto Rico’s RickyLeaks, the 2019 Iranian protests, the Hong Kong protests, and what we might call the B.L.M. uprising of 2020. This last event, still ongoing, has evinced a scale, creativity, and endurance that challenges those skeptical of the Internet’s ability to mediate a movement. As Tufekci notes in her book, the real-world effects of Occupy, the Women’s March, and even Ferguson-era B.L.M. were often underwhelming. By contrast, since George Floyd’s death, cities have cut billions of dollars from police budgets; school districts have severed ties with police; multiple police-reform-and-accountability bills have been introduced in Congress; and cities like Minneapolis have vowed to defund policing. Plenty of work remains, but the link between activism, the Internet, and material action seems to have deepened. What’s changed?

    The current uprisings slot neatly into Tufekci’s story, with one exception. As the flurry of digital activism continues, there is no sense that this movement is unclear about its aims—abolition—or that it might collapse under a tactical freeze. Instead, the many protest guides, syllabi, Webinars, and the like have made clear both the objectives of abolition and the digital savvy of abolitionists. It is a message so legible that even Fox News grasped it with relative ease. Rachel Kuo, an organizer and scholar of digital activism, told me that this clarity has been shaped partly by organizers who increasingly rely on “a combination of digital platforms, whether that’s Google Drive, Signal, Messenger, Slack, or other combinations of software, for collaboration, information storage, resource access, and daily communications.” The public tends to focus, understandably, on the profusion of hashtags and sleek graphics, but Kuo stressed that it was this “back end” work—an inventory of knowledge, a stronger sense of alliance—that has allowed digital activism to “reflect broader concerns and visions around community safety, accessibility, and accountability.” The uprisings might have unfolded organically, but what has sustained them is precisely what many prior networked protests lacked: preëxisting organizations with specific demands for a better world.

    What’s distinct about the current movement is not just the clarity of its messaging, but its ability to convey that message through so much noise. On June 2nd, the music industry launched #BlackoutTuesday, an action against police brutality that involved, among other things, Instagram and Facebook users posting plain black boxes to their accounts. The posts often included the hashtag #BlackLivesMatter; almost immediately, social-media users were inundated with even more posts, which explained why using that hashtag drowned out crucial information about events and resources with a sea of mute boxes. For Meredith Clark, a media-studies professor at the University of Virginia, the response illustrated how the B.L.M. movement had honed its ability to stick to a program, and to correct those who deployed that program naïvely. In 2014, many people had only a thin sense of how a hashtag could organize actions or establish circles of care. Today, “people understand what it means to use a hashtag,” Clark told me. They use “their own social media in a certain way to essentially quiet background noise” and “allow those voices that need to connect with each other the space to do so.” The #BlackoutTuesday affair exemplified an increasing awareness of how digital tactics have material consequences.

    These networks suggest that digital activism has entered a second act, in which the tools of the Internet have been increasingly integrated into the hard-won structure of older movements. Though, as networked protest grows in scale and popularity, it still risks being hijacked by the mainstream. Any urgent circulation of information—the same memes filtering through your Instagram stories, the same looping images retweeted into your timeline—can be numbing, and any shift in the Overton window means that hegemony drifts with it.

    In “Twitter and Tear Gas,” Tufekci wrote, “The Black Lives Matter movement is young, and how it will develop further capacities remains to be seen.” The movement is older now. It has developed its tactics, its messaging, its reach—but perhaps its most striking new capacity is a sharper recognition of social media’s limits. “This movement has mastered what social media is good for,” Deva Woodly, a professor of politics at the New School, told me. “And that’s basically the meme: it’s the headline.” Those memes, Woodly said, help “codify the message” that leads to broader, deeper conversations offline, which, in turn, build on a long history of radical pedagogy. As more and more of us join those conversations, prompted by the words and images we see on our screens, it’s clear that the revolution will not be tweeted—at least, not entirely.

    #Activisme_connecté #Black_lives_matter #Zeynep_Tufekci #Mèmes #Hashtag_movements #Médias_sociaux

  • The Civic Hijinks of K-pop’s Super Fans - Data & Society: Points
    https://points.datasociety.net/the-civic-hijinks-of-k-pops-super-fans-ae2e66e28c6

    K-pop fandoms, normally known for their dedication to South Korean music “idols,” made headlines this past month, between their social media manipulation to “defuse racist hashtags” and amplify the circulation of “petitions and fundraisers” for victims during the Black Lives Matter (BLM) movement, and their apparent foiling of Trump’s recent political rally in Tulsa, Oklahoma. The social media manipulation strategies of K-pop fandoms have been so impactful that hashtag trends such as #BanKpopAccounts have accused them of ruining user experiences and called to ban them. But some recent coverage on the power and sway that K-pop fans have over social media information ecologies has presented (unwittingly) truncated histories, (parochially) centered American K-pop fans, and cast these fan activities as somehow novel or even surprising.

    Yet, the opposite is true.

    K-pop fans, many of whom have mastered the power of social media manipulation and (mis)information via their intensely intimate relationships with their beloved idols, have a long history of utilizing their platforms in the service of social justice. It is absolutely necessary that the recent BLM activism of K-pop fans be historicized within this broader, global narrative, and that K-pop fans be recognized as more than just “bandwagoners” jumping at a media movement to simply “promote their faves.”

    South Korean entertainment companies recognized early on the transformative potentials of the internet, from late-1990s uses of the first-generation social networking site Cyworld to present-day mobilizations of social media. The K-pop industry played an influential role in the development of digital fandom, deploying social media services such as Twitter, Instagram, and the live-streaming app VLive to provide fans opportunities to interact directly with idols. For instance, it is routine for idols to interact with fans in live broadcast countdowns upon the release of each new song, just as it’s common for agencies to release poster and video teasers/trailers on Twitter and Instagram in the lead-up to a ‘comeback’ or new release. Such intense social media interactions in turn boosted the strong sense of intimacy between idols and their fans, as well as allowed fans to regularly commune with each other in digital spaces. As a result, K-pop fans formed “tribes” who strategically draw upon the affordances of social media to promote their favorite idols on the world stage, allowing K-pop to go global.

    For instance, K-pop fans often facilitate ‘bulk pre-orders’ to increase album sales; host mass ‘streaming parties’ on YouTube, Spotify, and Shazam to increase music chart impact in a move known as “chart jacking”; plan “coordinated hashtag campaigns” on Twitter to signal boost their favorite group; or “keyword stuff” search terms on Twitter to alter SEO results and clear or bury bad press. Fans are also concerned about the wellbeing of idols, closely monitoring their personal safety and petitioning agencies to take action, calling for fair representation in promotional material, and demanding that choreographies be modified for the health of idols.

    However, idol support initiatives have also culminated in elaborate schemes, such as the BLACKPINK Starbucks hoax of April 2019: A rumour claimed that streaming any song from BLACKPINK would earn listeners a free drink from Starbucks through a digital voucher claimed via Twitter direct messaging or by showing “receipts” to the barista in the form of screen grabs of the streaming. Various Starbucks social media managers had their hands full clarifying this misinformation.

    K-pop fans have always been political

    K-pop fans deploy their networks and social media clout to consistently raise awareness of charitable causes, sharing resources across the globe to make the world a better place. K-pop fan activism within the BLM movement emerges from this broader history.

    Fans have mobilized support networks in the service of social justice as acts of cybervigilantism, with many clubs hosting charity events in honor of idols that are tied to these broader support projects. The recent Australian bushfires in January 2020 saw dozens of fandoms join forces to raise relief funds, with some even adopting wildlife in the name of their favorite idol. Fans of BTS alone have reportedly engaged with over 600 charity projects around the globe addressing a variety of issues. In fact, charity work is so essential to K-pop fandom that an app exists in South Korea where fans can record the amount of donations made on behalf of an idol group to develop a “charity angel” ranking.

    Social media campaigns have also regularly been hosted by K-pop fans seeking to hold K-pop stars and the industry accountable. As an expression of their strong support for idols, fans consistently call on K-pop groups to do better when they perceive that they have slipped up. For instance, fans were vocal in calling out racially insensitive performances, such as when they pressured girl group MAMAMOO to apologize for performing in blackface during a concert in 2017. Agencies, media outlets, and fandoms have also been called out for colorism and for photo-editing idols’ images to favor fairer, whiter skin.

    Likewise, Black K-pop fans regularly express frustration at the persistent appropriation of Black culture and hip-hop fashion within the K-pop industry, for instance the use of braids, cornrows, and dreadlocks in K-pop styling. Recently, fans voiced dissatisfaction with BTS’s J-Hope, who was criticized for appropriating dreadlocks in the music video for the song “Chicken Noodle Soup ft. Becky G.” Indeed, the activism of K-pop fans within the BLM movement is situated within broader social media debates surrounding anti-blackness within the K-pop fandom itself.

    Apart from racism, several other K-pop fan initiatives focus on combating misogyny and abuse, in light of the rise of ‘molka’ or spycam incidents that prey on women and digital sex crimes (like the April 2020 Nth Room scandal) in South Korea. Considering the fact that young women make up a significant demographic in K-pop fandom, it is unsurprising that fans’ activism has evolved to also address discrimination against women around the world.

    K-pop fandom as subversive frivolity

    K-pop consumption is not an apolitical act, and its fans are not disengaged or obsessive teenagers seeking to troll the world out of a sense of millennial ennui. Rather, K-pop fans in South Korea, Asia, and beyond are critical consumers who deliberately and explicitly act to address social justice concerns by harnessing their high visibility and strong community on social media networks. As noted by The Korea Herald reporter Hyunsu Yim, the “largely female, diverse & LGBT makeup” of K-pop fandoms primes them to push back against “male dominant/less diverse/more right-wing” online discourses through their social media activism.

    The vernacular social media manipulation expertise of these fans has been honed since K-pop’s humble beginnings on websites and forums, where fan activity is often cast as playful, feminized activity; but it is exactly this underestimation and under-valuation of K-pop fan networks, knowledge, and labor that has allowed millions of K-pop fans to evade sociocultural surveillance, game platforms’ algorithms, and spread their messages far and wide in acts of subversive frivolity.

    Whether it is to persuade you to stream a song or to protest against social injustice, you can be sure that K-pop fandoms are always ready to mobilize, fueled by ferocious fan dedication, and remain extremely social media savvy.

    Dr. Crystal Abidin is Senior Research Fellow & ARC DECRA Fellow in Internet Studies at Curtin University (Perth, Australia). Learn more at wishcrys.com.

    Dr. Thomas Baudinette is Lecturer in International Studies, Department of International Studies: Languages and Cultures, Macquarie University (Sydney, Australia). Learn more at thomasbaudinette.wordpress.com.

    #K-pop #Culture_participative #Médias_sociaux #Politique

  • An exploration of the “Raoultsphère” on Facebook
    https://www.lemonde.fr/les-decodeurs/article/2020/07/03/une-exploration-de-la-raoultsphere-sur-facebook_6045017_4355770.html

    On the social network, groups supporting the Marseille infectious-disease specialist have attracted more than a million users since March. Who are the supporters of the promoter of hydroxychloroquine? A map of a large and complex social phenomenon.

    #Facebook #Médias_sociaux #Didier_Raoult #Complotisme

  • The fake 👁👄👁 app that set Twitter abuzz supports Black Lives Matter
    https://www.ladn.eu/tech-a-suivre/emojis-fausse-appli-twitter

    A group of young American tech workers created a buzz around a fake app called 👁👄👁. A viral joke that turned into a political message, all in 48 hours.

    A mouth flanked by two wide-open eyes. That was all it took to intrigue the tech community for a few days. On Thursday, June 25 and Friday, June 26, several thousand people shared the emoji combination 👁👄👁 followed by the phrase “It Is What It Is” on Twitter.

    This strange trend was started by the site https://👁👄👁.fm and its associated Twitter account, @itiseyemoutheye, where the curious were invited to submit their email address, add 👁👄👁 to their Twitter name, and share the site’s URL on the social network.

    A joke mocking ultra-exclusive apps...

    On Friday, June 26, a post was finally published on the site https://👁👄👁.fm, revealing the punchline: there is no app, and there never will be. The team behind the site, which describes itself as a group of young tech professionals, initially just wanted to have fun riffing on a TikTok meme. The idea was also to mock the tech world’s FOMO culture (the fear of missing out) and the artificial hype around certain invite-only apps, such as the social network Clubhouse, reserved for a privileged few in Silicon Valley. The prank recalls Oobah Butler’s stunt a few years ago: the food critic managed to get a fake restaurant ranked number 1 on TripAdvisor.

    ... and defending the Black community

    But the emoji story doesn’t end there. The 👁👄👁.fm team wanted to use the hype around its project for a good cause. Those interested in 👁👄👁 are invited to donate to three organizations that support the Black community: the Loveland Foundation Therapy Fund, The Okra Project, and The Innocence Project. The team says $200,000 has already been raised. The site now sells merchandise featuring the emoji, with proceeds going to support the Black Lives Matter movement.

    #TikTok #Memes #Politique #Médias_sociaux

  • Reddit, Acting Against Hate Speech, Bans ‘The_Donald’ Subreddit - The New York Times
    https://www.nytimes.com/2020/06/29/technology/reddit-hate-speech.html

    SAN FRANCISCO — Reddit, one of the largest social networking and message board websites, on Monday banned its biggest community devoted to President Trump as part of an overhaul of its hate speech policies.

    The community or “subreddit,” called “The_Donald,” is home to more than 790,000 users who post memes, viral videos and supportive messages about Mr. Trump. Reddit executives said the group, which has been highly influential in cultivating and stoking Mr. Trump’s online base, had consistently broken its rules by allowing people to target and harass others with hate speech.

    “Reddit is a place for community and belonging, not for attacking people,” Steve Huffman, the company’s chief executive, said in a call with reporters. “‘The_Donald’ has been in violation of that.”

    Reddit said it was also banning roughly 2,000 other communities from across the political spectrum, including one devoted to the leftist podcasting group “Chapo Trap House,” which has about 160,000 regular users. The vast majority of the forums that are being banned are inactive.

    “The_Donald,” which has been a digital foundation for Mr. Trump’s supporters, is by far the most active and prominent community that Reddit decided to act against. For years, many of the most viral Trump memes that broke through to Facebook, Twitter and elsewhere could be traced back to “The_Donald.” One video, “The Trump Effect,” originated on “The_Donald” in mid-2016 before bubbling up to Mr. Trump, who tweeted it to his 83 million followers.

    Social media sites are facing a reckoning over the types of content they host and their responsibilities to moderate and police that content. While Facebook, Twitter, YouTube, Reddit and others originally positioned themselves as neutral sites that simply hosted people’s posts and videos, users are now pushing them to take steps against hateful, abusive and false speech on their platforms.

    Some of the sites have recently become more proactive in dealing with these issues. Twitter started adding labels last month to some of Mr. Trump’s tweets to refute their accuracy or call them out for glorifying violence. Snap also said it would stop promoting Mr. Trump’s Snapchat account after determining that his public comments off the site could incite violence.

    On Monday, the streaming website Twitch suspended Mr. Trump’s account for violating its policies against hateful conduct. Mr. Trump’s channel had rebroadcast one of his campaign rallies from 2015, in which he denigrated Mexicans and immigrants, among other streams. Twitch removed the videos from the president’s account.

    YouTube also said on Monday that it was barring six channels for violating its policies. They included those of two prominent white supremacists, David Duke and Richard Spencer, and American Renaissance, a white supremacist publication. Stefan Molyneux, a podcaster and internet commentator who had amassed a large audience on YouTube for his videos about philosophy and far-right politics, was also kicked off the site.

    Facebook, the world’s largest social network, has said it refuses to be an arbiter of content. The company said it would allow all speech from political leaders to remain on its platform, even if the posts were untruthful or problematic, because such content was newsworthy and in the public’s interest to read.

    Facebook has since come under increasing fire for its stance. Over the past few weeks, many large advertisers, including Coca-Cola, Verizon, Levi Strauss and Unilever, have said they plan to pause advertising on the social network because they were unhappy with its handling of hate speech and misinformation.

    Reddit, which was founded 15 years ago and has more than 430 million regular users, has long been one corner of the internet that was willing to host all kinds of communities. No subject — whether it was video games or makeup or power-washing driveways — was too small to discuss. People could simply sign up, browse the site anonymously and participate in any of the 130,000 active subreddits.

    Yet that freewheeling position led to many issues of toxic speech and objectionable content across the site, for which Reddit has consistently faced criticism. In the past, the company hosted forums that promoted racism against black people and openly sexualized underage children, all in the name of free speech.

    Mr. Huffman said users on “The_Donald” had frequently violated its first updated rule: “Remember the human.”

    Reddit executives said the site remained a place that they hoped could be a forum for civil political discourse in the future, as long as users played by its rules.

    “There’s a home on Reddit for conservatives, there’s a home on Reddit for liberals,” said Benjamin Lee, Reddit’s general counsel. “There’s a home on Reddit for Donald Trump.”

    #Reddit #Médias_sociaux #Politique

  • Reddit’s rules of conduct
    human_reddiquette - reddit.com
    https://www.reddit.com/wiki/human_reddiquette

    Reddiquette is an informal expression of the values of many redditors, as written by redditors themselves. Please abide by it the best you can. This is a shortened version that mainly focuses on civil discourse.
    Please do
    Remember the human. When you communicate online, all you see is a computer screen. When talking to someone you might want to ask yourself “Would I say it to the person’s face?” or “Would I get jumped if I said this to a buddy?”
    Adhere to the same standards of behavior online that you follow in real life.
    Read the rules of a community before making a submission. These are usually found in the sidebar.
    Moderate/Vote based on quality, not opinion. Well written and interesting content can be worthwhile, even if you disagree with it.
    Consider posting constructive criticism / an explanation when you downvote something, and do so carefully and tactfully.
    Use an “Innocent until proven guilty” mentality. Unless there is obvious proof that a submission is fake, or is whoring karma, please don’t say it is. It ruins the experience for not only you, but the millions of people that browse reddit every day.
    Please do not
    Post someone’s personal information, or post links to personal information. This includes links to public Facebook pages and screenshots of Facebook pages with the names still legible. We all get outraged by the ignorant things people say and do online, but witch hunts and vigilantism hurt innocent people too often, and such posts or comments will be removed. Users posting personal info are subject to an immediate account deletion. If you see a user posting personal info, please contact the admins. Additionally, on pages such as Facebook, where personal information is often displayed, please mask the personal information and personal photographs using a blur function, erase function, or simply block it out with color. When personal information is relevant to the post (i.e. comment wars) please use color blocking for the personal information to indicate whose comment is whose.
    Do not repost deleted/removed information. Remember that comment someone just deleted because it had personal information in it or was a picture of gore? Resist the urge to repost it. It doesn’t matter what the content was. If it was deleted/removed, it should stay deleted/removed.
    Be intentionally rude at all. By choosing not to be rude, you increase the overall civility of the community and make it better for all of us.
    Conduct personal attacks on other commenters. Ad hominem and other distracting attacks do not add anything to the conversation.
    Start a flame war. Just report and “walk away”. If you really feel you have to confront them, leave a polite message with a quote or link to the rules, and no more.
    Insult others. Insults do not contribute to a rational discussion. Constructive Criticism, however, is appropriate and encouraged.
    Troll. Trolling does not contribute to the conversation.

    #Reddit #Comportement #Médias_sociaux

  • OnlyFans, the paid Instagram that could revolutionize the porn industry
    https://www.ladn.eu/media-mutants/reseaux-sociaux/onlyfans-instagram-payant-revolutionner-industrie-porno

    With a 75% jump in sign-ups during the month of March (an estimated 35 million registered users) and more than 105 million tweets on the subject, OnlyFans is THE social platform that has come out of the coronavirus crisis a winner. This paid Instagram has established itself as a major new player on the web, and more specifically in the porn industry, with a fairer economic model for women in its sights.
    Crowdfunding porn

    Created in 2016 by a discreet technology company, Fenix International Limited, OnlyFans was originally meant to compete with other crowdfunding services such as Patreon or Tipeee. The principle is much the same: once registered on the platform, users can pick a content creator and subscribe to her feed for somewhere between 5 and 20 dollars a month. But unlike conventional social networks, OnlyFans allows nude photos and pornographic videos.

    OnlyFans thus naturally became a platform centered on that kind of content, even if other topics exist at the margins. “You could say that nudity or porn makes up 85% of the content,” explains Jean-Baptiste Bourgeois, strategic planner at We Are Social. “But you also find yoga coaches, dancers, and performers, notably in striptease. In any case, you have to understand that 95% of the content creators are women and that their audience is 95% male.”
    When Beyoncé anointed OnlyFans

    Until 2020, the platform stayed relatively under the radar. The explosion came in March 2020, largely because of lockdown, and several factors explain the phenomenon. Porn shoots were suddenly banned, and many performers in that industry took refuge on the network to secure an income. Another event also cemented OnlyFans’s popularity: in the track Savage Remix, released on March 16, Beyoncé mentions the network in her verse, “On that Demon Time, she might start a OnlyFans.” “Demon Time” is a phenomenon that itself began with the Covid-19 crisis and the closure of strip clubs in major American cities: many performers got together to offer erotic dances in Instagram live videos, partnering with the app CashApp to get paid. “Before Beyoncé, OnlyFans was a niche network,” Jean-Baptiste Bourgeois continues. “Thanks to her, it became a cool platform.”

    From that moment on, the #OnlyFans hashtag took off.

    #Médias_sociaux #Pornographie #Only_Fans

  • Twitter relaunches account verification
    https://www.rtl.fr/actu/futur/twitter-relance-la-certification-des-comptes-7800587025

    Twitter is going to relaunch its “verified accounts” feature. The social network confirms it is working on an overhaul of its account-verification system. Obtaining the blue badge will become more transparent.

    An engineer, Jane Manchun Wong, spotted a new “Request verification” option in the Twitter app’s settings, under the “Personal information” section. The network confirmed the news a little later.
    Twitter therefore appears to be offering “individual verification.” The overhaul should come with a guide explaining to users how to apply and how the process works. For now, the network is not accepting any new requests and confirms that the program is on hold.

    The little blue badge had always stirred debate. Though supposedly reserved for public figures and accounts of public interest, Twitter suspended the verification process after Jason Kessler, the activist behind the Unite the Right rally in Charlottesville, tweeted comments about the death of Heather Heyer, who was killed in the violence at that demonstration.

    Verification was meant to authenticate identity & voice but it is interpreted as an endorsement or an indicator of importance. We recognize that we have created this confusion and need to resolve it. We have paused all general verifications while we work and will report back soon
    — Twitter Support (@TwitterSupport) November 9, 2017

    In early 2020, verification badges were granted to public-health officials to prove the authenticity of their accounts during the coronavirus pandemic.

    #Twitter #Certification #Médias_sociaux

  • The Case for Social Media Mobs - The Atlantic
    https://www.theatlantic.com/technology/archive/2020/05/case-social-media-mobs/612202

    by Zeynep Tufekci

    There is no doubt that social-media fury can go wrong. In one infamous instance, a young woman made a joke to her small circle on Twitter, just before boarding a plane to South Africa, about white people not getting AIDS. The joke was either racist or making fun of racism depending on your interpretation, but Twitter didn’t wait to find out. By the time the woman had landed, her name was trending worldwide, and she’d been fired from her job.

    Throngs on social media violate fundamental notions of fairness and due process: People may be targeted because of a misunderstanding or an out-of-context video. The punishment online mobs can mete out is often disproportionate. Being attacked and ridiculed by perhaps millions of people whom you have never met, and against whom you have no defenses, can be devastating and lead to real trauma.

    The vagaries of human nature and the scale and algorithms of social-media platforms fuel case after case of people finding themselves in the midst of such whirlwinds, but sometimes these mobs perform an important function. Sometimes the social-media mob isn’t just justified or understandable, but necessary because little else is available to protect the real victims. Such is the case with Amy Cooper, the woman now famous for making a false police report claiming that an African American man was threatening her life, when in fact he had merely asked her to leash her dog in Central Park, where he was bird-watching.

    Deterrence is an important focus here, because the consequences of these fake cries can be dire. Black Americans have suffered a range of fates when police arrive thinking they’re dangerous from the outset, whether it’s needless arrest or being killed on the spot, like 12-year-old Tamir Rice, whom a police officer shot within two seconds of getting out of his (still not fully stopped) patrol car. Just this week, a black man in Minneapolis, George Floyd, was choked to death by a police officer who pressed his knee on Floyd’s neck for more than seven minutes while Floyd repeatedly said, “I can’t breathe,” and bystanders begged the officer to stop, to no avail.

    Amy Cooper’s case is remarkably straightforward. We don’t need to read her mind or speculate about her motives. She tells us exactly what they are. The minute-long video of the encounter, filmed by the bird-watcher, Christian Cooper (no relation), starts with Amy Cooper walking up to and lunging at him. He steps back, saying, “Please don’t come close to me.” She lunges at him again and demands that he stop recording, and he steps back again. Amy Cooper then looks at him, takes out her phone, and matter-of-factly tells him, “I’m going to call the cops, and I’m going to tell them there’s an African American man threatening my life.” Christian Cooper surely knows his own race and did not need a reminder. Her statement was meant as a deliberate threat.

    But life doesn’t end there. Amy Cooper’s 911 call was realistic enough that an NYPD unit showed up to what they thought was a “possible assault.” A tall black man suspected of assault, perhaps holding a shiny black object—bird-watching binoculars—may not even have had the two seconds Tamir Rice had. Thankfully, Christian Cooper had left by then, otherwise it might have been his name, not hers, that became a hashtag.

    During the Arab Spring and its aftermath, which I studied in the field as a scholar, in places such as Tahrir Square, Cairo, and Taksim Gezi Park, Istanbul, I witnessed numerous examples of social-media fury as protesters’ only tool of deterrence against wrongdoing by the powerful. Does it work? Not always, but sometimes there’s nothing else. For example, in the years before millions took to Egypt’s streets in 2011, many videos of police torturing victims surfaced and went viral online, provoking anger. Online comments may not have teeth against the Egyptian police, perhaps, in such a repressive state, but they made an important statement, the only statement available to the otherwise voiceless, powerless masses. Sometimes the social-media mob is the voice of the unheard, and sometimes it’s the only one they have.

    What Amy Cooper did was swatting-adjacent in intent, execution, and possible consequences—calling 911 to make a false report of being in danger as a way to target someone. As a result of the publicity, she was fired from her job as a vice president at an investment firm, and she “voluntarily” surrendered her dog to the shelter she had adopted him from. I’m sure it’s a difficult time for her, but is it enough of a deterrent to future Amy Coopers? Absent a prosecution, I’m not so sure. And NYPD officials have already told us that they are “not going to pursue” any charges against her, that they have “bigger fish to fry,” and the district attorney “would never prosecute that.”

    If protecting black people’s lives from blatant false reports that may endanger them is not big enough fish to fry, what is? Social-media rage is not an unalloyed good. It has its excesses. But until there is sufficient lawful deterrence for this particular crime, I’m not ready to condemn this mob or this fury.

    #Zeynep_Tufekci #Swatting #Media_mob #Racisme #Médias_sociaux

  • Trump, Twitter, and the failed politics of appeasement

    https://link.wired.com/view/5cec29ba24c17c4c6465ed0bc6h9l.wnj/55e32496

    by Steven Levy

    Lately, my pandemic reading has included Munich, a historical novel by Robert Harris involving the tragic 1938 attempt by UK prime minister Neville Chamberlain to appease Adolf Hitler, hoping to stave off a world war that the Führer was hell-bent on triggering. Chamberlain’s efforts (which Harris portrays sympathetically) were doomed.

    That reading now has an odd resonance with current events. For years, Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey have donned kid gloves to handle complaints of conservative bias from Donald Trump, other Republicans, and far-right wingnuts. Despite this appeasement, the executives are now facing a Trump executive order that will potentially impose government controls on what users can and cannot say on their platforms.

    Specifically, Trump is attempting to unilaterally reinterpret the meaning of Section 230, the part of the Telecommunications Act of 1996 that gives the platforms the ability to police the user-created content on their sites for safety and security without bearing the legal responsibility for anything those billions of people might say. His order explicitly echoes his claim—a bogus one—that the platforms are using the 1996 provision to censor conservatives. According to the order, Trump gives the government the power to strip companies of their protection under Section 230. Trump also wants to use something called the “Tech Bias Reporting Tool” to examine platforms for political bias and report offenders to the DOJ and FTC for possible action. It’s a bold move that would create government monitors to make sure Facebook, Twitter, and the rest give conservative speech more than its due. (One hopes that if this does come to pass, the courts will overturn the effort because, well, the constitution.)

    The longstanding claim that the platforms censor conservative speech is ridiculous. Facebook and Twitter remove content that violates community standards by spreading harmful misinformation or hate speech. A lot of that comes from elements of the right wing. Yeah, those standards aren’t perfect, and those platforms make mistakes in executing them, but there’s never been any evidence of an algorithmic bias. But instead of vigorously defending themselves, the leaders of the platforms keep assuring politicians that they take those gripes very seriously.

    Trump himself gets a pass when it comes to moderation because what a president says is newsworthy. That’s a defendable stance, but as he increasingly violates standards and norms, his posts have become a firehose of toxicity. In 2017, Dorsey told me, “I think it’s really important that we maintain open channels to our leaders, whether we like what they’re saying or not, because I don’t know of another way to hold them accountable.” He also implied that newsworthiness might have to be balanced with community standards. That was many tweets ago, and it wasn’t until this week that Twitter provided a fact-check to a Trump tweet that told falsehoods about voting by mail. (Still, Twitter left standing a Trump tweet spreading a bogus charge that former congressperson Joe Scarborough once killed an aide.)

    Zuckerberg has given Trump and other conservatives an even wider berth, beginning with his 2015 decision to leave up Trump’s anti-Muslim post that seemingly violated the company’s hate speech policy. During the 2016 election, Facebook did not remove false news stories from make-believe publications, even though it was clear that such information overwhelmingly benefited Trump. Despite this, the right kept complaining of bias, with Republicans blasting Zuckerberg in his April 2018 appearance in Congress. Zuckerberg knew full well that there was no statistical basis for the charge. But when I asked him about that soon after, his response was shockingly timid. “That depth of concern that there might be some political bias really struck me,” he said. “I was like, ‘Wow, we need to make sure we bring in independent, outside folks to help us do an audit and give us advice on making sure our systems are not biased in ways that we don’t understand.’”

    Later, Facebook commissioned a study led by former Republican senator Jon Kyl, which offered no data to back up any systematic bias. Instead of insisting that this put an end to the complaints, Facebook made some general adjustments in its policies that gave the anecdotal gripes in the report more credibility than they warranted. Appeasement!

    Look, I get it—who wants to take on the president and the ruling party, especially when regulation is in the air? But instead of avoiding conflict, Facebook and Twitter leaders should have been emphasizing that they have just as much right to set their own standards as television stations, newspapers, and other corporations. Despite the fact that they are popular enough to be considered a “public square,” they are still private businesses, and the government has no business determining what legal speech can and cannot occur there. That is the essence of the First Amendment. But even as Mark Zuckerberg goes on about how he values free expression—as he was doing on television the same day Trump issued his order—he still refrains from demanding that the government respect Facebook’s own right to free speech.

    To be sure, Trump is wading—no, make that belly-flopping—into a controversy over internet speech that is already fraught with intractable problems. The very act of giving bullhorns to billions is both a boon and a menace. Even with the purest intentions—and obviously those growth-oriented platforms are not pure—figuring out how to deal with it involves multiple shades of gray. But the current threat comes in clear black and white: the president of the United States is attempting a takeover of internet speech and asserting a federal privilege to topple truth itself.

    Munich has failed. It’s time for the internet moguls to stop acting like Chamberlain—and start channeling Churchill.

    #Trump #Twitter #Médias_sociaux #Régulation

    • But instead of avoiding conflict, Facebook and Twitter leaders should have been emphasizing that they have just as much right to set their own standards as television stations, newspapers, and other corporations. Despite the fact that they are popular enough to be considered a “public square,” they are still private businesses, and the government has no business determining what legal speech can and cannot occur there.

      Precisely not: it is one or the other. Television stations and newspapers are responsible for what they publish. Platforms are means of communication, and are therefore shielded from liability for content published by third parties.

      Worth recalling Chemla’s position here: either the platforms are neutral carriers and can therefore claim editorial non-liability, or they intervene in what gets published, which makes them publishers, responsible for the content.

    • Yes, and that is what makes them a “public square.” And that is the whole complexity of the matter, because at the same time they are precisely not “public,” in the sense that they are guided (their algorithms are written to…) by their own interests.
      I am recording viewpoints here, which are not necessarily my own ;-) I am saving up information for the day I work up the courage to write.

  • Trump’s Attacks on Twitter Are Part of a Plot to Keep Social Media in His Pocket – Mother Jones
    https://www.motherjones.com/2020-elections/2020/05/donald-trump-attacks-twitter

    If you’re a Twitter user, by now you’ve probably seen the news. After years of complaints about President Donald Trump broadcasting falsehoods over the platform, the company finally took a small step to mitigate his misinformation. On Tuesday, the social media giant appended a “get the facts” link to two Trump tweets in which he claimed that mail-in ballots would result in fraudulent election outcomes.

    The link led to a page with several bullet points that refute the president—who, to be clear, was not telling the truth—along with links to reputable news stories providing context and correct information.

    Since 2016, the social media companies have taken some steps to rein in the worst behavior on their services, including setting up guardrails specifically related to disinformation around voting. Facebook banned false information and suppressive content on elections in ads. Twitter rolled out its election integrity policy in January. Meanwhile, Trump, his campaign, and Republican lawmakers have engaged in a campaign to keep those guardrails off. Part of this pressure campaign is backed by the threat of regulation. The Justice Department under Attorney General Bill Barr is overseeing anti-trust investigations into major social media companies, and Republican lawmakers, claiming without evidence that conservatives are being censored, have proposed regulations.

    And if such policies are ever acted upon, as they were Tuesday, the claims of bias will resonate because Trump has already created a context in which his supporters believe social media is working against them. I wrote about how this strategy has been deployed in my recent profile of Brad Parscale, Trump’s campaign manager.

    These tweets do exactly what the campaign has prepared to do in its battle with social media companies: Trump accuses them of bias, threatens regulation, and then goes ahead and repeats the false claim he was aiming to spread, in this case that voting by mail will delegitimize the election. He reminds Twitter of his power over the platform, then dares it to once again fact-check his false claim. How will Twitter respond?

    We already know how the company and its peers have responded to this exact treatment over the last several years. Caught between civil rights pushing to get voter suppression and hate off the platforms and Trump and his crew on the right pushing to let repugnant content stay up, the platforms have largely catered to Trump’s concerns.

    Facebook, for example, now allows politicians to lie in their posts and ads. On Tuesday, the Wall Street Journal reported that Facebook had internally determined that its algorithms increased polarization and radicalization but chose to do nothing, largely because of pressure from Republicans. And this Tuesday, the same day Twitter finally put its “get the facts” tag on Trump’s tweets, it refused to take down his tweets accusing the MSNBC host Joe Scarborough of involvement in the 2001 death of an employee. The president continued making the claim on Twitter on Wednesday.

    Facebook, in particular, has repeatedly shown itself to be more interested in pleasing conservatives than cracking down on extremists using its platform. While the company’s own content policies ban hate groups, for example, just last week, a report by the Tech Transparency Project found 153 white supremacist groups’ pages on Facebook. (Many were removed after the report’s publication.)

    With Trump reportedly growing more worried about his re-election prospects and Election Day less than six months away, his campaign is expected to unleash its war chest. That will include massive spending, particularly on Facebook, where he’ll seek to connect with his audience without any filter from the platform. His attack on Twitter is just the latest chapter in a years-long campaign to work the refs to make sure the social media giants feel they have no other option.

    #Trump #Twitter #Médias_sociaux #Politique

  • How to Set Your Facebook, Twitter, and Instagram to Control Who Sees What | WIRED
    https://www.wired.com/story/lock-down-social-media-privacy-security-facebook-twitter

    Twelve rules to best protect your privacy on social media (though not “from” social media).

    Social media can bring us together, and even distract us sometimes from our troubles—but it also can expose us to scammers, hackers, and...less than pleasant experiences.

    Don’t panic though: you can keep the balance towards the positive with just a few common-sense steps, and we have some of the most vital ones below. When it comes to staying safe on Facebook, Instagram and Twitter, a lot of it is common sense, with a sprinkling of extra awareness.

    #Médias_sociaux #Vie_privée

  • How covid-19 conspiracy theorists are exploiting YouTube culture | MIT Technology Review
    https://www.technologyreview.com/2020/05/07/1001252/youtube-covid-conspiracy-theories/?truid=a497ecb44646822921c70e7e051f7f1a

    Covid-19 conspiracy theorists are still getting millions of views on YouTube, even as the platform cracks down on health misinformation.

    The answer was obvious to Kennedy, one of many anti-vaccination leaders trying to make themselves as visible as possible during the covid-19 pandemic. “I’d love to talk to your audience,” he replied.

    Kennedy told Bet-David that he believes his own social-media accounts have been unfairly censored; making an appearance on someone else’s popular platform is the next best thing. Bet-David framed the interview as an “exclusive,” enticingly titled “Robert Kennedy Jr. Destroys Big Pharma, Fauci & Pro-Vaccine Movement.” In two days, the video passed half a million views.

    As of Wednesday, advertisements through YouTube’s ad service were playing before the videos, and Bet-David’s merchandise was for sale in a panel below the video’s description. Two other interviews, in which anti-vaccine figures aired several debunked claims about coronavirus and vaccines (largely unchallenged by Bet-David), were also showing ads. Bet-David said in an interview that YouTube had limited ads on all three videos, meaning they can generate revenue, but not as much as they would if they were fully monetized.

    We asked YouTube for comment on all three videos on Tuesday afternoon. By Thursday morning, one of the three (an interview with anti-vaccine conspiracy theorist Judy Mikovits) had been deleted for violating YouTube’s medical misinformation policies. Before it was deleted, the video had more than 1 million views.

    YouTube said that the other two videos were borderline, meaning that YouTube decided they didn’t violate rules, but would no longer be recommended or show up prominently in search results.

    I asked Bet-David whether he felt any responsibility over airing these views on his channel—particularly potentially harmful claims by his guests, urging viewers to ignore public health recommendations.

    “I do not,” he said. “I am responsible for what comes out of my mouth. I’m not responsible for what comes out of your mouth.”

    For him, that lack of responsibility extends to misinformation that could be harmful to his audience. He is just giving people what they are asking for. That, in turn, drives attention, which allows him to make money from ads, merchandise, speaking gigs, and workshops. “It’s up to the audience to make the decision for themselves,” he says. Besides, he thinks he’s done interviewing anti-vaccine activists for now. He’s trying to book some “big name” interviews of what he termed “pro-vaccine” experts.

    #YouTube #Complotisme #Vaccins #Médias_sociaux #Fake_news

  • Inside the Early Days of China’s Coronavirus Coverup | WIRED
    https://www.wired.com/story/inside-the-early-days-of-chinas-coronavirus-coverup

    Seasoned journalists in China often say “Cover China as if you were covering Snapchat”—in other words, screenshot everything, under the assumption that any given story could be deleted soon. For the past two and a half months, I’ve been trying to screenshot every news article, social media post, and blog post that seems relevant to the coronavirus. In total, I’ve collected nearly 100 censored online posts: 40 published by major news organizations, and close to 60 by ordinary social media users like Yue. The full number of Weibo posts censored and WeChat accounts suspended is virtually uncountable. (Despite numerous attempts, Weibo and WeChat could not be reached for comment.)

    Taken together, these deleted posts offer a submerged account of the early days of a global pandemic, and they indicate the contours of what Beijing didn’t want Chinese people to hear or see. Two main kinds of content were targeted for deletion by censors: journalistic investigations of how the epidemic first started and was kept under wraps in late 2019, and live accounts of the mayhem and suffering inside Wuhan in the early days of the city’s lockdown, as its medical system buckled under the world’s first hammerstrike of patients.

    It’s not hard to see how these censored posts contradicted the state’s preferred narrative. Judging from these vanished accounts, the regime’s coverup of the initial outbreak certainly did not help buy the world time, but instead apparently incubated what some have described as a humanitarian disaster in Wuhan and Hubei Province, which in turn may have set the stage for the global spread of the virus. And the state’s apparent reluctance to show scenes of mass suffering and disorder cruelly starved Chinese citizens of vital information when it mattered most.

    On January 20, 2020, Zhong Nanshan, a prominent Chinese infectious disease expert, essentially raised the curtain on China’s official response to the coronavirus outbreak when he confirmed on state television that the pathogen could be transmitted from human to human. Zhong was, in many ways, an ideal spokesperson for the government’s effort; he had become famous for being a medical truth-teller during the 2003 SARS outbreak.

    Immediately following Zhong’s announcement, the Chinese government allowed major news organizations into Wuhan, giving them a surprising amount of leeway to report on the situation there. In another press conference on January 21, Zhong praised the government’s transparency. Two days after that, the government shut down virtually all transportation into and out of Wuhan, later extending the lockdown to other cities.

    The sequence of events had all the appearances of a strategic rollout: Zhong’s January 20 TV appearance marked the symbolic beginning of the crisis, to which the government responded swiftly, decisively, and openly.

    But shortly after opening the information floodgates, the state abruptly closed them again—particularly as news articles began to indicate a far messier account of the government’s response to the disease. “The last couple of weeks were the most open Weibo has ever been and [offered] the most freedom many media organizations have ever enjoyed,” one Chinese Weibo user wrote on February 2. “But it looks like this has come to an end.”

    On February 5, a Chinese magazine called China Newsweek published an interview with a doctor in Wuhan, who said that physicians were told by hospital heads not to share any information at the beginning of the outbreak. At the time, he said, the only thing that doctors could do was to urge patients to wear masks.

    Various frontline reports that were later censored supported this doctor’s descriptions: “Doctors were not allowed to wear isolation gowns because that might stoke fears,” said a doctor interviewed by the weekly publication Freezing Point. The interview was later deleted.

    By January, according to Caixin, a gene sequencing laboratory in Guangzhou had discovered that the novel virus in Wuhan shared a high degree of similarity with the virus that caused the SARS outbreak in 2003; but, according to an anonymous source, Hubei’s health commission promptly demanded that the lab suspend all testing and destroy all samples. On January 6, according to the deleted Caixin article, China’s National Center for Disease Control and Prevention initiated an “internal second-degree emergency response”—but did not alert the public. Caixin’s investigation disappeared from the Chinese internet only hours after it was published.

    Among journalists and social critics in China, the 404 error code, which announces that the content on a webpage is no longer available, has become a badge of honor. “At this point, if you haven’t had a 404 under your belt, can you even call yourself a journalist?” a Chinese reporter, who requested anonymity, jokingly asked me.

    However, the crackdown on reports out of Wuhan was even more aggressive against ordinary users of social media.

    On January 24, a resident posted that nurses at a Hubei province hospital were running low on masks and protective goggles. Soon after that post was removed, another internet user reposted it and commented: “Sina employees—I’m begging you to stop deleting accounts. Weibo is an effective way to offer help. Only when we are aware of what frontline people need can we help them.”

    Only minutes later, the post was taken down. The user’s account has since vanished.

    But the real war between China’s censors and its social media users began on February 7.

    That day, a Wuhan doctor named Li Wenliang—a whistleblower who had raised alarms about the virus in late December, only to be reprimanded for “spreading rumors”—died of Covid-19.

    Within hours, his death sparked a spectacular outpouring of collective grief on Chinese social media—an outpouring that was promptly snuffed out, post by post, minute by minute. With that, grief turned to wrath, and posts demanding freedom of speech erupted across China’s social media platforms as the night went on.

    A number of posts directly challenged the party’s handling of Li’s whistleblowing and the government’s relentless suppression of the freedom of speech in China. Some Chinese social media users started to post references to the 2019 Hong Kong protests, uploading clips of “Do You Hear the People Sing?” from Les Misérables, which became a protest anthem during last year’s mass demonstrations. Even more daringly, some posted photos from the 1989 Tiananmen Square protest and massacre, one of the most taboo subjects in China.

    One image that resurfaced was of a banner from the 1989 protest that reads: “We shall not let those murderers stand tall so they will block our wind of freedom from blowing.”

    The censors frantically kept pace. In the span of a quarter hour, from 23:16 to around 23:30, over 20 million searches for information on the death of Li Wenliang were winnowed down to fewer than 2 million, according to the Hong Kong-based outlet The Initium. The #DrLiWenLiangDied topic fell from number 3 on the trending topics list to number 7 within roughly the same period.

    Since the night of February 7, whole publications have fallen to the scythe. On January 27, an opinion blog called Dajia published an article titled “50 Days into the Outbreak, The Entire Nation is Bearing the Consequence of the Death of the Media.” By February 19, the entire site was shut down, never to resurface.

    On March 10, an article about another medical whistleblower in Wuhan—another potential Li—was published and then swiftly wiped off the internet, which began yet another vast cat-and-mouse game between censors and Chinese social media users. The story, published by People, profiled a doctor, who, as she put it, had “handed out the whistle” by alerting other physicians about the emergence of a SARS-like virus in late December. The article reported that she had been scolded by hospital management for not keeping the information a secret.

    Soon after it was deleted, Chinese social media users started to recreate the article in every way imaginable: They translated it into over 10 languages; transcribed the piece in Morse code; wrote it out in ancient Chinese script; incorporated its content into a scannable QR code; and even rewrote it in Klingon—all in an effort to evade the censorship machine. All of these efforts were eradicated from the internet.

    But it’s unlikely that the masses of people who watched posts being expunged from the internet will forget how they were governed in the pandemic. On March 17, I picked up my phone, opened my Weibo account, and typed out the following sentence: “You are waiting for their apology, and they are waiting for your appreciation.” The post promptly earned me a 404 badge.

    Shawn Yuan is a Beijing-based freelance journalist and photographer. He travels between the Middle East and China to report on human rights and politics.

    #Chine #Censure #Médias_sociaux #Journalisme

  • Why healthcare workers have joined teenagers on TikTok
    https://www.lefigaro.fr/actualite-france/pourquoi-les-soignants-ont-rejoint-les-ados-sur-tik-tok-20200430

    #jerestechezmoi (“I’m staying home”)… and filming myself to music. For more than a month, the TikTok app has been a window of escape during the lockdown. And healthcare workers have joined teenagers there to unwind.

    “I had stopped using it,” explains Alexandra, a 16-year-old high school student. “But with the lockdown, my friends all signed up again. It keeps us busy.” Since its launch in 2016, the app has appealed mostly to the youngest users, who film themselves reproducing choreographies and lip-syncing songs. But the former Musical.ly has seen its audience broaden, totaling 365 million downloads since January 1 on Apple’s App Store and Google Play, according to Sensor Tower’s quarterly report.

    Sarah, a 29-year-old history researcher, admits she “signed up because there were hours to kill”; she acknowledges spending up to three or four hours at a stretch on some weekends with her housemates, reproducing the dances and other challenges posted by the platform’s users. Among these challenges, the hashtag #jerestechezmoi, launched by the app’s developers for France, already totals nearly 332 million views in a few weeks.

    This massive influx of new users does not surprise Anne Cordier, a researcher in information and communication sciences at the University of Rouen: “Mainstream social networks are almost always adopted first by the young. That is the logic of any new cultural practice: the youngest seize on it, creating a fashion effect, and in a second phase older people graft themselves onto it.” She sees the phenomenon as comparable to the evolution of other social networks, such as Facebook or Snapchat. She notes, however, a shift in the discourse: “Before the lockdown, some social networks like TikTok were rather demonized. Today, all we hear about is the wonderful opportunity these platforms offer for creating connections!”

    It is still hard to say whether this runaway success will outlast the lockdown. “I’m going to keep using it,” Alexandra tells us; she has even signed her mother up so she can “see what she does on it.” Still, “for sure, I’ll have much less time.” The researcher Anne Cordier wants to believe the impact will last: “The way some people look at social networks and their uses is going to change,” she stresses. “The fact that they have been used as a tool for sharing, for positive intergenerational fun, means it won’t be possible to act as if nothing had happened.”

    #Tik_Tok #Médias_sociaux #Confinement

  • Abortion rights: in Poland, Anja Rubik’s fight against obscurantism - Le monde bouge - Télérama.fr
    https://www.telerama.fr/monde/ivg-en-pologne,-la-lutte-danja-rubik-contre-lobscurantisme,n6611524.php

    THREATS TO ABORTION RIGHTS IN EUROPE – In Poland, AIDS, syphilis and teenage pregnancies are soaring. The cause: the ruling ultra-conservatives’ assaults on abortion access and their push to criminalize sex education. But citizens are pushing back, among them Anja Rubik, whose information campaigns have gone viral online.

    The video shows five hundred and fifty-eight thousand views on YouTube. In it, the Polish top model Anja Rubik, 36, with a passing resemblance to Debbie Harry and Uma Thurman, sits in a room that could belong to a teenager: walls covered in photos, an electric guitar in a corner. “When I think of my own experiences… I started around age 7 or 8,” she recounts. “I played with my teddy bear […]. I don’t really remember that period, apart from that bear.” What is Anja Rubik talking about, alongside the sex educator Natalia Trybus and the YouTuber Maciej Dabrowski? Masturbation. No embarrassed sniggering, no smutty innuendo: the tone of the discussion is kind and relaxed.

    The same spirit runs through #sexedpl (Sex Education Poland), the sweeping campaign she has organized in her native country. Its goal: to promote sex education, and even make it “cool,” in a Poland governed by the ultraconservative Law and Justice party (PiS), where the Church’s grip on private, and public, life is stronger than ever. The gamble, launched in 2017, was a bold one. Less than three years later, #sexedpl has become a multifaceted cultural phenomenon: online campaigns, educational videos produced with Netflix, numerous appearances at music and film festivals, and a book topping the best-seller lists.

    #Education_sexuelle #Médias_sociaux #Pologne

  • Coronavirus: the best TikTok challenges of the lockdown
    https://www.ladn.eu/media-mutants/reseaux-sociaux/confinement-challenges-tiktok

    Funny, creative and sometimes a little silly… these challenges are keeping TikTokers busy during the lockdown.

    The economy is at a standstill. TikTokers are not. All is well for Generation Z’s favorite social network. It remains among the most downloaded apps of the lockdown and is even donating 62 million euros to the fight against the Covid-19 pandemic. Above all, its 15-second videos keep making us laugh while we are shut in between four walls. Here is a small selection of challenges keeping TikTokers busy during the lockdown.

    #Tik_Tok #Médias_sociaux #Coronavirus

  • How TikTok’s algorithm generates engagement
    https://www.ladn.eu/tech-a-suivre/comment-algorithme-tiktok-engagement-viralite

    Dissecting the videos

    With each new post, TikTok, launched by the Chinese company ByteDance, dissects the video’s content to analyze several key elements: how many people appear in it, their gender, the kinds of objects visible, and the background setting. Sounds are transcribed and studied, along with data such as the video’s title and the hashtags used.
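
As a loose illustration of that dissection step, the categories the article lists could be collected into a per-video record along these lines. The field names and example values are invented; the article only names the kinds of information extracted.

```python
from dataclasses import dataclass, field

@dataclass
class VideoFeatures:
    """Key elements the article says TikTok analyzes for each new post."""
    num_people: int                  # how many people appear in the video
    genders: list                    # their gender
    objects: list                    # the kinds of objects visible
    setting: str                     # the background setting
    transcript: str                  # sounds, transcribed and studied
    title: str                       # the video's title
    hashtags: list = field(default_factory=list)  # hashtags used

# A hypothetical clip, tagged the way the article describes:
clip = VideoFeatures(
    num_people=2,
    genders=["female", "male"],
    objects=["guitar", "dog"],
    setting="bedroom",
    transcript="lip-synced pop chorus",
    title="duo challenge",
    hashtags=["#jerestechezmoi"],
)
```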

    “Artificial intelligence powers all of ByteDance’s content platforms. We build intelligent machines capable of understanding and analyzing text, images and videos using natural language processing and computer-vision technology. This lets us serve users the content they find most interesting and lets creators share important moments of daily life with a global audience,” ByteDance explains.

    The more viral it is, the more points it scores

    Once the video is properly tagged, the social network tests its virality on a panel of users. Based on the reactions it gets, TikTok assigns each piece of content points on a scale that weighs several criteria: rewatch and completion rates, and the numbers of shares, comments and likes. Depending on the score, the video is then pushed out to a wider audience.
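
A minimal sketch of that scoring idea, assuming only the criteria listed above; the weights and the normalization are invented for illustration and are not TikTok’s actual formula.

```python
def engagement_score(stats):
    """Combine the criteria from the article (rewatch and completion rates,
    shares, comments, likes) into one score. Weights are made up."""
    weights = {
        "completion_rate": 4.0,  # fraction of the video watched, 0..1
        "rewatch_rate": 3.0,     # fraction of viewers who replayed it, 0..1
        "shares": 2.0,           # raw counts, normalized by panel size below
        "comments": 1.5,
        "likes": 1.0,
    }
    panel_size = max(stats.get("panel_size", 1), 1)
    score = 0.0
    for key, weight in weights.items():
        value = stats.get(key, 0.0)
        if key in ("shares", "comments", "likes"):
            value /= panel_size  # make counts comparable across panel sizes
        score += weight * value
    return score

# Reactions of a hypothetical 1,000-user test panel:
video = {"panel_size": 1000, "completion_rate": 0.8, "rewatch_rate": 0.2,
         "shares": 50, "comments": 30, "likes": 400}
score = engagement_score(video)
```

A video scoring above some threshold would then be served to a wider audience, repeating the test at larger scale.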
    Knowing you better than you know yourself

    It is a bit like when you start dating someone and wait to know them better before bringing up your taste for pottery, your latest trip to Myanmar, or any other personality trait likely to hit the mark.
    When a new user opens the app, TikTok serves them high-engagement content, without their even having to log in. The network thus hooks the new user and analyzes their behavior (what they like, share…) and their connections (geolocation…) to build a profile that is refined with each viewing.

    A winning equation, then: the more content you consume, the better it matches your expectations. The seduction playbook is well honed.

    #Tik_Tok #Médias_sociaux #Algorithme #Engagement

  • Serge Tisseron, psychiatrist: “We are physically confined, but relationally opened up”
    https://www.lemonde.fr/idees/article/2020/04/11/nous-sommes-physiquement-confines-mais-desenclaves-relationnellement_6036286

    In an interview with Le Monde, the psychiatrist Serge Tisseron analyzes the new relationship to others that digital technologies make possible during the lockdown. Far from dehumanizing us, he argues, they do quite the opposite.

    On this front as on others, this crisis is a catastrophic accelerator of social inequality. For families that have neither a tablet nor a computer (in some households, only the father has a mobile phone), or that lack a printer, managing school at home is all but impossible.

    The students, meanwhile, collaborate spontaneously on their own networks.

    The national education system should seize this rupture to tell its teachers: above all, do not resume your classes as before! Exploit the resources of the Internet, work collaboratively, encourage tutoring between students, and favor interaction whenever it is possible, in person or online!

    The border that twentieth-century culture tended to draw between the real world and the virtual world is fading. Social networks have been much criticized, and sometimes rightly: fake news, online harassment, the round-the-clock news cycle, the attention economy; all of that is indeed problematic. But at a moment when we are physically separated from one another, we are discovering that they also offer enormous advantages for staying connected to one another.

    In 2017, the United Nations Children’s Fund (Unicef), in a report on “Children in a Digital World,” stressed for its part that the use of these technologies has essentially positive effects for children. Without ignoring the possible harms of life online, it noted three benefits of social networks: they increase the feeling of being connected to peers, reduce the sense of isolation, and strengthen existing friendships.

    Today that finding holds for all of us: we are physically confined, but relationally opened up. After this collective experience, it will no longer be possible to talk about social networks as we did before.

    Many of us are discovering this now, spending on our screens an amount of time we would ordinarily have considered madness. It will probably be hard, afterwards, to fall back on the idea that the only way to manage screens is to limit the time spent on them. It is becoming obvious that we must learn not only to do without them, but also to use them better.

    Smart speakers, already making their way into our homes, will be given ever more realistic voices, and they will speak to us with a growing degree of social and emotional intelligence.

    My fear is that these chatbots will become, for a certain number of connected but isolated people, substitutes for human relationships. From that point of view, the gigantic conversational activity on the Internet that the lockdown is pushing us toward is a good thing. Many people are discovering new forms of communication for staying in touch with loved ones, even when they cannot be physically near them.

    That puts talking machines back in their rightful place, which is to render us small services, not to stand in for human interlocutors. It reminds us that the use of machines transforms humans and the relations between them, and that we must know how to set limits on them.

    In the space of a few days, the expression “Take care of yourself” has become the most widespread greeting in digital message exchanges. What do you make of that?

    It is magnificent, because it conveys the idea of reciprocity: I do not tell someone else to take care of themselves unless I also take care of myself.

    It underlines that each of us carries value, that each person is indispensable, and that we need everyone. This formula puts every human being at the center of things. Let’s keep it!

    #Coronavirus #Usages_confinement #Médias_sociaux #Communication

  • The globalization of fake news and its effects on health in Africa: the example of chloroquine
    https://theconversation.com/la-mondialisation-des-infox-et-ses-effets-sur-la-sante-en-afrique-l

    In Cameroon and in every country surveyed, demand for chloroquine rose in pharmacies as soon as the first Covid-19 case was announced. In Benin as in Burkina Faso, chloroquine is available from medicine sellers in the informal market or on the street, who say they have sold out their stock in recent days.

    Fake news adds a drug risk to the infectious risk

    In Africa, the hope for a treatment or vaccine that would definitively end the pandemic is very strong; the popularity of medicines and their aura of efficacy and modernity partly explain this appeal.

    The circulation of pro-chloroquine information on social networks extends into the drug’s circulation on the informal market, where products are not controlled and may be expired or “substandard and falsified” (in the WHO’s categorization), and where sales come without a doctor’s or pharmacist’s advice on their toxicity.

    Even before any marketing authorization for chloroquine as a Covid-19 treatment, our exploratory surveys in Senegal, Benin, Cameroon and Burkina Faso show that people can already obtain it, at their own risk, through the informal circuit. This situation adds to the infectious risk a drug risk that is not, a priori, addressed by services focused on the pandemic response and could slip “under the radar” in a time of health crisis.

    This example shows that the damage caused by the infodemic is not solely the work of fake news, false information entirely without foundation; it can also result from information that later turns out to be accurate. If therapeutic trials eventually validate chloroquine (which, after debate, was added to the protocols initiated by Reacting) or other Covid-19 treatments, an official channel for access to the products and a system for preventing and controlling drug risks will be defined, neither of which exists today. The example also shows the porousness between formal and informal sectors in the field of health and in the field of media, up to the level of the international media.

    #Infox #sante_publique #Afrique #Médias_sociaux

  • Instagram, Facebook: young people and social networks, the final curtain?
    https://www.femina.fr/article/instagram-facebook-jeunes-et-reseaux-sociaux-clap-de-fin

    True, they are leaving Facebook, “a thing for old people” (read: “a thing for parents”) according to Gabin, 17. A Diplomeo study published in 2019 states it as fact: the social network with 2.5 billion users worldwide, 37 million of them in France, no longer attracts teenagers. Nearly 17% of young French people say they have deleted Facebook from their smartphone: 22% of 16-to-18-year-olds and 15% of 19-to-25-year-olds. More surprisingly, they also seem to be shunning their smartphones and even leaving Instagram and Snapchat. “I don’t believe it!” retorts Gabin’s mother. And yet... Young people do not form a uniform bloc of “digital natives” glued to their phones. Some are indeed disconnecting, and others refuse to be overconnected.

    The first signs of weariness

    A study published in 2018 in the British daily The Guardian had already confirmed this trend, even noting that 63% of British secondary school students would be happy if social networks had never been invented! Among them, Amanuel, a 16-year-old student, explained: “On Instagram, like most people, I presented a dishonest version of myself.” And Sharp, 13: “I’d rather not know what other people think of me.” And in France? “I’d rather spend my time in the real world than on my phone,” says Khady, 19, who mentions in passing that she was cyberbullied in middle school. “Inevitably, that inoculated me...” Young people rarely declare themselves against social networks without a trigger. Sometimes the realization takes time. When the journalists Céline Cabourg and Boris Manenti met hundreds of teenagers for their investigative book Portables : la face cachée des ados (Flammarion), the teens were asking themselves less about a possible disconnection than about their hyperconnected habits. “But that was in 2016,” Boris Manenti qualifies. Since then, a survey of 9,000 Internet users by the research institute Ampere Analysis has confirmed that 18-to-24-year-olds changed their attitude toward social media considerably in a short time. While 66% of that age group agreed in 2016 with the statement “social media is important to me,” only 57% did in 2018.

    This saturation has also been observed by Anne Cordier, a lecturer in information and communication sciences and the author of Grandir connectés (C & F): “For seven years I have been following about fifteen young people, now aged 24 and mostly from disadvantaged backgrounds. Since they were 17, all of them have mentioned the flow of information bombarding them, difficulty concentrating, and the desire to reconnect with ties they consider more authentic. They began by setting up very occasional disconnection rituals, such as ‘forgetting’ the phone in another room while they work, or turning it face down so they can have some peace and not be disturbed by notification alerts. One young woman told me: ‘It’s like ice cream. When you eat too much of it and you’ve tried every flavor, you verge on indigestion!’”

    The researcher Mary Jane Kwok Choon shows that students who disconnect do all end up returning to social networks after five to fourteen days, but always more “responsible.” “For example, they ‘clean up’ their profile on Facebook or elsewhere, make sure they are not tagged in photos, post less, or ‘like’ other people’s statuses less,” explains Anne Cordier, for whom absolute disconnection is, at bottom, an adult fantasy. Lola, 18, who regularly hosts digital-detox parties at her place to liven up the atmosphere, has understood this well: “We turn off our phones... but only after warning our parents, who might worry!” she says with a smile. According to an American report**, four in ten teenagers fear that their father or mother is “addicted” to their phone!

    #Médias_sociaux #Culture_numérique #Anne_Cordier #Adolescents

    • At 20 I hated forums and refused to have an email address. Truth is, the underlying equation of young-therefore-must-love-tech obviously has no basis, apart from tech thinking itself young because it is always newer.

    • I remember a Télérama cover story on “young people”: the only things pictured were electronic devices. I was still more or less young at the time, and I couldn’t understand why there were no beers or condoms on their neophile old fogeys’ cover.

      Besides, Sherry Turkle had already said plenty about young people and social networks: saturation, image anxiety, disconnection; it was all there in 2012.
      http://blog.ecologie-politique.eu/post/Seuls-ensemble

  • China’s TikTok is a hit in India
    https://www.lemonde.fr/international/article/2020/03/06/le-chinois-tiktok-fait-un-tabac-en-inde_6031994_3210.html

    In a strange reversal, India has become TikTok’s world champion in less than a year. The Chinese social network, which is to video what Snapchat is to photography, had gotten off to a rocky start in the subcontinent. Accused of carrying images that were either pornographic or likely to encourage young people to take life-threatening risks, TikTok was banned in the south in spring 2019 by the Madras court (Tamil Nadu). In the wake of that ruling, the Modi government demanded that ByteDance, owner of the controversial app, police its content, failing which TikTok would be barred from India.

    ByteDance complied and quickly secured free access to this gigantic market of 1.35 billion people. Today the short-video sharing platform (clips generally last a few seconds) counts 277 million subscribers in the country, already more than half the number of WhatsApp users there and more than triple the number of Instagram sign-ups.

    #TikTok #Médias_sociaux

  • How TikTok Holds Our #Attention | Jia Tolentino, The New Yorker (30/09/2019)
    https://www.newyorker.com/magazine/2019/09/30/how-tiktok-holds-our-attention

    #ByteDance has more than a dozen products, a number of which depend on A.I. recommendation engines. These platforms collect data that the company aggregates and uses to refine its algorithms, which the company then uses to refine its platforms; rinse, repeat. This feedback loop, called the “virtuous cycle of A.I.,” is what each TikTok user experiences in miniature. The company would not comment on the details of its recommendation algorithm, but ByteDance has touted its research into computer vision, a process that involves extracting and classifying visual information; on the Web site of its research lab, the company lists “short video recommendation system” among the applications of the computer-vision technology that it’s developing. Although TikTok’s algorithm likely relies in part, as other systems do, on user history and video-engagement patterns, the app seems remarkably attuned to a person’s unarticulated interests. Some social algorithms are like bossy waiters: they solicit your preferences and then recommend a menu. #TikTok orders you dinner by watching you look at food.
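
The “virtuous cycle” the paragraph describes can be sketched as a toy loop: recommend, observe engagement, update the model, repeat. Everything here is illustrative; real systems learn from the kinds of features discussed above rather than a per-video weight table.

```python
import random

random.seed(0)  # deterministic toy run

def recommend(model, videos, k=3):
    """Serve the k videos the current model scores highest."""
    return sorted(videos, key=lambda v: model.get(v, 0.0), reverse=True)[:k]

def observe_engagement(shown):
    """Stand-in for real user behavior: a random watch-through fraction."""
    return {v: random.random() for v in shown}

def update(model, feedback, lr=0.5):
    """Nudge each shown video's score toward its observed engagement."""
    for v, engagement in feedback.items():
        old = model.get(v, 0.0)
        model[v] = old + lr * (engagement - old)
    return model

videos = [f"video_{i}" for i in range(10)]
model = {}
for _ in range(20):  # each pass is one turn of the cycle
    shown = recommend(model, videos)
    model = update(model, observe_engagement(shown))
```

Note the loop’s built-in bias: once a few videos earn engagement, they keep being shown, which is one reason a separate test-panel stage matters for surfacing new content.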

    A very thorough article on the social network riding high at the moment. #médias_sociaux