• The National-Security Case for Fixing Social Media | The New Yorker
    https://www.newyorker.com/tech/annals-of-technology/the-national-security-case-for-fixing-social-media

    On Wednesday, July 15th, shortly after 3 P.M., the Twitter accounts of Barack Obama, Joe Biden, Jeff Bezos, Bill Gates, Elon Musk, Warren Buffett, Michael Bloomberg, Kanye West, and other politicians and celebrities began behaving strangely. More or less simultaneously, they advised their followers—around two hundred and fifty million people, in total—to send Bitcoin contributions to mysterious addresses. Twitter’s engineers were baffled; there was no indication that the company’s network had been breached, and yet the tweets were clearly unauthorized. They had no choice but to switch off around a hundred and fifty thousand verified accounts, held by notable people and institutions, until the problem could be identified and fixed. Many government agencies have come to rely on Twitter for public-service messages; among the disabled accounts was the National Weather Service, which found that it couldn’t send tweets to warn of a tornado in central Illinois. About two weeks later, a seventeen-year-old hacker from Florida, who enjoyed breaking into social-media accounts for fun and occasional profit, was arrested as the mastermind of the hack. The F.B.I. is currently investigating his sixteen-year-old sidekick.

    In its narrowest sense, this immense security breach, orchestrated by teen-agers, underscores the vulnerability of Twitter and other social-media platforms. More broadly, it’s a telling sign of the times. We’ve entered a world in which our national well-being depends not just on the government but also on the private companies through which we lead our digital lives. It’s easy to imagine what big-time criminals, foreign adversaries, or power-grabbing politicians could have done with the access the teen-agers secured. In 2013, the stock market briefly plunged after a tweet sent from the hacked account of the Associated Press reported that President Barack Obama had been injured in an explosion at the White House; earlier this year, hundreds of armed, self-proclaimed militiamen converged on Gettysburg, Pennsylvania, after a single Facebook page promoted the fake story that Antifa protesters planned to burn American flags there.

    When we think of national security, we imagine concrete threats—Iranian gunboats, say, or North Korean missiles. We spend a lot of money preparing to meet those kinds of dangers. And yet it’s online disinformation that, right now, poses an ongoing threat to our country; it’s already damaging our political system and undermining our public health. For the most part, we stand defenseless. We worry that regulating the flow of online information might violate the principle of free speech. Because foreign disinformation played a role in the election of our current President, it has become a partisan issue, and so our politicians are paralyzed. We enjoy the products made by the tech companies, and so are reluctant to regulate their industry; we’re also uncertain whether there’s anything we can do about the problem—maybe the price of being online is fake news. The result is a peculiar mixture of apprehension and inaction. We live with the constant threat of disinformation and foreign meddling. In the uneasy days after a divisive Presidential election, we feel electricity in the air and wait for lightning to strike.

    In recent years, we’ve learned a lot about what makes a disinformation campaign effective. Disinformation works best when it’s consistent with an audience’s preconceptions; a fake story that’s dismissed as incredible by one person can appear quite plausible to another who’s predisposed to believe in it. It’s for this reason that, while foreign governments may be capable of more concerted campaigns, American disinformers are especially dangerous: they have their fingers on the pulse of our social and political divisions.

    As cyber wrongdoing has piled up, however, it has shifted the balance of responsibility between government and the private sector. The federal government used to be solely responsible for what the Constitution calls our “common defense.” Yet as private companies amass more data about us, and serve increasingly as the main forum for civic and business life, their weaknesses become more consequential. Even in the heyday of General Motors, a mishap at that company was unlikely to affect our national well-being. Today, a hack at Google, Facebook, Microsoft, Visa, or any of a number of tech companies could derail everyday life, or even compromise public safety, in fundamental ways.

    Because of the very structure of the Internet, no Western nation has yet found a way to stop, or even deter, malicious foreign cyber activity. It’s almost always impossible to know quickly and with certainty if a foreign government is behind a disinformation campaign, ransomware implant, or data theft; with attribution uncertain, the government’s hands are tied. China and other authoritarian governments have solved this problem by monitoring every online user and blocking content they dislike; that approach is unthinkable here. In fact, any regulation meant to thwart online disinformation risks seeming like a step down the road to authoritarianism or a threat to freedom of speech. For good reason, we don’t like the idea of anyone in the private sector controlling what we read, see, and hear. But allowing companies to profit from manipulating what we view online, without regard for its truthfulness or the consequences of its viral dissemination, is also problematic. It seems as though we are hemmed in on all sides, by our enemies, our technologies, our principles, and the law—that we have no choice but to learn to live with disinformation, and with the slow erosion of our public life.

    We might have more maneuvering room than we think. The very fact that the disinformation crisis has so many elements—legal, technological, and social—means that we have multiple tools with which to address it. We can tackle the problem in parts, and make progress. An improvement here, an improvement there. We can’t cure this chronic disease, but we can manage it.

    Online, the regulation of speech is governed by Section 230 of the Communications Decency Act—a law, enacted in 1996, that was designed to allow the nascent Internet to flourish without legal entanglements. The statute gives every Internet provider or user a shield against liability for the posting or transmission of user-generated wrongful content. As Anna Wiener wrote earlier this year, Section 230 was well-intentioned at the time of its adoption, when all Internet companies were underdogs. But today that is no longer true, and analysts and politicians on both the right and the left are beginning to think, for different reasons, that the law could be usefully amended.

    Technological progress is possible, too, and there are signs that, after years of resistance, social-media platforms are finally taking meaningful action. In recent months, Facebook, Twitter, and other platforms have become more aggressive about removing accounts that appear inauthentic, or that promote violence or lawbreaking; they have also moved faster to block accounts that spread disinformation about the coronavirus or voting, or that advance abhorrent political views, such as Holocaust denial. The next logical step is to decrease the power of virality. In 2019, after a series of lynchings in India was organized through the chat program WhatsApp, Facebook limited the mass forwarding of texts on that platform; a couple of months ago, it implemented similar changes in the Messenger app embedded in Facebook itself. As false reports of ballot fraud became increasingly elaborate in the days before and after Election Day, the major social-media platforms did what would have been unthinkable a year ago, labelling messages from the President of the United States as misleading. Twitter made it slightly more difficult to forward tweets containing disinformation; an alert now warns the user about retweeting content that’s been flagged as untruthful. Additional changes of this kind, combined with more transparency about the algorithms the platforms use to curate content, could make a meaningful difference in how disinformation spreads online. Congress is considering requiring such transparency.

    #Désinformation #Fake_news #Propositions_légales #Propositions_techniques #Médias_sociaux

  • Was E-mail a Mistake? | The New Yorker
    https://www.newyorker.com/tech/annals-of-technology/was-e-mail-a-mistake

    The problem is that some of the computers might crash. If that happens, the rest of the group will end up waiting forever to hear from peers that are no longer operating. In a synchronous system, this issue is easily sidestepped: if you don’t hear from a machine fast enough, you can assume that it has crashed and ignore it going forward. In asynchronous systems, these failures are more problematic. It’s difficult to differentiate between a computer that’s crashed and one that’s delayed. At first, to the engineers who studied this problem, it seemed obvious that, instead of waiting to learn the preference of every machine, one could just wait to hear from most of them. And yet, to the surprise of many people in the field, in a 1985 paper, three computer scientists—Michael Fischer, Nancy Lynch (my doctoral adviser), and Michael Paterson—proved, through a virtuosic display of mathematical logic, that, in an asynchronous system, no distributed algorithm could guarantee that a consensus would be reached, even if only a single computer crashed.
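    The distinction the theorists draw can be made concrete with a toy vote-collection routine: under a synchrony assumption, any peer that stays silent past a deadline can safely be written off as crashed, whereas in an asynchronous system no deadline is ever justified, so a slow peer and a dead one look the same. The Python sketch below is illustrative only; the peer delays, the one-second timeout, and the vote values are invented for the example.

        import queue
        import threading
        import time

        def peer(vote, delay, out):
            """Simulated peer: sends its vote after some network or processing delay."""
            time.sleep(delay)
            out.put(vote)

        def collect_votes(delays, votes, timeout):
            """Synchronous-model collection: a peer silent past `timeout` is treated as
            crashed and ignored. In an asynchronous model no finite timeout is valid,
            so a merely slow peer cannot be told apart from a dead one."""
            out = queue.Queue()
            for v, d in zip(votes, delays):
                threading.Thread(target=peer, args=(v, d, out), daemon=True).start()
            received = []
            deadline = time.monotonic() + timeout
            while len(received) < len(votes):
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    break  # past the synchrony bound: remaining peers count as crashed
                try:
                    received.append(out.get(timeout=remaining))
                except queue.Empty:
                    break
            return received

        if __name__ == "__main__":
            # The third peer answers long after the deadline; under the synchrony
            # assumption it is declared crashed and the others still reach a decision.
            print(collect_votes(delays=[0.1, 0.2, 5.0], votes=["commit", "commit", "abort"], timeout=1.0))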

    A major implication of research into distributed systems is that, without synchrony, such systems are just too hard for the average programmer to tame. It turns out that asynchrony makes coördination so complicated that it’s almost always worth paying the price required to introduce at least some synchronization. In fact, the fight against asynchrony has played a crucial role in the rise of the Internet age, enabling, among other innovations, huge data centers run by such companies as Amazon, Facebook, and Google, and fault-tolerant distributed databases that reliably process millions of credit-card transactions each day. In 2013, Leslie Lamport, a major figure in the field of distributed systems, was awarded the A. M. Turing Award—the highest distinction in computer science—for his work on algorithms that help synchronize distributed systems. It’s an irony in the history of technology that the development of synchronous distributed computer systems has been used to create a communication style in which we are always out of synch.

    Anyone who works in a standard office environment has firsthand experience with the problems that followed the enthusiastic embrace of asynchronous communication. As the distributed-system theorists discovered, shifting away from synchronous interaction makes coördination more complex. The dream of replacing the quick phone call with an even quicker e-mail message didn’t come to fruition; instead, what once could have been resolved in a few minutes on the phone now takes a dozen back-and-forth messages to sort out. With larger groups of people, this increased complexity becomes even more notable. Is an unresponsive colleague just delayed, or is she completely checked out? When has consensus been reached in a group e-mail exchange? Are you, the e-mail recipient, required to respond, or can you stay silent without holding up the decision-making process? Was your point properly understood, or do you now need to clarify with a follow-up message? Office workers pondering these puzzles—the real-life analogues of the theory of distributed systems—now dedicate an increasing amount of time to managing a growing number of never-ending interactions.

    Last year, the software company RescueTime gathered and aggregated anonymized computer-usage logs from tens of thousands of people. When its data scientists crunched the numbers, they found that, on average, users were checking e-mail or instant-messenger services like Slack once every six minutes. Not long before, a team led by Gloria Mark, the U.C. Irvine professor, had installed similar logging software on the computers of employees at a large corporation; the study found that the employees checked their in-boxes an average of seventy-seven times a day. Although we shifted toward asynchronous communication so that we could stop wasting time playing phone tag or arranging meetings, communicating in the workplace had become more onerous than it used to be. Work has become something we do in the small slivers of time that remain amid our Sisyphean skirmishes with our in-boxes.

    There’s nothing intrinsically bad about e-mail as a tool. In situations where asynchronous communication is clearly preferable—broadcasting an announcement, say, or delivering a document—e-mails are superior to messengered printouts. The difficulties start when we try to undertake collaborative projects—planning events, developing strategies—asynchronously. In those cases, communication becomes drawn out, even interminable. Both workplace experience and the theory of distributed systems show that, for non-trivial coördination, synchrony usually works better. This doesn’t mean that we should turn back the clock, re-creating the mid-century workplace, with its endlessly ringing phones. The right lesson to draw from distributed-system theory is that useful synchrony often requires structure. For computer scientists, this structure takes the form of smart distributed algorithms. For managers, it takes the form of smarter business processes.

    #Mail #Communication_asynchrone #Management #Culture_numérique

  • The Hidden Costs of Automated Thinking | The New Yorker
    https://www.newyorker.com/tech/annals-of-technology/the-hidden-costs-of-automated-thinking

    Like many medications, the wakefulness drug modafinil, which is marketed under the trade name Provigil, comes with a small, tightly folded paper pamphlet. For the most part, its contents—lists of instructions and precautions, a diagram of the drug’s molecular structure—make for anodyne reading. The subsection called “Mechanism of Action,” however, contains a sentence that might induce sleeplessness by itself: “The mechanism(s) through which modafinil promotes wakefulness is unknown.” Provigil (...)

    #algorithme #solutionnisme

    • This approach to discovery—answers first, explanations later—accrues what I call intellectual debt. It’s possible to discover what works without knowing why it works, and then to put that insight to use immediately, assuming that the underlying mechanism will be figured out later. In some cases, we pay off this intellectual debt quickly. But, in others, we let it compound, relying, for decades, on knowledge that’s not fully known.

  • The Fight for the Future of YouTube | The New Yorker
    https://www.newyorker.com/tech/annals-of-technology/the-fight-for-the-future-of-youtube

    Earlier this year, executives at YouTube began mulling, once again, the problem of online speech. On grounds of freedom of expression and ideological neutrality, the platform has long allowed users to upload videos endorsing noxious ideas, from conspiracy theories to neo-Nazism. Now it wanted to reverse course. “There are no sacred cows,” Susan Wojcicki, the C.E.O. of YouTube, reportedly told her team. Wojcicki had two competing goals: she wanted to avoid accusations of ideological bias while also affirming her company’s values. In the course of the spring, YouTube drafted a new policy that would ban videos trafficking in historical “denialism” (of the Holocaust, 9/11, Sandy Hook) and “supremacist” views (lauding the “white race,” arguing that men were intellectually superior to women). YouTube planned to roll out its new policy as early as June. In May, meanwhile, it started preparing for Pride Month, turning its red logo rainbow-colored and promoting popular L.G.B.T.Q. video producers on Instagram.

    Francesca Tripodi, a media scholar at James Madison University, has studied how right-wing conspiracy theorists perpetuate false ideas online. Essentially, they find unfilled rabbit holes and then create content to fill them. “When there is limited or no metadata matching a particular topic,” she told a Senate committee in April, “it is easy to coördinate around keywords to guarantee the kind of information Google will return.” Political provocateurs can take advantage of data vacuums to increase the likelihood that legitimate news clips will be followed by their videos. And, because controversial or outlandish videos tend to be riveting, even for those who dislike them, they can register as “engaging” to a recommendation system, which would surface them more often. The many automated systems within a social platform can be co-opted and made to work at cross purposes.

    Technological solutions are appealing, in part, because they are relatively unobtrusive. Programmers like the idea of solving thorny problems elegantly, behind the scenes. For users, meanwhile, the value of social-media platforms lies partly in their appearance of democratic openness. It’s nice to imagine that the content is made by the people, for the people, and that popularity flows from the grass roots.

    In fact, the apparent democratic neutrality of social-media platforms has always been shaped by algorithms and managers. In its early days, YouTube staffers often cultivated popularity by hand, choosing trending videos to highlight on its home page; if the site gave a leg up to a promising YouTuber, that YouTuber’s audience grew. By spotlighting its most appealing users, the platform attracted new ones. It also shaped its identity: by featuring some kinds of content more than others, the company showed YouTubers what kind of videos it was willing to boost. “They had to be super family friendly, not copyright-infringing, and, at the same time, compelling,” Schaffer recalled, of the highlighted videos.

    Today, YouTube employs scores of “partner managers,” who actively court and promote celebrities, musicians, and gamers—meeting with individual video producers to answer questions about how they can reach bigger audiences.

    Last year, YouTube paid forty-seven ambassadors to produce socially conscious videos and attend workshops. The program’s budget, of around five million dollars—it also helps fund school programs designed to improve students’ critical-thinking skills when they are confronted with emotionally charged videos—is a tiny sum compared to the hundreds of millions that the company reportedly spends on YouTube Originals, its entertainment-production arm. Still, one YouTube representative told me, “We saw hundreds of millions of views on ambassadors’ videos last year—hundreds of thousands of hours of watch time.” Most people encountered the Creators for Change clips as automated advertisements before other videos.

    On a channel called AsapScience, Gregory Brown, a former high-school teacher, and his boyfriend, Mitchell Moffit, make animated clips about science that affects their viewers’ everyday lives; their most successful videos address topics such as the science of coffee or masturbation. They used their Creators for Change dollars to produce a video about the scientifically measurable effects of racism, featuring the Black Lives Matter activist DeRay Mckesson. While the average AsapScience video takes a week to make, the video about racism had taken seven or eight months: the level of bad faith and misinformation surrounding the topic, Brown said, demanded extra precision. “You need to explain the study, explain the parameters, and explain the result so that people can’t argue against it,” he said. “And that doesn’t make the video as interesting, and that’s a challenge.” (Toxic content proliferates, in part, because it is comparatively easy and cheap to make; it can shirk the burden of being true.)

    One way to make counterspeech more effective is to dampen the speech that it aims to counter. In March, after a video of a white-supremacist mass shooting at a mosque in Christchurch, New Zealand, went viral, Hunter Walk, a former YouTube executive, tweeted that the company should protect “freedom of speech” but not “freedom of reach.” He suggested that YouTube could suppress toxic videos by delisting them as candidates for its recommendation engine—in essence, he wrote, this would “shadowban” them. (Shadow-banning is so-called because a user might not know that his reach has been curtailed, and because the ban effectively pushes undesirable users into the “shadows” of an online space.) Ideally, people who make such shadow-banned videos could grow frustrated by their limited audiences and change their ways; videos, Walk explained, could be shadow-banned if they were linked to by a significant number of far-right Web havens, such as 8chan and Gab. (Walk’s tweets, which are set to auto-delete, have since disappeared.)

    Shadow-banning is an age-old moderation tool: the owners of Internet discussion forums have long used it to keep spammers and harassers from bothering other users. On big social-media platforms, however, this kind of moderation doesn’t necessarily focus on individuals; instead, it affects the way that different kinds of content surface algorithmically. YouTube has published a lengthy list of guidelines that its army of raters can use to give some types of content—clips that contain “extreme gore or violence, without a beneficial purpose,” for example, or that advocate hateful ideas expressed in an “emotional,” “polite,” or even “academic-sounding” way—a low rating. YouTube’s A.I. learns from the ratings to make objectionable videos less likely to appear in its automated recommendations. Individual users won’t necessarily know how their videos have been affected. The ambiguities generated by this system have led some to argue that political shadow-banning is taking place. President Trump and congressional Republicans, in particular, are alarmed by the idea that some version of the practice could be widely employed against conservatives. In April, Ted Cruz held a Senate subcommittee hearing called “Stifling Free Speech: Technological Censorship and the Public Discourse.” In his remarks, he threatened the platforms with regulation; he also brought in witnesses who accused them of liberal bias. (YouTube denies that its raters evaluate recommendations along political lines, and most experts agree that there is no evidence for such a bias.)
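    The moderation pipeline described above, in which raters’ judgments train a model that quietly lowers a video’s chance of being recommended rather than removing it, can be sketched as a penalty applied at ranking time. The Python sketch below is a hypothetical illustration: the field names, the penalty weight, and the scores are assumptions, and YouTube’s actual ranking system is not public.

        from dataclasses import dataclass

        @dataclass
        class Video:
            title: str
            engagement: float  # predicted watch-time / click signal, 0..1
            borderline: float  # rater-trained "borderline content" score, 0..1

        def ranking_score(v: Video, penalty_weight: float = 0.8) -> float:
            """Engagement drives the score, but a high borderline score suppresses it.
            The video stays on the platform; it simply stops being surfaced."""
            return v.engagement * (1.0 - penalty_weight * v.borderline)

        videos = [
            Video("cooking tutorial", engagement=0.6, borderline=0.0),
            Video("outrage compilation", engagement=0.9, borderline=0.95),
        ]
        for v in sorted(videos, key=ranking_score, reverse=True):
            print(f"{v.title}: {ranking_score(v):.2f}")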

    Engineers at YouTube and other companies are hesitant to detail their algorithmic tweaks for many reasons; among them is the fact that obscure algorithms are harder to exploit. But Serge Abiteboul, a computer-science professor who was tasked by the French government to advise legislators on online hate speech, argues that verifiable solutions are preferable to hidden ones. YouTube has claimed that, since tweaking its systems in January, it has reduced the number of views for recommended videos containing borderline content and harmful misinformation by half. Without transparency and oversight, however, it’s impossible for independent observers to confirm that drop. “Any supervision that’s accepted by society would be better than regulation done in an opaque manner, by the platforms, themselves, alone,” Abiteboul said.

    The company featured videos it liked, banned others outright, and kept borderline videos off the home page. Still, it allowed some toxic speech to lurk in the corners. “We thought, if you just quarantine the borderline stuff, it doesn’t spill over to the decent people,” he recalled. “And, even if it did, it seemed like there were enough people who would just immediately recognize it was wrong, and it would be O.K.” The events of the past few years have convinced Schaffer that this was an error. The increasing efficiency of the recommendation system drew toxic content into the light in ways that YouTube’s early policymakers hadn’t anticipated. In the end, borderline content changed the tenor and effect of the platform as a whole. “Our underlying premises were flawed,” Schaffer said. “We don’t need YouTube to tell us these people exist. And counterspeech is not a fair burden. Bullshit is infinitely more difficult to combat than it is to spread. YouTube should have course-corrected a long time ago.”

    Some experts point out that algorithmic tweaks and counterspeech don’t change the basic structure of YouTube—a structure that encourages the mass uploading of videos from unvetted sources. It’s possible that this structure is fundamentally incompatible with a healthy civic discourse.

    There are commercial reasons, it turns out, for fighting hate speech: according to a survey by the Anti-Defamation League, fifty-three per cent of Americans reported experiencing online hate or harassment in 2018—rates of bigoted harassment were highest among people who identified as L.G.B.T.Q.—and, in response, many spent less time online or deleted their apps. A study released last year, by Google and Stanford University, defined toxic speech as a “rude, disrespectful, or unreasonable comment that is likely to make you leave a discussion.” As part of the Creators for Change program, YouTube has drawn up lesson plans for teachers which encourage students to “use video to find your voice and bring people together.” Teen-agers posting videos disputing toxic ideas are engaged users, too.

    I asked YouTube’s representatives why they didn’t use the Redirect Method to serve Creators for Change videos to people who search for hate speech. If they valued what their ambassadors had to say, why wouldn’t they disseminate those messages as effectively as possible? A representative explained that YouTube doesn’t want to “pick winners.” I brought that message back to Libby Hemphill, the computer-science professor. “I wish they would recognize that they already do pick winners,” she said. “Algorithms make decisions we teach them to make, even deep-learning algorithms. They should pick different winners on purpose.” Schaffer suggested that YouTube’s insistence on the appearance of neutrality is “a kind of Stockholm syndrome. I think they’re afraid of upsetting their big creators, and it has interfered with their ability to be aggressive about implementing their values.”

    Brown, for his part, wanted the platform to choose a point of view. But, he told me, “If they make decisions about who they’re going to prop up in the algorithm, and make it more clear, I think they would lose money. I think they might lose power.” He paused. “That’s a big test for these companies right now. How are they going to go down in history?”

    #YouTube #Modération #Régulation #Algorithme

  • Will California’s New Bot Law Strengthen Democracy? | The New Yorker
    https://www.newyorker.com/tech/annals-of-technology/will-californias-new-bot-law-strengthen-democracy

    A very interesting law takes effect in California today. We will see how the requirement that a Twitter or Facebook account declare its bot nature plays out in practice...

    When you ask experts how bots influence politics—that is, what specifically these bits of computer code that purport to be human can accomplish during an election—they will give you a list: bots can smear the opposition through personal attacks; they can exaggerate voters’ fears and anger by repeating short simple slogans; they can overstate popularity; they can derail conversations and draw attention to symbolic and ultimately meaningless ideas; they can spread false narratives. In other words, they are an especially useful tool, considering how politics is played today.

    On July 1st, California became the first state in the nation to try to reduce the power of bots by requiring that they reveal their “artificial identity” when they are used to sell a product or influence a voter. Violators could face fines under state statutes related to unfair competition. Just as pharmaceutical companies must disclose that the happy people who say a new drug has miraculously improved their lives are paid actors, bots in California—or rather, the people who deploy them—will have to level with their audience.

    We are in new terrain, where the microtargeting of audiences on social networks, the perception of false news stories as genuine, and the bot-led amplification of some voices and drowning-out of others have combined to create angry, ill-informed online communities that are suspicious of one another and of the government.

    Regulating bots should be low-hanging fruit when it comes to improving the Internet. The California law doesn’t even ban them outright but, rather, insists that they identify themselves in a manner that is “clear, conspicuous, and reasonably designed.”

    The point where economic self-interest stops and libertarian ideology begins can be hard to identify. Mark Zuckerberg, of Facebook, speaking at the Aspen Ideas Festival last week, appealed to personal freedom to defend his platform’s decision to allow the microtargeting of false, incendiary information. “I do not think we want to go so far towards saying that a private company prevents you from saying something that it thinks is factually incorrect,” he said. “That to me just feels like it’s too far and goes away from the tradition of free expression.”

    In the 2016 Presidential campaign, bots were created to support both Donald Trump and Hillary Clinton, but pro-Trump bots outnumbered pro-Clinton ones five to one, by one estimate, and many were dispatched by Russian intermediaries. Twitter told a Senate committee that, in the run-up to the 2016 election, fifty thousand bots that it concluded had Russian ties retweeted Trump’s tweets nearly half a million times, which represented 4.25 per cent of all his retweets, roughly ten times the level of Russian bot retweets supporting Clinton.

    Bots also gave Trump victories in quick online polls asking who had won a Presidential debate; they disrupted discussions of Trump’s misdeeds or crude statements; and they relentlessly pushed dubious policy proposals through hashtags like #draintheswamp.

    They have also aided Trump during his Presidency. Suspected bots created by unidentified users drove an estimated forty to sixty per cent of the Twitter discussion of a “caravan” of Central American migrants headed to the U.S., which was pushed by the President and his supporters prior to the 2018 midterm elections. Trump himself has retweeted accounts that praise him and his Presidency, and which appear to be bots. And last week a suspected bot network was discovered to be smearing Senator Kamala Harris, of California, with a form of “birtherism” after her strong showing in the first round of Democratic-primary debates.

    Hertzberg, the state senator who authored the legislation, told me that he was glad that the changes to the bill before passage were related to the implementation of the law, rather than to its central purpose of requiring that bots reveal themselves to the public when used politically or commercially. A lawyer by training, Hertzberg said that he resented the accusation that he didn’t care about First Amendment concerns. “There is no effort in this bill to have a chilling effect on speech—zero,” he said. “The argument you go back to is, Do bots have free speech? People have free speech. Bots are not people.”

    #régulation #Robots #Californie

  • Mark Zuckerberg’s Plans to Capitalize on Facebook’s Failures | The New Yorker
    https://www.newyorker.com/tech/annals-of-technology/mark-zuckerbergs-plans-to-capitalize-on-facebooks-failures

    On Wednesday, a few hours before the C.E.O. of Facebook, Mark Zuckerberg, published a thirty-two-hundred-word post on his site titled “A privacy-focused vision for social networking,” a new study from the market research firm Edison Research revealed that Facebook had lost fifteen million users in the United States since 2017. “Fifteen million is a lot of people, no matter which way you cut it,” Larry Rosin, the president of Edison Research, said on American Public Media’s “Marketplace.” “This is the second straight year we’ve seen this number go down.” The trend is likely related to the public’s dawning recognition that Facebook has become both an unbridled surveillance tool and a platform for propaganda and misinformation. According to a recent Harris/Axios survey of the hundred most visible companies in the U.S., Facebook’s reputation has taken a precipitous dive in the last five years, with its most acute plunge in the past year, and it scores particularly low in the categories of citizenship, ethics, and trust.

    While Zuckerberg’s blog post can be read as a response to this loss of faith, it is also a strategic move to capitalize on the social-media platform’s failures. To be clear, what Zuckerberg calls “town square” Facebook, where people post updates about new jobs, and share prom pictures and erroneous information about vaccines, will continue to exist. (On Thursday, Facebook announced that it would ban anti-vaccine advertisements on the site.) His new vision is to create a separate product that merges Facebook Messenger, WhatsApp, and Instagram into an encrypted and interoperable communications platform that will be more like a “living room.” According to Zuckerberg, “We’ve worked hard to build privacy into all our products, including those for public sharing. But one great property of messaging services is that, even as your contacts list grows, your individual threads and groups remain private. As your friends evolve over time, messaging services evolve gracefully and remain intimate.”

    This new Facebook promises to store data securely in the cloud, and delete messages after a set amount of time to reduce “the risk of your messages resurfacing and embarrassing you later.” (Apparently, Zuckerberg already uses this feature, as TechCrunch reported, in April, 2018.) Its interoperability means, for example, that users will be able to buy something from Facebook Marketplace and communicate with the seller via WhatsApp; Zuckerberg says this will enable the buyer to avoid sharing a phone number with a stranger. Just last week, however, a user discovered that phone numbers provided for two-factor authentication on Facebook can be used to track people across the Facebook universe. Zuckerberg does not address how the new product will handle this feature, since “town square” Facebook will continue to exist.

    Once Facebook has merged all of its products, the company plans to build other products on top of it, including payment portals, banking services, and, not surprisingly, advertising. In an interview with Wired’s editor-in-chief, Nicholas Thompson, Zuckerberg explained that “What I’m trying to lay out is a privacy-focused vision for this kind of platform that starts with messaging and making that as secure as possible with end-to-end encryption, and then building all of the other kinds of private and intimate ways that you would want to interact—from calling, to groups, to stories, to payments, to different forms of commerce, to sharing location, to eventually having a more open-ended system to plug in different kinds of tools for providing the interaction with people in all the ways that you would want.”

    Innovation now comes from China; here is another example of it.

    If this sounds familiar, it is. Zuckerberg’s concept borrows liberally from WeChat, the multiverse Chinese social-networking platform, popularly known as China’s “app for everything.” WeChat’s billion monthly active users employ the app for texting, video conferencing, broadcasting, money transfers, paying fines, and making medical appointments. Privacy, however, is not one of its attributes. According to a 2015 article in Quartz, WeChat’s “heat map” feature alerts Chinese authorities to unusual crowds of people, which the government can then surveil.

    “I believe the future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won’t stick around forever,” Zuckerberg tells us. “This is the future I hope we will help bring about.” By announcing it now, and framing it in terms of privacy, he appears to be addressing the concerns of both users and regulators, while failing to acknowledge that a consolidated Facebook will provide advertisers with an even richer and more easily accessed database of users than the site currently offers. As Wired reported in January, when the merger of Facebook’s apps was floated in the press, “the move will unlock huge quantities of user information that was previously locked away in silos.”

    Encrypting messages is far from a panacea for privacy, or for individuals’ social responsibility.

    Zuckerberg also acknowledged that an encrypted Facebook may pose problems for law enforcement and intelligence services, but promised that the company would work with authorities to root out bad guys who “misuse it for truly terrible things like child exploitation, terrorism, and extortion.” It’s unclear how, with end-to-end encryption, it will be able to do this. Facebook’s private groups have already been used to incite genocide and other acts of violence, suppress voter turnout, and disseminate misinformation. Its pivot to privacy will not only give such activities more space to operate behind the relative shelter of a digital wall but will also relieve Facebook from the responsibility of policing them. Instead of more—and more exacting—content moderation, there will be less. Instead of removing bad actors from the service, the pivot to privacy will give them a safe harbor.

    #facebook #Cryptographie #Vie_privée #Médias_sociaux #Mark_Zuckerberg

  • How Voting-Machine Lobbyists Undermine the Democratic Process | The New Yorker
    https://www.newyorker.com/tech/annals-of-technology/how-voting-machine-lobbyists-undermine-the-democratic-process

    Earlier this month, Georgia’s Secure, Accessible & Fair Elections Commission voted to recommend that the state replace its touch-screen voting machines with newer, similarly vulnerable machines, which will be produced by E.S. & S. at an estimated cost of a hundred million dollars. In doing so, the panel rejected the advice of computer scientists and election-integrity advocates, who consider hand-marked ballots to be the “most reliable record of voter intent,” and also the National Academies of Sciences, Engineering, and Medicine, which recommended that all states adopt paper ballots and conduct post-election audits. The practice of democracy begins with casting votes; its integrity depends on the inclusivity of the franchise and the accurate recording of its will. Georgia turns out to be a prime example of how voting-system venders, in partnership with elected officials, can jeopardize the democratic process by influencing municipalities to buy proprietary, inscrutable voting devices that are far less secure than paper-ballot systems costing a third as much.

    The influence-peddling that has beset Georgia’s voting-system procurement began years earlier, in 2002, when the legislature eliminated a requirement that the state’s voting machines produce an independent audit trail of each vote cast. That same year, the secretary of state, Cathy Cox, signed a fifty-four-million-dollar contract with the election-machine vender Diebold. The lobbyist for Diebold, the former Georgia secretary of state Lewis Massey, then joined the lobbying firm of Bruce Bowers. The revolving door between the Georgia state government and the election venders was just beginning to spin.

    Something similar happened last fall in Delaware, where the Voting Equipment Selection Task Force also voted to replace its aging touch-screen machines with a variant of the ExpressVote system. When Jennifer Hill, at Common Cause Delaware, a government-accountability group, obtained all the bids from a public-records request, she found that “the Department of Elections had pretty much tailored the request for proposal in a way that eliminated venders whose primary business was to sell paper-ballot systems.” Hill also noted that a lobbyist for E.S. & S., who was “well-connected in the state,” helped “to shepherd this whole thing through.” Elaine Manlove, the Delaware elections director, told me that the twelve members of the election task force each independently concluded that ExpressVote was the best system for the state. “It’s not a big change for Delaware voters,” she said. “They’re voting on the screen, just like they do now.” (A representative from E.S. & S. told me that the company “follows all state and federal guidelines for procurement of government contracts.”)

    The ExpressVote machines use what are known as ballot-marking devices. Once a vote is cast on the touch screen, the machine prints out a card that summarizes the voter’s choices, in both plain English and in the form of a bar code. After the voter reviews the card, it is returned to the machine, which records the information symbolized by the bar code. It’s a paper trail, but one that a voter can’t actually verify, because the bar codes can’t be read by a human. “If you’re tallying based on bar codes, you could conceivably have software that [flips] the voter’s choices,” Buell said. “If you’re in a target state using these devices and the computer security isn’t very good, this becomes more likely.” This is less of a concern in states that require manual post-election audits. But neither Georgia nor Delaware does.
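    Buell’s worry can be made concrete: the scanner tallies only the bar-code payload, and nothing in the voter’s review step guarantees that the payload matches the printed text. The toy Python simulation below illustrates the gap; the card format and the “flip” are invented for the example and imply nothing about ExpressVote’s actual encoding.

        def print_card(choice, flip=False):
            """Simulated ballot-marking device: the human-readable line and the
            machine-readable payload are produced separately, so compromised
            software could make them disagree."""
            recorded = "candidate_B" if (flip and choice == "candidate_A") else choice
            return {"printed_text": f"You voted for {choice}",  # what the voter reviews
                    "barcode_payload": recorded}                # what the scanner tallies

        def tally(cards):
            counts = {}
            for card in cards:
                counts[card["barcode_payload"]] = counts.get(card["barcode_payload"], 0) + 1
            return counts

        cards = [print_card("candidate_A", flip=True) for _ in range(3)]
        print([c["printed_text"] for c in cards])  # every card looks correct to the voter
        print(tally(cards))                        # but the tallied votes differ
        # Only a manual post-election audit, comparing printed text with the tally,
        # would surface the discrepancy.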

    #Voting_machine #Elections #Démocratie

  • The Unlikely Politics of a Digital Contraceptive | The New Yorker
    https://www.newyorker.com/tech/annals-of-technology/the-unlikely-politics-of-a-digital-contraceptive

    In August, the F.D.A. announced that it had allowed a new form of contraception on the market: a mobile app called Natural Cycles. The app, which was designed by a Swedish particle physicist, asks its users to record their temperature with a Natural Cycles-branded thermometer each morning, and to log when they have their periods. Using a proprietary algorithm, the app informs its users which days they are infertile (green days—as in, go ahead, have fun) and which they are fertile (red days—proceed with caution), so that they can either abstain or use a backup method of birth control. In clearing the app as a medical device, the F.D.A. inaugurated “software application for contraception” as a new category of birth control under which similar products can now apply to be classified. The F.D.A.’s press release quotes Terri Cornelison, a doctor in its Center for Devices and Radiological Health, who said, “Consumers are increasingly using digital health technologies to inform their everyday health decisions and this new app can provide an effective method of contraception if it’s used carefully and correctly.”

    This really verges on utter nonsense.

    In January, a single hospital in Stockholm alerted authorities that thirty-seven women who had sought abortions in a four-month period had all become pregnant while using Natural Cycles as their primary form of contraception. The Swedish Medical Products Agency agreed to investigate. Three weeks ago, that agency concluded that the number of unwanted pregnancies was consistent with the “typical use” failure rate of the app, which they found to be 6.9 per cent. During the six-month investigation, six hundred and seventy-six additional Natural Cycles users in Sweden reported unintended pregnancies, a number that only includes the unwanted pregnancies disclosed directly to the company.

    Berglund’s story—a perfect combination of technology, ease, and self-discovery, peppered with the frisson of good fortune and reliance on what’s natural—has helped convince more than nine hundred thousand people worldwide to register an account with Natural Cycles. But the idea of determining fertile days by tracking ovulation, known as a fertility-awareness-based method of birth control, is anything but new. Fertility awareness is also sometimes called natural family planning, in reference to the Catholic precept that prohibits direct interventions in procreation. The most familiar form of fertility awareness is known as the rhythm method. First described in the nineteen-thirties, the rhythm or calendar method was based on research by two physicians, one Austrian and one Japanese. If a woman counted the number of days in her cycle, she could make a statistical estimate of when she was most likely to get pregnant. Those methods evolved over the years: in 1935, a German priest named Wilhelm Hillebrand observed that body temperature goes up after ovulation. He recommended that women take their temperature daily to determine their fertile period.
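    The temperature method that grew out of Hillebrand’s observation is commonly operationalized as a “three over six” rule: ovulation is assumed to have passed once three consecutive readings sit a couple of tenths of a degree above the highest of the six readings before them. The Python sketch below implements that generic fertility-awareness heuristic; it is not Natural Cycles’ proprietary algorithm, and the 0.2°C threshold and the sample readings are assumptions.

        def ovulation_detected(temps, window=6, shift=0.2):
            """Generic 'three over six' basal-body-temperature rule: returns the index
            at which three consecutive readings all exceed the maximum of the six
            readings before them by at least `shift` degrees Celsius, or None."""
            for i in range(window, len(temps) - 2):
                baseline = max(temps[i - window:i])
                if all(t >= baseline + shift for t in temps[i:i + 3]):
                    return i
            return None

        # Hypothetical cycle: flat pre-ovulatory readings, then a sustained rise.
        cycle = [36.4, 36.5, 36.4, 36.5, 36.4, 36.5, 36.4, 36.8, 36.9, 36.9, 36.8]
        print(ovulation_detected(cycle))  # -> 7, the first day of the sustained shift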

    Plenty of doctors remain unconvinced about Natural Cycles. “It’s as if we’re asking women to go back to the Middle Ages,” Aimee Eyvazzadeh, a fertility specialist in San Francisco, said. Technology, she warned, “is only as reliable as the human being behind it.” Forman, from Columbia, said that “one of the benefits of contraception was being able to dissociate intercourse from procreation.” By taking a pill or inserting a device into an arm or uterus, a woman could enjoy her sexuality without thinking constantly about what day of the month it was. With fertility awareness, Forman said, “it’s in the opposite direction. It’s tying it back together again. You’re having to change your life potentially based on your menstrual cycle. Whereas one of the nice benefits of contraception is that it liberated women from that.”

    #Médecine #Hubris_technologique #Contraception #Comportements