• Steven Levy : The problem with Big Tech’s wartime push against Putin
    https://link.wired.com/view/5cec29ba24c17c4c6465ed0bg0n01.2a26/250890de

    The Plain View

    In 1942, smokers of one of the leading cigarette brands noticed a change in the packaging. The green background on Lucky Strike boxes was now white. The American Tobacco Company’s official explanation was that copper, used to produce the green pigment, was at a premium during wartime. To support the Allied troops, the cigarette maker “sacrificed” by abandoning the green dye. In what might be called midcentury virtue signaling, the firm rolled out a massive ad campaign with the slogan, “Lucky Strike Green Has Gone to War.”

    The current Russian invasion of Ukraine has offered a similar opportunity for a contemporary industry that has occasionally been compared to the tobacco cartel—Big Tech. Multiple times a day, we read about how technology companies ranging from trillion-dollar giants to startups are prioritizing wartime responsibilities and denying services to Russia or aiding Ukraine. Some of these moves have directly impacted the battlefield, if you extend the war theater to digital infrastructure as well as the global fight for hearts and minds. The decisions of Meta, Twitter, Google, and Microsoft to block or constrain the Russian news agencies Sputnik and RT, for instance, represent an attempt to preemptively mitigate disinformation. Other measures fall into the category of an overall boycott of a country brutally attacking another without provocation, or of offering aid to the disenfranchised. Apple, for instance, closed its stores in Moscow. Airbnb is offering free lodging to refugees. SpaceX is sending Starlink internet terminals to Ukraine. And just as the American Tobacco Company did in 1942, those launching such initiatives are making sure we know about it.

    It’s heartening to see how almost all of the Western world, except maybe Tucker Carlson and Donald Trump, is united in condemning Putin’s invasion, and that corporations by and large are making moves to back that up. Yet some of those decisions aren’t so clear-cut about whom they benefit and what precedents they might establish. In some cases, their responses are state-requested, originating from the US, EU, or Ukraine itself. Those are hard to turn down. But companies like Meta, Twitter, and Google have spent years devising policies to guide their actions, and those rules were intended to be applied regardless of where political winds are blowing. I am reminded of the exuberance inside the company then called Facebook when its products helped power the Arab Spring. In the excitement of aligning with a liberation movement, Facebook’s leaders failed to see how the same protocols could later empower deadly misinformation in Myanmar and at the US Capitol.

    Our big tech companies are so powerful that even actions that seem morally clear-cut can bite back later on. Take the question of how Facebook operates inside of Russia. Meta is defying Putin’s objections to fact-checking, and has blocked state-backed ads. In response, Russia itself is slowing down access to the platform. If Meta decided to pull Facebook from Russia altogether, would it be a punishment or a reward to Putin? Removal might signify a Zuckerbergian solidarity with the emerging corporate boycott, and also provide a means to fully shut down disinformation circulating on Cyrillic News Feeds. (Whoops, change that to “Feeds.”) But it would also preclude the possibility that Russians unhappy with Putin’s actions might organize protests, share stories of young soldiers at risk, or at least complain about the effects of sanctions.

    Responding to a question I asked in a press call this week, Meta’s policy czar Nick Clegg shared how the company views such contradictions. “We are a private sector company, which runs apps or services, which happen to be relied upon by millions of people in Russia and Ukraine, at a moment of great distress and military conflict,” he said. “And also, we’re having demands made of us by governments in numerous different jurisdictions. That is quite a difficult balancing trick for us to strike.”

    Maybe the best example of this conundrum in the tech world is the demands on the crypto community to deny services to Russians. If it doesn’t, the argument goes, cryptocurrencies will be the loophole by which Putin’s oligarchs will shelter their ill-gotten fortunes. But one of the pillars of crypto technology is that no state action can constrain the much-touted decentralized digital commerce. On one hand, it seems like a great idea to freeze the wallets of Russian kleptocrats, just as banks are doing with their offshore accounts. But doing the “right thing” in this case would be like pulling a weight-bearing Jenga piece out of a delicately balanced crypto architecture. If crypto is not amoral, is it really crypto? (So far, some crypto exchanges are holding out.)

    While there may be no right answers for a lot of these questions, one truth shines through: These platforms are scarily intertwined in the body politic and the global economic machinery. And actions taken now, even with the best of intentions, might wind up repeated to our disadvantage. Cooperation with state-issued requests that skirt established corporate policies could set a troubling precedent.

    In general, consumers should be wary of corporate actions taken in the guise of righteousness. Take Lucky Strike. After the war, it came out that the tobacco company’s vaunted switch from copper pigment had actually been planned long before Pearl Harbor. Surveys had shown that its female customers didn’t like green. The war provided cover for something the company was intending to do anyway. So, as I page through the press releases of tech companies mobilized against Putin, my first question is, “Got a light?”

    #Guerre #Ethique_et_numérique #Ethique_washing #Russie

  • Letter to Lina Khan by Steven Levy
    https://link.wired.com/view/5cec29ba24c17c4c6465ed0bfqmpt.wsu/3833f86a

    The Plain View

    Two key figures in Biden’s murderers’ row of tech regulators—FTC chair Lina Khan and the DOJ’s assistant attorney general for antitrust, Jonathan Kanter—emerged from their hideouts this week to announce that they are preparing new guidelines on how mergers should be evaluated, kicking off the process with a 60-day call for comments.

    In an apparent accident of timing (of course, skeptics would say there are no coincidences), Microsoft announced that same day that it was making the biggest merger in its history, capturing one of the bosses of the game world, Activision, for $69 billion. Clearly, Big Tech has already decided what guidelines bind them on acquisitions: whatever they can get away with.

    Obviously, the two sides have differences of opinion. To clear up matters, I thought I’d take up Khan and Kanter’s offer and make my own public comment, sent right to the inbox of you lucky Plaintext subscribers!

    Dear Antitrust Czars,

    I’m not a lawyer or an investor, but as a longtime observer of bad behavior and predatory mergers in the tech field, I have Thoughts. I’m not sure how much impact my view will have, though, because it seems to me you’ve already made up your minds on how you want to change merger guidelines, as well as what’s considered anticompetitive behavior. But that’s OK! It doesn’t mean, Chair Khan, that you should recuse yourself from your antitrust lawsuits against Amazon and Meta, just because you have Jeff Bezos and Mark Zuckerberg on your dartboard. They are there for policy reasons, not because you can’t stand Bezos’ laugh or Zuckerberg’s sunscreen. The president appointed you because he wants to get tougher with the likes of those corporate barons, and the judge in the Meta case has already rejected the argument that you’re conflicted.

    So I’m betting that all the comments the two of you get, including mine, won’t divert you from the course you basically set out this week. When you talk about modernizing the guidelines, the headline of your press release makes clear your agenda: to “strengthen enforcement against illegal mergers.” You already have your road map—expanding the definition of anticompetitive to include cases where products are free to consumers, considering the future impact on mergers in nascent markets, and assessing the eventual effects of a dominating company’s entry into a new business. In practice, you don’t necessarily need new guidelines—you’ve already been more aggressively challenging mergers in industries from publishing to computer chips. And those guidelines can be ephemeral. After all, Chair Khan, you’d hardly taken your seat at the agency when you tossed out a merger guideline established just last year by your predecessor. Maybe a future administration will trash your new guidelines just as blithely. But I get it—revising the guidelines to give you more power provides ammunition when companies challenge you in court, which they undoubtedly will.

    You’re right in saying you need new weapons, especially since the forces stacked against you are so formidable. That’s your biggest problem: the unholy bigness of Big Tech. I know that an oft-used canard in antitrust law is that humongous size doesn’t necessarily equal anticompetitiveness. But Big Tech’s bulk has thrown everything out of whack. The combined market cap of Apple, Microsoft, Amazon, Google, and Meta is around $7 trillion. That would fund the Defense Department for a decade.

    That size means that every one of those giants’ substantial mergers is arguably anticompetitive on its face, because their acquisitions immediately become more powerful by virtue of being tied to those dominating platforms. When, for instance, a tech giant like Amazon or Apple decides to become a movie studio, it isn’t like a bunch of film students setting up a back lot somewhere. The new content, financed by the mother ship’s Brobdingnagian profits, has an immediate pipeline to existing consumers already locked into those ecosystems—ecosystems that might favor in-house productions over traditional fare.

    Now let’s talk about how that bigness plays into the Activision bid. In terms of dollars, it’s the most expensive acquisition in Microsoft’s history. Even so, Microsoft doesn’t have to stretch to make the purchase. For perspective, let’s look back to the unsuccessful $45 billion bid for Yahoo that Microsoft made in 2008. If it had gone through, it would have been the biggest acquisition in the company’s history until this week. Capturing Yahoo would have required Microsoft to squander a fifth of its value. (Buying the troubled Yahoo would also have been a huge mistake, but that’s another story.) But the Activision price tag eats up about 3 percent of Microsoft’s current $2.25 trillion market cap. That’s pocket change for Satya Nadella.

    That sum buys Microsoft an anticompetitive bounty. It is one of two producers of high-end game consoles, and it could potentially limit Activision titles to Xbox. No wonder Sony took a $20 billion hit after the announcement. Activision also has tens of millions of users who now will find it easier to use Microsoft’s other offerings. Most importantly, camo gear might prove to be the fashion choice in the next generation of computing, as armies of Call of Duty warriors could use the popular Activision game as a gateway to Microsoft’s metaverse.

    The only way you are going to temper Big Tech—forget about taming it—is to challenge those companies early and often. Guts, not guidelines, might prove decisive. I suspect you know this. You are right to push hard for Congress to increase your resources, in both financial power and new hires, because you need more regulators, more investigators, more lawyers, more analysts, and more pizzas delivered for late-night brainstorming. These titans will not slow down unless they know there’s a price to be paid. If a tech giant knows that an investigation, and then a lawsuit, could stand in the way of an acquisition, that bid might never be made in the first place.

    Chair Khan, you acknowledged in a television interview this week that because of your limited tenure, you have a “fierce sense of urgency.” But with the Activision merger announcement, Microsoft laughed in your face. Don’t let them have the last laugh.

    #Lina_Khan #Antitrust #Microsoft #Activision #Jeu_vidéo #Monopoles

  • WIRED : Letter from the editor, January 2022
    https://link.wired.com/view/5be9e2833f92a40469f786bcfmjro.2bkc/db68cd59

    This form of self-criticism of the WIRED of the 1990s and 2000s, and the magazine’s new positioning, is interesting. It is phrased very euphemistically, of course, but it still marks quite a reversal from the neo-con ideologies of Rossetto, Brand, and Kelly in WIRED’s early days (on that period, see Fred Turner’s Aux sources de l’utopie numérique, published in English as From Counterculture to Cyberculture).

    In the next few decades, virtually every financial, social, and governmental institution in the world is going to be radically upended by one small but enormously powerful invention: the blockchain.

    Do you believe that? Or are you one of those people who think the blockchain and crypto boom is just a massive, decade-long fraud—the bastard child of the Dutch tulip bubble, Bernie Madoff’s Ponzi scheme, and the wackier reaches of the libertarian internet? More likely, you—like me—are at neither of these extremes. Rather, you’re longing for someone to just show you how to think about the issue intelligently and with nuance instead of always falling into the binary trap.

    Binaries have been on my mind a lot since I took over the editor’s chair at WIRED last March. That’s because we’re at what feels like an inflection point in the recent history of technology, when various binaries that have long been taken for granted are being called into question.

    When WIRED was founded in 1993, it was the bible of techno-utopianism. We chronicled and championed inventions that we thought would remake the world; all they needed was to be unleashed. Our covers featured the brilliant, renegade, visionary—and mostly wealthy, white, and male—geeks who were shaping the future, reshaping human nature, and making everyone’s life more efficient and fun. They were more daring, more creative, richer and cooler than you; in fact, they already lived in the future. By reading WIRED, we hinted, you could join them there!

    If that optimism was binary 0, since then the mood has switched to binary 1. Today, a great deal of media coverage focuses on the damage wrought by a tech industry run amok. It’s given us Tahrir Square, but also Xinjiang; the blogosphere, but also the manosphere; the boundless opportunities of the Long Tail, but also the unremitting precariousness of the gig economy; mRNA vaccines, but also Crispr babies. WIRED hasn’t shied away from covering these problems. But they’ve forced us—and me in particular, as an incoming editor—to ponder the question: What does it mean to be WIRED, a publication born to celebrate technology, in an age when tech is often demonized?

    To me, the answer begins with rejecting the binary. Both the optimist and pessimist views of tech miss the point. The lesson of the last 30-odd years is not that we were wrong to think tech could make the world a better place. Rather, it’s that we were wrong to think tech itself was the solution—and that we’d now be equally wrong to treat tech as the problem. It’s not only possible, but normal, for a technology to do both good and harm at the same time. A hype cycle that makes quick billionaires and leaves a trail of failed companies in its wake may also lay the groundwork for a lasting structural shift (exhibit A: the first dotcom bust). An online platform that creates community and has helped citizens oust dictators (Facebook) can also trap people in conformism and groupthink and become a tool for oppression. As F. Scott Fitzgerald famously said, an intelligent person should be able to hold opposed ideas in their mind simultaneously and still function.

    Yet debates about tech, like those about politics or social issues, still seem to always collapse into either/or. Blockchain is either the most radical invention of the century or a worthless shell game. The metaverse is either the next incarnation of the internet or just an ingeniously vague label for a bunch of overhyped things that will mostly fail. Personalized medicine will revolutionize health care or just widen its inequalities. Facebook has either destroyed democracy or revolutionized society. Every issue is divisive and tribal. And it’s generally framed as a judgment on the tech itself—“this tech is bad” vs. “this tech is good”—instead of looking at the underlying economic, social, and personal forces that actually determine what that tech will do.

    There’s been even more of this kind of binary, tech-centered thinking as we claw our way out of the pandemic. Some optimists claim we’re on the cusp of a “Roaring 2020s” in which mRNA and Crispr will revolutionize disease treatment, AI and quantum computers will exponentially speed up materials science and drug discovery, and advances in battery chemistry will make electric vehicles and large-scale energy storage (and maybe even flying taxis) go mainstream. If you want to see a gloomy future, on the other hand, there’s no shortage of causes: Digital surveillance is out of control, the carbon footprint of cryptocurrency mining and large AI models is expanding, the US–China tech arms race is accelerating, the gig-work precariat is swelling, and the internet itself is balkanizing.

    This tug-of-war between optimism and pessimism is the reason why I said this feels like an inflection point in the history of tech. But even that term, “inflection point,” falls into the binary trap, because it presumes that things will get either worse or better from here. It is, yet again, a false dichotomy. This kind of thinking helps nobody make sense of the future that’s coming. To do that—and to then push that future in the right direction—we need to reject this 0-or-1 logic.

    Which brings me to the question of what WIRED is for.

    Fundamentally, WIRED has always been about a question: What would it take to build a better future? We exist to inspire people who want to build that future. We do it not by going into Pollyannaish raptures about how great the future is going to be, nor dire jeremiads about how bad things could get, but by taking an evenhanded, clear-eyed look at what it would take to tackle the severe challenges the world faces. Our subject matter isn’t technology, per se: It’s those challenges—like climate change, health care, global security, the future of democracy, the future of the economy, and the dizzying speed of cultural change as our offline and online worlds mingle and remix. Technology plays a starring role in all of these issues, but what’s clearer today than ever is that it’s people who create change, both good and bad. You cannot explain the impacts of technology on the world without deeply understanding the motives, incentives, and limitations of the people who build and use it. And you cannot hope to change the world for the better unless you can learn from the achievements and the mistakes other people have made.

    So I think WIRED’s job is to tell stories about the world’s biggest problems, the role tech plays in them—whether for good or bad—and the people who are trying to solve them. These aren’t all feel-good stories by any means: there are villains as well as heroes, failures as well as successes. Our stance is neither optimism nor pessimism, but rather the belief that it’s worth persisting even when things seem hopeless. (I call it “Greta Thunberg optimism.”) But whatever the story, you should find something to learn from it—and, ideally, the inspiration to make a positive difference yourself.

    Of course, that’s not all we exist to do. WIRED has also always been a home for ambitious, farsighted ideas—sometimes prescient, sometimes wild, sometimes both at the same time. (Fitzgerald again!) We shouldn’t get carried away by hype; too many of our covers in the past promised that this or that invention would “change everything.” But we shouldn’t shy away from pushing the envelope either, stretching people’s minds and showing them possible futures that they might not otherwise dare to imagine. We’ll be critical but not cynical; skeptical but not defeatist. We won’t tell you what to think about the future, but how to think about it.

    Finally, we exist to do the basic hard work of journalism—following the important news, explaining how to think about it, and holding power, particularly tech power, accountable.

    Over the next few months, you should see our coverage starting to coalesce more clearly around those core global challenges—climate, health, and so on. Because these issues are indeed global, you should also start to see a more international range of stories: One of the less obvious but very big changes is that we are merging the US and UK editions of WIRED, previously two entirely separate publications, into a single site at WIRED.com. (If you’re a regular visitor to the site, you may have noticed that we recently launched a new homepage, designed to make it easier for us to showcase the work we’re most proud of and for you to find stories that interest you.) We’ll still publish two separate print editions, though they’ll share many stories. Our US and UK newsrooms are already working as one, and you’ll see all their journalism here on this site. With more writers making up a single team, we’ll be able to go deeper into some of these key areas.

    Above all, we’ll continue to do what WIRED is best at—bringing you delightful, fascinating, weird, brilliantly told stories from all around the world of people taking on extraordinary problems. Our founder Louis Rossetto wrote that WIRED was where you would discover “the soul of our new society in wild metamorphosis.” The wild metamorphosis continues, and while its mechanisms may be technological, the soul behind them is deeply and unavoidably human. Where the human and the technological meet: That’s where WIRED lives, and it’s where we aim to take you, every day.

    Gideon Lichfield | Global Director, WIRED

    Note: I owe a big debt of gratitude to Tom Coates, who was pivotal in helping me think about the history of WIRED and see the opportunity for the role it can play today.

  • https://link.wired.com/view/5cec29ba24c17c4c6465ed0bffdso.28nk/1c4969e8
    By Steven Levy

    Sam asks, “Is space trash a legitimate problem that needs to be addressed in this decade?”

    Sam, I hope you are not referring to space tourism companies’ policies for selecting their self-styled “astronauts.” Admittedly, those who casually shell out ludicrous sums to sample space may not be the cream of humanity, and Blue Origin is dangerously veering toward stunt casting. Ex-football players, Alan Shepard’s daughter, the oldest astronaut, the youngest astronaut … how long before we’re blasting off centenarians and infants? Plus, after William Shatner, what’s left for anyone to say? But I would never, never, never call these people trash.

    And I suspect that’s not what you mean. You are talking about debris. Right? Yes, this is a problem! While space is infinitely vast, the band around Earth where one can reasonably orbit is tiny in comparison. And we’ve used it as a dump. NASA now tracks about 27,000 shards of litter circling Earth and admits there are countless other pieces of debris too small to monitor yet dangerous enough to wreak havoc if they hit something. When a piece of space trash hits a satellite at 15,700 miles an hour, it not only takes the orbiter out of commission but causes more space trash—in 2009 a defunct satellite collided with an Iridium unit and created 2,300 more pieces of trackable garbage, and a lot of other tiny projectiles capable of ruining a Space Station astronaut’s day. Meanwhile, we’re sending up more satellites with abandon. Elon Musk is launching Starlink internet satellites 60 at a time, with plans for thousands of them. Sooner or later someone is going to get walloped. I hope it’s not a space tourist—we need those executive/philanthropists, actors, and progeny of Project Mercury! And when is Elon going to space? Scared of a little debris, Mr. Musk?
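
    A rough back-of-the-envelope calculation shows why even a small fragment is so destructive at the collision speed quoted above. The sketch below is my own illustration, not from the article, and the 1 kg fragment mass is a hypothetical assumption; it converts 15,700 mph to metric and compares the kinetic energy to TNT.

        # Back-of-envelope sketch (Python): kinetic energy of a small debris
        # fragment at the ~15,700 mph collision speed cited in the article.
        # The 1 kg mass is an illustrative assumption, not a reported figure.
        MPH_TO_MS = 0.44704        # miles per hour -> meters per second
        TNT_J_PER_KG = 4.184e6     # energy released by 1 kg of TNT, in joules

        speed_ms = 15_700 * MPH_TO_MS             # ~7,019 m/s
        mass_kg = 1.0                             # hypothetical fragment mass
        energy_j = 0.5 * mass_kg * speed_ms ** 2  # E = 1/2 * m * v^2

        print(f"{speed_ms:.0f} m/s -> {energy_j / 1e6:.1f} MJ, "
              f"about {energy_j / TNT_J_PER_KG:.1f} kg of TNT")
        # Prints roughly: 7019 m/s -> 24.6 MJ, about 5.9 kg of TNT

    In other words, a fragment with the mass of a bag of sugar arrives with the energy of several kilograms of TNT, which is why even the untrackable shards matter.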

    #Espace #Commons #Espace_communs #Débris_spatiaux

  • Facebook and Filipino journalist Maria Ressa : toxic architecture.
    https://link.wired.com/view/5cec29ba24c17c4c6465ed0bcpe69.322g/9be04645

    Earlier this week, I spoke to Maria Ressa. She is the CEO of Rappler, a publication in the Philippines that steadfastly reports the truth. This enterprise is made more difficult because the Filipino president, Rodrigo Duterte, actively opposes a free press, and Ressa in particular. (“Fake news!” he cries.) Aided by social media supporters, mainly on Facebook, the Duterte regime has harassed her, spread lies about her, and charged her with criminal behavior, including a spectacularly dubious charge of cyber-libel. (Rappler’s seemingly accurate reporting was published before the cyber-libel law even existed, and the arrest came after the paper corrected a typo.) On June 15, a Filipino court convicted her. There are also impending tax evasion charges, and possibly anti-terrorism charges to come. The total possible sentence could exceed 100 years. All for telling the truth.

    She now tells me that despite Facebook’s recent efforts, the platform is still what she calls a “behavioral modification system.” She explains: “It’s the way they have not paid attention to the influence operations. I think they take all of our data, and then they take our most vulnerable moments for a message, whether that is from an advertiser or a country, and they serve that to us, right? And then look at how we react and the algorithms adjust to that.”

    After her conviction, “the propaganda machine of the government went into high gear,” she says. “They went even further in terms of dehumanizing me, and that makes it more dangerous for me.” One meme superimposed her face on a scrotum. “It’s sexualized, it’s gendered,” she says. While Facebook did respond to her pleas to remove those, the question was why they ever appeared in the first place. “Sometimes it gets taken down, but it still gets up,” she says. Many of the posts, she says, simply misreport the facts about her. And consistent repetition of a falsehood can obliterate truth. “You repeat a million times that I’m a liar or a criminal, which one is real?” she says.

    Ressa’s plight has drawn attention. She, along with Jamal Khashoggi and a few other courageous journalists, was named Time Magazine’s Person of the Year in 2018. Her speaking appearances have brought crowds to their feet. She is an international symbol for free speech and resistance to authoritarianism. Yet Facebook, which sometimes likes to celebrate heroes who stand up to oppressive bullies (their faces are often on posters hanging at headquarters), had no official statement about Maria Ressa’s shameful prosecution. Speaking on his own, Facebook’s head of security policy, Nathaniel Gleicher, posted a tweet on the day of her conviction: “This is a dark day for press freedom. Maria Ressa is a fearless reporter and an inspiration.” But his remark stood alone: not a peep from Zuckerberg, Sandberg, or other top executives, many of whom have met with her previously and looked her in the eye.

    Facebook gave me a statement saying, “We believe strongly in press freedom and the rights of journalists to work without fear for their personal safety or other repercussions. We continue to support journalists and news organizations by removing any content that violates our policies, disrupting coordinated networks, and limiting the spread of misinformation.” (It also notes that Rappler is one of its fact-checking partners.) So why not speak up for a journalist who works in fear and has suffered repercussions? Facebook’s explanation is that it doesn’t normally single out free-speech heroes and that it did meet privately with Ressa after her conviction. The company has said repeatedly that it has taken measures to address the toxic organized misinformation campaigns boosted by its platform. But Ressa—and many critics—believe that those efforts fall short because they don’t address fundamental aspects of its platform that reward provocative and even toxic content. “What does fixing it mean?” she says. “In the end, their business model is flawed. How will they still make money without killing democracy?”

    #Facebook #Cyberharcèlement #Philippines

  • Steven Levy : Mark Zuckerberg has long said he doesn’t want to be the arbiter of truth.
    https://link.wired.com/view/5cec29ba24c17c4c6465ed0bc85rb.28hc/698b9ee1

    By Steven Levy

    But despite all his protestations, Zuckerberg is not only the arbiter in chief of the world’s dominant social media platform, he’s an active one. That was never more clear than in the nearly two-hour remote session he had with thousands of concerned employees on Tuesday, when he defended his decision not to take down, mitigate, or fact-check several posts by Donald Trump that seemed, in the eyes of employees, to violate Facebook’s policies. In a transcript of the session—the leak of an internal meeting was once an unthinkable act of disloyalty at Facebook, but now it’s an inevitability—Zuckerberg talks in detail about how he consulted with key aides and painstakingly analyzed his community standards, all to make the final call himself. In this case, he decided that Trump’s use of the phrase “When the looting starts, the shooting starts” was not a call to violence or a racist “dog whistle,” despite arguments to the contrary.

    The drama was heightened by two factors. First, the internal opposition to Zuckerberg’s choices was unprecedented, as employees publicly tweeted their displeasure and staged a “virtual walkout” on Monday. Some even quit the company. Also, a group of the company’s earliest employees published a letter lamenting Facebook’s departure from its original ideals. As I wrote earlier this week, what bothered them was not just the two tweets Trump had cross-posted to Facebook. The frustration came from the fact that, for years now, the “free expression” Zuckerberg celebrates has meant hosting misinformation, hate, and divisiveness.

    The second factor is an external threat: a movement to tamper with or repeal legislation that gives Zuckerberg the power to make those decisions without taking legal responsibility for everything that his almost 3 billion users post. That law is known as Section 230(c) of the 1996 Telecommunications Act. It frees platforms like Facebook and Twitter from liability for what people share, distinguishing them from publishers like The New York Times or WIRED. But it also gives platforms the editorial discretion to police the content to make their platforms safe and civil. In reaction to the power of big tech companies, some politicians are arguing that platforms should be treated more like publications than, say, phone lines. One is Donald Trump, who last week issued an executive order dictating that the government should strip platforms of that sanctuary status if they’re deemed politically biased. Another declared foe of Section 230 is Joe Biden, though he hasn’t called for a government truth squad like Trump has.

    Zuckerberg’s decision on the president’s posts wasn’t affected by Trump’s threatened executive order, but it certainly favored Trump and the conservative cause. More significantly, it was well in keeping with Facebook’s tendency to allow and even promote content that divides and inflames. Zuckerberg tried to contextualize this for his employees, saying that while his free-expression tilt might allow toxic content to thrive, it also gives voice to the powerless, allowing them to post things like video evidence of police brutality. “I would urge people not to look at the moral impact of what we do just through the lens of harm and mitigation,” he told employees.

    At Twitter, though, CEO Jack Dorsey did look at Donald Trump’s tweets through that lens. After too long a period of keeping his hands off of Trump’s discordant content, he ordered that Twitter tag two disputed tweets. And Snap’s CEO Evan Spiegel went even further, removing Trump’s posts from the Discover section of the platform, on the grounds that the president’s words are divisive and racist. In a letter to employees, Spiegel explained:

    As for Snapchat, we simply cannot promote accounts in America that are linked to people who incite racial violence, whether they do so on or off our platform. Our Discover content platform is a curated platform, where we decide what we promote … This does not mean that we will remove content that people disagree with, or accounts that are insensitive to some people … But there is simply no room for debate in our country about the value of human life and the importance of a constant struggle for freedom, equality, and justice. We are standing with all those who stand for peace, love, and justice and we will use our platform to promote good rather than evil.

    Trump supporters—and certainly Trump himself—might complain about what Twitter and Snap did. But the companies are exercising their rights under 230 exactly in the way that the law permits.

    Zuckerberg should take note. Yes, it’s crazy for one person to have such massive control over what people say online. But like it or not, our system gives leaders of huge corporations massive power. In his total control of Facebook, he must be the arbiter—of harm. We must demand that he perform that role in the best possible way, minimizing the toxic speech posted by his customers, whether they are peons or presidents. His employees are speaking out. His billions of users should let him know as well. And the government should back off.

    #Facebook #Liberte_expression #Division

  • Trump, Twitter, and the failed politics of appeasement
    https://link.wired.com/view/5cec29ba24c17c4c6465ed0bc6h9l.wnj/55e32496

    par Steven Levy

    Lately, my pandemic reading has included Munich, a historical novel by Robert Harris involving the tragic 1938 attempt by UK prime minister Neville Chamberlain to appease Adolf Hitler, hoping to stave off a world war that the Führer was hell-bent on triggering. Chamberlain’s efforts (which Harris portrays sympathetically) were doomed.

    That reading now has an odd resonance with current events. For years, Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey have donned kid gloves to handle complaints of conservative bias from Donald Trump, other Republicans, and far-right wingnuts. Despite this appeasement, the executives are now facing a Trump executive order that will potentially impose government controls on what users can and cannot say on their platforms.

    Specifically, Trump is attempting to unilaterally reinterpret the meaning of Section 230, the part of the 1996 Telecommunications Act that gives the platforms the ability to police the user-created content on their sites for safety and security without bearing the legal responsibility for anything those billions of people might say. His order explicitly echoes his claim—a bogus one—that the platforms are using the 1996 provision to censor conservatives. The order purports to give the government the power to strip companies of their protection under Section 230. Trump also wants to use something called the “Tech Bias Reporting Tool” to examine platforms for political bias and report offenders to the DOJ and FTC for possible action. It’s a bold move that would create government monitors to make sure Facebook, Twitter, and the rest give conservative speech more than its due. (One hopes that if this does come to pass, the courts will overturn the effort because, well, the Constitution.)

    The longstanding claim that the platforms censor conservative speech is ridiculous. Facebook and Twitter remove content that violates community standards by spreading harmful misinformation or hate speech. A lot of that comes from elements of the right wing. Yeah, those standards aren’t perfect, and those platforms make mistakes in executing them, but there’s never been any evidence of an algorithmic bias. But instead of vigorously defending themselves, the leaders of the platforms keep assuring politicians that they take those gripes very seriously.

    Trump himself gets a pass when it comes to moderation because what a president says is newsworthy. That’s a defensible stance, but as he increasingly violates standards and norms, his posts have become a firehose of toxicity. In 2017, Dorsey told me, “I think it’s really important that we maintain open channels to our leaders, whether we like what they’re saying or not, because I don’t know of another way to hold them accountable.” He also implied that newsworthiness might have to be balanced with community standards. That was many tweets ago, and it wasn’t until this week that Twitter appended a fact-check to a Trump tweet that told falsehoods about voting by mail. (Still, Twitter left standing a Trump tweet spreading a bogus charge that former congressperson Joe Scarborough once killed an aide.)

    Zuckerberg has given Trump and other conservatives an even wider berth, beginning with his 2015 decision to leave up Trump’s anti-Muslim post that seemingly violated the company’s hate speech policy. During the 2016 election, Facebook did not remove false news stories from make-believe publications, even though it was clear that such information overwhelmingly benefited Trump. Despite this, the right kept complaining of bias, with Republicans blasting Zuckerberg in his April 2018 appearance in Congress. Zuckerberg knew full well that there was no statistical basis for the charge. But when I asked him about that soon after, his response was shockingly timid. “That depth of concern that there might be some political bias really struck me,” he said. “I was like, ‘Wow, we need to make sure we bring in independent, outside folks to help us do an audit and give us advice on making sure our systems are not biased in ways that we don’t understand.’”

    Later, Facebook commissioned a study led by conservative senator Jon Kyl, which offered no data to back up any systematic bias. Instead of insisting that this put the complaints to rest, Facebook made some general adjustments in its policies that gave the anecdotal gripes in the report more credibility than they warranted. Appeasement!

    Look, I get it—who wants to take on the president and the ruling party, especially when regulation is in the air? But instead of avoiding conflict, Facebook and Twitter leaders should have been emphasizing that they have just as much right to set their own standards as television stations, newspapers, and other corporations. Despite the fact that they are popular enough to be considered a “public square,” they are still private businesses, and the government has no business determining what legal speech can and cannot occur there. That is the essence of the First Amendment. But even as Mark Zuckerberg goes on about how he values free expression—as he was doing on television the same day Trump issued his order—he still refrains from demanding that the government respect Facebook’s own right to free speech.

    To be sure, Trump is wading—no, make that belly-flopping—into a controversy over internet speech that is already fraught with intractable problems. The very act of giving bullhorns to billions is both a boon and a menace. Even with the purest intentions—and obviously those growth-oriented platforms are not pure—figuring out how to deal with it involves multiple shades of gray. But the current threat comes in clear black and white: the president of the United States is attempting a takeover of internet speech and asserting a federal privilege to topple truth itself.

    Appeasement failed at Munich. It’s time for the internet moguls to stop acting like Chamberlain—and start channeling Churchill.

    #Trump #Twitter #Médias_sociaux #Régulation

    • But instead of avoiding conflict, Facebook and Twitter leaders should have been emphasizing that they have just as much right to set their own standards as television stations, newspapers, and other corporations. Despite the fact that they are popular enough to be considered a “public square,” they are still private businesses, and the government has no business determining what legal speech can and cannot occur there.

      Precisely not: it is one or the other. Television stations and newspapers are responsible for what they publish. The platforms are means of communication, and are therefore shielded from liability for content published by third parties.

      Hence Chemla’s position, worth recalling: either the platforms are neutral carriers and can therefore claim editorial non-liability, or they intervene in what gets published, which makes them publishers, and they become responsible for the content.

    • Yes, and that is what makes them a “public square.” That is the whole complexity of the matter, because at the same time they are precisely not “public”: they are guided by their own interests (their algorithms are written to serve them).
      I am noting viewpoints here that are not necessarily my own ;-) I am saving this material for the day I find the courage to write.

  • Steven Levy : Streaming celebrates its 25th birthday. Here’s how it all began
    https://link.wired.com/view/5cec29ba24c17c4c6465ed0bc0nqt.1f2j/12bb6811

    So it’s a good time to say happy birthday to streaming media, which just celebrated its 25th anniversary. Two and a half decades ago, a company called Progressive Networks (later renamed RealNetworks) began using the internet to broadcast live and on-demand audio.

    I spoke with its CEO, Rob Glaser, this week about the origins of streaming internet media. Glaser, with whom I have become friendly over the years, told me that he began pursuing the idea after attending a board meeting for a new organization called the Electronic Frontier Foundation in 1993. During the gathering, he saw an early version of Mosaic, the first web browser truly capable of handling images. “A light bulb went off,” Glaser says. “What if it could do the same for audio and video? Anybody could be a broadcaster, and anybody could hear it from anywhere in the world, anytime they wanted to.”

    Glaser believed it was time for a commercial service. When he launched his service on April 25, 1995, the first customers were ABC News and NPR; you could listen to news headlines or Morning Edition. It wasn’t the user-friendliest—you had to download his RealAudio app to your desktop and then hope it made a successful connection to the browser. At that point, it worked only on demand. But in September 1995, Progressive Networks began live streaming. Its first real-time broadcast was the audio of a major league baseball game—the Seattle Mariners versus the New York Yankees. (The Mariners won. The losing pitcher was Mariano Rivera, then a starter.) The few who listened from the beginning had to reboot around the seventh inning, as the buffers filled up after two and a half hours or so. By the end of that year, thousands of developers were using Real.

    Other companies began streaming video before Glaser’s, which introduced RealVideo in 1997. The internet at that point wasn’t robust enough to handle high-quality video, but those in the know understood that it was just a matter of time. “It was clear to me that this was going to be the way that everything is going to be delivered,” says Glaser, who gave a speech around then titled “The Internet as the Next Mass Medium.” That same year, Glaser had a conversation with an entrepreneur named Reed Hastings, who told him of his long-range plan to build a business by shipping physical DVDs to people, and then shift to streaming when the infrastructure could support it. That worked out well. Today, our strong internet supports not only entertainment but social programming from YouTube, Facebook, TikTok and others.

    #Histoire_numérique #Streaming