/story

  • Disruptions in Dubai in the wake of record rainfall - L’Orient-Le Jour
    https://www.lorientlejour.com/article/1410674/perturbations-a-dubai-au-lendemain-de-pluies-records.html

    For Friederike Otto, senior lecturer in climate science at Imperial College London’s Grantham Institute, “the deadly and destructive rains in Oman and Dubai” were probably intensified by “human-caused climate change.”

    254 mm of rain, more than twice the annual total, no less
    But the excess water on the ground is above all due to inadequate, or absent, storm-drainage networks to carry off the overflow. Cities are ill-adapted to the worsening consequences of climate change. See several comments in this thread (https://twitter.com/Pascal_Laurent_/status/1780262609477832994)
    Several Twitter users also suspect that these record rains were triggered by cloud seeding. The practice is well documented in Dubai, but for now there is no evidence that these exceptional storms were the result of it.
    #changement_climatique
    #géoingénierie

    • Karma’s a bitch…
      https://seenthis.net/messages/1030033

      For the Emirati president of COP28, phasing out fossil fuels would send humanity back “to the caves” (December 2023)
      https://www.courrierinternational.com/article/pour-le-president-emirati-de-la-cop28-mettre-fin-aux-energies

      On Sunday The Guardian released a video in which Sultan Al-Jaber claims there is “no scientific study” showing that a phase-out of fossil fuels would make it possible to limit warming to 1.5°C. The statements, surfacing in the middle of the COP, sparked outrage among scientists.

    • No, Dubai’s Floods Weren’t Caused By Cloud Seeding
      Heavy rain has triggered flash flooding in Dubai. But those pointing the finger at cloud seeding are misguided.
      https://www.wired.com/story/dubai-flooding-uae-cloud-seeding-climate-change

      News reports and social media posts were quick to point the blame at cloud seeding. The UAE has a long-running program for trying to squeeze more rain out of the clouds that pass over the normally arid region—it has a team of pilots who spray salt particles into passing storms to encourage more water to form. The floods were positioned as a cautionary tale by some: Here’s what happens when you mess with nature. Even Bloomberg reported that cloud seeding had worsened the flooding.

      The truth is more complicated. I’ve spent the last few months reporting on cloud seeding in the UAE for an upcoming WIRED feature, and while it’s true that the UAE has been running cloud seeding missions this week—it performs more than 300 a year—it’s a stretch to say that it was responsible for the floods. (In fact, as we were preparing this story for publication on Wednesday morning, the UAE’s National Center for Meteorology told CNBC it had not seeded any clouds before the storm struck on Tuesday.)

      There are a few reasons for this. First: Even the most optimistic assessments of cloud seeding say that it can increase rainfall by a maximum of 25 percent annually. In other words, it would have rained anyway, and if cloud seeding did have an impact, it would only have been to slightly increase the amount of precipitation that fell. The jury is still out on the effectiveness of cloud seeding in warm climates, and even if it does work, cloud seeding can’t produce rain out of thin air; it can only enhance what’s already in the sky.
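
      A rough back-of-envelope check, using only the figures quoted in this thread (254 mm in roughly one day, an annual total of less than half of that, and the 25 percent upper bound on seeding); the arithmetic below is mine, not WIRED’s:

        # Back-of-envelope: can cloud seeding plausibly explain a 254 mm day?
        # Inputs come from this thread; annual_mm is inferred, not an official figure.
        storm_mm = 254            # rain that fell in roughly one day
        annual_mm = 254 / 2       # "more than twice the annual total" => annual < 127 mm
        seeding_boost = 0.25      # most optimistic *annual* enhancement from seeding

        max_seeding_mm = annual_mm * seeding_boost  # at most ~32 mm, spread over a year
        print(f"Most optimistic seeding gain: ~{max_seeding_mm:.0f} mm per year")
        print(f"Storm total: {storm_mm} mm in one day, "
              f"~{storm_mm / max_seeding_mm:.0f}x that annual gain")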

      Secondly, seeding operations tend to take place in the east of the country, far from more populated areas like Dubai. This is largely because of restrictions on air traffic, but means that it was unlikely that any seeding particles were still active by the time the storms reached Dubai. Most of the scientists I’ve spoken to say the impact of cloud seeding has a very small, localized effect and is unlikely to cause flooding in other areas. But perhaps the best evidence that cloud seeding wasn’t involved in these floods is the fact that it rained all over the region. Oman didn’t do any cloud seeding, but it was even more badly affected by flooding, with a number of casualties.

      It’s exciting to point the finger at a scary technology, but the real cause of the flooding is likely more banal: Dubai is comically ill-equipped to deal with rainfall. The city has expanded rapidly over the last few decades, with little attention paid in the past to infrastructure like storm drains that could help it deal with a sudden influx of water. It’s largely concrete and glass, and there’s very little green space to soak up rainfall. The result is chaos whenever it rains—though to be fair, most cities would struggle to deal with a year’s worth of rain falling in 12 hours.

      However, climate change may also be playing a role. As the planet heats up, the complex weather dynamics of the region are shifting and changing in ways that may bring more violent storms. City planners around the world are trying to make their cities “spongier” to help deal with flash flooding and save more water for drier parts of the year. Instead of using cloud seeding to turn the sky into a sponge, Dubai would be better off trying to turn the city into one.

  • The Notorious Lockbit Ransomware Gang Has Been Disrupted by Law Enforcement | WIRED
    https://www.wired.com/story/lockbit-ransomware-takedown-website-nca-fbi

    In addition to the seizing of technical infrastructure, the law enforcement operations around LockBit also include arrests in Poland, Ukraine, and the United States, as well as sanctions for two alleged members of the group who are based in Russia. The group has members spread around the world, the officials said.

    No Iranians, no Palestinians, no Chinese, no (North) Koreans, no eco-terrorists, no Islamo-leftists; the world is badly made.

  • US Lawmakers Tell DOJ to Quit Blindly Funding ‘Predictive’ Police Tools
    https://www.wired.com/story/doj-predictive-policing-lawmakers-demand

    The United States Department of Justice has failed to convince a group of US lawmakers that state and local police agencies aren’t awarded federal grants to buy AI-based “policing” tools known to be inaccurate, if not prone to exacerbating biases long observed in US police forces.

    #surveillance

  • Epic Games’ Sale of Bandcamp Has Left the Artist-Friendly Music Platform in Limbo | WIRED
    https://www.wired.com/story/epic-games-sale-bandcamp-music-platform-limbo

    Those employees were not included in Epic’s sale of Bandcamp. Songtradr purchased the platform’s business and operations but not its staff, according to Sandy Pope, bargaining director for the Office of Professional Employees International Union, which since March has represented 67 out of some 120 Bandcamp workers.

    #jeu_vidéo #jeux_vidéo #musique #bandcamp #songtradr #epic_games #cession #rachat #business #ressources_humaines #licenciements

  • Unhinged Conspiracies, AI Doppelgangers, and the Fractured Reality of Naomi Klein | WIRED
    https://www.wired.com/story/covid-conspiracies-ai-doppelgangers-naomi-klein

    A very interesting interview with Naomi Klein.

    The thing I find disingenuous is when you hear, oh, we’re going to have so much leisure time, the AI will do the grunt work. What world are you living in? That’s not what happens. Fewer people will get hired. And I don’t think this is a fight between humans and machines; that’s bad framing. It’s a fight between conglomerates that have been poisoning our information ecology and mining our data. We thought it was just about tracking us to sell us things, to better train their algorithms to recommend music. It turns out we’re creating a whole doppelganger world.

    We’ve provided just enough raw material.

    When Shoshana Zuboff wrote The Age of Surveillance Capitalism, it was more about convincing people who’d never had a sense that they had a right to privacy—because they’d grown up with the all-seeing eye of social media—that they did have a right to privacy. Now it’s not just that, even though privacy is important. It’s about whether anything we create is going to be weaponized against us and used to replace us—a phrase that unfortunately has different connotations right now.

    Take it back! The right stole “shock doctrine,” you can nab “replace us” for the AI age.

    These companies knew that our data was valuable, but I don’t even think they knew exactly what they were going to do with it beyond sell it to advertisers or other third parties. We’re through the first phase now, though. Our data is being used to train the machines.

    Fodder for a Doppelganger sequel.

    And about what it means for our ability to think new thoughts. The idea that everything is a remix, a mimicry—it relates to what you were talking about, the various Marvel and Mattel universes. The extent to which our culture is already formulaic and mechanistic is the extent to which it’s replaceable by AI. The more predictable we are, the easier it is to mimic. I find something unbearably sad about the idea that culture is becoming a hall of mirrors, where all we see is our own reflections back.

    #Naomi_Klein #Sosie #Doppelganger #Intelligence_artificielle

  • The Burning Man Fiasco Is the Ultimate Tech Culture Clash | WIRED
    https://www.wired.com/story/burning-man-diplo-chris-rock-social-media-culture-clash

    “Light weights.” That was the reply when Diplo posted a video of himself, Chris Rock, and several others escaping this year’s Burning Man after heavy rains left thousands of other Burners stranded and unable to leave. It was a small thing, but also encapsulated a growing divide between long-term attendees and those who show up expecting a weeklong Coachella in the Nevada desert.

    “Old-timers like myself tend to relish in the chaos,” says Eddie Codel, the San Francisco–based videographer who called Diplo and Rock lightweights on X, the social network formerly known as Twitter. “It allows us to lean into the principle of radical self-reliance a bit more.” Codel is on his 15th burn (he’s been coming since 1997), and Diplo wasn’t the only escaping Burner he called out. When someone else posted a video of RVs stuck in waterlogged sand, he posted, “They were warned.”

    ’Twas ever thus. Burning Man may have started as a gathering of San Francisco counterculture types, but in recent years it has morphed into a confab of tech bros, celebs, and influencers—many of whom fly in and spend the event’s crushingly hot days in RVs or air-conditioned tents, powered by generators. The Playa, as it’s known, is still orchestrated by the Burning Man Organization, otherwise known as “the Org,” and its core principles—gifting, self-reliance, decommodification (no commercial sponsorships)—remain in place.

    But increasingly the Burning Man tenet of “leave no trace” has found itself butting heads with growing piles of debris scattered in the desert following the bacchanal, which can draw more than 70,000 people every year. It’s an ideological minefield, one laid atop a 4-square-mile half-circle of tents and Dune-inspired art installations where everyone has a carbon footprint that’s two-thirds of a ton.

    A lot of this came to a head before rain turned Black Rock Desert into a freshly spun clay bowl. Last week, as festivalgoers were driving into Black Rock City, activists from groups like Rave Revolution, Extinction Rebellion, and Scientist Rebellion tried to halt their entry, demanding that the event cease allowing private jets, single-use plastics, and unlimited generator and propane use. They were met by attendees who said they could “go fuck themselves,” and ultimately the protest was shut down by the Pyramid Lake Paiute tribal police. (The route to the event passes through Pyramid Lake Paiute Reservation.)

    Last Sunday, as news began to spread about the Burners trapped by the rain, reactions grew more pointed. In one popular TikTok, since deleted, Alex Pearlman, who posts using the handle @pearlmania500, lambasted Burners for contributing to climate change while “building a temporary city in the middle of nowhere while we’re in the middle of an unhoused fucking homeless problem.” Reached by email, Pearlman said that TikTok took down the video, claiming it was mass reported for content violations. The creator challenged that, and it got reinstated—then it was removed again. “My reaction was, ‘I guess the community guideline enforcement manager hitched a ride with Diplo and Chris Rock out of Burning Man,’” Pearlman says.

    This sort of thing—a rant, about tech industry types at Burning Man, posted on a social media site, then shared on other social media sites—is essentially the rub, the irony of Burning Man in 2023. For years, the event was, and is, the playground of tech utopian types, the place where they got to unplug and get enlightened. Larry Page and Sergey Brin chose Eric Schmidt as Google’s CEO in part because of his Burner cred. But as mobile data on the Playa has gotten better—in 2016, new cell towers connected the desert like never before—more real-time information has come out of Burning Man as it’s happening, for better or worse.

    This year, that led to more than a little misinformation, says Matthew Reyes, who has, since 2013, volunteered to run Burning Man’s official live webcast. He didn’t go to the event this year but has been helping from his home near Dayton, Ohio. He says he’s had to file several Digital Millennium Copyright Act takedown notices to try to get fake Burning Man streams removed. It’s part of a larger trend of misinformation coming out of the festival, like the debunked rumor that there was an Ebola outbreak at the festival this year—one spread by blue-check X users. The tools so often used by attendees to share their adventures are now also the tools making the event look like a quagmire.

    “All of social media, it’s all about money, about serving custom ads or whatever the monetization scheme is,” Reyes says, adding that he believes internet discourse has hyped up what happened at this year’s event and that oftentimes things that are jokes on the Playa may get misunderstood on platforms. Reyes argues that many media outlets are further distorting the view of what’s happening on the Playa by reporting on what they see rise to the top of those very same social media platforms.

    For Reyes, what happened at this year’s Burning Man is actually proof that, for the most part, the festival’s tenets worked. People shared resources; they got out. And, as Codel put it, he had “the time of [his] life.” Climate change, and Burning Man’s potential impacts on it, are part of a crisis happening worldwide—though, as University of Pennsylvania environmental science professor Michael Mann told WIRED this week, “what took place at Burning Man speaks profoundly to the message of the climate protesters who were shouted down by Burning Man only days earlier.” (Burning Man aims to be carbon-negative by 2030, but some speculate the event won’t hit that target.)

    But even if the tenets of Burning Man worked, that doesn’t mean they were always followed—like, say, that decommodification one. Over the Labor Day weekend, when Burning Man attendees were stuck in the muck and unsure when they’d get out, a TikTokker posting on the handle @burningmanfashion told followers that her crew was safe and they had “enough tuna for a week.” The camp’s structures had fallen down, but they’d be OK. “The news is saying it’s pretty bad out here—it is,” she said. “Thank goodness we have a ModVan, so we’re safe inside of that. Sorry about the plug, I know we’re not supposed to talk about commercial things.”

    #Burning_man #Climat #Pop_culture

  • Rising Interest Rates Might Herald the End of the Open Internet | WIRED
    https://www.wired.com/story/rising-interest-rates-might-herald-the-end-of-the-open-internet

    Web 2.0 took off with help from the economic conditions of the 2000s. Recent moves from Reddit and Twitter signal that that era is coming to an end.

    Tim Hwang is a policy analyst and the author of Subprime Attention Crisis, a book about the global bubble of programmatic advertising. Follow him on Twitter @timhwang.

    Tianyu Fang is a writer and researcher. He was part of Chaoyang Trap, an experimental newsletter about culture and life on the Chinese internet. Follow him on Twitter @tianyuf.


    The open internet once seemed inevitable. Now, as global economic woes mount and interest rates climb, the dream of the 2000s feels like it’s on its last legs. After abruptly blocking access to unregistered users at the end of last month, Elon Musk announced unprecedented caps on the number of tweets—600 for those of us who aren’t paying $8 a month—that users can read per day on Twitter. The move follows the platform’s controversial choice to restrict third-party clients back in January.

    This wasn’t a standalone event. Reddit announced in April that it would begin charging third-party developers for API calls this month. The Reddit client Apollo would have to pay more than $20 million a year under new pricing, so it closed down, triggering thousands of subreddits to go dark in protest against Reddit’s new policy. The company went ahead with its plan anyway.
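
    A quick sanity check on that $20 million figure, using the numbers Apollo’s developer made public (about $0.24 per 1,000 API calls against roughly 7 billion requests a month; these inputs come from his announcement, not from this article):

      # Apollo's projected bill under Reddit's 2023 API pricing (publicly reported inputs).
      price_per_1k_calls = 0.24          # USD per 1,000 API calls
      monthly_requests = 7_000_000_000   # Apollo's reported monthly request volume

      monthly_cost = monthly_requests / 1_000 * price_per_1k_calls
      annual_cost = monthly_cost * 12
      print(f"Monthly: ${monthly_cost:,.0f}")  # ~$1,680,000
      print(f"Annual:  ${annual_cost:,.0f}")   # ~$20,160,000, i.e. "more than $20 million"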

    Leaders at both companies have blamed this new restrictiveness on AI companies unfairly benefitting from open access to data. Musk has said that Twitter needs rate limits because AI companies are scraping its data to train large language models. Reddit CEO Steve Huffman has cited similar reasons for the company’s decision to lock down its API ahead of a potential IPO this year.

    These statements mark a major shift in the rhetoric and business calculus of Silicon Valley. AI serves as a convenient boogeyman, but it is a distraction from a more fundamental pivot in thinking. Whereas open data and protocols were once seen as the critical cornerstone of successful internet business, technology leaders now see these features as a threat to the continued profitability of their platforms.

    It wasn’t always this way. The heady days of Web 2.0 were characterized by a celebration of the web as a channel through which data was abundant and widely available. Making data open through an API or some other means was considered a key way to increase a company’s value. Doing so could also help platforms flourish as developers integrated the data into their own apps, users enriched datasets with their own contributions, and fans shared products widely across the web. The rapid success of sites like Google Maps—which made expensive geospatial data widely available to the public for the first time—heralded an era where companies could profit through free, mass dissemination of information.

    “Information Wants To Be Free” became a rallying cry. Publisher Tim O’Reilly would champion the idea that business success in Web 2.0 depended on companies “disagreeing with the consensus” and making data widely accessible rather than keeping it private. Kevin Kelly marveled in WIRED in 2005 that “when a company opens its databases to users … [t]he corporation’s data becomes part of the commons and an invitation to participate. People who take advantage of these capabilities are no longer customers; they’re the company’s developers, vendors, skunk works, and fan base.” Investors also perceived the opportunity to generate vast wealth. Google was “most certainly the standard bearer for Web 2.0,” and its wildly profitable model of monetizing free, open data was deeply influential to a whole generation of entrepreneurs and venture capitalists.

    Of course, the ideology of Web 2.0 would not have evolved the way it did were it not for the highly unusual macroeconomic conditions of the 2000s and early 2010s. Thanks to historically low interest rates, spending money on speculative ventures was uniquely possible. Financial institutions had the flexibility on their balance sheets to embrace the idea that the internet reversed the normal laws of commercial gravity: It was possible for a company to give away its most valuable data and still get rich quick. In short, a zero interest-rate policy, or ZIRP, subsidized investor risk-taking on the promise that open data would become the fundamental paradigm of many Google-scale companies, not just a handful.

    Web 2.0 ideologies normalized much of what we think of as foundational to the web today. User tagging and sharing features, freely syndicated and embeddable links to content, and an ecosystem of third-party apps all have their roots in the commitments made to build an open web. Indeed, one of the reasons that the recent maneuvers of Musk and Huffman seem so shocking is that we have come to expect data will be widely and freely available, and that platforms will be willing to support people that build on it.

    But the marriage between the commercial interests of technology companies and the participatory web has always been one of convenience. The global campaign by central banks to curtail inflation through aggressive interest rate hikes changes the fundamental economics of technology. Rather than facing a landscape of investors willing to buy into a hazy dream of the open web, leaders like Musk and Huffman now confront a world where clear returns need to be seen today if not yesterday.

    This presages major changes ahead for the design of the internet and the rights of users. Twitter and Reddit are pioneering an approach to platform management (or mismanagement) that will likely spread elsewhere across the web. It will become increasingly difficult to access content without logging in, verifying an identity, or paying a toll. User data will become less exportable and less shareable, and there will be increasingly fewer expectations that it will be preserved. Third-parties that have relied on the free flow of data online—from app-makers to journalists—will find APIs ever more expensive to access and scraping harder than ever before.

    We should not let the open web die a quiet death. No doubt much of the foundational rhetoric of Web 2.0 is cringeworthy in the harsh light of 2023. But it is important to remember that the core project of building a participatory web where data can be shared, improved, critiqued, remixed, and widely disseminated by anyone is still genuinely worthwhile.

    The way the global economic landscape is shifting right now creates short-sighted incentives toward closure. In response, the open web ought to be enshrined as a matter of law. New regulations that secure rights around the portability of user data, protect the continued accessibility of crucial APIs to third parties, and clarify the long-ambiguous rules surrounding scraping would all help ensure that the promise of a free, dynamic, competitive internet can be preserved in the coming decade.

    For too long, advocates for the open web have implicitly relied on naive beliefs that the network is inherently open, or that web companies would serve as unshakable defenders of their stated values. The opening innings of the post-ZIRP world show how broader economic conditions have actually played the larger role in architecting how the internet looks and feels to this point. Believers in a participatory internet need to reach for stronger tools to mitigate the effects of these deep economic shifts, ensuring that openness can continue to be embedded into the spaces that we inhabit online.

    Tim Hwang is the author of “Le grand krach de l’attention” (the French edition of Subprime Attention Crisis)
    https://cfeditions.com/krach

    #Tim_Hwang #Internet_ouvert #Open_data

  • Give Every AI a Soul—or Else | WIRED
    https://www.wired.com/story/give-every-ai-a-soul-or-else

    When you ask science fiction writers to imagine forms of regulation, you sometimes land on strange ideas... which surely stem from a conception of AIs as “human-like” entities, not like each individual human (sentient and embodied, though that last point is raised for AIs too) but like the civilizations of humans that police themselves.

    Why this sudden wave of concern? Amid the toppling of many clichéd assumptions, we’ve learned that so-called Turing tests are irrelevant, providing no insight at all into whether generative large language models—GLLMs or “gollems”—are actually sapient beings. They will feign personhood, convincingly, long before there’s anything or anyone “under the skull.”

    Anyway, that distinction now appears less pressing than questions of good or bad—or potentially lethal—behavior.

    This essay is adapted from David Brin’s nonfiction book in progress, Soul on AI.

    Some remain hopeful that a merging of organic and cybernetic talents will lead to what Reid Hoffman and Marc Andreessen have separately called “amplification intelligence.” Or else we might stumble into lucky synergy with Richard Brautigan’s “machines of loving grace.” But worriers appear to be vastly more numerous, including many elite founders of a new Center for AI Safety who fret about rogue AI misbehaviors, from irksome all the way to “existentially” threatening human survival.

    Some short-term remedies, like citizen-protection regulations recently passed by the European Union, might help, or at least offer reassurance. Tech pundit Yuval Noah Harari proposed a law that any work done by gollems or other AI must be so labeled. Others recommend heightened punishments for any crime that’s committed with the aid of AI, as with a firearm. Of course, these are mere temporary palliatives.

    A bit of science fiction...

    By individuation I mean that each AI entity (he/she/they/ae/wae) must have what author Vernor Vinge, way back in 1981, called a true name and an address in the real world. As with every other kind of elite, these mighty beings must say, “I am me. This is my ID and home-root. And yes, I did that.”

    Hence, I propose a new AI format for consideration: We should urgently incentivize AI entities to coalesce into discretely defined, separated individuals of relatively equal competitive strength.

    Each such entity would benefit from having an identifiable true name or registration ID, plus a physical “home” for an operational-referential kernel. (Possibly “soul”?) And thereupon, they would be incentivized to compete for rewards. Especially for detecting and denouncing those of their peers who behave in ways we deem insalubrious. And those behaviors do not even have to be defined in advance, as most AI mavens and regulators and politicians now demand.

    Not only does this approach farm out enforcement to entities who are inherently better capable of detecting and denouncing each other’s problems or misdeeds. The method has another, added advantage. It might continue to function, even as these competing entities get smarter and smarter, long after the regulatory tools used by organic humans—and prescribed now by most AI experts—lose all ability to keep up.

    Putting it differently, if none of us organics can keep up with the programs, then how about we recruit entities who inherently can keep up? Because the watchers are made of the same stuff as the watched.

    Personally, I am skeptical that a purely regulatory approach would work, all by itself. First because regulations require focus, widely shared political attention, and consensus to enact, followed by implementation at the pace of organic human institutions—a sloth/snail rate, by the view of rapidly adapting cybernetic beings. Regulations can also be stymied by the “free-rider problem”—nations, corporations, and individuals (organic or otherwise) who see personal advantage in opting out of inconvenient cooperation.

    There is another problem with any version of individuation that is entirely based on some ID code: It can be spoofed. If not now, then by the next generation of cybernetic scoundrels, or the next.

    I see two possible solutions. First, establish ID on a blockchain ledger. That is very much the modern, with-it approach, and it does seem secure in theory. Only that’s the rub. It seems secure according to our present set of human-parsed theories. Theories that AI entities might surpass to a degree that leaves us cluelessly floundering.

    Another solution: A version of “registration” that’s inherently harder to fool would require AI entities with capabilities above a certain level to have their trust-ID or individuation be anchored in physical reality. I envision—and note: I am a physicist by training, not a cyberneticist—an agreement that all higher-level AI entities who seek trust should maintain a Soul Kernel (SK) in a specific piece of hardware memory, within what we quaintly used to call a particular “computer.”

    Yes, I know it seems old-fashioned to demand that instantiation of a program be restricted to a specific locale. And so, I am not doing that! Indeed, a vast portion, even a great majority, of a cyber entity’s operations may take place in far-dispersed locations of work or play, just as a human being’s attention may not be aimed within their own organic brain, but at a distant hand, or tool. So? The purpose of a program’s Soul Kernel is similar to the driver’s license in your wallet. It can be interrogated in order to prove that you are you.
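
    To make the driver’s-license analogy concrete, here is a minimal sketch of what “interrogating” a Soul Kernel could look like, assuming a registry of true names bound to hardware-held secrets and a simple challenge-response; the names and the HMAC design here are my illustration, not anything specified in Brin’s essay:

      # Illustrative sketch only: a registry challenges an AI entity's "Soul Kernel"
      # to prove its identity. The SK's secret is assumed to live in specific hardware.
      import os, hmac, hashlib

      class SoulKernel:
          """Stand-in for the hardware-anchored kernel holding a never-exported secret."""
          def __init__(self, true_name: str, secret: bytes):
              self.true_name = true_name
              self._secret = secret

          def answer(self, challenge: bytes) -> bytes:
              # The proof is computed inside the kernel; the secret itself never leaves.
              return hmac.new(self._secret, challenge, hashlib.sha256).digest()

      class Registry:
          """Maps each true name to the secret enrolled at its 'home-root'."""
          def __init__(self):
              self._enrolled = {}

          def enroll(self, kernel):
              self._enrolled[kernel.true_name] = kernel._secret  # one-time enrollment

          def interrogate(self, kernel) -> bool:
              challenge = os.urandom(32)  # fresh nonce, so recorded answers can't be replayed
              expected = hmac.new(self._enrolled[kernel.true_name],
                                  challenge, hashlib.sha256).digest()
              return hmac.compare_digest(expected, kernel.answer(challenge))

      registry = Registry()
      sk = SoulKernel("gollem-0042", os.urandom(32))
      registry.enroll(sk)
      print(registry.interrogate(sk))  # True: "I am me. This is my ID. And yes, I did that."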

    Again, the key thing I seek from individuation is not for all AI entities to be ruled by some central agency, or by mollusk-slow human laws. Rather, I want these new kinds of über-minds encouraged and empowered to hold each other accountable, the way we already (albeit imperfectly) do. By sniffing at each other’s operations and schemes, then motivated to tattle or denounce when they spot bad stuff. A definition that might readjust to changing times, but that would at least keep getting input from organic-biological humanity.

    Especially, they would feel incentives to denounce entities who refuse proper ID.

    If the right incentives are in place—say, rewards for whistle-blowing that grant more memory or processing power, or access to physical resources, when some bad thing is stopped—then this kind of accountability rivalry just might keep pace, even as AI entities keep getting smarter and smarter. No bureaucratic agency could keep up at that point. But rivalry among them—tattling by equals—might.

    Above all, perhaps those super-genius programs will realize it is in their own best interest to maintain a competitively accountable system, like the one that made ours the most successful of all human civilizations. One that evades both chaos and the wretched trap of monolithic power by kings or priesthoods … or corporate oligarchs … or Skynet monsters. The only civilization that, after millennia of dismally stupid rule by moronically narrow-minded centralized regimes, finally dispersed creativity and freedom and accountability widely enough to become truly inventive.

    David Brin is an astrophysicist whose international best-selling novels include The Postman, Earth, Existence, and Hugo Award winners Startide Rising and The Uplift War. He consults for NASA, companies, agencies, and nonprofits about the onrushing future. Brin’s first nonfiction book, The Transparent Society, won the Freedom of Speech Award. His new one is Vivid Tomorrows: Science Fiction and Hollywood.

    #Intelligence_artificielle #Individuation #Science_fiction #Régulation

  • Realistic Graphics Can Open Real Dialog Around Game Violence | WIRED
    https://www.wired.com/story/realistic-graphics-video-game-violence

    Questions around violence in games have a long history, spanning tabloid moral panics to concerted academic research. While the topic of whether playing violent games may lead to aggressive behavior in real life is still hotly debated, studies tend to show that any correlation is at most minuscule. Yet with the progress of visual fidelity in games, from the FLESH system to the recent trailer for Unrecord, which some thought looked too lifelike to be true, it’s no surprise if the question circles around again.

    Aaron Drummond, a senior lecturer at the School of Psychological Sciences at the University of Tasmania (and coauthor of the study linked above), believes that while the topic demands additional research, if increasing realism in game violence did lead to more aggressive behavior, the signs should already be present.

    “One would expect to see three things,” he explains. “One, an increase in the number of studies showing an effect of violent content on aggression; two, an increase in the effect sizes of violent games on aggressive behavior; and three, an increase in assaults and violent crimes.” None of these things have happened, he adds, with data in fact trending in the opposite direction.

    Paul Cairns, head of the Department of Computer Science at the University of York in the UK, has a similar view. “My instinct is that if violent video games really made people violent, we would be going to hell in a handcart right now,” he says. Cairns has explored the concept of “priming,” or the idea that game violence can somehow alter how we respond to violence elsewhere, potentially leading toward violent behavior. There’s no obvious evidence of priming, he says, and “if you manipulate the realism of games, it really doesn’t lead to any change of priming at all.” If there’s any path from playing games to violent behavior, then, it’s not merely down to violent content. “There’s got to be something else going on there.”

    Despite past research, though, it’s impossible to know for sure that increased realism won’t have a negative impact, Cairns says, simply because we’ve never seen the current levels of realism in interactive media before. Yet humans—at least adults—are very good at understanding what’s real and what isn’t, he continues, “which is why [some people] can bear a horror film but can’t even watch people have an injection.” So as long as we understand we aren’t taking part in a real scenario, it seems unlikely that even a highly realistic simulation will spark problematic behavior.

    #Jeux_vidéo #Violence

  • The ‘#Enshittification’ of TikTok | WIRED
    https://www.wired.com/story/tiktok-platforms-cory-doctorow

    HERE IS HOW platforms die: First, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.

    I call this enshittification, and it is a seemingly inevitable consequence arising from the combination of the ease of changing how a platform allocates value, combined with the nature of a “two-sided market,” where a platform sits between buyers and sellers, holding each hostage to the other, raking off an ever-larger share of the value that passes between them.

    (In French: “la merdisation”?)

  • Yet Another Problem With Recycling: It Spews Microplastics
    https://www.wired.com/story/yet-another-problem-with-recycling-it-spews-microplastics

    THE PLASTICS INDUSTRY has long hyped recycling, even though it is well aware that it’s been a failure. Worldwide, only 9 percent of plastic waste actually gets recycled. In the United States, the rate is now 5 percent. Most used plastic is landfilled, incinerated, or winds up drifting around the environment.

    Now, an alarming new study has found that even when plastic makes it to a recycling center, it can still end up splintering into smaller bits that contaminate the air and water. This pilot study focused on a single new facility where plastics are sorted, shredded, and melted down into pellets. Along the way, the plastic is washed several times, sloughing off microplastic particles—fragments smaller than 5 millimeters—into the plant’s wastewater.

    [...]

    Their microplastics tally was astronomical. Even with filtering, they calculate that the total discharge from the different washes could produce up to 75 billion particles per cubic meter of wastewater. Depending on the recycling facility, that liquid would ultimately get flushed into city water systems or the environment. In other words, recyclers trying to solve the plastics crisis may in fact be accidentally exacerbating the microplastics crisis, which is coating every corner of the environment with synthetic particles.

    “It seems a bit backward, almost, that we do plastic recycling in order to protect the environment, and then end up increasing a different and potentially more harmful problem,” says plastics scientist Erina Brown, who led the research while at the University of Strathclyde.

    [...]

    The full extent of the problem isn’t yet clear, as this pilot study observed just one facility. But because it was brand-new, it was probably a best-case scenario, says Steve Allen, a microplastics researcher at the Ocean Frontiers Institute and coauthor of the new paper. “It is a state-of-the-art plant, so it doesn’t get any better,” he says. “If this is this bad, what are the others like?”

    These researchers also found high levels of airborne microplastics inside the facility, ready for workers to inhale. Previous research has found that recycled pellets contain a number of toxic chemicals, including endocrine-disrupting ones. Plastic particles can be dangerous to human lung cells, and a previous study found that laborers who work with nylon, which is also made of plastic, suffer from a chronic disease known as flock worker’s lung. When plastics break down in water, they release “leachate”—a complex cocktail of chemicals, many of which are hazardous to life.

    “Recycling a plastic bottle, then, isn’t just turning it into a new bottle. It’s deconstructing it and putting it back together again. The recycling centers are potentially making things worse by actually creating microplastics faster and discharging them into both water and air,” says Deonie Allen, a coauthor of the paper and a microplastics researcher at the University of Birmingham. “I’m not sure we can technologically engineer our way out of that problem.”

    #plastique #pollution #recyclage #eau #air

  • The Open Letter to Stop ‘Dangerous’ AI Race Is a Huge Mess | Chloe Xiang
    https://www.vice.com/en/article/qjvppm/the-open-letter-to-stop-dangerous-ai-race-is-a-huge-mess

    The letter was penned by the Future of Life Institute, a nonprofit organization with the stated mission to “reduce global catastrophic and existential risk from powerful technologies.” It is also host to some of the biggest proponents of longtermism, a kind of secular religion boosted by many members of the Silicon Valley tech elite since it preaches seeking massive wealth to direct towards problems facing humans in the far future. One notable recent adherent to this idea is disgraced FTX CEO Sam Bankman-Fried. Source: Motherboard

    • Gary Marcus signed the letter; he is very far from the “AI hype” and has a much more measured point of view.

      I am not afraid of robots. I am afraid of people.
      https://garymarcus.substack.com/p/i-am-not-afraid-of-robots-i-am-afraid

      For now, all the technolibertarians are probably cackling; if they had wanted to sabotage the “develop AI with care” crowd, they couldn’t have found a better way to divide and conquer.

      In truth, over 50,000 people signed the letter, including a lot of people who have nothing to do with the long term risk movement that the FLI itself is associated with. These include, for example, Yoshua Bengio (the most cited AI researcher in recent years), Stuart Russell (a well-known AI researcher at Berkeley), Pattie Maes (a prominent AI researcher at MIT), John Hopfield (a physicist whose original work on machine learning has been massively influential), Victoria Krakovna (a leading researcher at DeepMind working on how to get machines to do what we want them to do), and Grady Booch (a pioneering software architect who has been speaking out about the unreliability of current techniques as an approach to software engineering).

      But a few loud voices have overshadowed the 50,000 who have signed.

    • One thing that bothers me a little is that even with Gary Marcus, the focus is on supposed abuses in the form of fraudulent uses of AI: essentially disinformation and phishing. (And everyone keeps banging on about disinformation, as if Trump, QAnon, the climate deniers and the covidiots, and lying governments needed any AI whatsoever to generate their unhinged nonsense and make it “credible.”)

      Yet there are all the uses that are either already legal or soon will be, and that are utterly appalling: “help” for the justice system (he is Black and poor, so he will go to prison because the oh-so-shrewd AI found he has the face of a repeat offender), “help” with welfare checks (according to the AI she has the profile of someone who will drink away her benefits, so let’s cut off her support), why not career guidance for kids with algorithms that scare everyone (I know, Parcoursup is far from AI, but I have no doubt it is the next step), help for the police (that one, the AI has decided to slap an S file on him right away, since he subscribes to rezo.net’s RSS feed and reads Bastamag…), full automation of medicine (instead of diagnostic support, the doctor will simply be replaced by an AI), and so on.

      Automation of access to rights (immigration, welfare, housing, education…), and organized incompetence of staff. And the reinforcement of that argument from authority (“the software makes fewer mistakes than humans”), such that many staff are already no longer in a position to take responsibility for going against a decision made by an algorithm.

    • Yeah, well, when you get involved in a debate, you are supposed to do a bit of homework on what has already happened in the field.

    • You need to be more explicit.

      I have been following Gary Marcus for a while, precisely because he is opposed to the “AI hype” and has already published several pieces explaining that he does not believe general intelligence will arrive with today’s tools (he is not a guru announcing the singularity thanks to Bitcoin and ChatGPT, nor a follower of longtermism). And at the same time, he had already published pieces expressing distrust of today’s tools before signing the open letter in question (whose limits and problems he explicitly acknowledges in the piece he published last night, where he explicitly mentions the Timnit Gebru text you posted above).

    • I suppose “doing your homework” refers to section 6.2 of On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (March 2021)
      https://dl.acm.org/doi/pdf/10.1145/3442188.3445922

      6.2 Risks and Harms
      The ersatz fluency and coherence of LMs raises several risks, precisely because humans are prepared to interpret strings belonging to languages they speak as meaningful and corresponding to the communicative intent of some individual or group of individuals who have accountability for what is said. We now turn to examples, laying out the potential follow-on harms.

      Whereas Gary Marcus tends to dwell on more deliberately harmful uses (“bad actors”):
      https://www.theatlantic.com/technology/archive/2023/03/ai-chatbots-large-language-model-misinformation/673376

      And when this reaches the general public, it becomes particularly vaporous. Ezra Klein’s op-ed in the NY Times (two weeks ago) may have influenced the emergence of the open letter, and it is very, very vague about the risks of AI (roughly: “it is so powerful that we do not really understand it,” not far from the AI hype):
      https://www.nytimes.com/2023/03/12/opinion/chatbots-artificial-intelligence-future-weirdness.html

    • I do not know how to be more explicit. An AI petition co-signed by Melon Musk and not by M. Mitchell or T. Gebru: when you know the field even a little bit, you should simply be wary before committing your name to it. But hey… you do you, as they say.

  • The Out-of-Control Spread of Crowd-Control Tech | WIRED
    https://www.wired.com/story/out-of-control-spread-of-crowd-control-tech

    Broken bones. Eye trauma. Brain injuries. How America’s sketchy “less-lethal” weapons industry exports its insidious brand of violence around the world.

    (paywall easy to get around in a private window)

    cc @davduf

  • Right-to-Repair Advocates Question John Deere’s New Promises | WIRED
    https://www.wired.com/story/right-to-repair-advocates-question-john-deeres-new-promises

    When “yes” = “no” and it smells like a scam...

    Early this week, tractor maker John Deere said it had signed a memorandum of understanding with the American Farm Bureau Federation, an agricultural trade group, promising to make it easier for farmers to access tools and software needed to repair their own equipment.

    The deal looked like a concession from the agricultural equipment maker, a major target of the right-to-repair movement, which campaigns for better access to documents and tools needed for people to repair their own gear. But right-to-repair advocates say that despite some good points, the agreement changes little, and farmers still face unfair barriers to maintaining equipment they own.

    Kevin O’Reilly, a director of the right-to-repair campaign run by the US Public Interest Research Group, a grassroots lobbying organization, says the timing of Deere’s deal suggests the company may be trying to quash recent interest in right-to-repair laws from state legislators. In the past two years, corn belt states including Nebraska and Missouri, and also Montana, have considered giving farmers a legal right to tools needed to repair their own equipment. But no laws have been passed. “The timing of this new agreement is no accident,” O’Reilly says. “This could be part of an effort to take the wind out of the sails of right-to-repair legislation.”

    Indeed, one section of the memorandum takes direct aim at proposals to enshrine the right to repair into law. It states that the American Farm Bureau Federation “agrees to encourage state Farm Bureau organizations to recognize the commitments made in this MOU and refrain from introducing, promoting, or supporting federal or state Right to Repair legislation that imposes obligations beyond the commitments in this MOU.”

    Walter Schweitzer, a Montana-based cattle farmer and right-to-repair advocate, calls the new agreement “a Groundhog Day sort of thing”—a repeat of something he has seen before. The memorandum is similar to one signed in 2018 by the California Farm Bureau, the state’s largest organization for farmers’ interests, and the Equipment Dealers Association, which represents Deere, he says. But little changed afterward, in his view.

    Jen Hartmann, John Deere’s global director of strategic public relations, says the new memorandum of understanding emerged from years of discussions with the American Farm Bureau. It reaffirms the company’s “long-standing commitment to ensure farmers have access to tools and resources they need to diagnose, maintain, and repair their equipment,” she says.

    Deere dominates farming in the US, with 60 percent of farmers across 20 states owning at least one of the company’s combine harvesters. It has recently morphed products like tractors into mobile computers, investing hundreds of millions of dollars in robotics and adding AI tools to help farmers boost yields. But those computers on oversize wheels have become increasingly difficult to repair due to closed-off software, many farmers say. Being unable to repair your iPhone may be inconvenient, but a farmer at harvest time with a broken tractor faces potential ruin. 

    In 2022, three lawsuits alleged that Deere has been monopolizing the repair market, and a group of farming organizations filed a similar complaint with the US Federal Trade Commission. And in 2021, the FTC said it planned to ramp up enforcement against companies that used restrictive measures to prevent consumers from repairing their own electronics. 

    Deere’s new agreement states that it will ensure that farmers and independent repair shops can subscribe to or buy tools, software, and documentation from the company or its authorized repair facilities “on fair and reasonable terms.” The tractor giant also says it will ensure that any farmer, independent technician, or independent repair facility will have electronic access to Deere’s Customer Service Advisor, a digital database of operator and technical manuals that’s available for a fee.

    The memorandum also promises to give farmers the option to “reset equipment that has been immobilized”—something that can happen when a security feature is inadvertently triggered. Farmers could previously only reset their equipment by going to a John Deere dealer or having a John Deere-authorized technician come to them. “That’s been a huge complaint,” says Nathan Proctor, who leads US PIRG’s right-to-repair campaign. “Farmers will be relieved to know there might be a non-dealer option for that.”

    Other parts of the new agreement, however, are too vague to offer significant help to farmers, proponents of the right to repair say. Although the memorandum has much to say about access to diagnostic tools, farmers need to fix as well as identify problems, says Schweitzer, who raises cattle on his 3,000-acre farm, Tiber Angus, in central Montana. “Being able to diagnose a problem is great, but when you find out that it’s a sensor or electronic switch that needs to be replaced, typically that new part has to be reprogrammed with the electronic control unit on board,” he said. “And it’s unclear whether farmers will have access to those tools.”

    Deere’s Hartmann says that “as equipment continues to evolve and technology advances on the farm, Deere continues to be committed to meeting those innovations with enhanced tools and resources.” The company this year will launch the ability to download software updates directly into some equipment with a 4G wireless connection, she said. But Hartmann declined to say whether farmers would be able to reprogram equipment parts without the involvement of the company or an authorized dealer.

    The new agreement isn’t legally binding. It states that should either party determine that the MOU is no longer viable, all they have to do is provide written notice to the other party of their intent to withdraw. And both US PIRG and Schweitzer note that other influential farmers groups are not party to the agreement, such as the National Farmers Union, where Schweitzer is a board member and runs the Montana chapter. 

    Schweitzer is also concerned by the way the agreement is sprinkled with promises to offer farmers or independent repair shops “fair and reasonable terms” on access to tools or information. “‘Fair and reasonable’ to a multibillion-dollar company can be a lot different for a farmer who is in debt, trying to make payments on a $200,000 tractor and then has to pay $8,000 to $10,000 to purchase hardware for repairs,” he says. 

    The agreement signed by Deere this week comes on the heels of New York governor Kathy Hochul signing into law the Digital Fair Repair Act, which requires companies to provide the same tools and information to the public that are given to their own repair technicians.

    However, while right-to-repair advocates mostly cheered the law as precedent-setting, it was weakened by last-minute compromises to the bill, such as making it applicable only to devices manufactured and sold in New York on or after July 1, 2023, and by excluding medical devices, automobiles, and home appliances.

    #John_Deere #Réparation #Tracteurs #Communs

  • #Alaska: waterways are turning orange, a possible consequence of warming – Regard sur l’Arctique
    https://www.rcinet.ca/regard-sur-arctique/2022/12/19/alaska-des-cours-deau-virent-a-lorange-consequence-possible-du-rechauffement

    When #pergélisol (permafrost) thaws, the sediments, which can contain a lot of organic matter but also minerals and metals, including iron, come into contact with the surrounding water and air. An oxidation process then occurs. Iron, once oxidized, takes on that orange color and is then carried along by waterways. Chemical reactions between the water and minerals contained in the sediments can also make the rivers more acidic.
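
    Schematically (my summary of the chemistry described above, not an equation from the article), the overall reaction is the oxidation and hydrolysis of dissolved ferrous iron, which accounts for both the orange precipitate and the added acidity:

      $$4\,\mathrm{Fe^{2+}} + \mathrm{O_2} + 10\,\mathrm{H_2O} \longrightarrow 4\,\mathrm{Fe(OH)_3} + 8\,\mathrm{H^+}$$

    Fe(OH)3 is the orange ferric hydroxide that stains the streambeds; the released H+ is what makes the water more acidic.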

    Alaska’s Arctic Waterways Are Turning a Foreboding Orange | WIRED
    https://www.wired.com/story/alaskas-arctic-waterways-are-turning-a-foreboding-orange

    For now, the researchers don’t know for sure whether the orange streams and rivers are an anomalous occurrence, coinciding with a handful of unseasonably warm seasons followed by high snow pack. And only time will tell how long it might continue.

    #permafrost

  • The Dark Risk of Large Language Models | WIRED
    https://www.wired.com/story/large-language-models-artificial-intelligence

    There is a lot of talk about “AI alignment” these days—getting machines to behave in ethical ways—but no convincing way to do it. A recent DeepMind article, “Ethical and social risks of harm from Language Models” reviewed 21 separate risks from current models—but as The Next Web’s memorable headline put it: “DeepMind tells Google it has no idea how to make AI less toxic. To be fair, neither does any other lab.” Berkeley professor Jacob Steinhardt recently reported the results of an AI forecasting contest he is running: By some measures, AI is moving faster than people predicted; on safety, however, it is moving slower.

    Meanwhile, the ELIZA effect, in which humans mistake unthinking chat from machines for that of a human, looms more strongly than ever, as evidenced by the recent case of now-fired Google engineer Blake Lemoine, who alleged that Google’s large language model LaMDA was sentient. That a trained engineer could believe such a thing goes to show how credulous some humans can be. In reality, large language models are little more than autocomplete on steroids, but because they mimic vast databases of human interaction, they can easily fool the uninitiated.

    #Intelligence_artificielle #Chatbots

  • Ex-Googler Timnit Gebru Starts Her Own AI Research Center | WIRED
    https://www.wired.com/story/ex-googler-timnit-gebru-starts-ai-research-center

    One year ago Google artificial intelligence researcher Timnit Gebru tweeted, “I was fired” and ignited a controversy over the freedom of employees to question the impact of their company’s technology. Thursday, she launched a new research institute to ask questions about responsible use of artificial intelligence that Gebru says Google and other tech companies won’t.

    “Instead of fighting from the inside, I want to show a model for an independent institution with a different set of incentive structures,” says Gebru, who is founder and executive director of Distributed Artificial Intelligence Research (DAIR). The first part of the name is a reference to her aim to be more inclusive than most AI labs—which skew white, Western, and male—and to recruit people from parts of the world rarely represented in the tech industry.

    Gebru was ejected from Google after clashing with bosses over a research paper urging caution with new text-processing technology enthusiastically adopted by Google and other tech companies. Google has said she resigned and was not fired, but acknowledged that it later fired Margaret Mitchell, another researcher who with Gebru co-led a team researching ethical AI. The company placed new checks on the topics its researchers can explore. Google spokesperson Jason Freidenfelds declined to comment but directed WIRED to a recent report on the company’s work on AI governance, which said Google has published more than 500 papers on “responsible innovation” since 2018.

    The fallout at Google highlighted the inherent conflicts in tech companies sponsoring or employing researchers to study the implications of technology they seek to profit from. Earlier this year, organizers of a leading conference on technology and society canceled Google’s sponsorship of the event. Gebru says DAIR will be freer to question the potential downsides of AI and will be unencumbered by the academic politics and pressure to publish that she says can complicate university research.

    DAIR is currently a project of nonprofit Code for Science and Society but will later incorporate as a nonprofit in its own right, Gebru says. Her project has received grants totaling more than $3 million from the Ford, MacArthur, Rockefeller, and Open Society foundations, as well as the Kapor Center. Over time, she hopes to diversify DAIR’s financial support by taking on consulting work related to its research.

    DAIR joins a recent flourishing of work and organizations taking a broader and critical view of AI technology. New nonprofits and university centers have sprung up to study and critique AI’s effects in and on the world, such as NYU’s AI Now Institute, the Algorithmic Justice League, and Data for Black Lives. Some researchers in AI labs also study the impacts and proper use of algorithms, and scholars from other fields such as law and sociology have turned their own critical eyes on AI.

    #Intelligence_artificielle #Timnit_Gebru #Ethique