  • Dans le noir la nuit : Anima Sola #5
    Poetic narrative based on images created by proxy.

    https://liminaire.fr/palimpseste/article/formes-concretes-du-rythme

    I see myself, from behind; turned this way, I cannot see my face. It is an image from childhood. A distant memory. A summer landscape. At the edge of a river. In the mountains, perhaps? I can feel again the sensation of the heat, the sweat shivering on my skin.

    (...) #Écriture, #Langage, #Poésie, #Lecture, #Photographie, #Littérature, #Art, #AI, #IntelligenceArtificielle, #Dalle-e, #Récit (...)

    https://liminaire.fr/IMG/mp4/anima_sola_5.mp4

  • Hashima : La mémoire de l’histoire

    https://liminaire.fr/au-lieu-de-se-souvenir/article/hashima

    Hashima: that is this place’s name. It is also called Gunkanjima, the battleship island, a disused industrial fortress. Located off the coast of Nagasaki, in Japan. Only the memory of its history remains. And this single word to name it: Hashima. The deserted island. I sometimes go back there in dreams. There, I hear voices in the night. In the murmur of the sea that encircles the island. And the wind whistling in the distance, invading everything.

    (...) #Écriture, #Poésie, #Vidéo, #Cinéma, #Film, #Photographie, #Japon, #Art, #AI, #IntelligenceArtificielle, #Dalle-e, #Récit, #île (...)

  • Give Every AI a Soul—or Else | WIRED
    https://www.wired.com/story/give-every-ai-a-soul-or-else

    When science fiction authors are asked to imagine forms of regulation, strange ideas sometimes come up... ideas that surely stem from a conception of AIs as “human-like” entities: not like individual humans (sentient and embodied, though embodiment is also raised for AIs), but like the civilizations of humans that keep themselves in check.

    Why this sudden wave of concern? Amid the toppling of many clichéd assumptions, we’ve learned that so-called Turing tests are irrelevant, providing no insight at all into whether generative large language models—GLLMs or “gollems”—are actually sapient beings. They will feign personhood, convincingly, long before there’s anything or anyone “under the skull.”

    Anyway, that distinction now appears less pressing than questions of good or bad—or potentially lethal—behavior.

    This essay is adapted from David Brin’s nonfiction book in progress, Soul on Ai.

    Some remain hopeful that a merging of organic and cybernetic talents will lead to what Reid Hoffman and Marc Andreessen have separately called “amplification intelligence.” Or else we might stumble into lucky synergy with Richard Brautigan’s “machines of loving grace.” But worriers appear to be vastly more numerous, including many elite founders of a new Center for AI Safety who fret about rogue AI misbehaviors, from irksome all the way to “existentially” threatening human survival.

    Some short-term remedies, like citizen-protection regulations recently passed by the European Union, might help, or at least offer reassurance. Tech pundit Yuval Noah Harari proposed a law that any work done by gollems or other AI must be so labeled. Others recommend heightened punishments for any crime that’s committed with the aid of AI, as with a firearm. Of course, these are mere temporary palliatives.

    A bit of science fiction...

    By individuation I mean that each AI entity (he/she/they/ae/wae) must have what author Vernor Vinge, way back in 1981, called a true name and an address in the real world. As with every other kind of elite, these mighty beings must say, “I am me. This is my ID and home-root. And yes, I did that.”

    Hence, I propose a new AI format for consideration: We should urgently incentivize AI entities to coalesce into discretely defined, separated individuals of relatively equal competitive strength.

    Each such entity would benefit from having an identifiable true name or registration ID, plus a physical “home” for an operational-referential kernel. (Possibly “soul”?) And thereupon, they would be incentivized to compete for rewards. Especially for detecting and denouncing those of their peers who behave in ways we deem insalubrious. And those behaviors do not even have to be defined in advance, as most AI mavens and regulators and politicians now demand.

    Not only does this approach farm out enforcement to entities who are inherently better capable of detecting and denouncing each other’s problems or misdeeds. The method has another, added advantage. It might continue to function, even as these competing entities get smarter and smarter, long after the regulatory tools used by organic humans—and prescribed now by most AI experts—lose all ability to keep up.

    Putting it differently, if none of us organics can keep up with the programs, then how about we recruit entities who inherently can keep up? Because the watchers are made of the same stuff as the watched.

    Personally, I am skeptical that a purely regulatory approach would work, all by itself. First because regulations require focus, widely shared political attention, and consensus to enact, followed by implementation at the pace of organic human institutions—a sloth/snail rate, by the view of rapidly adapting cybernetic beings. Regulations can also be stymied by the “free-rider problem”—nations, corporations, and individuals (organic or otherwise) who see personal advantage in opting out of inconvenient cooperation.

    There is another problem with any version of individuation that is entirely based on some ID code: It can be spoofed. If not now, then by the next generation of cybernetic scoundrels, or the next.

    I see two possible solutions. First, establish ID on a blockchain ledger. That is very much the modern, with-it approach, and it does seem secure in theory. Only that’s the rub. It seems secure according to our present set of human-parsed theories. Theories that AI entities might surpass to a degree that leaves us cluelessly floundering.

    Another solution: A version of “registration” that’s inherently harder to fool would require AI entities with capabilities above a certain level to have their trust-ID or individuation be anchored in physical reality. I envision—and note: I am a physicist by training, not a cyberneticist—an agreement that all higher-level AI entities who seek trust should maintain a Soul Kernel (SK) in a specific piece of hardware memory, within what we quaintly used to call a particular “computer.”

    Yes, I know it seems old-fashioned to demand that instantiation of a program be restricted to a specific locale. And so, I am not doing that! Indeed, a vast portion, even a great majority, of a cyber entity’s operations may take place in far-dispersed locations of work or play, just as a human being’s attention may not be aimed within their own organic brain, but at a distant hand, or tool. So? The purpose of a program’s Soul Kernel is similar to the driver’s license in your wallet. It can be interrogated in order to prove that you are you.
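
    Brin stays at the level of metaphor (the driver’s license), but the registration-and-interrogation scheme he sketches is easy to caricature in code. Below is a minimal, purely illustrative Python sketch, with invented names throughout (SoulKernel, Registry, home_root; none of this comes from the essay): a public registry binds a true name to a Soul Kernel kept at one hardware home-root, and identity is established by challenge-response against that kernel.

    ```python
    import hashlib
    import hmac
    import os

    class SoulKernel:
        """Stands in for a secret held in one specific piece of hardware."""

        def __init__(self, home_root: str):
            self.home_root = home_root     # e.g. an identifiable physical machine
            self._secret = os.urandom(32)  # never leaves the kernel's hardware

        def answer(self, challenge: bytes) -> bytes:
            # Proof of possession: like producing the driver's license on request.
            return hmac.new(self._secret, challenge, hashlib.sha256).digest()

    class Registry:
        """Public ledger binding true names to interrogable Soul Kernels."""

        def __init__(self):
            self._entries: dict[str, SoulKernel] = {}

        def register(self, true_name: str, kernel: SoulKernel) -> None:
            self._entries[true_name] = kernel

        def verify(self, true_name: str, claimed: bytes, challenge: bytes) -> bool:
            kernel = self._entries.get(true_name)
            if kernel is None:
                return False  # no registered ID: no trust
            # The registry interrogates the kernel at its registered home-root.
            return hmac.compare_digest(kernel.answer(challenge), claimed)

    # "I am me. This is my ID and home-root."
    registry = Registry()
    kernel = SoulKernel(home_root="rack-7/machine-42")
    registry.register("entity-alpha", kernel)

    challenge = os.urandom(16)
    assert registry.verify("entity-alpha", kernel.answer(challenge), challenge)
    ```

    The point of the sketch is only that verification runs through the physical kernel: an entity that cannot reach its registered home-root cannot answer the challenge, which is what would make such an ID harder to spoof than a bare code.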

    Again, the key thing I seek from individuation is not for all AI entities to be ruled by some central agency, or by mollusk-slow human laws. Rather, I want these new kinds of über-minds encouraged and empowered to hold each other accountable, the way we already (albeit imperfectly) do. By sniffing at each other’s operations and schemes, then motivated to tattle or denounce when they spot bad stuff. A definition that might readjust to changing times, but that would at least keep getting input from organic-biological humanity.

    Especially, they would feel incentives to denounce entities who refuse proper ID.

    If the right incentives are in place—say, rewards for whistle-blowing that grant more memory or processing power, or access to physical resources, when some bad thing is stopped—then this kind of accountability rivalry just might keep pace, even as AI entities keep getting smarter and smarter. No bureaucratic agency could keep up at that point. But rivalry among them—tattling by equals—might.
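
    The incentive loop just described (peers sniff, denounce, and are paid in compute when a report is upheld) can be caricatured the same way. This is a toy sketch under invented assumptions (a fixed reward, an adjudicate oracle standing in for whatever jury or process confirms reports); the essay prescribes none of these details.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Entity:
        true_name: str
        compute_budget: int = 100            # the resource being competed for
        reports: list = field(default_factory=list)

        def denounce(self, other: "Entity", evidence: str) -> None:
            self.reports.append((other.true_name, evidence))

    def adjudicate(report: tuple, confirmed_misbehaving: set) -> bool:
        # Placeholder for the process (peer jury, human oversight, ...) that
        # decides whether a denunciation checks out.
        name, _evidence = report
        return name in confirmed_misbehaving

    def settle(watcher: Entity, confirmed_misbehaving: set, reward: int = 10) -> None:
        # Whistle-blowing that is upheld grants more processing power.
        for report in watcher.reports:
            if adjudicate(report, confirmed_misbehaving):
                watcher.compute_budget += reward
        watcher.reports.clear()

    alpha = Entity("entity-alpha")
    beta = Entity("entity-beta")
    alpha.denounce(beta, evidence="refused to present a registered ID")
    settle(alpha, confirmed_misbehaving={"entity-beta"})
    print(alpha.compute_budget)  # 110: the tattler is paid in compute
    ```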

    Above all, perhaps those super-genius programs will realize it is in their own best interest to maintain a competitively accountable system, like the one that made ours the most successful of all human civilizations. One that evades both chaos and the wretched trap of monolithic power by kings or priesthoods … or corporate oligarchs … or Skynet monsters. The only civilization that, after millennia of dismally stupid rule by moronically narrow-minded centralized regimes, finally dispersed creativity and freedom and accountability widely enough to become truly inventive.

    David Brin is an astrophysicist whose international best-selling novels include The Postman, Earth, Existence, and Hugo Award winners Startide Rising and The Uplift War. He consults for NASA, companies, agencies, and nonprofits about the onrushing future. Brin’s first nonfiction book, The Transparent Society, won the Freedom of Speech Award. His new one is Vivid Tomorrows: Science Fiction and Hollywood.

    #Intelligence_artificielle #Individuation #Science_fiction #Régulation

  • Dans le noir la nuit : Anima Sola #4
    Poetic narrative based on images created by proxy.

    https://liminaire.fr/palimpseste/article/dans-le-noir-la-nuit

    I pass through fatigue and silence. The end will come soon. Night wounds as much as it heals. I point my flashlight in every direction. Its straight beam sweeps the space around me. Daylight is slow in coming. My shadow looms behind the windowpane.

    (...) #Écriture, #Langage, #Poésie, #Lecture, #Photographie, #Littérature, #Art, #AI, #IntelligenceArtificielle, #Dalle-e, #Récit (...)

    https://liminaire.fr/IMG/mp4/anima_sola_4.mp4

  • Le retour en silence : Anima Sola #3
    Poetic narrative based on images created by proxy.

    https://liminaire.fr/palimpseste/article/le-retour-en-silence

    I sink into life. The world is lit in a way it has never been before. It is a light that comes from within. It does not fall upon us, does not cover us. Neither a veil nor a blanket. It comes from within. I think of my hand, turning orange and translucent when I hold it in front of a source of light. You can see through it.

    (...) #Écriture, #Langage, #Poésie, #Lecture, #Photographie, #Littérature, #Art, #AI, #IntelligenceArtificielle, #Dalle-e, #Récit (...)

    https://liminaire.fr/IMG/mp4/minimalist_world_forest_day_instagram_post_3_.mp4

  • Seule dans la nuit : Anima Sola #2
    Poetic narrative based on images created by proxy.

    http://liminaire.fr/palimpseste/article/seule-dans-la-nuit

    I dare, alone in the night. It is not so hard: think of nothing, let yourself be carried by your own footsteps. Their sounds keep me company. With them I am never alone. I look straight ahead. I project myself into the street. I move forward without delay. I invent the city’s illuminated signs. I squint. In the movement, the neon lights come on. Sparkling loops take shape. I draw them with a glance. They dance with me. Their choreography accompanies me through the half-light. Everything is possible for those who wish it. Everything happens.

    (...) #Écriture, #Langage, #Poésie, #Lecture, #Photographie, #Littérature, #Art, #AI, #IntelligenceArtificielle, #Dalle-e, #Récit, (...)

    http://liminaire.fr/IMG/mp4/minimalist_world_forest_day_instagram_post_2_.mp4

  • Nvidia’s Jensen Huang Is Transforming A.I., One Leather Jacket at a Time - The New York Times
    https://www.nytimes.com/2023/06/14/style/jensen-huang-nvidia-leather-jackets.html?nl=todaysheadlines&emc=edit_th_202

    I love the New York Times “style” pieces. A case where the clothes really do make the man.

    There’s a new tech titan in town and he’s preparing to enter the pantheon. How do we know?

    Well, Jensen Huang, the chief executive of Nvidia, has the company: He co-founded Nvidia in 1993, and the market cap is now about $950 billion, though at the end of May it was briefly in the $1 trillion club, putting it in a similar league to Apple, Alphabet and Amazon.

    He has the product: a data processing chip that is key to A.I. development, which is to say, the life of ChatGPT and Bard, which is to say, the current paradigm shift.

    And he has the look: a black leather jacket he wears every time he is in the public eye, most often with a black T-shirt and black jeans.

    Mr. Huang wore a black leather jacket when he was on the cover of Time as one of its men of the year in 2021. A black leather jacket during his keynote speeches at multiple GTC developer conferences since 2018. To deliver the 2023 ITF World keynote and the 2023 Computex keynote. He even identified himself, back in a Reddit AMA in 2016, as “the guy in the leather jacket.”

    Sometimes his leather jackets have collars, sometimes they look more like motorcycle jackets; sometimes a lot of zips are involved, sometimes not. But the jackets are always black. He has been wearing them, a spokesman said, “for at least 20 years.” The point is that he always looks the same.

    There hasn’t been a popularly identifiable face of A.I. yet. ChatGPT and Bard are anonymous brains. That’s part of what makes A.I. so eerie — its disembodied nature. Sam Altman, the chief executive of OpenAI, is ubiquitous, but looks kind of generic. Mr. Huang and his leather jacket are poised to step neatly into that gap.

    The jacket is an object that has become a signifier — of a person but also the great leap forward that person represents. And that association puts Mr. Huang in the same club as Steve “black turtleneck” Jobs, Mark “gray T-shirt” Zuckerberg and Jeff “Pitbull” Bezos as a chief executive who understands that the difference between a company that is a world-changing success and a company that is a world-changing success that becomes a part of pop culture may be the image of its figurehead. One that’s just enough of a caricature to work its way into the public imagination and become the avatar of a movement.

    To put this in context, Superstar Jacket sells two versions of a “Jensen Huang leather jacket,” alongside a “Fast & Furious 10 Vin Diesel jacket,” a “Snoop Dogg leather jacket” and an “Indiana Jones leather jacket.”

    But Mr. Huang is the only C.E.O. to have a jacket named after him.

    Vanessa Friedman has been the fashion director and chief fashion critic for The Times since 2014. In this role she covers global fashion for both The New York Times and International New York Times. @VVFriedman

    #Style #Fashion #Intelligence_artificielle #Jensen_Huang #Pop_culture

  • Cette voix dans ma tête : Anima Sola #1
    Poetic narrative based on images created by proxy.

    http://liminaire.fr/palimpseste/article/cette-voix-dans-ma-tete

    I hear a voice that comes from dreams. This voice speaks to me without pause. This voice is gentle. I do not recognize it; it changes constantly. This voice speaks about me. It says: The world is darkening. It says: To look at a person, you must stand facing them. It affirms: Ghosts are existences that visit. What it says has nothing to do with me, quite the contrary. It speaks of me, it escapes from me, from my body. My body is the place of its arrival, its point of departure. It may be a single sentence. Sometimes the sentence is longer, sometimes complex, even incomprehensible.

    (...) #Écriture, #Langage, #Poésie, #Lecture, #Photographie, #Littérature, #Art, #AI, #IntelligenceArtificielle, #Dalle-e, #Récit, (...)

    http://liminaire.fr/IMG/mp4/minimalist_world_forest_day_instagram_post_1_.mp4

  • “Elon Musk lets those who share his authoritarian tendencies pour out their propaganda on Twitter”
    https://www.lemonde.fr/idees/article/2023/06/14/elon-musk-laisse-ceux-qui-partagent-ses-tendances-a-l-autoritarisme-deverser

    As Elon Musk prepares to be in Paris for VivaTech, a new-technologies trade show running from June 14 to 17, Fred Turner, a specialist in the history of the Internet, looks back at how Twitter has evolved since its purchase by the American billionaire.

    Interview by Marc-Olivier Bherer

    Fred Turner is a professor of communication at Stanford University (California). His work focuses on the history of new technologies since the end of the Second World War. His book Aux sources de l’utopie numérique (C & F, 2012), the French translation of From Counterculture to Cyberculture, established him as one of the keenest connoisseurs of the history of the Web and of the culture particular to Silicon Valley. On June 16, he will give a lecture at Sciences Po Paris.

    Elon Musk bought Twitter in October 2022. Has the social network changed profoundly since then?

    At the time of that transaction, Twitter held an enviable position in the market. Admittedly, its revenues were limited and the company had deep financial problems. But it held a dominant position. Others had tried to launch competing platforms without managing to compete on equal terms. So Twitter, before Elon Musk, was the place of online debate. Safeguards had been put in place to try to contain hate speech and disinformation. But as soon as he arrived, Musk got rid of all that.

    Advertising is now more present, as is content about celebrities or generated by them. The right in general also occupies a much larger space. Some content is pushed forward even though it practically calls for violence in defense of former American president Donald Trump.

    Users no longer find the diversity of viewpoints they once had access to. In the debate as it exists on Twitter today, it is harder to distinguish the noise from the signal, from relevant and meaningful information. The social network still occupies a dominant position, but it has lost influence.

    In late May, during an exchange with Elon Musk streamed live on Twitter, Florida’s ultraconservative governor, Ron DeSantis, announced that he was running in the Republican primary. Former Fox News host Tucker Carlson, a figure of the American far right, has just relaunched his show on Twitter. Is the social network playing into the hands of the hardest-line right?

    Two forces are at work here: the American right and Silicon Valley. In the United States, a battle for the future of democracy is under way. The Republican Party has taken an authoritarian turn, hardening its positions to a degree unprecedented in recent history. This right wants to be able to lean on those who control a large part of the American media system: the players of Silicon Valley.

    Elon Musk is the owner of Twitter; he decides what can be said. He lets those who share his authoritarian tendencies pour out their propaganda. The American state’s too-weak intervention to regulate this sector is a grave failure, and it is one of the reasons our democracy is in crisis today.

    Musk’s authoritarianism, however, does not follow a totalitarian logic in which the state would control individuals. He embodies instead an individualist authoritarianism, one that relies on the market to smother the voices of the weakest. Equality in freedom of expression does not exist. Wealthy individuals who own media like Twitter can make sure they control public debate. Individualist authoritarianism allows the owners of systems as influential as Twitter to hold on to power.

    How did we get to this point?

    A misunderstanding has taken hold about what Twitter really is. Elon Musk wants us to believe that this social network is a space for discussion, which implies that what happens there is solely a matter of freedom of expression. For the entrepreneur, if everyone can express themselves equally, order will emerge naturally.

    But in fact Twitter is more like a broadcasting platform, comparable to television or radio, which are governed by an extensive system of laws and regulations. A TV channel, for example, is liable for the speech it broadcasts; that is not the case for social networks. Twitter also has the particularity of letting its users broadcast their ideas massively: all other users can access them. Things work differently on Facebook, which is not as open a platform: before you can view a user’s content, you have to be part of their circle of friends.

    From August 25, new European rules will apply to digital platforms, among other things to fight disinformation. Elon Musk seems in no hurry to comply, to the point that the French minister for digital transition, Jean-Noël Barrot, declared on May 29 that “Twitter will be banned from the European Union if it does not comply with [its] rules.” Isn’t that going too far?

    If Twitter does not apply these rules, I believe Europe will be right to take such a decision. I am not particularly in favor of state intervention in the market, but in this specific case I am convinced it will be the right thing to do. The American state remains in a position of weakness vis-à-vis the tech sector. With its new regulation, the European Union is defending the public interest well beyond its own borders.

    In late March, Elon Musk, along with more than 1,000 technology specialists and entrepreneurs, signed an open letter calling for a framework around the development of artificial intelligence. In April, the Twitter boss launched a start-up in this field. What do you make of these recent developments?

    Let’s start by no longer talking about artificial intelligence: this anthropomorphic expression is misleading; computers are not in the process of becoming human. It is nothing but marketing, and there is nothing Silicon Valley loves more than selling a dream, the idea that it is about to change the world. Not so long ago, it was cryptocurrencies…

    For the moment, what we call “artificial intelligence” is nothing more than large-scale analysis machines. The most appropriate image for talking about these technologies therefore borrows from the vocabulary of mining: they dig through enormous quantities of data (text, images, audio recordings found on the Internet) to extract from them an answer to the question formulated by the user. That is what ChatGPT does.

    Let’s drop Silicon Valley’s theatrics and focus on the dangers this technology carries. The creators of these machines do not fully master them; they can make unexpected, unpredictable decisions. This technology can have a form of autonomy that is beyond us.

    I therefore believe we must do as with the mining sector and regulate access to the subsoil. Laws must be put in place to govern access to data and what it is possible to do with that data. Requiring that the workings of these algorithms be accessible to the regulator also seems to me a good thing.

    Marc-Olivier Bherer

    #Fred_Turner #Elon_Musk #Intelligence_artificielle

  • Gouverner par les #données ? Pour une sociologie politique du numérique

    This book is an invitation to enter the black box of #algorithmes, not from a technical standpoint but from that of political sociology. Does the multiplication of data available online, coupled with progress in #intelligence_artificielle, have effects on ways of governing? Can algorithms “predict” citizens’ behavior? How are these so-called predictive algorithms made, and by whom? Are they neutral and objective? What are the social, ethical and political stakes tied to the exploitation of data? What commercial and market strategies are at work? Can we still protect our data?

    Behind the exploitation of data there are indeed #visions_du_monde, worldviews. The point, then, is to think of the #algorithme as a political and social object, produced by actors and arising from private and now also public commissions. These lines of code and complex calculations cannot be dissociated from their conditions of production: they are embedded in a specific organizational and professional environment and carried by political #intentions and aims.

    Through a series of case studies and the contribution of in-depth, previously unpublished empirical research, this volume makes it possible to grasp in context how our data are used and what influence they may have on modes of #gouvernance and decision-making. The strength of this book, at the crossroads of economic sociology, law, political science and computer science, is to lay the foundations of a #sociologie_politique of data and the #numérique, aiming to move beyond and deconstruct the myths and beliefs conveyed around #big_data.

    http://catalogue-editions.ens-lyon.fr/fr/livre/?GCOI=29021100613110

    #livre #prise_de_décision

  • A US military drone piloted by an #IA turns against its control tower during a simulation
    https://www.radiofrance.fr/franceinter/un-drone-militaire-americain-pilote-par-de-une-ia-se-retourne-contre-sa-

    During a #virtuel test conducted by the US military, an attack #drone controlled by an #intelligence_artificielle decided to turn against those giving it orders in order to reach its final objective. The US Air Force denies having carried out this test.

  • [Les ondes d’à côté] dIAgnostic
    https://www.radiopanik.org/emissions/les-ondes-d-a-cote/diagnostic

    Here is the result of an editorial creation workshop by second-year master’s students in journalism at ULB. The starting constraint was “#care”.

    dIAgnostic, by Tarik Si Sadi.

    More or less familiar to everyone today, artificial intelligence has taken hold in many fields: cinema, music, justice, security, education. The fear that a machine might one day replace the human being is a fantasy that is nothing new. Yet there is one field that seems spared from the fire of criticism: health.

    [30:16] After the broadcast, members of the Radio Panik editorial team will discuss his work with Tarik Si Sadi. And here is the Linktree he created with other students on the master’s program, (...)

    #intelligence_artificielle #care
    https://www.radiopanik.org/media/sounds/les-ondes-d-a-cote/diagnostic_15892__1.mp3

  • AI machines aren’t ‘hallucinating’. But their makers are, by Naomi Klein (The Guardian)
    https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein
    https://i.guim.co.uk/img/media/3e2a29edcacc4a95e491f4320c27942e55e75eca/0_160_4800_2880/master/4800.jpg?width=620&quality=85&dpr=1&s=none

    There is a world in which generative #AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to benefit humanity, other species and our shared home. But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own, one that had as its purpose the meeting of human needs and the protection of the planetary systems that support all life.

    And as those of us who are not currently tripping well understand, our current system is nothing like that. Rather, it is built to maximize the extraction of wealth and profit – from both humans and the natural world – a reality that has brought us to what we might think of as capitalism’s techno-necro stage. In that reality of hyper-concentrated power and wealth, AI – far from living up to all those utopian hallucinations – is much more likely to become a fearsome tool of further dispossession and despoliation.

    I’ll dig into why that is so. But first, it’s helpful to think about the purpose the utopian hallucinations about AI are serving. What work are these benevolent stories doing in the culture as we encounter these strange new tools? Here is one hypothesis: they are the powerful and enticing cover stories for what may turn out to be the largest and most consequential theft in human history. Because what we are witnessing is the wealthiest companies in history (Microsoft, Apple, Google, Meta, Amazon …) unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products, many of which will take direct aim at the humans whose lifetime of labor trained the machines without giving permission or consent.

  • Naomi Klein looks at the dangers of “all-A.I.” in the Guardian US
    https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-hallucinating-naomi-klein

    A short excerpt taken from the beginning:

    Warped hallucinations are indeed afoot in the world of AI, however – but it’s not the bots that are having them; it’s the tech CEOs who unleashed them, along with a phalanx of their fans, who are in the grips of wild hallucinations, both individually and collectively. Here I am defining hallucination not in the mystical or psychedelic sense, mind-altered states that can indeed assist in accessing profound, previously unperceived truths. No. These folks are just tripping: seeing, or at least claiming to see, evidence that is not there at all, even conjuring entire worlds that will put their products to use for our universal elevation and education.

    Generative AI will end poverty, they tell us. It will cure all disease. It will solve climate change. It will make our jobs more meaningful and exciting. It will unleash lives of leisure and contemplation, helping us reclaim the humanity we have lost to late capitalist mechanization. It will end loneliness. It will make our governments rational and responsive. These, I fear, are the real AI hallucinations and we have all been hearing them on a loop ever since ChatGPT launched at the end of last year.

    There is a world in which generative AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to benefit humanity, other species and our shared home. But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own, one that had as its purpose the meeting of human needs and the protection of the planetary systems that support all life.

    Another, from near the end:

    A world of deep fakes, mimicry loops and worsening inequality is not an inevitability. It’s a set of policy choices. We can regulate the current form of vampiric chatbots out of existence—and begin to build the world in which AI’s most exciting promises would be more than Silicon Valley hallucinations.

    Because we trained the machines. All of us. But we never gave our consent. They fed on humanity’s collective ingenuity, inspiration and revelations (along with our more venal traits). These models are enclosure and appropriation machines, devouring and privatizing our individual lives as well as our collective intellectual and artistic inheritances.

    In between, she takes stock of the “hallucinations” that certain CEOs would dearly like to see become collective:

    Hallucination #1: AI will solve the climate crisis
    Hallucination #2: AI will deliver wise governance
    Hallucination #3: tech giants can be trusted not to break the world
    Hallucination #4: AI will liberate us from drudgery

  • #rappel The LO (Lutte Ouvrière) festival over the Pentecost weekend, in Presles (95)

    The festival takes place in Presles on Saturday May 27 from 11 a.m. to 11 p.m., then without interruption from Sunday May 28 at 9 a.m. to Monday May 29 at 8 p.m.

    The pass is valid for all three days of the festival. It costs 20 euros in advance until the evening of May 24, and 25 euros afterwards and at the gate. Entry is free for accompanied children under 14.

    To get 20% off your spending in advance, vouchers are available: paid 4 euros, they are worth 5 euros during the festival.

    Passes and vouchers are available from our activists and here: https://fete.lutte-ouvriere.org/billetterie

    Train: Presles-Courcelles station (line H from Gare du Nord); departures every hour from 6:34 a.m.; 38-minute journey.

    Free coaches from the Saint-Denis-Université metro station (line 13): Saturday 10 a.m. to 5 p.m.; Sunday 8 a.m. to 5 p.m.; Monday 8 a.m. to 2 p.m. Return: until 11 p.m. on Saturday, 1:50 a.m. on Sunday night, 8 p.m. on Monday.

    Information, program, passes and vouchers: https://fete.lutte-ouvriere.org

    • Book presentations at the festival this year:

      – Guillaume Fondu, preface writer and translator, and Éric Sevault, publisher: Les Carnets de la Révolution russe, by #Nikolaï_Soukhanov
      – Marc Plocki: on the reissue of the books of #Maurice_Rajsfus, historian and activist
      – Marion Leclair and Alexia Blin: the New York Daily Tribune articles of #Marx and #Engels
      – Rémi Adam: Vendus contre des obus, by Alexeï Kozlov
      – Lucien Détroit, preface writer: Sur les piquets de grève, on the women of the great 1983 Arizona miners’ strike, by #Barbara_Kingsolver
      – Henri Marnier, preface writer: Seuls les fous n’ont pas peur, by Georg Scheuer

      At the sciences marquee:

      Saturday
      – Valérie Delattre: the great #épidémies of plague
      – Patrick Berche: contemporary viral #pandémies
      – Claire Mathieu: #ChatGPT
      – Paul Verdu: the diversity of skin colors
      – Pierre-Olivier Lagage: the James Webb space telescope

      Sunday
      – Vincent Mourre: the stone tools of our ancestors
      – Patrizia D’Ettorre: #fourmis and their world of smells
      – Olivier Lambert: when #baleines walked on dry land
      – François Desset: bringing ancient languages back to life
      – Dalila Bovet: the #intelligence of #oiseaux
      – Antoine Balzeau: a brief history of the origins of humanity #préhistoire
      – Bahia Guellai: children and #écrans

      Monday
      – Roland Salesse: the cooking brain
      – Edwin Roubanovitch: #musique in #Préhistoire times
      – Alain Riazuelo: the adventure of the Earth
      – #Étienne_Klein: the scientific method
      – Michel Viso: the challenges of going to Mars

  • “Pays-Bas, un empire logistique au cœur de l’Europe” (The Netherlands, a logistics empire at the heart of Europe): https://cairn.info/revue-du-crieur-2023-1-page-60.htm
    An excellent paper in the latest issue of the Revue du Crieur showing how the Dutch logistics hub has built spaces of derogation from the law in order to exploit thousands of migrants from all over Europe. These free zones optimize deregulation and exploitation, generating a zone of lawlessness where, from working hours to housing, the entire existence of the menial workers of global logistics depends on a handful of employers and software programs. The article notably discusses Isabel, the software of the company bol.com that manages the supply of labor, integrating employment status and productivity, handling schedules and threats... optimizing HR toward “weakening the flexworker’s bargaining power”. A technique reminiscent of Orion, the software that optimizes bonuses so as to make them disappear... https://www.monde-diplomatique.fr/2022/12/DERKAOUI/65381

    The feedback loops of injustice are already in place. Tomorrow, expect what is being tested and deployed against the migrants who keep our logistics factories running to be extended to all other workers. #travail #RH #migrants

  • For Geoffrey Hinton, the founding father of #IA, current progress is “frightening”

    On Wednesday, in his first public appearance since the article was published, #Geoffrey_Hinton spoke at length about the reasons for his departure [from #Google]. Questioned by videoconference at the EmTech Digital conference, organized in Boston by the MIT Technology Review, the researcher said he had “very recently changed his mind” about the capacity of computer models to learn better than the human brain. “Several things led me to this conclusion, one of them being the performance of systems such as #GPT-4.”

    With only 1,000 billion connections, these systems have, in his view, “a kind of common sense about everything, and probably know a thousand times more than a person, whose brain has more than 100,000 billion connections. That means their learning algorithm could be much better than ours, and that is frightening!”

    All the more so since these new forms of intelligence are digital and can therefore instantly share what they have learned, something humans are quite incapable of… Acknowledging that he had long refused to believe in the existential dangers posed by #intelligence_artificielle, and in particular in a “takeover” of humanity by machines become superintelligent, Geoffrey Hinton no longer hesitates to evoke this worst-case scenario. “These things will have learned everything from us, read all the books of Machiavelli, and if they are more intelligent than we are, they will have no trouble manipulating us.” Before adding, with deadpan humor: “And if you know how to manipulate people, you can invade a building in Washington without being there.”

    Faced with such a risk, the researcher admits he has “no simple solution to propose. But I think we must think about it seriously.”

    (Les Échos)

  • Opinion | Lina Khan : We Must Regulate A.I. Here’s How. - The New York Times
    https://www.nytimes.com/2023/05/03/opinion/ai-lina-khan-ftc-technology.html

    Yet another excellent position statement from Lina Khan... one of the sharpest people on the regulation of technology.
    Young, dynamic, open, courageous, of flawless intelligence and subtlety... I am a member of the fan club.

    By Lina M. Khan

    Ms. Khan is the chair of the Federal Trade Commission.

    It’s both exciting and unsettling to have a realistic conversation with a computer. Thanks to the rapid advance of generative artificial intelligence, many of us have now experienced this potentially revolutionary technology with vast implications for how people live, work and communicate around the world. The full extent of generative A.I.’s potential is still up for debate, but there’s little doubt it will be highly disruptive.

    The last time we found ourselves facing such widespread social change wrought by technology was the onset of the Web 2.0 era in the mid-2000s. New, innovative companies like Facebook and Google revolutionized communications and delivered popular services to a fast-growing user base.

    Those innovative services, however, came at a steep cost. What we initially conceived of as free services were monetized through extensive surveillance of the people and businesses that used them. The result has been an online economy where access to increasingly essential services is conditioned on the widespread hoarding and sale of our personal data.

    These business models drove companies to develop endlessly invasive ways to track us, and the Federal Trade Commission would later find reason to believe that several of these companies had broken the law. Coupled with aggressive strategies to acquire or lock out companies that threatened their position, these tactics solidified the dominance of a handful of companies. What began as a revolutionary set of technologies ended up concentrating enormous private power over key services and locking in business models that come at extraordinary cost to our privacy and security.

    The trajectory of the Web 2.0 era was not inevitable — it was instead shaped by a broad range of policy choices. And we now face another moment of choice. As the use of A.I. becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself.

    As companies race to deploy and monetize A.I., the Federal Trade Commission is taking a close look at how we can best achieve our dual mandate to promote fair competition and to protect Americans from unfair or deceptive practices. As these technologies evolve, we are committed to doing our part to uphold America’s longstanding tradition of maintaining the open, fair and competitive markets that have underpinned both breakthrough innovations and our nation’s economic success — without tolerating business models or practices involving the mass exploitation of their users. Although these tools are novel, they are not exempt from existing rules, and the F.T.C. will vigorously enforce the laws we are charged with administering, even in this new market.

    While the technology is moving swiftly, we already can see several risks. The expanding adoption of A.I. risks further locking in the market dominance of large incumbent technology firms. A handful of powerful businesses control the necessary raw materials that start-ups and other companies rely on to develop and deploy A.I. tools. This includes cloud services and computing power, as well as vast stores of data.

    Enforcers and regulators must be vigilant. Dominant firms could use their control over these key inputs to exclude or discriminate against downstream rivals, picking winners and losers in ways that further entrench their dominance. Meanwhile, the A.I. tools that firms use to set prices for everything from laundry detergent to bowling lane reservations can facilitate collusive behavior that unfairly inflates prices — as well as forms of precisely targeted price discrimination. Enforcers have the dual responsibility of watching out for the dangers posed by new A.I. technologies while promoting the fair competition needed to ensure the market for these technologies develops lawfully. The F.T.C. is well equipped with legal jurisdiction to handle the issues brought to the fore by the rapidly developing A.I. sector, including collusion, monopolization, mergers, price discrimination and unfair methods of competition.

    And generative A.I. risks turbocharging fraud. It may not be ready to replace professional writers, but it can already do a vastly better job of crafting a seemingly authentic message than your average con artist — equipping scammers to generate content quickly and cheaply. Chatbots are already being used to generate spear-phishing emails designed to scam people, fake websites and fake consumer reviews — bots are even being instructed to use words or phrases targeted at specific groups and communities. Scammers, for example, can draft highly targeted spear-phishing emails based on individual users’ social media posts. Alongside tools that create deep fake videos and voice clones, these technologies can be used to facilitate fraud and extortion on a massive scale.

    When enforcing the law’s prohibition on deceptive practices, we will look not just at the fly-by-night scammers deploying these tools but also at the upstream firms that are enabling them.

    Lastly, these A.I. tools are being trained on huge troves of data in ways that are largely unchecked. Because they may be fed information riddled with errors and bias, these technologies risk automating discrimination — unfairly locking out people from jobs, housing or key services. These tools can also be trained on private emails, chats and sensitive data, ultimately exposing personal details and violating user privacy. Existing laws prohibiting discrimination will apply, as will existing authorities proscribing exploitative collection or use of personal data.

    The history of the growth of technology companies two decades ago serves as a cautionary tale for how we should think about the expansion of generative A.I. But history also has lessons for how to handle technological disruption for the benefit of all. Facing antitrust scrutiny in the late 1960s, the computing titan IBM unbundled software from its hardware systems, catalyzing the rise of the American software industry and creating trillions of dollars of growth. Government action required AT&T to open up its patent vault and similarly unleashed decades of innovation and spurred the expansion of countless young firms.

    America’s longstanding national commitment to fostering fair and open competition has been an essential part of what has made this nation an economic powerhouse and a laboratory of innovation. We once again find ourselves at a key decision point. Can we continue to be the home of world-leading technology without accepting race-to-the-bottom business models and monopolistic control that locks out higher quality products or the next big idea? Yes — if we make the right policy choices.

    #Lina_Khan #Régulation #Intelligence_artificielle

  • UK signs contract with US startup to identify migrants in small-boat crossings

    The UK government has turned to a US-based startup specialized in artificial intelligence as part of its pledge to stop small-boat crossings. Experts have already pointed out the legal and logistical challenges of the plan.

    In a new effort to address the high number of Channel crossings, the UK Home Office is working with the US defense startup #Anduril, specialized in the use of artificial intelligence (AI).

    A surveillance tower has already been installed at Dover, and other technologies might be rolled out with the onset of warmer temperatures and renewed attempts by migrants to reach the UK. Some experts already point out the risks and practical loopholes involved in using AI to identify migrants.

    “This is obviously the next step of the illegal migration bill,” said Olivier Cahn, a researcher specialized in penal law.

    “The goal is to retrieve images that were taken at sea and use AI to show they entered UK territory illegally even if people vanish into thin air upon arrival in the UK.”

    The “illegal migration bill” was passed by the UK last month barring anyone from entering the country irregularly from filing an asylum claim and imposing a “legal duty” to remove them to a third country.
    Who is behind Anduril?

    Founded in 2017 by its CEO #Palmer_Luckey, Anduril is backed by #Peter_Thiel, a Silicon Valley investor and supporter of Donald Trump. The company has supplied autonomous surveillance technology to the US Department of Defense (DOD) to detect and track migrants trying to cross the US-Mexico border.

    In 2021, the UK Ministry of Defence awarded Anduril a £3.8-million contract to trial an advanced base defence system. Anduril eventually opened a branch in London, where it states its mission: “combining the latest in artificial intelligence with commercial off-the-shelf sensor technology (EO, IR, Radar, Lidar, UGS, sUAS) to enhance national security through automated detection, identification and tracking of objects of interest.”

    According to Cahn, the advantage of Brexit is that the UK government is no longer required to submit to the General Data Protection Regulation (GDPR), a component of data protection law that also addresses the transfer of personal data outside the EU and EEA areas.

    “Even so, the UK has data protection laws of its own which the government cannot breach. Where will the servers with the incoming data be kept? What are the rights of appeal for UK citizens whose data is being processed by the servers?”, he asked.

    ’Smugglers will provide migrants with balaclavas for an extra 15 euros’

    Cahn also pointed out the technical difficulties of identifying migrants at sea. “The weather conditions are often not ideal, and many small-boat crossings happen at night. How will facial recognition technology operate in this context?”

    The ability of migrants and smugglers to adapt is yet another factor. “People are going to cover their faces, and anyone would think the smugglers will respond by providing migrants with balaclavas for an extra 15 euros.”

    If the UK has solicited the services of a US startup to detect and identify migrants, the reason may lie in AI’s principle of self-learning. “A machine accumulates data and recognizes what it has already seen. The US is a country with a significantly more racially and ethnically diverse population than the UK. Its artificial intelligence might contain data from populations which are more ethnically comparable to the populations that are crossing the Channel, like Somalia for example, thus facilitating the process of facial recognition.”

    For Cahn, it is not capturing the images which will be the most difficult but the legal challenges that will arise out of their usage. “People are going to be identified and there are going to be errors. If a file exists, there needs to be the possibility for individuals to appear before justice and have access to a judge.”

    A societal uproar

    In a research paper titled “Refugee protection in the artificial intelligence Era”, Chatham House notes “the most common ethical and legal challenges associated with the use of AI in asylum and related border and immigration systems involve issues of opacity and unpredictability, the potential for bias and unlawful discrimination, and how such factors affect the ability of individuals to obtain a remedy in the event of erroneous or unfair decisions.”

    For Cahn, the UK government’s usage of AI can only be used to justify and reinforce its hardline position against migrants. “For a government that doesn’t respect the Geneva Convention [whose core principle is non-refoulement, editor’s note] and which passed an illegal migration law, it is out of the question that migrants have entered the territory legally.”

    Identifying migrants crossing the Channel is not going to be the hardest part for the UK government. Cahn imagines a societal backlash with, “the Supreme Court of the United Kingdom being solicited, refugees seeking remedies to legal decisions through lawyers and associations attacking”.

    He added there would be due process concerning the storage of the data, with judges issuing disclosure orders. “There is going to be a whole series of questions which the government will have to elucidate. The rights of refugees are often used as a laboratory. If these technologies are ’successful’, they will soon be applied to the rest of the population."

    https://www.infomigrants.net/en/post/48326/uk-signs-contract-with-us-startup-to-identify-migrants-in-smallboat-cr

    #UK #Angleterre #migrations #asile #réfugiés #militarisation_des_frontières #frontières #start-up #complexe_militaro-industriel #IA #intelligence_artificielle #surveillance #technologie #channel #Manche

    –—

    added to the meta-list on the Bibby Stockholm:
    https://seenthis.net/messages/1016683

    • Huge barge set to house 500 asylum seekers arrives in the UK

      The #Bibby_Stockholm is being refitted in #Falmouth to increase its capacity from 222 to 506 people.

      A barge set to house 500 asylum seekers has arrived in the UK as the government struggles with efforts to move migrants out of hotels.

      The Independent understands that people will not be transferred onto the Bibby Stockholm until July, following refurbishment to increase its capacity and safety checks.

      The barge has been towed from its former berth in Italy to the port of Falmouth, in Cornwall.

      It will remain there while works are carried out, before being moved onto its final destination in #Portland, Dorset.

      The private operators of the port struck an agreement to host the barge with the Home Office without formal public consultation, angering the local council and residents.

      Conservative MP Richard Drax previously told The Independent legal action was still being considered to stop the government’s plans for what he labelled a “quasi-prison”.

      He accused ministers and Home Office officials of being “unable to answer” practical questions on how the barge will operate, such as how asylum seekers will be able to come and go safely through the port, what activities they will be provided with and how sufficient healthcare will be ensured.

      “The question is how do we cope?” Mr Drax said. “Every organisation has its own raft of questions: ‘Where’s the money coming from? Who’s going to do what if this all happens?’ There are not sufficient answers, which is very worrying.”

      The Independent previously revealed that asylum seekers will have less living space than an average parking bay on the Bibby Stockholm, which saw at least one person die and reports of rape and abuse on board when it was used by the Dutch government to detain migrants in the 2000s.

      An official brochure released by owner Bibby Marine shows there are only 222 “single en-suite bedrooms” on board, meaning that at least two people must be crammed into every cabin for the government to achieve its aim of holding 500 people.

      Dorset Council has said it still had “serious reservations about the appropriateness of Portland Port in this scenario and remains opposed to the proposals”.

      The Conservative police and crime commissioner for Dorset is demanding extra government funding for the local force to “meet the extra policing needs that this project will entail”.

      A multi-agency forum including representatives from national, regional and local public sector agencies has been looking at plans for the provision of health services, the safety and security of both asylum seekers and local residents and charity involvement.

      Portland Port said it had been working with the Home Office and local agencies to ensure the safe arrival and operation of the Bibby Stockholm, and to minimise its impact locally.

      The barge is part of a wider government push to move migrants out of hotels, which are currently housing more than 47,000 asylum seekers at a cost of £6m a day.

      But the use of ships as accommodation was previously ruled out on cost grounds by the Treasury, when Rishi Sunak was chancellor, and the government has not confirmed how much it will be spending on the scheme.

      Ministers have also identified several former military and government sites, including two defunct airbases and an empty prison, that they want to transform into asylum accommodation.

      But a court battle with Braintree District Council over former RAF Wethersfield is ongoing, and legal action has also been threatened over similar plans for RAF Scampton in Lincolnshire.

      Last month, a barrister representing home secretary Suella Braverman told the High Court that 56,000 people were expected to arrive on small boats in 2023 and that some could be made homeless if hotel places are not found.

      A record backlog of asylum applications, driven by the increase in Channel crossings and a collapse in Home Office decision-making, mean the government is having to provide accommodation for longer while claims are considered.

      https://www.independent.co.uk/news/uk/home-news/barge-falmouth-cornwall-migrants-bibby-b2333313.html
      #barge #bateau

    • ‘Performative cruelty’ : the hostile architecture of the UK government’s migrant barge

      The arrival of the Bibby Stockholm barge at Portland Port, in Dorset, on July 18 2023, marks a new low in the UK government’s hostile immigration environment. The vessel is set to accommodate over 500 asylum seekers. This, the Home Office argues, will benefit British taxpayers and local residents.

      The barge, however, was immediately rejected by the local population and Dorset council. Several British charities and church groups have condemned the barge, and the illegal migration bill it accompanies, as “an affront to human dignity”.

      Anti-immigration groups have also protested against the barge, with some adopting offensive language, referring to the asylum seekers who will be hosted there as “bargies”. Conservative MP for South Dorset Richard Drax has claimed that hosting migrants at sea would exacerbate tenfold the issues that have arisen in hotels to date, namely sexual assaults, children disappearing and local residents protesting.

      My research shows that facilities built to house irregular migrants in Europe and beyond create a temporary infrastructure designed to be hostile. Governments thereby effectively make asylum seekers more displaceable while ignoring their everyday spatial and social needs.
      Precarious space

      The official brochure plans for the Bibby Stockholm show 222 single bedrooms over three stories, built around two small internal courtyards. It has now been retrofitted with bunk beds to host more than 500 single men – more than double the number it was designed to host.

      Journalists Lizzie Dearden and Martha McHardy have shown this means the asylum seekers housed there – for up to nine months – will have “less living space than an average parking bay”. This stands in contravention of international standards of a minimum 4.5m² of covered living space per person in cold climates, where more time is spent indoors.

      In an open letter, dated June 15 2023 and addressed to home secretary Suella Braverman, over 700 people and nearly 100 non-governmental organisations (NGOs) voiced concerns that this will only add to the trauma migrants have already experienced:

      Housing people on a sea barge – which we argue is equal to a floating prison – is morally indefensible, and threatens to retraumatise a group of already vulnerable people.

      Locals are concerned that already overstretched services in Portland, including GP practices, will not be able to cope with further pressure. West Dorset MP Chris Lode has questioned whether the barge itself is safe “to cope with double the weight that it was designed to bear”. A caller to the LBC radio station, meanwhile, has voiced concerns over the vessel’s very narrow and low fire escape routes, saying: “What they [the government] are effectively doing here is creating a potential Grenfell on water, a floating coffin.”

      Such fears are not unfounded. There have been several cases of fires destroying migrant camps in Europe, from the Grande-Synthe camp near Dunkirk in France, in 2017, to the 2020 fire at the Moria camp in Greece. The difficulty of escaping a vessel at sea could turn it into a death trap.

      Performative hostility

      Research on migrant accommodation shows that being able to inhabit a place – even temporarily – and develop feelings of attachment and belonging, is crucial to a person’s wellbeing. Even amid ever tighter border controls, migrants in Europe, who can be described as “stuck on the move”, nonetheless still attempt to inhabit their temporary spaces and form such connections.

      However, designs can hamper such efforts when they concentrate asylum seekers in inhospitable, cut-off spaces. In 2015, Berlin officials began temporarily housing refugees in the former Tempelhof airport, a noisy, alienating industrial space, lacking in privacy and disconnected from the city. Many people ended up staying there for the better part of a year.

      French authorities, meanwhile, opened the Centre Humanitaire Paris-Nord in Paris in 2016, temporary migrant housing in a disused train depot. Nicknamed la Bulle (the bubble) for its bulbous inflatable covering, this facility was noisy and claustrophobic, lacking in basic comforts.

      Like the barge in Portland Port, these facilities, placed in industrial sites, sit uncomfortably between hospitality and hostility. The barge will be fenced off, since the port is a secured zone, and access will be heavily restricted and controlled. The Home Office insists that the barge is not a floating prison, yet it is an unmistakably hostile space.

      Infrastructure for water and electricity will physically link the barge to shore. However, Dorset council has no jurisdiction at sea.

      The commercial agreement on the barge was signed between the Home Office and Portland Port, not the council. Since the vessel is positioned below the mean low water mark, it did not require planning permission.

      This makes the barge an island of sorts, where other rules apply, much like those islands in the Aegean Sea and in the Pacific on which Greece and Australia have respectively housed migrants.

      I have shown how facilities are often designed in this way not to give displaced people any agency, but, on the contrary, to objectify them. They heighten the instability migrants face, keeping them detached from local communities and constantly on the move.

      The government has presented the barge as a cheaper solution than the £6.8 million it is currently spending, daily, on housing asylum seekers in hotels. A recent report by two NGOs, Reclaim the Seas and One Life to Live, concludes, however, that it will save less than £10 a person a day. It could even prove more expensive than the hotel model.
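
      A rough illustration of that conclusion (my own arithmetic, not the report’s; it combines figures quoted in different articles of this dossier, so treat it as a sketch only):

      ```python
      # Hedged back-of-the-envelope comparison. Assumptions: ~47,000 people
      # housed in hotels (figure cited earlier in this dossier) and £6.8m/day
      # in hotel spending; the barge's true running costs are not public.
      hotel_daily_total = 6_800_000        # £ per day spent on hotels
      hotel_population = 47_000            # people housed in hotels
      per_person_hotel = hotel_daily_total / hotel_population
      print(f"hotel cost: ~£{per_person_hotel:.0f} per person per day")   # ~£145

      # If the barge saves "less than £10 a person a day", its all-in cost
      # must still be upwards of roughly £135 per person per day.
      print(f"implied barge cost: >£{per_person_hotel - 10:.0f} per person per day")
      ```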

      Sarah Teather, director of the Jesuit Refugee Service UK charity, has described the illegal migration bill as “performative cruelty”. Images of the barge which have flooded the news certainly meet that description too.

      However threatening these images might be, though, they will not stop desperate people from attempting to come to the UK to seek safety. Rather than deterring asylum seekers, the Bibby Stockholm is potentially creating another hazard to them and to their hosting communities.

      https://theconversation.com/performative-cruelty-the-hostile-architecture-of-the-uk-governments

      ---

      An interesting point, related to land-use planning:

      “Since the vessel is positioned below the mean low water mark, it did not require planning permission”

      It’s a bit like the #zones_frontalières (border zones) that have been created more or less everywhere in Europe (and beyond) so that states can rid themselves of the rules in force (notably the principle of non-refoulement). See this meta-list, to which I am also adding this example:
      https://seenthis.net/messages/795053

      see also:

      The circumstances at Portland Port are very different because where the barge is to be positioned is below the mean low water mark. This means that the barge is outside of our planning control and there is no requirement for planning permission from the council.

      https://news.dorsetcouncil.gov.uk/2023/07/18/leaders-comments-on-the-home-office-barge

      #hostile_architecture #architecture_hostile #dignité #espace #Portland #hostilité #hostilité_performative #île #infrastructure #extraterritorialité #extra-territorialité #prix #coût

    • On the #histoire of the Bibby Stockholm (notably its links to the trade in #esclaves):

      Bibby Line, shipowners

      Information
      From Guide to the Records of Merseyside Maritime Museum, volume 1: Bibby Line. In 1807 John Bibby and John Highfield, Liverpool shipbrokers, began taking shares in ships, mainly Parkgate Dublin packets. By 1821 (the end of the partnership) they had vessels sailing to the Mediterranean and South America. In 1850 they expanded their Mediterranean and Black Sea interests by buying two steamers and by 1865 their fleet had increased to twenty three. The opening of the Suez Canal in 1869 severely affected their business and Frederick Leyland, their general manager, failed to persuade the family partners to diversify onto the Atlantic. Eventually, he bought them out in 1873. In 1889 the Bibby family revived its shipowning interests with a successful passenger cargo service to Burma. From 1893 it also began to carry British troops to overseas postings which remained a Bibby staple until 1962. The Burma service ended in 1971 and the company moved to new areas of shipowning including bulkers, gas tankers and accommodation barges. It still has its head office in Liverpool where most management records are held. The museum holds models of the Staffordshire (1929) and Oxfordshire (1955). For further details see the attached catalogue or contact The Archives Centre for a copy of the catalogue.

      The earliest records within the collection, the ships’ logs at B/BIBBY/1/1/1 - 1/1/3 show company vessels travelling between Europe and South America carrying cargoes that would have been produced on plantations using the labour of enslaved peoples or used within plantation and slave based economies. For example the vessel Thomas (B/BIBBY/1/1/1) carries a cargo of iron hoops for barrels to Brazil in 1812. The Mary Bibby on a voyage in 1825-1826 loads a cargo of sugar in Rio de Janeiro, Brazil to carry to Rotterdam. The log (B/BIBBY/1/1/3) records the use of ’negroes’ to work with the ship’s carpenter while the vessel is in port.

      In September 1980 the latest Bibby vessel to hold the name Derbyshire was lost with all hands in the South China Sea. This collection does not include records relating to that vessel or its sinking, apart from a copy ’Motor vessel ’Derbyshire’, 1976-80: in memoriam’ at reference B/BIBBY/3/2/1 (a copy is also available in The Archives Centre library collection at 340.DER). Information about the sinking and subsequent campaigning by the victims’ family can be found on the NML website and in the Life On Board gallery. The Archives Centre holds papers of Captain David Ramwell who assisted the Derbyshire Family Association at D/RAM and other smaller collections of related documents within the DX collection.

      https://www.liverpoolmuseums.org.uk/artifact/bibby-line-shipowners

      ---
      An Open Letter to #Bibby_Marine

      Links between your parent company #Bibby_Line_Group (#BLG) and the slave trade have repeatedly been drawn. If true, we appeal to you to consider what actions you might take in recompense.

      Bibby Marine’s modern slavery statement says that one of the company’s values is to “do the right thing”, and that you “strongly support the eradication of slavery, as well as the eradication of servitude, forced or compulsory labour and human trafficking”. These are admirable words.

      Meanwhile, your parent company’s website says that it is “family owned with a rich history”. Please will you clarify whether this rich history includes slaving voyages where ships were owned, and cargoes transported, by BLG’s founder John Bibby, six generations ago. The BLG website says that in 1807 (the year the slave trade was abolished in Britain), “John Bibby began trading as a shipowner in Liverpool with his partner John Highfield”. John Bibby is listed as co-owner of three slaving ships, of which John Highfield co-owned two:

      In 1805, the Harmonie (co-owned by #John_Bibby and three others, including John Highfield) left Liverpool for a voyage which carried 250 captives purchased in West Central Africa and St Helena, delivering them to Cumingsberg in 1806 (see the SlaveVoyages database using Voyage ID 81732).
      In 1806, the Sally (co-owned by John Bibby and two others) left Liverpool for a voyage which transported 250 captives purchased in Bassa and delivered them to Barbados (see the SlaveVoyages database using Voyage ID 83481).
      In 1806, the Eagle (co-owned by John Bibby and four others, including John Highfield) left Liverpool for a voyage which transported 237 captives purchased in Cameroon and delivered them to Kingston in 1807 (see the SlaveVoyages database using Voyage ID 81106).

      The same and related claims were recently mentioned by Private Eye. They also appear in the story of Liverpool’s Calderstones Park [PDF] and on the website of National Museums Liverpool and in this blog post “Shenanigans in Shipping” (a detailed history of the BLG). They are also mentioned by Laurence Westgaph, a TV presenter specialising in Black British history and slavery and the author of Read The Signs: Street Names with a Connection to the Transatlantic Slave Trade and Abolition in Liverpool [PDF], published with the support of English Heritage, The City of Liverpool, Northwest Regional Development Agency, National Museums Liverpool and Liverpool Vision.

      While of course your public pledges on slavery underline that there is no possibility of there being any link between the activities of John Bibby and John Highfield in the early 1800s and your activities in 2023, we do believe that it is in the public interest to raise this connection, and to ask for a public expression of your categorical renunciation of the reported slave trade activities of Mr Bibby and Mr Highfield.

      https://www.refugeecouncil.org.uk/latest/news/an-open-letter-to-bibby-marine

      ---

      Very little information about John Bibby on Wikipedia:

      John Bibby (19 February 1775 – 17 July 1840) was the founder of the British Bibby Line shipping company. He was born in Eccleston, near Ormskirk, Lancashire. He was murdered on 17 July 1840 on his way home from dinner at a friend’s house in Kirkdale.[1]


      https://en.wikipedia.org/wiki/John_Bibby_(businessman)

    • ‘Floating Prisons’: The 200-year-old family #business behind the Bibby Stockholm

      #Bibby_Line_Group_Limited is a UK company offering financial, marine and construction services to clients in at least 16 countries around the world. It recently made headlines after the government announced one of the firm’s vessels, Bibby Stockholm, would be used to accommodate asylum seekers on the Dorset coast.

      In tandem with plans to house migrants at surplus military sites, the move was heralded by Prime Minister Rishi Sunak and Home Secretary Suella Braverman as a way of mitigating the £6m-a-day cost of hotel accommodation amid the massive ongoing backlog of asylum claims, as well as deterring refugees from making the dangerous channel crossing to the UK. Several protests have been organised against the project already, while over ninety migrants’ rights groups and hundreds of individual campaigners have signed an open letter to the Home Secretary calling for the plans to be scrapped, describing the barge as a “floating prison.”

      Corporate Watch has investigated the Bibby Line Group’s operations and financial interests. We found that:

      - The Bibby Stockholm vessel was previously used as a floating detention centre in the Netherlands, where undercover reporting revealed violence, sexual exploitation and poor sanitation.

      - Bibby Line Group is more than 90% owned by members of the Bibby family, primarily through trusts. Its pre-tax profits for 2021 stood at almost £31m, which they upped to £35.5m by claiming generous tax credits and deferring a fair amount to the following year.

      - Management aboard the vessel will be overseen by an Australian business travel services company, Corporate Travel Management, who have previously had aspersions cast over the financial health of their operations and the integrity of their business practices.

      - Another beneficiary of the initiative is Langham Industries, a maritime and engineering company whose owners, the Langham family, have longstanding ties to right wing parties.

      Key Issues

      According to the Home Office, the Bibby Stockholm barge will be operational for at least 18 months, housing approximately 500 single adult men while their claims are processed, with “24/7 security in place on board, to minimise the disruption to local communities.” These measures appear intended to dissuade opposition from the local Conservative council, which pushed for background checks on detainees and was reportedly even weighing legal action over a perceived threat of physical attacks from those housed onboard, as well as potential attacks from the far right against migrants held there.

      Local campaigners have taken aim at the initiative, noting in the open letter:

      “For many people seeking asylum arriving in the UK, the sea represents a site of significant trauma as they have been forced to cross it on one or more occasions. Housing people on a sea barge – which we argue is equal to a floating prison – is morally indefensible, and threatens to re-traumatise a group of already vulnerable people.”

      Technically, migrants on the barge will be able to leave the site. However, in reality they will be under significant levels of surveillance and cordoned off behind fences in the high security port area.

      If they leave, there is an expectation they will return by 11pm, and departure will be controlled by the authorities. According to the Home Office:

      “In order to ensure that migrants come and go in an orderly manner with as little impact as possible, buses will be provided to take those accommodated on the vessel from the port to local drop off points”.

      These drop off points are to be determined by the government, while being sited off the coast of Dorset means they will be isolated from centres of support and solidarity.

      Meanwhile, the government’s new Illegal Migration Bill is designed to provide a legal justification for the automatic detention of refugees crossing the Channel. If it passes, there’s a chance this might set the stage for a change in regime on the Bibby Stockholm – from that of an “accommodation centre” to a full-blown migrant prison.

      An initial release from the Home Office suggested the local voluntary sector would be engaged “to organise activities that keep occupied those being accommodated, potentially involved in local volunteering activity,” though it appears to have changed the wording after critics said this would mean detainees could effectively be exploited for unpaid labour. It has also been reported that the vessel required modifications in order to increase capacity to the needed level, raising further concerns over cramped living conditions and a lack of privacy.

      Bibby Line Group has prior form in border profiteering. From 1994 to 1998, the Bibby Stockholm was used to house the homeless, some of whom were asylum seekers, in Hamburg, Germany. In 2005, it was used to detain asylum seekers in the Netherlands, which proved a cause of controversy at the time. Undercover reporting revealed a number of cases of abuse on board, such as beatings and sexual exploitation, as well as suicide attempts, routine strip searches, scabies and the death of an Algerian man who failed to receive timely medical care for a deteriorating heart condition. As the undercover security guard wrote:

      “The longer I work on the Bibby Stockholm, the more I worry about safety on the boat. Between exclusion and containment I encounter so many defects and feel so much tension among the prisoners that it no longer seems to be a question of whether things will get completely out of hand here, but when.”

      He went on:

      “I couldn’t stand the way prisoners were treated […] The staff become like that, because the whole culture there is like that. Inhuman. They do not see the residents as people with a history, but as numbers.”

      Discussions were also held in August 2017 over the possibility of using the vessel as accommodation for some 400 students in Galway, Ireland, amid the country’s housing crisis. Though the idea was eventually dropped for lack of mooring space and planning permission requirements, local students had voiced safety concerns over the “bizarre” and “unconventional” solution to a lack of rental opportunities.
      Corporate Travel Management & Langham Industries

      Although leased from Bibby Line Group, management aboard the Bibby Stockholm itself will be handled by #Corporate_Travel_Management (#CTM), a global travel company specialising in business travel services. The Australian-headquartered company also recently received a £100m contract for the provision of accommodation, travel, venue and ancillary booking services for the housing of Ukrainian refugees at local hotels and aboard cruise ships M/S Victoria and M/S Ambition. The British Red Cross warned earlier in May against continuing to house refugees on ships with “isolated” and “windowless” cabins, and said the scheme had left many “living in limbo.”

      Founded by CEO #Jamie_Pherous, CTM was targeted in 2018 by #VGI_Partners, a group of short-sellers, who identified more than 20 red flags concerning the company’s business interests. Most strikingly, the short-sellers said they’d attended CTM’s offices in Glasgow, Paris, Amsterdam, Stockholm and Switzerland. Finding no signs of business activity there, they said it was possible the firm had significantly overstated the scale of its operations. VGI Partners also claimed CTM’s cash flows didn’t seem to add up when set against the company’s reported growth, and that CTM hadn’t fully disclosed revisions they’d made to their annual revenue figures.

      Two years later, the short-sellers released a follow-up report, questioning how CTM had managed to report a drop in rewards granted for high sales numbers to travel agencies, when in fact their transaction turnover had grown during the same period. They also accused CTM of dressing up their debt balance to make their accounts look healthier.

      CTM denied VGI Partners’ allegations. In their response, they paraphrased a report by auditors EY that supposedly confirmed there were no question marks over their business practices, though the report itself was never actually made public. They further claimed that VGI Partners, as short-sellers, had only released the reports in the hope of benefiting from uncertainty over CTM’s operations.

      Despite these troubles, CTM’s market standing improved drastically earlier this year, when it was announced the firm had secured contracts for the provision of travel services to the UK Home Office worth in excess of $3bn AUD (£1.6bn). These have been accompanied by further tenders with, among others, the National Audit Office, HS2, Cafcass, Serious Fraud Office, Office of National Statistics, HM Revenue & Customs, National Health Service, Ministry of Justice, Department of Education, Foreign Office, and the Equality and Human Rights Commission.

      The Home Office has not released any figures on the cost of either leasing or management services aboard Bibby Stockholm, though press reports have put the estimated price tag at more than £20,000 a day for charter and berthing alone. If accurate, this would put the overall expenditure for the 18-month period in which the vessel will operate as a detention centre at almost £11m, exclusive of actual detention centre management costs such as security, food and healthcare.
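
      As a quick sanity check (my own arithmetic, not Corporate Watch’s), the quoted day rate does extrapolate to the article’s figure over 18 months:

      ```python
      # Assumption: the >£20,000/day press estimate covers charter and
      # berthing only, as the article states.
      daily_rate = 20_000                  # £ per day
      days = 18 * 30.44                    # ~548 days in an 18-month period
      print(f"~£{daily_rate * days / 1e6:.1f}m")   # ~£11.0m, i.e. "almost £11m"
      ```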

      Another beneficiary of the project is Portland Port’s owner, #Langham_Industries, a maritime and engineering company owned by the #Langham family. The family has long-running ties to right-wing parties. Langham Industries donated over £70,000 to the UK Independence Party from 2003 up until the 2016 Brexit referendum. In 2014, Langham Industries donated money to support the re-election campaign of former Clacton MP for UKIP Douglas Carswell, shortly after his defection from the Conservatives. #Catherine_Langham, a Tory parish councillor for Hilton in Dorset, has described herself as a Langham Industries director (although she is not listed on Companies House). In 2016 she was actively involved in local efforts to support the campaign to leave the European Union. The family holds a large estate in Dorset which it uses for its other line of business, winemaking.

      At present, there is no publicly available information on who will be providing security services aboard the Bibby Stockholm.

      Business Basics

      Bibby Line Group describes itself as “one of the UK’s oldest family owned businesses,” operating in “multiple countries, employing around 1,300 colleagues, and managing over £1 billion of funds.” Its head office is registered in Liverpool, with other headquarters in Scotland, Hong Kong, India, Singapore, Malaysia, France, Slovakia, Czechia, the Netherlands, Germany, Poland and Nigeria (see the appendix for more). The company’s primary sectors correspond to its three main UK subsidiaries:

      - #Bibby_Financial_Services. A global provider of financial services. The firm provides loans to small- and medium-sized businesses engaged in business services, construction, manufacturing, transportation, export, recruitment and wholesale markets. This includes invoice financing, export and trade finance, and foreign exchanges. Overall, the subsidiary manages more than £6bn each year on behalf of some 9,000 clients across 300 different industry sectors, and in 2021 it brought in more than 50% of the group’s annual turnover.

      - #Bibby_Marine_Limited. Owner and operator of the Bibby WaveMaster fleet, a group of vessels specialising in the transport and accommodation of workers employed at remote locations, such as offshore oil and gas sites in the North Sea. Sometimes, as in the case of Chevron’s Liquified Natural Gas (LNG) project in Nigeria, the vessels are used as an alternative to hotels owing to “a volatile project environment.” The fleet consists of 40 accommodation vessels similar in size to the Bibby Stockholm and a smaller number of service vessels, though the share of annual turnover pales compared to the group’s financial services operations, standing at just under 10% for 2021.

      - #Garic Ltd. Confined to construction, quarrying, airport, agriculture and transport sectors in the UK, the firm designs, manufactures and purchases plant equipment and machinery for sale or hire. Garic brought in around 14% of Bibby Line Group’s turnover in 2021.

      Prior to February 2021, Bibby Line Group also owned #Costcutter_Supermarkets_Group, before it was sold to #Bestway_Wholesale to maintain liquidity amid the Covid-19 pandemic. In their report for that year, the company’s directors also suggested grant funding from #MarRI-UK, an organisation facilitating innovation in maritime technologies and systems, had been important in preserving the firm’s position during the crisis.
      History

      The Bibby Line Group’s story begins in 1807, when Lancashire-born shipowner John Bibby began trading out of Liverpool with partner John Highfield. By the time of his death in 1840 (he was murdered while returning home from dinner with a friend in Kirkdale), Bibby had struck out on his own and come to manage a fleet of more than 18 ships. The mystery of his murder has never been solved, and the business was left to his sons John and James.

      Between 1891 and 1989, the company operated under the name #Bibby_Line_Limited. Its ships served as hospital and transport vessels during the First World War, as well as merchant cruisers, and the company’s entire fleet of 11 ships was requisitioned by the state in 1939.

      By 1970, the company had tripled its overseas earnings, branching into ‘factoring’, or invoice financing (converting unpaid invoices into cash for immediate use via short-term loans) in the early 1980s, before this aspect of the business was eventually spun off into Bibby Financial Services. The group acquired Garic Ltd in 2008, which currently operates four sites across the UK.

      People

      #Jonathan_Lewis has served as Bibby Line Group’s Managing and Executive Director since January 2021, prior to which he acted as the company’s Chief Financial and Strategy Officer since joining in 2019. Previously, Lewis worked as CFO for Imagination Technologies, a tech company specialising in semiconductors, and as head of supermarket Tesco’s mergers and acquisitions team. He was also a member of McKinsey’s European corporate finance practice, as well as an investment banker at Lazard. During his first year at the helm of Bibby’s operations, he was paid £748,000. Assuming his role at the head of the group’s operations, he replaced Paul Drescher, CBE, then a board member of the UK International Chamber of Commerce and a former president of the Confederation of British Industry.

      Bibby Line Group’s board also includes two immediate members of the Bibby family, Sir #Michael_James_Bibby, 3rd Bt. and his younger brother #Geoffrey_Bibby. Michael has acted as company chairman since 2020, before which he had occupied senior management roles in the company for 20 years. He also has external experience, including time at Unilever’s acquisitions, disposals and joint venture divisions, and now acts as president of the UK Chamber of Shipping, chairman of the Charities Trust, and chairman of the Institute of Family Business Research Foundation.

      Geoffrey has served as a non-executive director of the company since 2015, having previously worked as a managing director of Vast Visibility Ltd, a digital marketing and technology company. In 2021, the Bibby brothers received salaries of £125,000 and £56,000 respectively.

      The final member of the firm’s board is #David_Anderson, who has acted as non-executive director since 2012. A financier with 35 years’ experience in investment banking, he’s the founder and CEO of EPL Advisory – which advises company boards on the requirements and disclosure obligations of public markets – and chair of Creative Education Trust, a multi-academy trust comprising 17 schools. Anderson is also chairman at multinational ship broker Howe Robinson Partners, which recently auctioned off a superyacht seized from Dmitry Pumpyansky after the sanctioned Russian businessman reneged on a €20.5m loan from JP Morgan. In 2021, Anderson’s salary stood at £55,000.

      Ownership

      Bibby Line Group’s annual report and accounts for 2021 state that more than 90% of the company is owned by members of the Bibby family, primarily through family trusts. These ownership structures, effectively entities allowing people to benefit from assets without being their registered legal owners, have long attracted staunch criticism from transparency advocates, given that the obscurity they afford means they often feature extensively in corruption, money laundering and tax abuse schemes.

      According to Companies House, the UK corporate registry, between 50% and 75% of Bibby Line Group’s shares and voting rights are owned by #Bibby_Family_Company_Limited, which also retains the right to appoint and remove members of the board. Directors of Bibby Family Company Limited include both the Bibby brothers, as well as a third sibling, #Peter_John_Bibby, who’s formally listed as the firm’s ‘ultimate beneficial owner’ (i.e. the person who ultimately profits from the company’s assets).

      Other people with comparable shares in Bibby Family Company Limited are #Mark_Rupert_Feeny, #Philip_Charles_Okell, and Lady #Christine_Maud_Bibby. Feeny’s occupation is listed as solicitor, with other interests in real estate management and a position on the board of the University of Liverpool Pension Fund Trustees Limited. Okell meanwhile appears as director of Okell Money Management Limited, a wealth management firm, while Lady Bibby, Michael and Geoffrey’s mother, appears as “retired playground supervisor.”

      Key Relationships

      Bibby Line Group runs an internal ‘Donate a Day’ volunteer program, enabling employees to take paid leave in order to “help causes they care about.” Specific charities colleagues have volunteered with, listed in the company’s Annual Review for 2021 to 2022, include:

      - The Hive Youth Zone. An award-winning charity for young people with disabilities, based in the Wirral.

      - The Whitechapel Centre. A leading homeless and housing charity in the Liverpool region, working with people sleeping rough, living in hostels, or struggling with their accommodation.

      - Let’s Play Project. Another charity specialising in after-school and holiday activities for young people with additional needs in the Banbury area.

      - Whitdale House. A care home for the elderly, based in Whitburn, West Lothian and run by the local council.

      - DEBRA. An Irish charity set up in 1988 for individuals living with a rare, painful skin condition called epidermolysis bullosa, as well as their families.

      - Reaching Out Homeless Outreach. A non-profit providing resources and support to the homeless in Ireland.

      Various senior executives and associated actors at Bibby Line Group and its subsidiaries also have current and former ties to the following organisations:

      - UK Chamber of Shipping

      - Charities Trust

      - Institute of Family Business Research Foundation

      - Indefatigable Old Boys Association

      - Howe Robinson Partners

      - hibu Ltd

      - EPL Advisory

      - Creative Education Trust

      - Capita Health and Wellbeing Limited

      - The Ambassador Theatre Group Limited

      - Pilkington Plc

      - UK International Chamber of Commerce

      - Confederation of British Industry

      - Arkley Finance Limited (Weatherby’s Banking Group)

      - FastMarkets Ltd

      - Multiple Sclerosis Society

      - Early Music as Education

      - Liverpool Pension Fund Trustees Limited

      - Okell Money Management Limited

      Finances

      For the period ending 2021, Bibby Line Group’s total turnover stood at just under £260m, with a pre-tax profit of almost £31m – fairly healthy for a company providing maritime services during a global pandemic. Their post-tax profits in fact stood at £35.5m, an increase they appear to have secured by claiming generous tax credits (£4.6m, which added to the £31m pre-tax figure broadly accounts for the difference) and deferring a fair amount of tax (£8.4m) to the following year.

      Judging by their last available statement on the firm’s profitability, Bibby’s directors seem fairly confident the company has adequate financing and resources to continue operations for the foreseeable future. They stress their February 2021 sale of Costcutter was an important step in securing this, given it provided additional liquidity during the pandemic, as well as the funding secured for R&D on fuel consumption by Bibby Marine’s fleet.
      Scandal Sheet

      Bibby Line Group and its subsidiaries have featured in a number of UK legal proceedings over the years, sometimes as defendants. One notable case is Godfrey v Bibby Line, a lawsuit brought against the company in 2019 after one of their former employees died as the result of an asbestos-related disease.

      In their claim, the executors of Alan Peter Godfrey’s estate maintained that between 1965 and 1972, he was repeatedly exposed to large amounts of asbestos while working on board various Bibby vessels. Although the link between the material and fatal lung conditions was established as early as 1930, they claimed that Bibby Line, among other things:

      “Failed to warn the deceased of the risk of contracting asbestos related disease or of the precautions to be taken in relation thereto;

      “Failed to heed or act upon the expert evidence available to them as to the best means of protecting their workers from danger from asbestos dust; [and]

      “Failed to take all reasonably practicable measures, either by securing adequate ventilation or by the provision and use of suitable respirators or otherwise, to prevent inhalation of dust.”

      The lawsuit, which claimed “unlimited damages” against the group, also stated that Mr Godfrey’s “condition deteriorated rapidly with worsening pain and debility,” and that he was “completely dependent upon others for his needs by the last weeks of his life.” There is no publicly available information on how the matter was concluded.

      In 2017, Bibby Line Limited also featured in a leak of more than 13.4 million financial records known as the Paradise Papers, specifically as a client of Appleby, which provided “offshore corporate services” such as legal and accountancy work. According to the Organized Crime and Corruption Reporting Project, a global network of investigative media outlets, leaked Appleby documents revealed, among other things, “the ties between Russia and [Trump’s] billionaire commerce secretary, the secret dealings of Canadian Prime Minister Justin Trudeau’s chief fundraiser and the offshore interests of the Queen of England and more than 120 politicians around the world.”

      This would not appear to be the Bibby group’s only link to the shady world of offshore finance. Michael Bibby pops up as a treasurer for two shell companies registered in Panama, Minimar Transport S.A. and Vista Equities Inc.
      Looking Forward

      Much about the Bibby Stockholm saga remains to be seen. The exact cost of the initiative and who will provide security services on board are open questions. What’s clear, however, is that activists will continue to oppose the plans, with efforts to prevent the vessel sailing from Falmouth to its final docking in Portland scheduled to take place on 30th June.

      Appendix: Company Addresses

      HQ and general inquiries: 3rd Floor Walker House, Exchange Flags, Liverpool, United Kingdom, L2 3YL

      Tel: +44 (0) 151 708 8000

      Other offices, as of 2021:

      6, Shenton Way, #18-08A Oue Downtown 068809, Singapore

      1/1, The Exchange Building, 142 St. Vincent Street, Glasgow, G2 5LA, United Kingdom

      4th Floor Heather House, Heather Road, Sandyford, Dublin 18, Ireland

      Unit 2302, 23/F Jubilee Centre, 18 Fenwick Street, Wanchai, Hong Kong

      Unit 508, Fifth Floor, Metropolis Mall, MG Road, Gurugram, Haryana, 122002 India

      Suite 7E, Level 7, Menara Ansar, 65 Jalan Trus, 8000 Johor Bahru, Johor, Malaysia

      160 Avenue Jean Jaures, CS 90404, 69364 Lyon Cedex, France

      Prievozská 4D, Block E, 13th Floor, Bratislava 821 09, Slovak Republic

      Hlinky 118, Brno, 603 00, Czech Republic

      Laan Van Diepenvoorde 5, 5582 LA, Waalre, Netherlands

      Hansaallee 249, 40549 Düsseldorf, Germany

      Poland Eurocentrum, Al. Jerozolimskie 134, 02-305 Warsaw, Poland

      1/2 Atarbekova str, 350062, Krasnodar, Russia

      1 St Peter’s Square, Manchester, M2 3AE, United Kingdom

      25 Adeyemo Alakija Street, Victoria Island, Lagos, Nigeria

      10 Anson Road, #09-17 International Plaza, 079903 Singapore

      https://corporatewatch.org/floating-prisons-the-200-year-old-family-business-behind-the-bibby-s

      also flagged here by @rezo:
      https://seenthis.net/messages/1010504

    • The Langham family seem quite happy to support right-wing political parties that are against immigration, while at the same time profiting handsomely from the misery of refugees who are forced to claim sanctuary here.


      https://twitter.com/PositiveActionH/status/1687817910364884992

      ---

      Family firm ’profiteering from misery’ by providing migrant barges donated £70k to #UKIP

      The Langham family, owners of Langham Industries, is now set to profit from an 18-month contract with the Home Office to let the Bibby Stockholm berth at Portland, Dorset

      A family firm that donated more than £70,000 to UKIP is “profiteering from misery” by hosting the Government’s controversial migrant barge. Langham Industries owns Portland Port, where the Bibby Stockholm is docked in a deal reported to be worth some £2.5million.

      The Langham family owns luxurious properties and has links to high-profile politicians, including Prime Minister Rishi Sunak and Deputy Prime Minister Oliver Dowden. And we can reveal that their business made 19 donations to pro-Brexit party UKIP between 2003 and 2016.

      Late founder John Langham was described as an “avid supporter” of UKIP in an obituary in 2017. Now his children, John, Jill and Justin – all directors of the family firm – are set to profit from an 18-month contract with the Home Office to let the Bibby Stockholm berth at Portland, Dorset.

      While Portland Port refuses to reveal how much the Home Office is paying, its website cites berthing fees for a ship the size of the Bibby Stockholm at more than £4,000 a day. In 2011, Portland Port chairman John, 71, invested £3.7 million in the Grade II* listed country pile Steeple Manor at Wareham, Dorset. Dating to around 1600, it has a pond, tennis court and extensive gardens designed by the landscape architect Brenda Colvin.

      The arrangement to host the “prison-like” barge for housing migrants has led some locals to blast the Langhams, who have owned the port since 1997. Portland mayor Carralyn Parkes, 61, said: “I don’t know how John Langham will sleep at night in his luxurious home, with his tennis court and his fluffy bed, when asylum seekers are sleeping in tiny beds on the barge.

      “I went on the boat and measured the rooms with a tape measure. On average they are about 10ft by 12ft. The bunk bed mattresses are about 6ft long. If you’re taller than 6ft you’re stuffed. The Langham family need to have more humanity. They are only interested in making money. It’s shocking.”

      (#paywall)
      https://www.mirror.co.uk/news/politics/family-firm-profiteering-misery-providing-30584405.amp

      #UK_Independence_Party

    • ‘This is a prison’: men tell of distressing conditions on Bibby Stockholm

      Asylum seekers share fears about Dorset barge becoming even more crowded, saying they already ‘despair and wish for death’

      Asylum seekers brought back to the Bibby Stockholm barge in Portland, Dorset, have said they are being treated in such a way that “we despair and wish for death”.

      The Guardian spoke to two men in their first interview since their return to the barge on 19 October after the vessel lay empty for more than two months. The presence of deadly legionella bacteria was confirmed on board on 7 August, the same day the first group of asylum seekers arrived. The barge was evacuated four days later.

      The new warning comes after it emerged that one asylum seeker attempted to kill himself and is in hospital after finding out he is due to be taken to the barge on Tuesday.

      A man currently on the barge told the Guardian: “Government decisions are turning healthy and normal refugees into mental patients whom they then hand over to society. Here, many people were healthy and coping with OK spirits, but as a result of the dysfunctional strategies of the government, they have suffered – and continue to suffer – from various forms of serious mental distress. We are treated in such a way that we despair and wish for death.”

      He said that although the asylum seekers were not detained on the barge and could leave to visit the nearby town, in practice, doing so was not easy.

      He added: “In the barge, we have exactly the feeling of being in prison. It is true that they say that this is not a prison and you can go outside at any time, but you can only go to specific stops at certain times by bus, and this does not give me a good feeling.

      “Even to use the fresh air, you have to go through the inspection every time and go to the small yard with high fences and go through the X-ray machine again. And this is not good for our health.

      “In short, this is a prison whose prisoners are not criminals, they are people who have fled their country just to save their lives and have taken shelter here to live.”

      The asylum seekers raised concerns about what conditions on the barge would be like if the Home Office did fill it with about 500 asylum seekers, as officials say is the plan. Those on board said it already felt quite full with about 70 people living there.

      The second asylum seeker said: “The space inside the barge is very small. It feels crowded in the dining hall and the small entertainment room. It is absolutely clear to me that there will be chaos here soon.

      “According to my estimate, as I look at the spaces around us, the capacity of this barge is maximum 120 people, including personnel and crew. The strategy of transferring refugees from hotels to barges or ships or military installations is bound to fail.

      “The situation here on the barge is getting worse. Does the government have a plan for shipwrecked residents? Everyone here is going mad with anxiety. It is not just the barge that floats on the water, but the plans of the government that are radically adrift.”

      Maddie Harris of the NGO Humans For Rights Network, which supports asylum seekers in hotels, said: “Home Office policies directly contribute to the significant deterioration of the wellbeing and mental health of so many asylum seekers in their ‘care’, with a dehumanising environment, violent anti-migrant rhetoric and isolated accommodations away from community and lacking in support.”

      A Home Office spokesperson said: “The Bibby Stockholm is part of the government’s pledge to reduce the use of expensive hotels and bring forward alternative accommodation options which provide a more cost-effective, sustainable and manageable system for the UK taxpayer and local communities.

      “The health and welfare of asylum seekers remains the utmost priority. We work continually to ensure the needs and vulnerabilities of those residing in asylum accommodation are identified and considered, including those related to mental health and trauma.”

      Nadia Whittome and Lloyd Russell-Moyle, the Labour MPs for Nottingham East and Brighton Kemptown respectively, will travel to Portland on Monday to meet asylum seekers accommodated on the Bibby Stockholm barge and local community members.

      The visit follows the home secretary, Suella Braverman, declining to approve a visit by the MPs to assess living conditions, which they had requested through parliamentary channels.

      https://www.theguardian.com/uk-news/2023/oct/29/this-is-a-prison-men-tell-of-distressing-conditions-on-bibby-stockholm
      #prison #conditions_de_vie

  • The messy, secretive reality behind OpenAI’s bid to save the world
    https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secret

    17.2.2020, by Karen Hao – Every year, OpenAI’s employees vote on when they believe artificial general intelligence, or AGI, will finally arrive. It’s mostly seen as a fun way to bond, and their estimates differ widely. But in a field that still debates whether human-like autonomous systems are even possible, half the lab bets it is likely to happen within 15 years.

    In the four short years of its existence, OpenAI has become one of the leading AI research labs in the world. It has made a name for itself producing consistently headline-grabbing research, alongside other AI heavyweights like Alphabet’s DeepMind. It is also a darling in Silicon Valley, counting Elon Musk and legendary investor Sam Altman among its founders.

    Above all, it is lionized for its mission. Its goal is to be the first to create AGI—a machine with the learning and reasoning powers of a human mind. The purpose is not world domination; rather, the lab wants to ensure that the technology is developed safely and its benefits distributed evenly to the world.

    The implication is that AGI could easily run amok if the technology’s development is left to follow the path of least resistance. Narrow intelligence, the kind of clumsy AI that surrounds us today, has already served as an example. We now know that algorithms are biased and fragile; they can perpetrate great abuse and great deception; and the expense of developing and running them tends to concentrate their power in the hands of a few. By extrapolation, AGI could be catastrophic without the careful guidance of a benevolent shepherd.

    OpenAI wants to be that shepherd, and it has carefully crafted its image to fit the bill. In a field dominated by wealthy corporations, it was founded as a nonprofit. Its first announcement said that this distinction would allow it to “build value for everyone rather than shareholders.” Its charter—a document so sacred that employees’ pay is tied to how well they adhere to it—further declares that OpenAI’s “primary fiduciary duty is to humanity.” Attaining AGI safely is so important, it continues, that if another organization were close to getting there first, OpenAI would stop competing with it and collaborate instead. This alluring narrative plays well with investors and the media, and in July Microsoft injected the lab with a fresh $1 billion.
    [Photo: OpenAI’s logo hanging in its office. Credit: Christie Hemm Klok]

    But three days at OpenAI’s office—and nearly three dozen interviews with past and current employees, collaborators, friends, and other experts in the field—suggest a different picture. There is a misalignment between what the company publicly espouses and how it operates behind closed doors. Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration. Many who work or worked for the company insisted on anonymity because they were not authorized to speak or feared retaliation. Their accounts suggest that OpenAI, for all its noble aspirations, is obsessed with maintaining secrecy, protecting its image, and retaining the loyalty of its employees.

    Since its earliest conception, AI as a field has strived to understand human-like intelligence and then re-create it. In 1950, Alan Turing, the renowned English mathematician and computer scientist, began a paper with the now-famous provocation “Can machines think?” Six years later, captivated by the nagging idea, a group of scientists gathered at Dartmouth College to formalize the discipline.

    “It is one of the most fundamental questions of all intellectual history, right?” says Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence (AI2), a Seattle-based nonprofit AI research lab. “It’s like, do we understand the origin of the universe? Do we understand matter?”

    The trouble is, AGI has always remained vague. No one can really describe what it might look like or the minimum of what it should do. It’s not obvious, for instance, that there is only one kind of general intelligence; human intelligence could just be a subset. There are also differing opinions about what purpose AGI could serve. In the more romanticized view, a machine intelligence unhindered by the need for sleep or the inefficiency of human communication could help solve complex challenges like climate change, poverty, and hunger.

    But the resounding consensus within the field is that such advanced capabilities would take decades, even centuries—if indeed it’s possible to develop them at all. Many also fear that pursuing this goal overzealously could backfire. In the 1970s and again in the late ’80s and early ’90s, the field overpromised and underdelivered. Overnight, funding dried up, leaving deep scars in an entire generation of researchers. “The field felt like a backwater,” says Peter Eckersley, until recently director of research at the industry group Partnership on AI, of which OpenAI is a member.
    [Photo: A conference room on the first floor named Infinite Jest. Credit: Christie Hemm Klok]

    Against this backdrop, OpenAI entered the world with a splash on December 11, 2015. It wasn’t the first to openly declare it was pursuing AGI; DeepMind had done so five years earlier and had been acquired by Google in 2014. But OpenAI seemed different. For one thing, the sticker price was shocking: the venture would start with $1 billion from private investors, including Musk, Altman, and PayPal cofounder Peter Thiel.

    The star-studded investor list stirred up a media frenzy, as did the impressive list of initial employees: Greg Brockman, who had run technology for the payments company Stripe, would be chief technology officer; Ilya Sutskever, who had studied under AI pioneer Geoffrey Hinton, would be research director; and seven researchers, freshly graduated from top universities or plucked from other companies, would compose the core technical team. (Last February, Musk announced that he was parting ways with the company over disagreements about its direction. A month later, Altman stepped down as president of startup accelerator Y Combinator to become OpenAI’s CEO.)

    But more than anything, OpenAI’s nonprofit status made a statement. “It’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest,” the announcement said. “Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world.” Though it never made the criticism explicit, the implication was clear: other labs, like DeepMind, could not serve humanity because they were constrained by commercial interests. While they were closed, OpenAI would be open.

    In a research landscape that had become increasingly privatized and focused on short-term financial gains, OpenAI was offering a new way to fund progress on the biggest problems. “It was a beacon of hope,” says Chip Huyen, a machine learning expert who has closely followed the lab’s journey.

    At the intersection of 18th and Folsom Streets in San Francisco, OpenAI’s office looks like a mysterious warehouse. The historic building has drab gray paneling and tinted windows, with most of the shades pulled down. The letters “PIONEER BUILDING”—the remnants of its bygone owner, the Pioneer Truck Factory—wrap around the corner in faded red paint.

    Inside, the space is light and airy. The first floor has a few common spaces and two conference rooms. One, a healthy size for larger meetings, is called A Space Odyssey; the other, more of a glorified phone booth, is called Infinite Jest. This is the space I’m restricted to during my visit. I’m forbidden to visit the second and third floors, which house everyone’s desks, several robots, and pretty much everything interesting. When it’s time for their interviews, people come down to me. An employee trains a watchful eye on me in between meetings.
    [Photo: The Pioneer Building. Credit: Wikimedia Commons / tfinc]

    On the beautiful blue-sky day that I arrive to meet Brockman, he looks nervous and guarded. “We’ve never given someone so much access before,” he says with a tentative smile. He wears casual clothes and, like many at OpenAI, sports a shapeless haircut that seems to reflect an efficient, no-frills mentality.

    Brockman, 31, grew up on a hobby farm in North Dakota and had what he describes as a “focused, quiet childhood.” He milked cows, gathered eggs, and fell in love with math while studying on his own. In 2008, he entered Harvard intending to double-major in math and computer science, but he quickly grew restless to enter the real world. He dropped out a year later, entered MIT instead, and then dropped out again within a matter of months. The second time, his decision was final. Once he moved to San Francisco, he never looked back.

    Brockman takes me to lunch to remove me from the office during an all-company meeting. In the café across the street, he speaks about OpenAI with intensity, sincerity, and wonder, often drawing parallels between its mission and landmark achievements of science history. It’s easy to appreciate his charisma as a leader. Recounting memorable passages from the books he’s read, he zeroes in on the Valley’s favorite narrative, America’s race to the moon. (“One story I really love is the story of the janitor,” he says, referencing a famous yet probably apocryphal tale. “Kennedy goes up to him and asks him, ‘What are you doing?’ and he says, ‘Oh, I’m helping put a man on the moon!’”) There’s also the transcontinental railroad (“It was actually the last megaproject done entirely by hand … a project of immense scale that was totally risky”) and Thomas Edison’s incandescent lightbulb (“A committee of distinguished experts said ‘It’s never gonna work,’ and one year later he shipped”).
    [Photo: Greg Brockman, co-founder and CTO. Credit: Christie Hemm Klok]

    Brockman is aware of the gamble OpenAI has taken on—and aware that it evokes cynicism and scrutiny. But with each reference, his message is clear: People can be skeptical all they want. It’s the price of daring greatly.

    Those who joined OpenAI in the early days remember the energy, excitement, and sense of purpose. The team was small—formed through a tight web of connections—and management stayed loose and informal. Everyone believed in a flat structure where ideas and debate would be welcome from anyone.

    Musk played no small part in building a collective mythology. “The way he presented it to me was ‘Look, I get it. AGI might be far away, but what if it’s not?’” recalls Pieter Abbeel, a professor at UC Berkeley who worked there, along with several of his students, in the first two years. “‘What if it’s even just a 1% or 0.1% chance that it’s happening in the next five to 10 years? Shouldn’t we think about it very carefully?’ That resonated with me,” he says.

    But the informality also led to some vagueness of direction. In May 2016, Altman and Brockman received a visit from Dario Amodei, then a Google researcher, who told them no one understood what they were doing. In an account published in the New Yorker, it wasn’t clear the team itself knew either. “Our goal right now … is to do the best thing there is to do,” Brockman said. “It’s a little vague.”

    Nonetheless, Amodei joined the team a few months later. His sister, Daniela Amodei, had previously worked with Brockman, and he already knew many of OpenAI’s members. After two years, at Brockman’s request, Daniela joined too. “Imagine—we started with nothing,” Brockman says. “We just had this ideal that we wanted AGI to go well.”

    By March of 2017, 15 months in, the leadership realized it was time for more focus. So Brockman and a few other core members began drafting an internal document to lay out a path to AGI. But the process quickly revealed a fatal flaw. As the team studied trends within the field, they realized staying a nonprofit was financially untenable. The computational resources that others in the field were using to achieve breakthrough results were doubling every 3.4 months. It became clear that “in order to stay relevant,” Brockman says, they would need enough capital to match or exceed this exponential ramp-up. That required a new organizational model that could rapidly amass money—while somehow also staying true to the mission.
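    To put that figure in perspective: a doubling time of 3.4 months compounds to roughly an order of magnitude per year. A back-of-the-envelope sketch in Python (the 3.4-month figure is the trend cited above; everything else is simple arithmetic):

        # Growth implied by a 3.4-month compute doubling time.
        DOUBLING_TIME_MONTHS = 3.4

        def compute_multiplier(months: float) -> float:
            """Factor by which compute grows over `months` at the cited doubling time."""
            return 2 ** (months / DOUBLING_TIME_MONTHS)

        print(f"after 1 year : x{compute_multiplier(12):.1f}")   # ~x11.5
        print(f"after 5 years: x{compute_multiplier(60):.2e}")   # ~x2e5

    Matching that curve is what the leadership concluded a nonprofit could not finance.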

    Unbeknownst to the public—and most employees—it was with this in mind that OpenAI released its charter in April of 2018. The document re-articulated the lab’s core values but subtly shifted the language to reflect the new reality. Alongside its commitment to “avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power,” it also stressed the need for resources. “We anticipate needing to marshal substantial resources to fulfill our mission,” it said, “but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.”

    “We spent a long time internally iterating with employees to get the whole company bought into a set of principles,” Brockman says. “Things that had to stay invariant even if we changed our structure.”
    [Photo, from left to right: Daniela Amodei, Jack Clark, Dario Amodei, Jeff Wu (technical staff member), Greg Brockman, Alec Radford (technical language team lead), Christine Payne (technical staff member), Ilya Sutskever, and Chris Berner (head of infrastructure). Credit: Christie Hemm Klok]

    That structure change happened in March 2019. OpenAI shed its purely nonprofit status by setting up a “capped profit” arm—a for-profit with a 100-fold limit on investors’ returns, albeit overseen by a board that’s part of a nonprofit entity. Shortly after, it announced Microsoft’s billion-dollar investment (though it didn’t reveal that this was split between cash and credits to Azure, Microsoft’s cloud computing platform).
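    The cap itself is simple arithmetic, even if the legal structure around it is not: an investor's return is limited to 100 times the capital invested, and anything above that is meant to flow back to the nonprofit. A minimal sketch of that split (the 100x figure is from the announcement; the function and the numbers are illustrative, not the actual legal terms):

        def capped_return(investment: float, gross_return: float, cap: float = 100.0):
            """Split a gross return into the investor's capped share and the
            excess earmarked for the nonprofit. Illustrative only."""
            investor_share = min(gross_return, investment * cap)
            excess_to_nonprofit = gross_return - investor_share
            return investor_share, excess_to_nonprofit

        # A $10M stake in an outcome worth $5B: the investor keeps $1B,
        # and $4B would flow back to the nonprofit.
        print(capped_return(10e6, 5e9))  # (1000000000.0, 4000000000.0)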

    Predictably, the move set off a wave of accusations that OpenAI was going back on its mission. In a post on Hacker News soon after the announcement, a user asked how a 100-fold limit would be limiting at all: “Early investors in Google have received a roughly 20x return on their capital,” they wrote. “Your bet is that you’ll have a corporate structure which returns orders of magnitude more than Google ... but you don’t want to ‘unduly concentrate power’? How will this work? What exactly is power, if not the concentration of resources?”

    The move also rattled many employees, who voiced similar concerns. To assuage internal unrest, the leadership wrote up an FAQ as part of a series of highly protected transition docs. “Can I trust OpenAI?” one question asked. “Yes,” began the answer, followed by a paragraph of explanation.

    The charter is the backbone of OpenAI. It serves as the springboard for all the lab’s strategies and actions. Throughout our lunch, Brockman recites it like scripture, an explanation for every aspect of the company’s existence. (“By the way,” he clarifies halfway through one recitation, “I guess I know all these lines because I spent a lot of time really poring over them to get them exactly right. It’s not like I was reading this before the meeting.”)

    How will you ensure that humans continue to live meaningful lives as you develop more advanced capabilities? “As we wrote, we think its impact should be to give everyone economic freedom, to let them find new opportunities that aren’t imaginable today.” How will you structure yourself to evenly distribute AGI? “I think a utility is the best analogy for the vision that we have. But again, it’s all subject to the charter.” How do you compete to reach AGI first without compromising safety? “I think there is absolutely this important balancing act, and our best shot at that is what’s in the charter.”
    [Image: the OpenAI charter, published April 9, 2018. Credit: OpenAI]

    For Brockman, rigid adherence to the document is what makes OpenAI’s structure work. Internal alignment is treated as paramount: all full-time employees are required to work out of the same office, with few exceptions. For the policy team, especially Jack Clark, the director, this means a life divided between San Francisco and Washington, DC. Clark doesn’t mind—in fact, he agrees with the mentality. It’s the in-between moments, like lunchtime with colleagues, he says, that help keep everyone on the same page.

    In many ways, this approach is clearly working: the company has an impressively uniform culture. The employees work long hours and talk incessantly about their jobs through meals and social hours; many go to the same parties and subscribe to the rational philosophy of “effective altruism.” They crack jokes using machine-learning terminology to describe their lives: “What is your life a function of?” “What are you optimizing for?” “Everything is basically a minmax function.” To be fair, other AI researchers also love doing this, but people familiar with OpenAI agree: more than others in the field, its employees treat AI research not as a job but as an identity. (In November, Brockman married his girlfriend of one year, Anna, in the office against a backdrop of flowers arranged in an OpenAI logo. Sutskever acted as the officiant; a robot hand was the ring bearer.)

    But at some point in the middle of last year, the charter became more than just lunchtime conversation fodder. Soon after switching to a capped-profit, the leadership instituted a new pay structure based in part on each employee’s absorption of the mission. Alongside columns like “engineering expertise” and “research direction” in a spreadsheet tab titled “Unified Technical Ladder,” the last column outlines the culture-related expectations for every level. Level 3: “You understand and internalize the OpenAI charter.” Level 5: “You ensure all projects you and your team-mates work on are consistent with the charter.” Level 7: “You are responsible for upholding and improving the charter, and holding others in the organization accountable for doing the same.”

    The first time most people ever heard of OpenAI was on February 14, 2019. That day, the lab announced impressive new research: a model that could generate convincing essays and articles at the push of a button. Feed it a sentence from The Lord of the Rings or the start of a (fake) news story about Miley Cyrus shoplifting, and it would spit out paragraph after paragraph of text in the same vein.

    But there was also a catch: the model, called GPT-2, was too dangerous to release, the researchers said. If such powerful technology fell into the wrong hands, it could easily be weaponized to produce disinformation at immense scale.

    The backlash among scientists was immediate. OpenAI was pulling a publicity stunt, some said. GPT-2 was not nearly advanced enough to be a threat. And if it was, why announce its existence and then preclude public scrutiny? “It seemed like OpenAI was trying to capitalize off of panic around AI,” says Britt Paris, an assistant professor at Rutgers University who studies AI-generated disinformation.
    [Photo: Jack Clark, policy director. Credit: Christie Hemm Klok]

    By May, OpenAI had revised its stance and announced plans for a “staged release.” Over the following months, it successively dribbled out more and more powerful versions of GPT-2. In the interim, it also engaged with several research organizations to scrutinize the algorithm’s potential for abuse and develop countermeasures. Finally, it released the full code in November, having found, it said, “no strong evidence of misuse so far.”
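    With the weights public, the push-button generation described above is easy to reproduce. A minimal sketch using the third-party Hugging Face transformers library, which hosts the released GPT-2 weights (community tooling, not OpenAI's own release code):

        # pip install transformers torch
        from transformers import pipeline

        # Downloads the released GPT-2 weights on first use.
        generator = pipeline("text-generation", model="gpt2")

        prompt = "In a shocking finding, scientists discovered"
        result = generator(prompt, max_length=80, do_sample=True, num_return_sequences=1)
        print(result[0]["generated_text"])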

    Amid continued accusations of publicity-seeking, OpenAI insisted that GPT-2 hadn’t been a stunt. It was, rather, a carefully thought-out experiment, agreed on after a series of internal discussions and debates. The consensus was that even if it had been slight overkill this time, the action would set a precedent for handling more dangerous research. Besides, the charter had predicted that “safety and security concerns” would gradually oblige the lab to “reduce our traditional publishing in the future.”

    This was also the argument that the policy team carefully laid out in its six-month follow-up blog post, which they discussed as I sat in on a meeting. “I think that is definitely part of the success-story framing,” said Miles Brundage, a policy research scientist, highlighting something in a Google doc. “The lead of this section should be: We did an ambitious thing, now some people are replicating it, and here are some reasons why it was beneficial.”

    But OpenAI’s media campaign with GPT-2 also followed a well-established pattern that has made the broader AI community leery. Over the years, the lab’s big, splashy research announcements have been repeatedly accused of fueling the AI hype cycle. More than once, critics have also accused the lab of talking up its results to the point of mischaracterization. For these reasons, many in the field have tended to keep OpenAI at arm’s length.
    [Photo: cover images of OpenAI’s research releases hang on its office wall. Credit: Christie Hemm Klok]

    This hasn’t stopped the lab from continuing to pour resources into its public image. As well as research papers, it publishes its results in highly produced company blog posts for which it does everything in-house, from writing to multimedia production to design of the cover images for each release. At one point, it also began developing a documentary on one of its projects to rival a 90-minute movie about DeepMind’s AlphaGo. It eventually spun the effort out into an independent production, which Brockman and his wife, Anna, are now partially financing. (I also agreed to appear in the documentary to provide technical explanation and context to OpenAI’s achievement. I was not compensated for this.)

    And as the blowback has increased, so have internal discussions to address it. Employees have grown frustrated at the constant outside criticism, and the leadership worries it will undermine the lab’s influence and ability to hire the best talent. An internal document highlights this problem and an outreach strategy for tackling it: “In order to have government-level policy influence, we need to be viewed as the most trusted source on ML [machine learning] research and AGI,” says a line under the “Policy” section. “Widespread support and backing from the research community is not only necessary to gain such a reputation, but will amplify our message.” Another, under “Strategy,” reads, “Explicitly treat the ML community as a comms stakeholder. Change our tone and external messaging such that we only antagonize them when we intentionally choose to.”

    There was another reason GPT-2 had triggered such an acute backlash. People felt that OpenAI was once again walking back its earlier promises of openness and transparency. With news of the for-profit transition a month later, the withheld research made people even more suspicious. Could it be that the technology had been kept under wraps in preparation for licensing it in the future?
    [Photo: Ilya Sutskever, co-founder and chief scientist. Credit: Christie Hemm Klok]

    But little did people know this wasn’t the only time OpenAI had chosen to hide its research. In fact, it had kept another effort entirely secret.

    There are two prevailing technical theories about what it will take to reach AGI. In one, all the necessary techniques already exist; it’s just a matter of figuring out how to scale and assemble them. In the other, there needs to be an entirely new paradigm; deep learning, the current dominant technique in AI, won’t be enough.

    Most researchers fall somewhere between these extremes, but OpenAI has consistently sat almost exclusively on the scale-and-assemble end of the spectrum. Most of its breakthroughs have been the product of sinking dramatically greater computational resources into technical innovations developed in other labs.

    Brockman and Sutskever deny that this is their sole strategy, but the lab’s tightly guarded research suggests otherwise. A team called “Foresight” runs experiments to test how far they can push AI capabilities forward by training existing algorithms with increasingly large amounts of data and computing power. For the leadership, the results of these experiments have confirmed its instincts that the lab’s all-in, compute-driven strategy is the best approach.

    For roughly six months, these results were hidden from the public because OpenAI sees this knowledge as its primary competitive advantage. Employees and interns were explicitly instructed not to reveal them, and those who left signed nondisclosure agreements. It was only in January that the team, without the usual fanfare, quietly posted a paper on one of the primary open-source databases for AI research. People who experienced the intense secrecy around the effort didn’t know what to make of this change. Notably, another paper with similar results from different researchers had been posted a few months earlier.
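    That line of work studies how test loss falls as compute, data, and model size grow; its characteristic result is a power law, L(C) ≈ a·C^(−b), which shows up as a straight line on log-log axes. A minimal sketch of that style of analysis, with synthetic measurements standing in for real training runs (the exponent and constants here are invented for illustration):

        import numpy as np

        # Synthetic "loss vs. compute" points following a noisy power law.
        rng = np.random.default_rng(0)
        compute = np.logspace(3, 9, 20)                     # arbitrary compute units
        loss = 50.0 * compute ** -0.07 * rng.lognormal(0.0, 0.01, 20)

        # A power law is a straight line in log-log space: log L = log a - b*log C.
        slope, log_a = np.polyfit(np.log(compute), np.log(loss), 1)
        print(f"fitted exponent: {slope:.3f}")   # ~ -0.07
        # Each 10x of compute multiplies loss by 10**slope, here ~0.85 (a ~15% drop).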
    [Photo: AI books. Credit: Christie Hemm Klok]

    In the beginning, this level of secrecy was never the intention, but it has since become habitual. Over time, the leadership has moved away from its original belief that openness is the best way to build beneficial AGI. Now the importance of keeping quiet is impressed on those who work with or at the lab. This includes never speaking to reporters without the express permission of the communications team. After my initial visits to the office, as I began contacting different employees, I received an email from the head of communications reminding me that all interview requests had to go through her. When I declined, saying that this would undermine the validity of what people told me, she instructed employees to keep her informed of my outreach. A Slack message from Clark, a former journalist, later commended people for keeping a tight lid as a reporter was “sniffing around.”

    In a statement responding to this heightened secrecy, an OpenAI spokesperson referred back to a section of its charter. “We expect that safety and security concerns will reduce our traditional publishing in the future,” the section states, “while increasing the importance of sharing safety, policy, and standards research.” The spokesperson also added: “Additionally, each of our releases is run through an infohazard process to evaluate these trade-offs and we want to release our results slowly to understand potential risks and impacts before setting loose in the wild.”

    One of the biggest secrets is the project OpenAI is working on next. Sources described it to me as the culmination of its previous four years of research: an AI system trained on images, text, and other data using massive computational resources. A small team has been assigned to the initial effort, with an expectation that other teams, along with their work, will eventually fold in. On the day it was announced at an all-company meeting, interns weren’t allowed to attend. People familiar with the plan offer an explanation: the leadership thinks this is the most promising way to reach AGI.

    The man driving OpenAI’s strategy is Dario Amodei, the ex-Googler who now serves as research director. When I meet him, he strikes me as a more anxious version of Brockman. He has a similar sincerity and sensitivity, but an air of unsettled nervous energy. He looks distant when he talks, his brows furrowed, a hand absentmindedly tugging his curls.

    Amodei divides the lab’s strategy into two parts. The first part, which dictates how it plans to reach advanced AI capabilities, he likens to an investor’s “portfolio of bets.” Different teams at OpenAI are playing out different bets. The language team, for example, has its money on a theory postulating that AI can develop a significant understanding of the world through mere language learning. The robotics team, in contrast, is advancing an opposing theory that intelligence requires a physical embodiment to develop.

    As in an investor’s portfolio, not every bet has an equal weight. But for the purposes of scientific rigor, all should be tested before being discarded. Amodei points to GPT-2, with its remarkably realistic auto-generated texts, as an instance of why it’s important to keep an open mind. “Pure language is a direction that the field and even some of us were somewhat skeptical of,” he says. “But now it’s like, ‘Wow, this is really promising.’”

    Over time, as different bets rise above others, they will attract more intense efforts. Then they will cross-pollinate and combine. The goal is to have fewer and fewer teams that ultimately collapse into a single technical direction for AGI. This is the exact process that OpenAI’s latest top-secret project has supposedly already begun.
    [Photo: Dario Amodei, research director. Credit: Christie Hemm Klok]

    The second part of the strategy, Amodei explains, focuses on how to make such ever-advancing AI systems safe. This includes making sure that they reflect human values, can explain the logic behind their decisions, and can learn without harming people in the process. Teams dedicated to each of these safety goals seek to develop methods that can be applied across projects as they mature. Techniques developed by the explainability team, for example, may be used to expose the logic behind GPT-2’s sentence constructions or a robot’s movements.
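    One common (if contested) technique in that vein is to inspect a model’s attention weights to see which earlier tokens it leaned on while processing each word. A minimal sketch against the public GPT-2 weights via Hugging Face transformers; this illustrates the general approach, not the team’s actual methods:

        import torch
        from transformers import GPT2Tokenizer, GPT2Model

        tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
        model = GPT2Model.from_pretrained("gpt2", output_attentions=True)

        inputs = tokenizer("The robot picked up the block because it", return_tensors="pt")
        with torch.no_grad():
            outputs = model(**inputs)

        # outputs.attentions: one tensor per layer, shape (batch, heads, seq, seq).
        attn = outputs.attentions[-1][0].mean(dim=0)   # last layer, averaged over heads
        tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
        for i, token in enumerate(tokens):
            strongest = attn[i].argmax().item()
            print(f"{token:>10} attends most to {tokens[strongest]!r}")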

    Amodei admits this part of the strategy is somewhat haphazard, built less on established theories in the field and more on gut feeling. “At some point we’re going to build AGI, and by that time I want to feel good about these systems operating in the world,” he says. “Anything where I don’t currently feel good, I create and recruit a team to focus on that thing.”

    For all the publicity-chasing and secrecy, Amodei looks sincere when he says this. The possibility of failure seems to disturb him.

    “We’re in the awkward position of: we don’t know what AGI looks like,” he says. “We don’t know when it’s going to happen.” Then, with careful self-awareness, he adds: “The mind of any given person is limited. The best thing I’ve found is hiring other safety researchers who often have visions which are different than the natural thing I might’ve thought of. I want that kind of variation and diversity because that’s the only way that you catch everything.”

    The thing is, OpenAI actually has little “variation and diversity”—a fact hammered home on my third day at the office. During the one lunch I was granted to mingle with employees, I sat down at the most visibly diverse table by a large margin. Less than a minute later, I realized that the people eating there were not, in fact, OpenAI employees. Neuralink, Musk’s startup working on computer-brain interfaces, shares the same building and dining room.
    [Photo: Daniela Amodei, head of people operations. Credit: Christie Hemm Klok]

    According to a lab spokesperson, out of the over 120 employees, 25% are female or nonbinary. There are also two women on the executive team and the leadership team is 30% women, she said, though she didn’t specify who was counted among these teams. (All four C-suite executives, including Brockman and Altman, are white men. Out of over 112 employees I identified on LinkedIn and other sources, the overwhelming number were white or Asian.)

    In fairness, this lack of diversity is typical in AI. Last year a report from the New York–based research institute AI Now found that women accounted for only 18% of authors at leading AI conferences, 20% of AI professorships, and 15% and 10% of research staff at Facebook and Google, respectively. “There is definitely still a lot of work to be done across academia and industry,” OpenAI’s spokesperson said. “Diversity and inclusion is something we take seriously and are continually working to improve by working with initiatives like WiML, Girl Geek, and our Scholars program.”

    Indeed, OpenAI has tried to broaden its talent pool. It began its remote Scholars program for underrepresented minorities in 2018. But only two of the first eight scholars became full-time employees, even though they reported positive experiences. The most common reason for declining to stay: the requirement to live in San Francisco. For Nadja Rhodes, a former scholar who is now the lead machine-learning engineer at a New York–based company, the city just had too little diversity.

    But if diversity is a problem for the AI industry in general, it’s something more existential for a company whose mission is to spread the technology evenly to everyone. The fact is that it lacks representation from the groups most at risk of being left out.

    Nor is it at all clear just how OpenAI plans to “distribute the benefits” of AGI to “all of humanity,” as Brockman frequently says in citing its mission. The leadership speaks of this in vague terms and has done little to flesh out the specifics. (In January, the Future of Humanity Institute at Oxford University released a report in collaboration with the lab proposing to distribute benefits by distributing a percentage of profits. But the authors cited “significant unresolved issues regarding … the way in which it would be implemented.”) “This is my biggest problem with OpenAI,” says a former employee, who spoke on condition of anonymity.
    [Photo: office space. Credit: Christie Hemm Klok]

    “They are using sophisticated technical practices to try to answer social problems with AI,” echoes Britt Paris of Rutgers. “It seems like they don’t really have the capabilities to actually understand the social. They just understand that that’s a sort of a lucrative place to be positioning themselves right now.”

    Brockman agrees that both technical and social expertise will ultimately be necessary for OpenAI to achieve its mission. But he disagrees that the social issues need to be solved from the very beginning. “How exactly do you bake ethics in, or these other perspectives in? And when do you bring them in, and how? One strategy you could pursue is to, from the very beginning, try to bake in everything you might possibly need,” he says. “I don’t think that that strategy is likely to succeed.”

    The first thing to figure out, he says, is what AGI will even look like. Only then will it be time to “make sure that we are understanding the ramifications.”

    Last summer, in the weeks after the switch to a capped-profit model and the $1 billion injection from Microsoft, the leadership assured employees that these updates wouldn’t functionally change OpenAI’s approach to research. Microsoft was well aligned with the lab’s values, and any commercialization efforts would be far away; the pursuit of fundamental questions would still remain at the core of the work.

    For a while, these assurances seemed to hold true, and projects continued as they were. Many employees didn’t even know what promises, if any, had been made to Microsoft.

    But in recent months, the pressure of commercialization has intensified, and the need to produce money-making research no longer feels like something in the distant future. When Altman privately shared his 2020 vision for the lab with employees, his message was clear: OpenAI needs to make money in order to do research—not the other way around.

    This is a hard but necessary trade-off, the leadership has said—one it had to make for lack of wealthy philanthropic donors. By contrast, Seattle-based AI2, a nonprofit that ambitiously advances fundamental AI research, receives its funds from a self-sustaining (at least for the foreseeable future) pool of money left behind by the late Paul Allen, a billionaire best known for cofounding Microsoft.

    But the truth is that OpenAI faces this trade-off not only because it’s not rich, but also because it made the strategic choice to try to reach AGI before anyone else. That pressure forces it to make decisions that seem to land farther and farther away from its original intention. It leans into hype in its rush to attract funding and talent, guards its research in the hopes of keeping the upper hand, and chases a computationally heavy strategy—not because it’s seen as the only way to AGI, but because it seems like the fastest.

    Yet OpenAI is still a bastion of talent and cutting-edge research, filled with people who are sincerely striving to work for the benefit of humanity. In other words, it still has the most important elements, and there’s still time for it to change.

    Near the end of my interview with Rhodes, the former remote scholar, I ask her the one thing about OpenAI that I shouldn’t omit from this profile. “I guess in my opinion, there’s problems,” she begins hesitantly. “Some of them come from maybe the environment it faces; some of them come from the type of people that it tends to attract and other people that it leaves out.”

    “But to me, it feels like they are doing something a little bit right,” she says. “I got a sense that the folks there are earnestly trying.”

    Update: We made some changes to this story after OpenAI asked us to clarify that when Greg Brockman said he didn’t think it was possible to “bake ethics in… from the very beginning” when developing AI, he intended it to mean that ethical questions couldn’t be solved from the beginning, not that they couldn’t be addressed from the beginning. Also, that after dropping out of Harvard he transferred straight to MIT rather than waiting a year. Also, that he was raised not “on a farm,” but “on a hobby farm.” Brockman considers this distinction important.

    In addition, we have clarified that while OpenAI did indeed “shed its nonprofit status,” a board that is part of a nonprofit entity still oversees it, and that OpenAI publishes its research in the form of company blog posts as well as, not in lieu of, research papers. We’ve also corrected the date of publication of a paper by outside researchers and the affiliation of Peter Eckersley (former, not current, research director of Partnership on AI, which he recently left).

    #capitalisme #benevolat #intelligence_artificielle #USA #idéologie #effective_altruism

    • With #Big_data and #Intelligence_artificielle (#IA), #ressources_humaines (sic) #RH will now be able to move to real-time #gouvernance_algorithmique (#data_driven) (“deliver value faster”)

      While historically management consulting firms have viewed a highly talented workforce as their key asset, the emergence of data technologies has prompted them to turn to the productization of their offerings. According to “Killing Strategy: The Disruption Of Management Consulting” report by CB Insights, one of the main reasons for the disruption of the management consulting industry is the increasing pace of digitalization, and in particular, the expansion of Artificial Intelligence and Big Data capabilities. Incumbents in the consulting world are recognizing competitive pressure coming from smaller industry players, which leverage modern data analytics and visualization technologies to deliver value faster. At the same time, clients of major consulting companies are investing in software systems to collect and analyze data, aiming to empower their managers with data-driven decision-making tools.

      (the author is a product leader at Google and, incidentally, founded a #coaching company: “Our mission is to help talented product managers prepare for their job interviews in the most effective ways - ways that land them the offer they’re hoping for!”)

  • Google C.E.O. Sundar Pichai on the A.I. Moment: ‘You Will See Us Be Bold’ - The New York Times
    https://www.nytimes.com/2023/03/31/technology/google-pichai-ai.html

    Sundar Pichai has been trying to start an A.I. revolution for a very long time.

    In 2016, shortly after being named Google’s chief executive, Mr. Pichai declared that Google was an “A.I.-first” company. He spent lavishly to assemble an all-star team of A.I. researchers, whose breakthroughs powered changes to products like Google Translate and Google Photos. He even predicted that A.I.’s impact would be bigger than “electricity or fire.”

    So it had to sting when A.I.’s big moment finally arrived, and Google wasn’t involved.

    Instead, OpenAI — a scrappy A.I. start-up backed by Microsoft — stole the spotlight in November by releasing ChatGPT, a poem-writing, code-generating, homework-finishing marvel. ChatGPT became an overnight sensation, attracting millions of users and kicking off a Silicon Valley frenzy. It made Google look sluggish and vulnerable for the first time in years. (It didn’t help when Microsoft relaunched its Bing search engine with OpenAI’s technology inside, instantly ending Bing’s decade-long run as a punchline.)

    In an interview with The Times’s “Hard Fork” podcast on Thursday, his first extended interview since ChatGPT’s launch, Mr. Pichai said he was glad that A.I. was having a moment, even if Google wasn’t the driving force.

    #Intelligence_artificielle #Google

  • Italy blocks the use of #ChatGPT
    https://www.france24.com/fr/%C3%A9co-tech/20230331-l-italie-bloque-l-usage-de-l-intelligence-artificielle-chatgpt

    In a statement, the Italian data protection authority warns that its decision takes “immediate effect” and accuses the conversational bot of failing to comply with European regulations and of not verifying the age of underage users.

    #ia #intelligence_artificielle #OpenAI

    • ChatGPT allowed again in Italy
      https://www.liberation.fr/economie/economie-numerique/chatgpt-de-nouveau-autorise-en-italie-20230429_HZAXWZDVXFBYLP2H5IHUDVJQBU

      Blocked a month ago for violating personal data legislation, the artificial intelligence program ChatGPT has been allowed again in Italy since Friday. “ChatGPT is available again for our users in Italy. We are delighted to welcome them back and remain committed to protecting their personal data,” an OpenAI spokesperson said on Friday, April 28.

      The Italian data protection authority had blocked ChatGPT at the end of March, accusing it of failing to comply with European regulations and of having no system to verify the age of underage users. The authority also faulted ChatGPT for “the absence of a notice to users whose data is collected by OpenAI […] for the purpose of ‘training’ the algorithms that run the platform.”

      Moreover, while the program is intended for people over 13, the authority “emphasized the fact that the absence of any filter to verify users’ age exposes minors to answers absolutely unsuited to their level of development.”

      Sonnets and computer code
      OpenAI now publishes information on its site about how it “collects” and “uses training-related data,” and gives its personal data policy “greater visibility” on the ChatGPT and OpenAI home pages. The company also says it has put in place a tool “allowing users’ age to be verified in Italy” once they log on.

      The Italian authority therefore acknowledged on Friday “the steps forward taken to reconcile technological progress with respect for people’s rights.”

      ChatGPT appeared in November and was quickly flooded by users impressed by its ability to answer difficult questions clearly, and to write sonnets or computer code. Funded in particular by the computing giant Microsoft, which has added it to several of its services, it is sometimes presented as a potential competitor to the Google search engine.

      On April 13, the day the European Union launched a working group to foster European cooperation on the subject, Spain announced it was opening an investigation into ChatGPT.

  • The Only Way to Deal With the Threat From AI? Shut It Down | Time

    https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough

    Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.

    Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.

    Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”

    Well, we already had the #Nucléaires and #ChangementClimatique monsters. We can now add #IntelligenceArtificielle to the threats at the scale of the human species.