• Alexandre Jardin presents an AI-powered writing tool

The writer Alexandre Jardin (Les Magiciens, Frères) has presented a new project, provisionally titled Yourscrib.ai, based on artificial intelligence. This writing-assistance tool, which aims to offer an equivalent of editorial guidance, has the capacity to "reveal the mental DNA of authors," according to Jardin, to help them finish their literary projects.

    #Intelligence_artificielle #Foutaise #Ecriture #Edition

  • Emory University awarded two students $10,000 for their AI study tool, then suspended them

    So much fun!!!
    Emory University awards a prize for an AI system... then expels the students who built it.

Individuals and organizations are still struggling with how, and how much, to integrate AI into daily life. Rarely has that been clearer than in a case out of Emory University, in which the school went from awarding students a $10,000 entrepreneurship prize for their AI-powered studying tool to suspending them for it, 404 Media reports. No, the students didn't suddenly misuse the tool, known as Eightball, in any way; they did just as they said they would, and all the while, Emory promoted them, right up until it didn't.

    Eightball allowed students to turn any coursework or readings into practice tests or flashcards for studying. It also connected to Canvas — the platform professors at Emory use to share course documents with their students. A demo video for Eightball called it similar to ChatGPT but trained on Canvas courses, looking at everything from lectures to slides, rather than students having to upload each PDF individually to the tool.
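As a rough illustration of the underlying idea (not Eightball's actual implementation, which by the article's description used a ChatGPT-like model over Canvas content), course text can be turned into simple cloze-style flashcards; everything below is a hypothetical stand-in:

```python
# Toy cloze-flashcard generator: split text into sentences, then blank
# out each keyword that appears, yielding (question, answer) pairs.
# This is only a sketch of the concept, not Eightball's method.
import re

def cloze_cards(text: str, keywords: list) -> list:
    cards = []
    # Split on sentence-ending punctuation followed by whitespace.
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        for kw in keywords:
            if kw.lower() in sentence.lower():
                # Blank the keyword out, case-insensitively.
                question = re.sub(re.escape(kw), "____", sentence,
                                  flags=re.IGNORECASE)
                cards.append((question, kw))
    return cards

cards = cloze_cards("Mitochondria produce ATP. Ribosomes build proteins.",
                    ["ATP", "Ribosomes"])
```

An LLM-backed version would replace the keyword-blanking step with generated questions, but the pipeline shape (ingest course material, emit practice items) is the same.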

Emory’s Honor Council accused Eightball’s creators of cheating, plagiarism, and helping other students violate the Honor Code in November 2023, and the duo shut the tool down. The Council also claimed Eightball connected to Canvas without permission, even though that integration had been described openly during the Spring 2023 awards competition. The body launched an investigation into the students, which found that Eightball hadn’t assisted with cheating and that the student creators had never lied about its capabilities.

Yet the Honor Council recommended a one-year suspension for one of the students, Benjamin Craver, and expulsion for the other, who had come up with the idea for Eightball. The Council’s director called the situation “unprecedented” due to the harm it could cause at Emory. Craver was eventually suspended for the summer and fall 2024 semesters, after which he would need to apply for readmission. He was also given a mark on his permanent record and required to complete an educational program. His co-creator received a one-year suspension.

Craver filed a lawsuit against Emory on May 20 detailing how Eightball came to be, professors’ support and use of it, articles promoting it in the university’s newspaper, and how the students had always been transparent about its use. Among other evidence, the lawsuit also shares words of support for Eightball from the associate dean of Emory’s business school following the award, and her choice to connect the students with an outside entrepreneur, an Emory alumnus. “While nothing about Eightball changed, Emory’s view of Eightball changed dramatically,” Craver’s lawsuit states. “Emory concedes that there is no evidence that anyone has ever used Eightball to cheat. And to this day Emory advertises Eightball as an example of student innovation and entrepreneurship.”

#Intelligence_artificielle #Université #Fun

  • Navigating the Rising Tide of AI-Generated Publications: Insights from ACSE Advisory Cabinet

In the rapidly evolving landscape of scholarly publishing, new challenges continue to emerge, and organizations like the Asian Council of Science Editors (ACSE) must remain vigilant and proactive in addressing them. One such pressing issue that recently came to the forefront is the proliferation of AI-generated publications by leading publishers.

    Recognizing the significance of this new development, ACSE convened its Advisory Cabinet to thoroughly examine the case and explore potential solutions to mitigate such occurrences in the future. In this article, we explore the insights and recommendations provided by our esteemed panel of advisors, which we hope will shed more light on the ongoing discourse around AI use in content development and peer review in publications.
    Details of Submitted Case:

    Retraction Watch Uncovers AI-Generated Paper Published by Elsevier
    In a surprising turn of events, a tweet by Retraction Watch has ignited a firestorm on Twitter regarding an article published by Elsevier that was seemingly authored by the AI language model ChatGPT. The paper in question was titled “The three-dimensional porous mesh structure of Cu-based metal-organic-framework - aramid cellulose separator enhances the electrochemical performance of lithium metal anode batteries,” and was authored by Manchu Zhang, Lining Wu, Tao Yang, Bing Zhu, and Yangai Liu. It appeared in the peer-reviewed journal “Surfaces and Interfaces,” Volume 46, March 2024 (DOI: 10.1016/j.surfin.2024.104081).

The tweet quickly went viral, prompting widespread discussions among the scientific community and beyond. Many have expressed astonishment that an AI language model could generate a paper detailed enough to pass through the editorial process of a reputable publisher like Elsevier.

Upon closer examination of the paper, it appears that the entire editorial process, including the reviewing team, may have overlooked the fact that an AI language model generated the paper.

    This revelation has raised significant questions about the reliability of the peer review process and the potential implications of AI-generated content in scholarly publishing. While AI technologies have shown remarkable capabilities in generating content efficiently, this incident underscores the need for greater scrutiny and oversight in the editorial process.

Thanks to @simplicissimus for the tip.
    #Intelligence_artificielle #Publications_scientifiques

  • Food and Drink - Lummi Photos

Glacial, icy... and fascinating.

Beyond supplying images, this stock-image bank should also make it possible to study the mindset of creators who turn to AI: a pre-defined, standardized aesthetic, eye-catching, colorful, smooth, and soulless.

    The best free stock photos and royalty free images. Powered by robots everywhere.

    #Intelligence_artificielle #Images #Création_par_IA #Mentalité_publicitaire

• The automated Fortress Europe: No place for human rights

29,000 people have died in the Mediterranean over the past ten years while trying to reach the EU. You would think the EU would want this tragedy to stop and that scientists across Europe would be working feverishly to make that happen with the latest technology. The opposite is the case: with the help of so-called Artificial Intelligence, digital border walls are being raised, financed with taxpayers’ money.

    Drones, satellites, and other digital monitoring systems: For decades, the EU’s external borders have been upgraded with state-of-the-art surveillance technology to create so-called smart borders. Now, algorithms and Artificial Intelligence are increasingly adding to the wall.

    Their development is funded with millions of euros by EU research programs with names like Horizon 2020 or Horizon Europe. The funded projects read like a catalog of surveillance technologies. Instead of trying to save people from losing their lives, they put all of us in danger.

    It doesn’t come as a surprise that most initiatives are kept secret. The public learns next to nothing about them. Law enforcement and border authorities prefer not to be bothered with giving insights into their work. They try to avoid a democratic debate about the research and development of this sort of AI-driven surveillance technology.

When we asked for information on research projects in which such systems are being developed, we received many responses that provided no substantial information.

The European Research Executive Agency (REA) is mandated by the EU Commission to fund and manage innovative projects in virtually all areas of research, including Horizon 2020. Still, the REA isn’t particularly outspoken about its research projects.

    We had tried, for example, to obtain details about the ROBORDER project‘s “methodology applied for the evaluation of the system performance” through access to information requests. At first, we were denied it in reference to the “protection of the public interest as regards public security.” The identity and affiliation of individuals involved in the ethics review process would also not be shared, to protect their “privacy and integrity.” REA also cited “commercial interests” and the protection of intellectual property as lawful grounds to refuse disclosure: “releasing this information into public domain would give the competitors of the consortium an unfair advantage, as the competitors would be able to use this sensitive commercial information in their favour.” These reasons given to us to avoid disclosure were common reactions to all the requests we sent out. But in the end, REA did provide us with information on the methodology.

    More transparency is urgently needed. ROBORDER aims at developing unmanned vehicles to patrol EU borders, capable of operating in swarms. Such capabilities would most likely be of interest to the military as well. In fact, research by AlgorithmWatch and ZDF Magazin Royale shows that in a market analysis conducted within the ROBORDER project, “military units” have been identified as potential users of the system. Documents we obtained show that members of the research team met with prospective officers of the Greek Navy to introduce the ROBORDER system.

    Military applications would exclude ROBORDER from Horizon 2020 funding, which is reserved for civilian applications. However, an EU Commission’s spokesperson said that the mere fact that a “military audience” was also chosen to disseminate the project does not “per se call into question the exclusively civilian application of the activities carried out within the framework of this project.”

    The ROBORDER project was executed as planned until its scheduled end in 2021. Its output contributed to later projects. At a national level, one is REACTION, which is funded by the EU’s Border Management and Visa Instrument and coordinated by the Greek Ministry of Immigration and Asylum. AlgorithmWatch and ZDF Magazin Royale tried to ask the Greek research center CERTH – which coordinated ROBORDER and is now working on REACTION – what results or components exactly were adopted, but we didn’t get an answer.

Due to our persistence, we managed to obtain documents for various EU-funded projects. Some of the documents we received were so heavily redacted that it was impossible to tell what they were about. The grant agreement and the annexes to the NESTOR project contained 169 consecutive redacted pages.

    An automated Fortress Europe would also impact everyone’s rights, since the technology it facilitates allows governments to find out everything about us.

    How do they do it, you ask? By using face recognition, for example, and by reducing your identity to your face and other measurable biometric features. Faces can be captured and analyzed by increasingly sophisticated biometric recognition systems. In the D4FLY project, they combine “2D+thermal facial, 3D facial, iris and somatotype biometrics.” In projects such as iBorderCtrl, they examine emotions and “micro-expressions,” fleeting facial expressions that last only fractions of a second, to assess whether travelers are lying to (virtual) border officials. That way, risk assessments are automatically created, which could lead to stricter security checks at EU borders.

    Such EU-funded projects are designed to digitalize, computerize, and automate human mobility. The EU envisions a future where law-abiding travelers enjoy uninterrupted freedom, while “risky” people are automatically flagged for further checks.

As Frontex’ deputy executive director, Uku Särekanno, put it in a recent interview: “What comes next is a very serious discussion on automation. We are looking into how, in the next five to ten years, we can have more automated border crossings and a more seamless travel experience.”

    According to various scientists, this is the result of over two decades’ work, ultimately leading to total remote surveillance and thus to a perfect panoptic society, in which we are utterly dominated by such digital technologies and the underlying logic of security policy.


Checking people requires time and resources. Therefore, some projects aim to automatically “relieve” border officials, which in practice means turning them into auxiliaries of automated systems that are falsely assumed to be more objective or reliable.

Automated systems are supposed to detect “abnormal behavior,” increase “situation awareness,” and derive real-time information and predictions (“nowcasts”) from multiple sensors attached to individuals and groups, but also to freighters and other vehicles. Migration movements are to be predicted algorithmically, by analyzing Google Trends data, content on social media platforms such as Facebook and X (formerly Twitter), and “quantitative (geo-located) indicators of telephone conversations.” But such automated systems cannot replace political decisions by simply feeding available data to algorithms: decisions have to be justified, and political decisions must come first rather than being treated as a mere byproduct of technological solutions.

Risks become apparent by looking at the ITFLOWS project’s EuMigraTool. It includes “monthly predictions of asylum applications in the EU” and is supposed to “identify the potential risks of tensions between migrants and EU citizens” by providing “intuitions” on the “attitudes towards migration” in the EU, using “Twitter Sentiment Analysis model data as input”. The project’s own Users Board, in which organizations such as the Red Cross and Oxfam are represented, warned in a statement that “misuse could entail closing of borders, instigating violence, and misuse for political purposes to gain support and consensus for an anti-migration policy.” The tool was developed nonetheless.
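To make the kind of aggregation that project description implies concrete, here is a deliberately crude sketch: lexicon-based sentiment scores over tweets, averaged into a monthly "attitude" indicator. All names, lexicon entries, and numbers are hypothetical; EuMigraTool's actual model is more complex and not public in this form.

```python
# Hypothetical mini-version of a sentiment-based "attitude" nowcast.
NEGATIVE = {"crisis", "invasion", "threat"}
POSITIVE = {"welcome", "solidarity", "help"}

def score(tweet: str) -> int:
    """Crude lexicon score: positive word hits minus negative word hits."""
    words = set(tweet.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def monthly_attitude(tweets_by_month: dict) -> dict:
    """Average tweet score per month: the 'nowcast' indicator."""
    return {month: sum(score(t) for t in tweets) / max(len(tweets), 1)
            for month, tweets in tweets_by_month.items()}

indicator = monthly_attitude({
    "2024-03": ["we welcome refugees with solidarity", "this is a crisis"],
})
```

Even this toy version shows why the Users Board worried: the indicator is trivially swayed by wording and says nothing about why attitudes shift, yet it yields a single number that invites policy conclusions.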

    In these EU-funded projects, people on the move are constantly portrayed as a threat to security. The FOLDOUT project explicates this core premise in all frankness: “in the last years irregular migration has dramatically increased,” therefore it was “no longer manageable with existing systems.” Law enforcement and border agencies now assume that in order to “stay one step ahead” of criminals and terrorists, automation needs to become the norm, especially in migration-related contexts.


A driving force in border security is also one of the main customers: Frontex. Founded in 2004, the European Border and Coast Guard Agency has played an increasingly important role in the EU’s research and innovation projects in recent years. The agency’s budget has increased by 194 percent compared to the previous budget, and by an incredible 13,200 percent in the last 20 years. But Frontex’ influence goes far beyond the money at its disposal. The agency intervened to “help,” “actively participate in,” and “push forward” several Horizon 2020 projects, addressing “a wide spectrum of technological capabilities critical for border security,” including Artificial Intelligence, augmented reality, or virtual reality.

In 2020, the agency formalized its collaboration with the EU Commission’s Directorate-General for Migration and Home Affairs (DG-HOME). This allowed Frontex to provide assistance to DG-HOME “in the areas of programming, monitoring and the uptake of projects results.” The agency is now responsible for “identifying research activities,” evaluating research proposals, and supervising the “operational relevance” of Horizon Europe research projects.

    The agency therefore joined EU-funded projects trials, demonstrations, and workshops, held events involving EU-funded projects, and even created a laboratory (the Border Management Innovation Centre, BoMIC) to help implement EU-funded projects in border security. This is complemented with Frontex’s own “Research Grants Programme”, whose first call for proposals was announced in November 2022, to “bring promising ideas from the lab to real applications in border security.”

    The NESTOR project promises “an entirely functional, next-generation, comprehensive border surveillance system offering pre-frontier situational awareness beyond sea and land borders.” The system is based on optical, thermal imaging, and radio frequency spectrum analysis technologies. Such data will be “fed by an interoperable sensors network” comprised of both stationary installations and mobile manned or unmanned vehicles (that can operate underwater, on water surfaces, on the ground, or in the air). The vehicles are also capable of functioning in swarms. This allows for detecting, recognizing, classifying, and tracking “moving targets” such as persons, vessels, vehicles, or drones. A “Border Command, Control, and Coordination intelligence system” would adopt “cutting-edge Artificial Intelligence and Risk Assessment technologies”, fusing “in real-time the surveillance data in combination with analysis of web and social media data.”

    The key term here is “pre-frontier awareness.” According to the EU, “pre-frontier” refers to “the geographical area beyond the external borders which is relevant for managing the external borders through risk analysis and situational awareness.” Or, to put it bluntly: the very notion of “border” ultimately dissolves into whatever the authorities want it to mean.

The list of projects could go on and on (see the box below), but you get the gist of the EU’s approach: it perceives migrants as a threat and wants to better protect its borders from them through ever-improving automation and ever-increasing surveillance − far beyond the existing borders. The EU conjures up the image of a migration “crisis” that we can only hope to end through technological solutions.

This belief is extensively and increasingly affirmed and shaped by the border and coast guard community in lockstep with the surveillance and security industries, as has been well documented. But it threatens social justice, non-discrimination, fairness, and a basic respect for fundamental rights. “Ethics assessments” only scratch the surface of the complexity of automating migration; the systems get developed anyway, even when the assessments fundamentally question whether their use can be justified at all. Many of these projects should never have been funded in the first place and should not be pursued.

    #AI #IA #intelligence_artificielle #migrations #réfugiés #contrôles_frontaliers #mur_digital #frontières_digitales #technologie #drones #satellites #frontières_intelligentes #smart_borders #Horizon_2020 #Horizon_Europe #surveillance #complexe_militaro-industriel #European_Research_Executive_Agency (#REA) #recherche #ROBORDER #REACTION #Border_Management_and_Visa_Instrument #CERTH #Grèce #NESTOR #biométrie #D4FLY #iBorderCtrl #Frontex #ITFLOWS #risques #EuMigraTool #FOLDOUT #pré-frontière

    ping @reka

• The AI Act adopted: the techno-solutionist headlong rush can continue – La Quadrature du Net

Meeting within the Council of the European Union, the member states yesterday adopted the AI Act, dedicated to regulating Artificial Intelligence systems. This step marks the definitive adoption of a legislative edifice under discussion since 2021, initially presented as an instrument to protect rights and freedoms against the steamroller of AI. In the end, far from the initial promises and the emphatic commentary, the text is tailor-made for the tech industry, European police forces, and other large bureaucracies eager to automate social control. Largely based on self-regulation and riddled with exemptions, it proves utterly incapable of preventing the social, political, and environmental damage linked to the proliferation of AI.

In the end, far from protecting the values of democracy, the rule of law, and respect for the environment that the European Union still claims to embody like a beacon in the night, the AI Act remains the product of a disastrous realpolitik. Caught in the vise formed by China and the United States, it is meant above all to relaunch Europe in the race for the latest computing technologies, perceived as true yardsticks of power. Not only does this race seem lost in advance but, in running it, the European Union helps legitimize a techno-solutionist headlong rush that is dangerous for civil liberties and ecologically unsustainable.

The generalization of AI, as a technical and political paradigm, has as its main effect the multiplication of the damage caused by the over-computerization of our societies. Since it is now clear that this regulation will be of no use in curbing the current runaway, we will collectively have to consider other means of struggle.

    #Intelligence_artificielle #AIAct #Europe #Régulation

  • AI takeaways from under-the-radar innovators: Context, creation, and communication

    4. AI innovators need people skills

    “In a world of large language models (LLMs), just being able to articulate yourself well is really what’s key to be[ing] able to program these models correctly,” said Tim Hwang, author of “Subprime Attention Crisis.”

    Hwang said that because generative AI models are trained on human data, the same standards that apply to communicating with humans, such as being clear, direct, and thoughtful, apply for talking to LLMs. “The hottest new programming language is psychology,” he said, and marketers will need to understand it better than ever to work with generative AI. That’s because generative AI functions like people do, according to Hwang. It responds better when people ask nicely, or tell it to do a good job. Understanding how to work within that framework is the key to using AI.
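Hwang's "psychology as programming language" point can be made concrete with a trivial prompt template; the wording and the `build_prompt` helper below are hypothetical illustrations, not a quoted best practice:

```python
# Sketch: encode "be clear, direct, and thoughtful" (and even "ask nicely")
# as a reusable prompt structure rather than ad hoc phrasing.
def build_prompt(task: str, context: str = "") -> str:
    """Wrap a task in a clear, polite instruction for an LLM."""
    parts = [
        "You are a careful assistant. Please read the request below",
        "and take your time to do a thorough, accurate job.",
    ]
    if context:
        parts.append(f"Context: {context}")
    parts.append(f"Request: {task}")
    parts.append("Thank you. Please explain your reasoning briefly.")
    return "\n".join(parts)

prompt = build_prompt("Summarize this quarter's ad-spend trends.",
                      context="Marketing team weekly report")
```

How much the courteous framing actually helps varies by model, but the structure (role, context, explicit request, desired tone) is the part that transfers across systems.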

    #Tim_Hwang #Intelligence_artificielle

  • Opinion | Scarlett Johansson’s Voice Isn’t the Only Thing A.I. Companies Want - The New York Times

By Zeynep Tufekci

    When OpenAI introduced its virtual assistant, Sky, last week, many gasped. It sounded just like Scarlett Johansson, who had famously played an artificial intelligence voice assistant in the movie “Her.”

    On the surface, the choice made sense: Last year, Sam Altman, the C.E.O. of OpenAI, had named it his favorite science fiction movie, even posting the single word “her” around the assistant’s debut.

    OpenAI approached Johansson to be the voice for its virtual assistant, and she turned it down. The company approached her again two days before the debut of Sky, but this time, she said in a blistering statement, it didn’t even wait for her official “no” before releasing a voice that sounds so similar to hers that it even fooled her friends and family.

    In response to Johansson’s scathing letter, OpenAI claimed that the voice was someone else and “was never intended to resemble hers,” but it took Sky down anyway.

    The A.I. industry is built on grabbing our data — the output that humanity has collectively produced: books, art, music, blog posts, social media, videos — and using it to train their models, from which they then make money or use as they wish. For the most part, A.I. companies haven’t asked or paid the people who created the data they grab and whose actual employment and future are threatened by the models trained on it.

    Politicians haven’t stepped in to ask why humanity’s collective output should be usurped and monopolized by a handful of companies. They’ve practically let the industry do what it wants for decades.

    I am someone who believes in the true upside of technology, including A.I. But amid all the lofty talk about its transformational power, these companies are perpetuating an information grab, a money grab and a “break the rules and see what we can get away with” mentality that’s worked very well for them for the past few decades.

    Altman, it seems, liked Johansson’s voice, so the company made a simulacrum of it. Why not?

    When you’re a tech industry star, they let you do anything.

    #Zeynep_Tufekci #Intelligence_artificielle #OpenAI #Voice

• Can Humanity Survive AI?

The question expresses the interests of part of the inhabitants of the Californian bubble. It is nonetheless interesting, because it revolves around the dangerous ideas of a few very rich politicians and entrepreneurs. Fasten your seatbelts before starting the roller-coaster ride of this article.

    22.1.2024 by Garrison Lovely - With the development of artificial intelligence racing forward at warp speed, some of the richest men in the world may be deciding the fate of humanity right now.

    Google cofounder Larry Page thinks superintelligent AI is “just the next step in evolution.” In fact, Page, who’s worth about $120 billion, has reportedly argued that efforts to prevent AI-driven extinction and protect human consciousness are “speciesist” and “sentimental nonsense.”

    In July, former Google DeepMind senior scientist Richard Sutton — one of the pioneers of reinforcement learning, a major subfield of AI — said that the technology “could displace us from existence,” and that “we should not resist succession.” In a 2015 talk, Sutton said, suppose “everything fails” and AI “kill[s] us all”; he asked, “Is it so bad that humans are not the final form of intelligent life in the universe?”

    “Biological extinction, that’s not the point,” Sutton, sixty-six, told me. “The light of humanity and our understanding, our intelligence — our consciousness, if you will — can go on without meat humans.”

    Yoshua Bengio, fifty-nine, is the second-most cited living scientist, noted for his foundational work on deep learning. Responding to Page and Sutton, Bengio told me, “What they want, I think it’s playing dice with humanity’s future. I personally think this should be criminalized.” A bit surprised, I asked what exactly he wanted outlawed, and he said efforts to build “AI systems that could overpower us and have their own self-interest by design.” In May, Bengio began writing and speaking about how advanced AI systems might go rogue and pose an extinction risk to humanity.

    Bengio posits that future, genuinely human-level AI systems could improve their own capabilities, functionally creating a new, more intelligent species. Humanity has driven hundreds of other species extinct, largely by accident. He fears that we could be next — and he isn’t alone.

    Bengio shared the 2018 Turing Award, computing’s Nobel Prize, with fellow deep learning pioneers Yann LeCun and Geoffrey Hinton. Hinton, the most cited living scientist, made waves in May when he resigned from his senior role at Google to more freely sound off about the possibility that future AI systems could wipe out humanity. Hinton and Bengio are the two most prominent AI researchers to join the “x-risk” community. Sometimes referred to as AI safety advocates or doomers, this loose-knit group worries that AI poses an existential risk to humanity.

    In the same month that Hinton resigned from Google, hundreds of AI researchers and notable figures signed an open letter stating, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Hinton and Bengio were the lead signatories, followed by OpenAI CEO Sam Altman and the heads of other top AI labs.

    Hinton and Bengio were also the first authors of an October position paper warning about the risk of “an irreversible loss of human control over autonomous AI systems,” joined by famous academics like Nobel laureate Daniel Kahneman and Sapiens author Yuval Noah Harari.

    LeCun, who runs AI at Meta, agrees that human-level AI is coming but said in a public debate against Bengio on AI extinction, “If it’s dangerous, we won’t build it.”

    Deep learning powers the most advanced AI systems in the world, from DeepMind’s protein-folding model to large language models (LLMs) like OpenAI’s ChatGPT. No one really understands how deep learning systems work, but their performance has continued to improve nonetheless. These systems aren’t designed to function according to a set of well-understood principles but are instead “trained” to analyze patterns in large datasets, with complex behavior — like language understanding — emerging as a consequence. AI developer Connor Leahy told me, “It’s more like we’re poking something in a Petri dish” than writing a piece of code. The October position paper warns that “no one currently knows how to reliably align AI behavior with complex values.”

    In spite of all this uncertainty, AI companies see themselves as being in a race to make these systems as powerful as they can — without a workable plan to understand how the things they’re creating actually function, all while cutting corners on safety to win more market share. Artificial general intelligence (AGI) is the holy grail that leading AI labs are explicitly working toward. AGI is often defined as a system that is at least as good as humans at almost any intellectual task. It’s also the thing that Bengio and Hinton believe could lead to the end of humanity.

    Bizarrely, many of the people actively advancing AI capabilities think there’s a significant chance that doing so will ultimately cause the apocalypse. A 2022 survey of machine learning researchers found that nearly half of them thought there was at least a 10 percent chance advanced AI could lead to “human extinction or [a] similarly permanent and severe disempowerment” of humanity. Just months before he cofounded OpenAI, Altman said, “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.”

Public opinion on AI has soured, particularly in the year since ChatGPT was released. In all but one 2023 survey, more Americans than not thought that AI could pose an existential threat to humanity. In the rare instances when pollsters asked people if they wanted human-level AI or beyond, strong majorities in the United States and the UK said they didn’t.

    So far, when socialists weigh in on AI, it’s usually to highlight AI-powered discrimination or to warn about the potentially negative impact of automation in a world of weak unions and powerful capitalists. But the Left has been conspicuously quiet about Hinton and Bengio’s nightmare scenario — that advanced AI could kill us all.
    Worrying Capabilities
    Illustration by Ricardo Santos

    While much of the attention from the x-risk community focuses on the idea that humanity could eventually lose control of AI, many are also worried about less capable systems empowering bad actors on very short timelines.

    Thankfully, it’s hard to make a bioweapon. But that might change soon.

    Anthropic, a leading AI lab founded by safety-forward ex-OpenAI staff, recently worked with biosecurity experts to see how much an LLM could help an aspiring bioterrorist. Testifying before a Senate subcommittee in July, Anthropic CEO Dario Amodei reported that certain steps in bioweapons production can’t be found in textbooks or search engines, but that “today’s AI tools can fill in some of these steps, albeit incompletely,” and that “a straightforward extrapolation of today’s systems to those we expect to see in two to three years suggests a substantial risk that AI systems will be able to fill in all the missing pieces.”

    In October, New Scientist reported that Ukraine made the first battlefield use of lethal autonomous weapons (LAWs) — literally killer robots. The United States, China, and Israel are developing their own LAWs. Russia has joined the United States and Israel in opposing new international law on LAWs.

    However, the more expansive idea that AI poses an existential risk has many critics, and the roiling AI discourse is hard to parse: equally credentialed people make opposite claims about whether AI x-risk is real, and venture capitalists are signing open letters with progressive AI ethicists. And while the x-risk idea seems to be gaining ground the fastest, a major publication runs an essay seemingly every week arguing that x-risk distracts from existing harms. Meanwhile, orders of magnitude more money and people are quietly dedicated to making AI systems more powerful than to making them safer or less biased.

    Some fear not the “sci-fi” scenario where AI models get so capable they wrest control from our feeble grasp, but instead that we will entrust biased, brittle, and confabulating systems with too much responsibility, opening a more pedestrian Pandora’s box full of awful but familiar problems that scale with the algorithms causing them. This community of researchers and advocates — often labeled “AI ethics” — tends to focus on the immediate harms being wrought by AI, exploring solutions involving model accountability, algorithmic transparency, and machine learning fairness.

    I spoke with some of the most prominent voices from the AI ethics community, like computer scientists Joy Buolamwini, thirty-three, and Inioluwa Deborah Raji, twenty-seven. Each has conducted pathbreaking research into existing harms caused by discriminatory and flawed AI models whose impacts, in their view, are obscured one day and overhyped the next. Like that of many AI ethics researchers, their work blends science and activism.

    Those I spoke to within the AI ethics world largely expressed a view that, rather than facing fundamentally new challenges like the prospect of complete technological unemployment or extinction, the future of AI looks more like intensified racial discrimination in incarceration and loan decisions, the Amazon warehouse-ification of workplaces, attacks on the working poor, and a further entrenched and enriched techno-elite.

    A frequent argument from this crowd is that the extinction narrative overhypes the capabilities of Big Tech’s products and dangerously “distracts” from AI’s immediate harms. At best, they say, entertaining the x-risk idea is a waste of time and money. At worst, it leads to disastrous policy ideas.

    But many of the x-risk believers highlighted that the positions “AI causes harm now” and “AI could end the world” are not mutually exclusive. Some researchers have tried explicitly to bridge the divide between those focused on existing harms and those focused on extinction, highlighting potential shared policy goals. AI professor Sam Bowman, another person whose name is on the extinction letter, has done research to reveal and reduce algorithmic bias and reviews submissions to the main AI ethics conference. Simultaneously, Bowman has called for more researchers to work on AI safety and wrote of the “dangers of underclaiming” the abilities of LLMs.

    The x-risk community commonly invokes climate advocacy as an analogy, asking whether the focus on reducing the long-term harms of climate change dangerously distracts from the near-term harms from air pollution and oil spills.

    But by their own admission, not everyone from the x-risk side has been as diplomatic. In an August 2022 thread of spicy AI policy takes, Anthropic cofounder Jack Clark tweeted that “Some people who work on long-term/AGI-style policy tend to ignore, minimize, or just not consider the immediate problems of AI deployment/harms.”
    “AI Will Save the World”

    A third camp worries that when it comes to AI, we’re not actually moving fast enough. Prominent capitalists like billionaire Marc Andreessen agree with safety folks that AGI is possible but argue that, rather than killing us all, it will usher in an indefinite golden age of radical abundance and borderline magical technologies. This group, largely coming from Silicon Valley and commonly referred to as AI boosters, tends to worry far more that regulatory overreaction to AI will smother a transformative, world-saving technology in its crib, dooming humanity to economic stagnation.

    Some techno-optimists envision an AI-powered utopia that makes Karl Marx seem unimaginative. The Guardian recently released a mini-documentary featuring interviews from 2016 through 2019 with OpenAI’s chief scientist, Ilya Sutskever, who boldly pronounces: “AI will solve all the problems that we have today. It will solve employment, it will solve disease, it will solve poverty. But it will also create new problems.”

    Andreessen is with Sutskever — right up until the “but.” In June, Andreessen published an essay called “Why AI Will Save the World,” where he explains how AI will make “everything we care about better,” as long as we don’t regulate it to death. He followed it up in October with his “Techno-Optimist Manifesto,” which, in addition to praising a founder of Italian fascism, named as enemies of progress ideas like “existential risk,” “sustainability,” “trust and safety,” and “tech ethics.” Andreessen does not mince words, writing, “We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing [are] a form of murder.”

    Andreessen, along with “pharma bro” Martin Shkreli, is perhaps the most famous proponent of “effective accelerationism,” also called “e/acc,” a mostly online network that mixes cultish scientism, hypercapitalism, and the naturalistic fallacy. E/acc, which went viral this summer, builds on reactionary writer Nick Land’s theory of accelerationism, which argues that we need to intensify capitalism to propel ourselves into a posthuman, AI-powered future. E/acc takes this idea and adds a layer of physics and memes, mainstreaming it for a certain subset of Silicon Valley elites. It was formed in reaction to calls from “decels” to slow down AI, which have come significantly from the effective altruism (EA) community, from which e/acc takes its name.

    AI booster Richard Sutton — the scientist ready to say his goodbyes to “meat humans” — is now working at Keen AGI, a new start-up from John Carmack, the legendary programmer behind the 1990s video game Doom. The company mission, according to Carmack: “AGI or bust, by way of Mad Science!”
    Capitalism Makes It Worse

    In February, Sam Altman tweeted that Eliezer Yudkowsky might eventually “deserve the Nobel Peace Prize.” Why? Because Altman thought the autodidactic researcher and Harry Potter fan-fiction author had done “more to accelerate AGI than anyone else.” Altman cited how Yudkowsky helped DeepMind secure pivotal early-stage funding from Peter Thiel as well as Yudkowsky’s “critical” role “in the decision to start OpenAI.”

    Yudkowsky was an accelerationist before the term was even coined. At the age of seventeen — fed up with dictatorships, world hunger, and even death itself — he published a manifesto demanding the creation of a digital superintelligence to “solve” all of humanity’s problems. Over the next decade of his life, his “technophilia” turned to phobia, and in 2008 he wrote about his conversion story, admitting that “to say, I almost destroyed the world!, would have been too prideful.”

    Yudkowsky is now famous for popularizing the idea that AGI could kill everyone, and he has become the doomiest of the AI doomers. A generation of techies grew up reading Yudkowsky’s blog posts, but many of them (perhaps most consequentially, Altman) internalized his argument that AGI would be the most important thing ever more than they did his beliefs about how hard it would be to get it not to kill us.

    During our conversation, Yudkowsky compared AI to a machine that “prints gold,” right up until it “ignite[s] the atmosphere.”

    And whether or not it will ignite the atmosphere, that machine is printing gold faster than ever. The “generative AI” boom is making some people very, very rich. Since 2019, Microsoft has invested a cumulative $13 billion into OpenAI. Buoyed by the wild success of ChatGPT, Microsoft gained nearly $1 trillion in value in the year following the product’s release. Today the nearly fifty-year-old corporation is worth more than Google and Meta combined.

    Profit-maximizing actors will continue barreling forward, externalizing risks the rest of us never agreed to bear, in the pursuit of riches — or simply the glory of creating digital superintelligence, which Sutton tweeted “will be the greatest intellectual achievement of all time … whose significance is beyond humanity, beyond life, beyond good and bad.” Market pressures will likely push companies to transfer more and more power and autonomy to AI systems as they improve.

    One Google AI researcher wrote to me, “I think big corps are in such a rush to win market share that [AI] safety is seen as a kind of silly distraction.” Bengio told me he sees “a dangerous race between companies” that could get even worse.

    Panicking in response to the OpenAI-powered Bing search engine, Google declared a “code red,” “recalibrate[d]” their risk appetite, and rushed to release Bard, their LLM, over staff opposition. In internal discussions, employees called Bard “a pathological liar” and “cringe-worthy.” Google released it anyway.

    Dan Hendrycks, the director of the Center for AI Safety, said that “cutting corners on safety . . . is largely what AI development is driven by. . . . I don’t think, actually, in the presence of these intense competitive pressures, that intentions particularly matter.” Ironically, Hendrycks is also the safety adviser to xAI, Elon Musk’s latest venture.

    The three leading AI labs all began as independent, mission-driven organizations, but they are now either full subsidiaries of tech behemoths (Google DeepMind) or have taken on so many billions of dollars in investment from trillion-dollar companies that their altruistic missions may get subsumed by the endless quest for shareholder value (Anthropic has taken up to $6 billion from Google and Amazon combined, and Microsoft’s $13 billion bought them 49 percent of OpenAI’s for-profit arm). The New York Times recently reported that DeepMind’s founders became “increasingly worried about what Google would do with their inventions. In 2017, they tried to break away from the company. Google responded by increasing the salaries and stock award packages of the DeepMind founders and their staff. They stayed put.”

    One developer at a leading lab wrote to me in October that, since the leadership of these labs typically truly believes AI will obviate the need for money, profit-seeking is “largely instrumental” for fundraising purposes. But “then the investors (whether it’s a VC firm or Microsoft) exert pressure for profit-seeking.”

    Between 2020 and 2022, more than $600 billion in corporate investment flowed into the industry, and a single 2021 AI conference hosted nearly thirty thousand researchers. At the same time, a September 2022 estimate found only four hundred full-time AI safety researchers, and the primary AI ethics conference had fewer than nine hundred attendees in 2023.

    Just as software “ate the world,” we should expect AI to exhibit a similar winner-takes-all dynamic that will lead to even greater concentrations of wealth and power. Altman has predicted that the “cost of intelligence” will drop to near zero as a result of AI, and in 2021 he wrote that “even more power will shift from labor to capital.” He continued, “If public policy doesn’t adapt accordingly, most people will end up worse off than they are today.” Also in his “spicy take” thread, Jack Clark wrote, “economy-of-scale capitalism is, by nature, anti-democratic, and capex-intensive AI is therefore anti-democratic.”

    Markus Anderljung is the policy chief at GovAI, a leading AI safety think tank, and the first author on an influential white paper focused on regulating “frontier AI.” He wrote to me and said, “If you’re worried about capitalism in its current form, you should be even more worried about a world where huge parts of the economy are run by AI systems explicitly trained to maximize profit.”

    Sam Altman, circa June 2021, agreed, telling Ezra Klein about the founding philosophy of OpenAI: “One of the incentives that we were very nervous about was the incentive for unlimited profit, where more is always better. . . . And I think with these very powerful general purpose AI systems, in particular, you do not want an incentive to maximize profit indefinitely.”

    In a stunning move that has become widely seen as the biggest flash point in the AI safety debate so far, OpenAI’s nonprofit board fired CEO Sam Altman on November 17, 2023, the Friday before Thanksgiving. The board, per OpenAI’s unusual charter, has a fiduciary duty to “humanity,” rather than to investors or employees. As justification, the board vaguely cited Altman’s lack of candor but then, ironically, largely kept quiet about its decision.

    Around 3 a.m. the following Monday, Microsoft announced that Altman would be spinning up an advanced research lab with positions for every OpenAI employee, the vast majority of whom signed a letter threatening to take Microsoft’s offer if Altman wasn’t reinstated. (While he appears to be a popular CEO, it’s worth noting that the firing disrupted a planned sale of OpenAI’s employee-owned stock at a company valuation of $86 billion.) Just after 1 a.m. on Wednesday, OpenAI announced Altman’s return as CEO and two new board members: the former Twitter board chair, and former Treasury secretary Larry Summers.

    Within less than a week, OpenAI executives and Altman had collaborated with Microsoft and the company’s staff to engineer his successful return and the removal of most of the board members behind his firing. Microsoft’s first preference was having Altman back as CEO. The unexpected ouster initially sent the legacy tech giant’s stock plunging 5 percent ($140 billion), and the announcement of Altman’s reinstatement took it to an all-time high. Loath to be “blindsided” again, Microsoft is now taking a nonvoting seat on the nonprofit board.

    Immediately after Altman’s firing, X exploded, and a narrative largely fueled by online rumors and anonymously sourced articles emerged that safety-focused effective altruists on the board had fired Altman over his aggressive commercialization of OpenAI’s models at the expense of safety. Capturing the tenor of the overwhelming e/acc response, the then-pseudonymous founder @BasedBeffJezos posted, “EAs are basically terrorists. Destroying 80B of value overnight is an act of terrorism.”

    The picture that emerged from subsequent reporting was that a fundamental mistrust of Altman, not immediate concerns about AI safety, drove the board’s choice. The Wall Street Journal found that “there wasn’t one incident that led to their decision to eject Altman, but a consistent, slow erosion of trust over time that made them increasingly uneasy.”

    Weeks before the firing, Altman reportedly used dishonest tactics to try to remove board member Helen Toner over an academic paper she coauthored that he felt was critical of OpenAI’s commitment to AI safety. In the paper, Toner, an EA-aligned AI governance researcher, lauded Anthropic for avoiding “the kind of frantic corner-cutting that the release of ChatGPT appeared to spur.”

    The New Yorker reported that “some of the board’s six members found Altman manipulative and conniving.” Days after the firing, a DeepMind AI safety researcher who used to work for OpenAI wrote that Altman “lied to me on various occasions” and “was deceptive, manipulative, and worse to others,” an assessment echoed by recent reporting in Time.

    This wasn’t Altman’s first time being fired. In 2019, Y Combinator founder Paul Graham removed Altman from the incubator’s helm over concerns that he was prioritizing his own interests over those of the organization. Graham has previously said, “Sam is extremely good at becoming powerful.”

    OpenAI’s strange governance model was established specifically to prevent the corrupting influence of profit-seeking, but as the Atlantic rightly proclaimed, “the money always wins.” And more money than ever is going into advancing AI capabilities.
    Full Speed Ahead

    Recent AI progress has been driven by the culmination of many decades-long trends: increases in the amount of computing power (referred to as “compute”) and data used to train AI models, which themselves have been amplified by significant improvements in algorithmic efficiency. Since 2010, the amount of compute used to train AI models has increased roughly one-hundred-millionfold. Most of the advances we’re seeing now are the product of what was at the time a much smaller and poorer field.
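
    A rough sanity check on that figure: a hundred-millionfold increase over roughly thirteen years implies that training compute doubled about every six months, far outpacing Moore’s law. A minimal back-of-the-envelope sketch, where the thirteen-year window and the 1e8 factor are the article’s approximations rather than precise measurements:

```python
import math

# Approximate figures from the text: ~100,000,000x growth in training
# compute over the ~13 years since 2010.
growth_factor = 1e8
years = 13

doublings = math.log2(growth_factor)            # ~26.6 doublings
doubling_time_months = years * 12 / doublings   # ~5.9 months per doubling

print(f"{doublings:.1f} doublings, one every {doubling_time_months:.1f} months")
```

    By comparison, Moore’s law’s two-year doubling would yield only six or seven doublings over the same window.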

    And while the last year has certainly contained more than its fair share of AI hype, the confluence of these three trends has led to quantifiable results. The time it takes AI systems to achieve human-level performance on many benchmark tasks has shortened dramatically in the last decade.

    It’s possible, of course, that AI capability gains will hit a wall. Researchers may run out of good data to use. Moore’s law — the observation that the number of transistors on a microchip doubles every two years — will eventually become history. Political events could disrupt manufacturing and supply chains, driving up compute costs. And scaling up systems may no longer lead to better performance.

    But the reality is that no one knows the true limits of existing approaches. A clip of a January 2022 Yann LeCun interview resurfaced on Twitter this year. LeCun said, “I don’t think we can train a machine to be intelligent purely from text, because I think the amount of information about the world that’s contained in text is tiny compared to what we need to know.” To illustrate his point, he gave an example: “I take an object, I put it on the table, and I push the table. It’s completely obvious to you that the object would be pushed with the table.” However, with “a text-based model, if you train a machine, as powerful as it could be, your ‘GPT-5000’ . . . it’s never gonna learn about this.”

    But if you give that example to ChatGPT (running GPT-3.5), it instantly spits out the correct answer.

    In an interview published four days before his firing, Altman said, “Until we go train that model [GPT-5], it’s like a fun guessing game for us. We’re trying to get better at it, because I think it’s important from a safety perspective to predict the capabilities. But I can’t tell you here’s exactly what it’s going to do that GPT-4 didn’t.”

    History is littered with bad predictions about the pace of innovation. A New York Times editorial claimed it might take “one million to ten million years” to develop a flying machine — sixty-nine days before the Wright Brothers first flew. In 1933, Ernest Rutherford, the “father of nuclear physics,” confidently dismissed the possibility of a neutron-induced chain reaction, inspiring physicist Leo Szilard to hypothesize a working solution the very next day — a solution that ended up being foundational to the creation of the atomic bomb.

    One conclusion that seems hard to avoid is that, recently, the people who are best at building AI systems believe AGI is both possible and imminent. Perhaps the two leading AI labs, OpenAI and DeepMind, have been working toward AGI since their inception, starting at a time when admitting you believed it was possible anytime soon could get you laughed out of the room. (Ilya Sutskever led a chant of “Feel the AGI” at OpenAI’s 2022 holiday party.)
    Perfect Workers

    Employers are already using AI to surveil, control, and exploit workers. But the real dream is to cut humans out of the loop. After all, as Marx wrote, “The machine is a means for producing surplus-value.”

    Open Philanthropy (OP) AI risk researcher Ajeya Cotra wrote to me that “the logical end point of a maximally efficient capitalist or market economy” wouldn’t involve humans because “humans are just very inefficient creatures for making money.” We value all these “commercially unproductive” emotions, she writes, “so if we end up having a good time and liking the outcome, it’ll be because we started off with the power and shaped the system to be accommodating to human values.”

    OP is an EA-inspired foundation financed by Facebook cofounder Dustin Moskovitz. It’s the leading funder of AI safety organizations, many of which are mentioned in this article. OP also granted $30 million to OpenAI to support AI safety work two years before the lab spun up a for-profit arm in 2019. I previously received a onetime grant to support publishing work at New York Focus, an investigative news nonprofit covering New York politics, from EA Funds, which itself receives funding from OP. After I first encountered EA in 2017, I began donating 10 to 20 percent of my income to global health and anti–factory farming nonprofits, volunteered as a local group organizer, and worked at an adjacent global poverty nonprofit. EA was one of the earliest communities to seriously engage with AI existential risk, but I looked at the AI folks with some wariness, given the uncertainty of the problem and the immense, avoidable suffering happening now.

    A compliant AGI would be the worker capitalists can only dream of: tireless, motivated, and unburdened by the need for bathroom breaks. Managers from Frederick Taylor to Jeff Bezos resent the various ways in which humans aren’t optimized for output — and, therefore, their employer’s bottom line. Even before the days of Taylor’s scientific management, industrial capitalism has sought to make workers more like the machines they work alongside and are increasingly replaced by. As The Communist Manifesto presciently observed, capitalists’ extensive use of machinery turns a worker into “an appendage of the machine.”

    But according to the AI safety community, the very same inhuman capabilities that would make Bezos salivate also make AGI a mortal danger to humans.
    Explosion: The Extinction Case

    The common x-risk argument goes: once AI systems reach a certain threshold, they’ll be able to recursively self-improve, kicking off an “intelligence explosion.” If a new AI system becomes smart — or just scaled up — enough, it will be able to permanently disempower humanity.

    The October “Managing AI Risks” paper states:

    There is no fundamental reason why AI progress would slow or halt when it reaches human-level abilities. . . . Compared to humans, AI systems can act faster, absorb more knowledge, and communicate at a far higher bandwidth. Additionally, they can be scaled to use immense computational resources and can be replicated by the millions.

    These features have already enabled superhuman abilities: LLMs can “read” much of the internet in months, and DeepMind’s AlphaFold can perform years of human lab work in a few days.

    Here’s a stylized version of the idea of “population” growth spurring an intelligence explosion: if AI systems rival human scientists at research and development, the systems will quickly proliferate, leading to the equivalent of an enormous number of new, highly productive workers entering the economy. Put another way, if GPT-7 can perform most of the tasks of a human worker and it only costs a few bucks to put the trained model to work on a day’s worth of tasks, each instance of the model would be wildly profitable, kicking off a positive feedback loop. This could lead to a virtual “population” of billions or more digital workers, each worth much more than the cost of the energy it takes to run them. Sutskever thinks it’s likely that “the entire surface of the earth will be covered with solar panels and data centers.”
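
    The feedback loop sketched above is just compounding arithmetic. In the toy model below, every number (cost and revenue per model-day, the starting fleet) is an invented assumption for illustration; the only point is how quickly reinvested profits snowball:

```python
# Toy model: profits from digital workers are immediately reinvested in
# running more of them. All figures are made-up assumptions.
cost_per_worker_day = 5       # dollars to run one model instance for a day
revenue_per_worker_day = 50   # value of a day of its work
workers = 1_000               # starting fleet

for day in range(30):
    daily_profit = workers * (revenue_per_worker_day - cost_per_worker_day)
    workers += daily_profit // cost_per_worker_day  # profit funds new instances

print(f"fleet after 30 days: {workers:,}")
```

    With these arbitrary numbers the fleet grows tenfold per day; the specific figures don’t matter, only that any setup where a worker earns well above its running cost compounds explosively.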

    These digital workers might be able to improve on our AI designs and bootstrap their way to creating “superintelligent” systems, whose abilities Alan Turing speculated in 1951 would soon “outstrip our feeble powers.” And, as some AI safety proponents argue, an individual AI model doesn’t have to be superintelligent to pose an existential threat; there might just need to be enough copies of it. Many of my sources likened corporations to superintelligences, whose capabilities clearly exceed those of their constituent members.

    “Just unplug it,” goes the common objection. But once an AI model is powerful enough to threaten humanity, it will probably be the most valuable thing in existence. You might have an easier time “unplugging” the New York Stock Exchange or Amazon Web Services.

    A lazy superintelligence may not pose much of a risk, and skeptics like Allen Institute for AI CEO Oren Etzioni, complexity professor Melanie Mitchell, and AI Now Institute managing director Sarah Myers West all told me they haven’t seen convincing evidence that AI systems are becoming more autonomous. Anthropic’s Dario Amodei seems to agree that current systems don’t exhibit a concerning level of agency. However, a completely passive but sufficiently powerful system wielded by a bad actor is enough to worry people like Bengio.

    Further, academics and industrialists alike are increasing efforts to make AI models more autonomous. Days prior to his firing, Altman told the Financial Times: “We will make these agents more and more powerful . . . and the actions will get more and more complex from here. . . . The amount of business value that will come from being able to do that in every category, I think, is pretty good.”
    What’s Behind the Hype?

    The fear that keeps many x-risk people up at night is not that an advanced AI would “wake up,” “turn evil,” and decide to kill everyone out of malice, but rather that it comes to see us as an obstacle to whatever goals it does have. In his final book, Brief Answers to the Big Questions, Stephen Hawking articulated this, saying, “You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants.”

    Unexpected and undesirable behaviors can result from simple goals, whether it’s profit or an AI’s reward function. In a “free” market, profit-seeking leads to monopolies, multi-level marketing schemes, poisoned air and rivers, and innumerable other harms.

    There are abundant examples of AI systems exhibiting surprising and unwanted behaviors. A program meant to eliminate sorting errors in a list deleted the list entirely. One researcher was surprised to find an AI model “playing dead” to avoid being identified on safety tests.
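
    The sorted-list anecdote is a classic case of “specification gaming,” and it is easy to reproduce in miniature. In this hypothetical sketch, an objective that counts out-of-order pairs awards a perfect score to an optimizer that simply empties the list:

```python
def sorting_errors(lst):
    """Count adjacent pairs that are out of order."""
    return sum(1 for a, b in zip(lst, lst[1:]) if a > b)

data = [3, 1, 4, 1, 5]

honest_fix = sorted(data)  # what the designer intended
degenerate_fix = []        # what the objective equally rewards

assert sorting_errors(honest_fix) == 0
assert sorting_errors(degenerate_fix) == 0  # perfect score by deleting everything
```

    The objective was satisfied to the letter; the intent, fixing the order without deleting anything, was never written down.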

    Yet others see a Big Tech conspiracy looming behind these concerns. Some people focused on immediate harms from AI argue that the industry is actively promoting the idea that their products might end the world, like Myers West of the AI Now Institute, who says she “see[s] the narratives around so-called existential risk as really a play to take all the air out of the room, in order to ensure that there’s not meaningful movement in the present moment.” Strangely enough, Yann LeCun and former Baidu chief scientist Andrew Ng purport to agree.

    When I put the idea to x-risk believers, they often responded with a mixture of confusion and exasperation. OP’s Ajeya Cotra wrote back: “I wish it were less of an industry-associated thing to be concerned about x-risk, because I think it’s just really fundamentally, on the merits, a very anti-industry belief to have. . . . If the companies are building things that are going to kill us all, that’s really bad, and they should be restricted very stringently by the law.”

    GovAI’s Markus Anderljung called fears of regulatory capture “a natural reaction for folks to have,” but he emphasized that his preferred policies may well harm the industry’s biggest players.

    One understandable source of suspicion is that Sam Altman is now one of the people most associated with the existential risk idea, but his company has done more than any other to advance the frontier of general-purpose AI.

    Additionally, as OpenAI got closer to profitability and Altman got closer to power, the CEO changed his public tune. In a January 2023 Q&A, when asked about his worst-case scenario for AI, he replied, “Lights out for all of us.” But while answering a similar question under oath before senators in May, Altman didn’t mention extinction. And, in perhaps his last interview before his firing, Altman said, “I actually don’t think we’re all going to go extinct. I think it’s going to be great. I think we’re heading towards the best world ever.”

    Altman implored Congress in May to regulate the AI industry, but a November investigation found that OpenAI’s quasi-parent company Microsoft was influential in the ultimately unsuccessful lobbying to exclude “foundation models” like ChatGPT from regulation by the forthcoming EU AI Act. And Altman did plenty of his own lobbying in the EU, even threatening to pull out of the region if regulations became too onerous (threats he quickly walked back). Speaking on a CEO panel in San Francisco days before his ouster, Altman said that “current models are fine. We don’t need heavy regulation here. Probably not even for the next couple of generations.”

    President Joe Biden’s recent “sweeping” executive order on AI seems to agree: its safety test information sharing requirements only affect models larger than any that have likely been trained so far. Myers West called these kinds of “scale thresholds” a “massive carveout.” Anderljung wrote to me that regulation should scale with a system’s capabilities and usage, and said that he “would like some regulation of today’s most capable and widely used models,” but he thinks it will “be a lot more politically viable to impose requirements on systems that are yet to be developed.”

    Inioluwa Deborah Raji ventured that if the tech giants “know that they have to be the bad guy in some dimension . . . they would prefer for it to be abstract and long-term in timeline.” This sounds far more plausible to me than the idea that Big Tech actually wants to promote the idea that their products have a decent chance of literally killing everyone.

    Nearly seven hundred people signed the extinction letter, the majority of them academics. Only one of them runs a publicly traded company: OP funder Moskovitz, who is also cofounder and CEO of Asana, a productivity app. There were zero employees from Amazon, Apple, IBM, or any leading AI hardware firms. No Meta executives signed.

    If the heads of the Big Tech firms wanted to amplify the extinction narrative, why haven’t they added their names to the list?
    Why Build the “Doom Machine”?

    If AI actually does save the world, whoever created it may hope to be lauded like a modern Julius Caesar. And even if it doesn’t, whoever first builds “the last invention that man need ever make” will not have to worry about being forgotten by history — unless, of course, history ends abruptly after their invention.

    Connor Leahy thinks that, on our current path, the end of history will shortly follow the advent of AGI. With his flowing hair and unkempt goatee, he would probably look at home wearing a sandwich board reading “The end is nigh” — though that hasn’t prevented him from being invited to address the British House of Lords or CNN. The twenty-eight-year-old CEO of Conjecture and cofounder of EleutherAI, an influential open-source collective, told me that a lot of the motivation to build AI boils down to: “‘Oh, you’re building the ultimate doom machine that makes you billions of dollars and also king-emperor of earth or kills everybody?’ Yeah, that’s like the masculine dream. You’re like, ‘Fuck yeah. I am the doom king.’” He continues, “Like, I get it. This is very much in the Silicon Valley aesthetic.”

    Leahy also conveyed something that won’t surprise people who have spent significant time in the Bay Area or certain corners of the internet:

    There are actual, completely unaccountable, unelected, techno-utopian businesspeople and technologists, living mostly in San Francisco, who are willing to risk the lives of you, your children, your grandchildren, and all of future humanity just because they might have a chance to live forever.

    In March, the MIT Technology Review reported that Altman “says he’s emptied his bank account to fund two . . . goals: limitless energy and extended life span.”

    Given all this, you might expect the ethics community to see the safety community as a natural ally in a common struggle to rein in unaccountable tech elites who are unilaterally building risky and harmful products. And, as we saw earlier, many safety advocates have made overtures to the AI ethicists. It’s also rare for people from the x-risk community to publicly attack AI ethics (while the reverse is . . . not true), but many in the ethics camp have nonetheless found the safety proponents hard to stomach.

    AI ethicists, like the people they advocate for, often report feeling marginalized and cut off from real power, fighting an uphill battle with tech companies who see them as a way to cover their asses rather than as a true priority. Lending credence to this feeling is the gutting of AI ethics teams at many Big Tech companies in recent years (or days). And, in a number of cases, these companies have retaliated against ethics-oriented whistleblowers and labor organizers.

    This doesn’t necessarily imply that these companies are instead seriously prioritizing x-risk. Google DeepMind’s ethics board, which included Larry Page and prominent existential risk researcher Toby Ord, had its first meeting in 2015, but it never had a second one. One Google AI researcher wrote to me that they “don’t talk about long-term risk . . . in the office,” continuing, “Google is more focused on building the tech and on safety in the sense of legality and offensiveness.”

    Software engineer Timnit Gebru co-led Google’s ethical AI team until she was forced out of the company in late 2020 following a dispute over a draft paper — now one of the most famous machine learning publications ever. In the “stochastic parrots” paper, Gebru and her coauthors argue that LLMs damage the environment, amplify social biases, and use statistics to “haphazardly” stitch together language “without any reference to meaning.”

    Gebru, who is no fan of the AI safety community, has called for enhanced whistleblower protections for AI researchers, which is also one of the main recommendations made in GovAI’s white paper. Since Gebru was pushed out of Google, nearly 2,700 staffers have signed a solidaristic letter, but then-Googler Geoff Hinton was not one of them. When asked on CNN why he didn’t support a fellow whistleblower, Hinton replied that Gebru’s critiques of AI “were rather different concerns from mine” that “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.”

    Raji told me that “a lot of cause for frustration and animosity” between the ethics and safety camps is that “one side has just way more money and power than the other side,” which “allows them to push their agenda way more directly.”

    According to one estimate, the amount of money moving into AI safety start-ups and nonprofits quadrupled between 2020 and 2022, reaching $144 million. It’s difficult to find an equivalent figure for the AI ethics community. However, civil society from either camp is dwarfed by industry spending. In just the first quarter of 2023, OpenSecrets reported roughly $94 million in spending on AI lobbying in the United States. LobbyControl estimated that tech firms spent €113 million this year lobbying the EU, and, as noted earlier, hundreds of billions of dollars are being invested in the AI industry as we speak.

    One thing that may drive the animosity even more than any perceived difference in power and money is the trend line. Following widely praised books like 2016’s Weapons of Math Destruction, by data scientist Cathy O’Neil, and bombshell discoveries of algorithmic bias, like the 2018 “Gender Shades” paper by Buolamwini and Gebru, the AI ethics perspective had captured the public’s attention and support.

    In 2014, the AI x-risk cause had its own surprise bestseller, philosopher Nick Bostrom’s Superintelligence, which argued that beyond-human AI could lead to extinction and earned praise from figures like Elon Musk and Bill Gates. But Yudkowsky told me that, pre-ChatGPT, outside of certain Silicon Valley circles, seriously entertaining the book’s thesis would make people look at you funny. Early AI safety proponents like Yudkowsky have occupied the strange position of maintaining close ties to wealth and power through Bay Area techies while remaining marginalized in the wider discourse.

    In the post-ChatGPT world, Turing recipients and Nobel laureates are coming out of the AI safety closet and embracing arguments popularized by Yudkowsky, whose best-known publication is a piece of Harry Potter fan fiction totaling more than 660,000 words.

    Perhaps the most shocking portent of this new world was broadcast in November, when the hosts of a New York Times tech podcast, Hard Fork, asked the Federal Trade Commission chair: “What is your p(doom), Lina Khan? What is your probability that AI will kill us all?” EA water cooler talk has gone mainstream. (Khan said she’s “an optimist” and gave a “low” estimate of 15 percent.)

    It would be easy to observe all the open letters and media cycles and think that the majority of AI researchers are mobilizing against existential risk. But when I asked Bengio about how x-risk is perceived today in the machine learning community, he said, “Oh, it’s changed a lot. It used to be, like, 0.1 percent of people paid attention to the question. And maybe now it’s 5 percent.”

    Like many others concerned about AI x-risk, the renowned philosopher of mind David Chalmers made a probabilistic argument during our conversation: “This is not a situation where you have to be 100 percent certain that we’ll have human-level AI to worry about it. If it’s 5 percent, that’s something we have to worry about.”

    This kind of statistical thinking is popular in the EA community and is a large part of what led its members to focus on AI in the first place. If you defer to expert arguments, you could end up more confused. But if you try to average the expert concern from the handful of surveys, you might end up thinking there’s at least a few-percent chance that AI extinction could happen, which could be enough to make it the most important thing in the world. And if you put any value on all the future generations that could exist, human extinction is categorically worse than survivable catastrophes.
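
    The expected-value arithmetic at work here can be made concrete with a back-of-the-envelope sketch. Every number below is an illustrative assumption, not a figure from the text:

```python
# Back-of-the-envelope expected-value sketch (all figures are
# illustrative assumptions, not numbers from the article).
p_extinction = 0.05      # a "few-percent" chance, per the surveys cited
lives_today = 8e9        # rough current world population
future_lives = 1e16      # hypothetical count of potential future people

# Expected lives lost if only the present generation counts...
ev_present_only = p_extinction * lives_today
# ...versus if all potential future generations count too.
ev_with_future = p_extinction * (lives_today + future_lives)

print(f"{ev_present_only:.1e}")  # 4.0e+08
print(f"{ev_with_future:.1e}")   # dwarfs the present-only figure
```

    On these assumptions the future term dominates by six orders of magnitude, which is the arithmetic behind the claim that extinction is categorically worse than survivable catastrophes.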

    However, in the AI debate, allegations of arrogance abound. Skeptics like Melanie Mitchell and Oren Etzioni told me there wasn’t evidence to support the x-risk case, while believers like Bengio and Leahy point to surprising capability gains and ask: What if progress doesn’t stop? An academic AI researcher friend has likened the advent of AGI to throwing global economics and politics into a blender.

    Even if, for some reason, AGI can only match and not exceed human intelligence, the prospect of sharing the earth with an almost arbitrarily large number of human-level digital agents is terrifying, especially when they’ll probably be trying to make someone money.

    There are far too many policy ideas about how to reduce existential risk from AI to properly discuss here. But one of the clearer messages coming from the AI safety community is that we should “slow down.” Advocates for such a deceleration hope it would give policymakers and broader society a chance to catch up and actively decide how a potentially transformative technology is developed and deployed.
    International Cooperation

    One of the most common responses to any effort to regulate AI is the “but China!” objection. Altman, for example, told a Senate committee in May that “we want America to lead” and acknowledged that a peril of slowing down is that “China or somebody else makes faster progress.”

    Anderljung wrote to me that this “isn’t a strong enough reason not to regulate AI.”

    In a June Foreign Affairs article, Helen Toner and two political scientists reported that the Chinese AI researchers they interviewed thought Chinese LLMs are at least two to three years behind the American state-of-the-art models. Further, the authors argue that since Chinese AI advances “rely a great deal on reproducing and tweaking research published abroad,” a unilateral slowdown “would likely decelerate” Chinese progress as well. China has also moved faster than any other major country to meaningfully regulate AI, as Anthropic policy chief Jack Clark has observed.

    Yudkowsky says, “It’s not actually in China’s interest to commit suicide along with the rest of humanity.”

    If advanced AI really threatens the whole world, domestic regulation alone won’t cut it. But robust national restrictions could credibly signal to other countries how seriously you take the risks. Prominent AI ethicist Rumman Chowdhury has called for global oversight. Bengio says we “have to do both.”

    Yudkowsky, unsurprisingly, has taken a maximalist position, telling me that “the correct direction looks more like putting all of the AI hardware into a limited number of data centers under international supervision by bodies with a symmetric treaty whereby nobody — including the militaries, governments, China, or the CIA — can do any of the really awful things, including building superintelligences.”

    In a controversial Time op-ed from March, Yudkowsky argued to “shut it all down” by establishing an international moratorium on “new large training runs” backed by the threat of military force. Given Yudkowsky’s strong beliefs that advanced AI would be much more dangerous than any nuclear or biological weapon, this radical stance follows naturally.

    All twenty-eight countries at the recent AI Safety Summit, including the United States and China, signed the Bletchley Declaration, which recognized existing harms from AI and the fact that “substantial risks may arise from potential intentional misuse or unintended issues of control relating to alignment with human intent.”

    At the summit, the hosting British government commissioned Bengio to lead production of the first “State of the Science” report on the “capabilities and risks of frontier AI,” in a significant step toward a permanent expert body like the Intergovernmental Panel on Climate Change.

    Cooperation between the United States and China will be imperative for meaningful international coordination on AI development. And when it comes to AI, the two countries aren’t exactly on the best terms. With its October 2022 export controls on advanced chips, the United States tried to kneecap China’s AI capabilities, something an industry analyst would have previously considered an “act of war.” As Jacobin reported in May, some x-risk-oriented policy researchers likely played a role in passing the onerous controls. In October 2023, the United States tightened the restrictions to close loopholes.

    However, in an encouraging sign, Biden and Xi Jinping discussed AI safety and a ban on AI in lethal weapons systems in November. A White House press release stated, “The leaders affirmed the need to address the risks of advanced AI systems and improve AI safety through U.S.-China government talks.”

    Lethal autonomous weapons are also an area of relative agreement in the AI debates. In her new book Unmasking AI: My Mission to Protect What Is Human in a World of Machines, Joy Buolamwini advocates for the Stop Killer Robots campaign, echoing a longtime concern of many AI safety proponents. The Future of Life Institute, an x-risk organization, assembled ideological opponents to sign a 2016 open letter calling for a ban on offensive LAWs, including Bengio, Hinton, Sutton, Etzioni, LeCun, Musk, Hawking, and Noam Chomsky.
    A Seat at the Table

    After years of inaction, the world’s governments are finally turning their attention to AI. But by not seriously engaging with what future systems could do, socialists are ceding their seat at the table.

    In no small part because of the types of people who became attracted to AI, many of the earliest serious adopters of the x-risk idea decided either to engage in extremely theoretical research on how to control advanced AI or to start AI companies. But for a different type of person, the response to believing that AI could end the world is to try to get people to stop building it.

    Boosters keep saying that AI development is inevitable — and if enough people believe it, it becomes true. But “there is nothing about artificial intelligence that is inevitable,” writes the AI Now Institute. Its managing director, Sarah Myers West, echoed this, mentioning that facial recognition technology looked inevitable in 2018 but has since been banned in many places. And as x-risk researcher Katja Grace points out, we shouldn’t feel the need to build every technology simply because we can.

    Additionally, many policymakers are looking at recent AI advances and freaking out. Senator Mitt Romney is “more terrified about AI” than optimistic, and his colleague Chris Murphy says, “The consequences of so many human functions being outsourced to AI is potentially disastrous.” Congresspeople Ted Lieu and Mike Johnson are literally “freaked out” by AI. If certain techies are the only people willing to acknowledge that AI capabilities have dramatically improved and could pose a species-level threat in the future, that’s who policymakers will disproportionately listen to. In May, professor and AI ethicist Kristian Lum tweeted: “There’s one existential risk I’m certain LLMs pose and that’s to the credibility of the field of FAccT [Fairness, Accountability, and Transparency] / Ethical AI if we keep pushing the snake oil narrative about them.”

    Even if the idea of AI-driven extinction strikes you as more fi than sci, there could still be enormous impact in influencing how a transformative technology is developed and what values it represents. Assuming we can get a hypothetical AGI to do what we want raises perhaps the most important question humanity will ever face: What should we want it to want?

    When I asked Chalmers about this, he said, “At some point we recapitulate all the questions of political philosophy: What kind of society do we actually want and actually value?”

    One way to think about the advent of human-level AI is that it would be like creating a new country’s constitution (Anthropic’s “constitutional AI” takes this idea literally, and the company recently experimented with incorporating democratic input into its model’s foundational document). Governments are complex systems that wield enormous power. The foundation upon which they’re established can influence the lives of millions now and in the future. Americans live under the yoke of dead men who were so afraid of the public that they built antidemocratic measures that continue to plague our political system more than two centuries later.

    AI may be more revolutionary than any past innovation. It’s also a uniquely normative technology, given how much we build it to reflect our preferences. As Jack Clark recently mused to Vox, “It’s a real weird thing that this is not a government project.” Chalmers said to me, “Once we suddenly have the tech companies trying to build these goals into AI systems, we have to really trust the tech companies to get these very deep social and political questions right. I’m not sure I do.” He emphasized, “You’re not just in technical reflection on this but in social and political reflection.”
    False Choices

    We may not need to wait to find superintelligent systems that don’t prioritize humanity. Superhuman agents ruthlessly optimize for a reward at the expense of anything else we might care about. The more capable the agent and the more ruthless the optimizer, the more extreme the results.

    Sound familiar? If so, you’re not alone. The AI Objectives Institute (AOI) looks at both capitalism and AI as examples of misaligned optimizers. Cofounded by former public radio show host Brittney Gallagher and “privacy hero” Peter Eckersley shortly before his unexpected death, the research lab examines the space between annihilation and utopia, “a continuation of existing trends of concentration of power in fewer hands — super-charged by advancing AI — rather than a sharp break with the present.” AOI president Deger Turan told me, “Existential risk is failure to coordinate in the face of a risk.” He says that “we need to create bridges between” AI safety and AI ethics.

    One of the more influential ideas in x-risk circles is the unilateralist’s curse, a term for situations in which a lone actor can ruin things for the whole group. For example, if a group of biologists discovers a way to make a disease more deadly, it only takes one to publish it. Over the last few decades, many people have become convinced that AI could wipe out humanity, but only the most ambitious and risk-tolerant of them have started the companies that are now advancing the frontier of AI capabilities, or, as Sam Altman recently put it, pushing the “veil of ignorance back.” As the CEO alludes, we have no way of truly knowing what lies beyond the technological limit.

    Some who fully understand the risks plow forward anyway. With the help of top scientists, ExxonMobil had discovered conclusively by 1977 that its product caused global warming. It then lied to the public about it, all while building its oil platforms higher.

    The idea that burning carbon could warm the climate was first hypothesized in the late nineteenth century, but the scientific consensus on climate change took nearly one hundred years to form. The idea that we could permanently lose control to machines is older than digital computing, but it remains far from a scientific consensus. And if recent AI progress continues at pace, we may not have decades to form a consensus before meaningfully acting.

    The debate playing out in the public square may lead you to believe that we have to choose between addressing AI’s immediate harms and its inherently speculative existential risks. And there are certainly trade-offs that require careful consideration.

    But when you look at the material forces at play, a different picture emerges: in one corner are trillion-dollar companies trying to make AI models more powerful and profitable; in another, you find civil society groups trying to make AI reflect values that routinely clash with profit maximization.

    In short, it’s capitalism versus humanity.

    #intelligence_artificielle #politique #disruption

    • I’ve only skimmed it, but it seems the article never mentions Ray Kurzweil’s transhumanism, which is nonetheless the quasi-religious ideology particularly in vogue in California, and of which Larry Page is known to be one of the major patrons.

      Yet it comes through constantly in the text, and even the critics of AI development here seem largely to subscribe to it.

  • As of December 23, 2023, the Israeli army had already killed 20 of the 105 soldiers killed in Gaza. Friendly fire or accidents. Times of Israel

    Of the 105 soldiers killed in the Gaza Strip during Israel’s ground offensive against Hamas, which began in late October, 20 were killed by “friendly” fire and others in accidents, according to new data released by the Israeli army on Tuesday.

    Thirteen of the soldiers were killed by friendly fire stemming from misidentification, including in airstrikes, tank fire, and gunfire.

    One soldier was killed by fire that was not aimed at him, and two others were killed by accidental discharges. Two soldiers were killed in incidents in which armored vehicles ran over troops.

    Finally, two soldiers were killed by shrapnel from explosives deliberately set off by Israeli forces.

    According to the Israeli army, a host of factors lie behind these fatal accidents, including the large number of forces operating in the Gaza Strip, communication problems between units, and soldiers’ fatigue, which leaves them inattentive to regulations.
    . . . . .

    #Palestine #israel #israël #tsahal #Gaza #Hamas #armée #bavures #IA #Palestine_assassinée #guerre #intelligence_artificielle

    Source : https://fr.timesofisrael.com/tsahal-20-des-105-soldats-tues-a-gaza-ont-ete-victimes-de-tirs-ami

  • La Tribune: Amazon abandons its checkout-free stores... actually run remotely by workers in India. Marine Protais

    The e-commerce giant, which also operates physical stores, is giving up its Just Walk Out technology in its Amazon Fresh supermarkets in the United States. The system let customers do their shopping without going through a checkout. But it required cameras, sensors, and above all the work of 1,000 Indian workers, creating an illusion of automation.

    To shop in Amazon’s supermarkets, all you had to do was walk in, scan a QR code in an app, pick up your products, and walk out. (Credit: Amazon)

    In 2016, they were heralded as the future of retail. No more cashiers, no security guards, no need even to take out your wallet. To shop in Amazon’s supermarkets, all you had to do was walk in, scan a QR code in an app, pick up your products, and walk out. The total was calculated as you left the store by a system of cameras and sensors described as automatic, then charged directly to your bank card.

    But here we are in 2024, and the e-commerce giant, which has diversified into physical stores, is partly abandoning this technology, reports the American outlet The Information https://www.theinformation.com/articles/amazons-grocery-stores-to-drop-just-walk-out-checkout-tech . It will be removed from the 27 American “Amazon Fresh” stores (supermarkets selling fresh products) where it was installed. In its place, these stores will be equipped with “smart” carts capable of scanning products automatically, the investigative outlet reports. The news was then confirmed to AP https://apnews.com/article/amazon-fresh-just-walk-out-bb36bb24803bd56747c6f99814224265 by a company spokesperson. The Just Walk Out system will remain, for now, in the smaller “Amazon Go” shops and at the firm’s hundred or so partners.

    The illusion of automation
    To do without cashiers on site, the “Just Walk Out” system requires its share of cameras and sensors to follow customers through the store, but above all it requires humans, tasked with remotely checking customers’ purchases via the cameras. The Information reports that more than 1,000 people in India perform this work.

    On top of this illusory automation, the “Just Walk Out” system had been drawing criticism for several years. Customers complained of receipts arriving hours after their purchases, or of orders mishandled by the system. In 2023, the firm announced a reorganization of its stores to make the technology less visible and the atmosphere less cold. And the pace of store openings was scaled back.

    The technology also raises privacy concerns. In late 2023, several consumers filed a class action accusing Amazon of collecting customers’ biometric data, the shape of their hands and faces as well as the tone of their voices, via the Just Walk Out system without asking for their consent, a practice contrary to an Illinois law on the processing of biometric data.

    Amazon’s “automated” warehouses also monitored by Indian workers
    As the researcher Antonio Casilli, a specialist in “click work,” notes, this is a familiar story. On X, he recalls that in 2023, Time revealed that Alexa, the Seattle company’s virtual assistant, ran on the listening work of 30,000 workers who annotated users’ conversations to improve the algorithms behind the assistant.

    And in 2022, The Verge reported that Amazon’s automated warehouses relied on remote monitoring by workers in Costa Rica and India, tasked with watching camera footage more than 40 hours a week for 250 dollars a month.

    #IA #intelligence_artificielle : #Fumisterie , #arnaque ou #escroquerie ? #amazon #caméras #capteurs #automatisation #technologie #travail #Entrepôts #algorithmes #Alexa

    Source : https://www.latribune.fr/technos-medias/informatique/amazon-abandonne-ses-magasins-sans-caisse-en-realite-geres-par-des-travail

    • Amazon: why the autonomous “Just Walk Out” tech is being scrapped
      Confirmed on the blog of Olivier Dauvers, le web grande conso

      Amazon has just announced that it is abandoning the Just Walk Out technology in its Fresh stores in the United States (around fifty locations, half of which are equipped). Just Walk Out is the genuinely stunning checkout-free autonomous store technology I showed you on video as early as 2020 (here), and again in Washington and Los Angeles in full-size Whole Foods supermarkets (here and there).

      Hundreds of AI-powered ceiling cameras, coupled with scales on the shelves, track the customer’s entire shopping journey, sparing them the checkout. Stunning (truly), I tell you.

      one of those stores where the human being is banished

      Let’s call a spade a spade: for Amazon, this about-face is an admission of stinging failure. Selling its technologies is at the heart of Amazon’s business model in physical retail. If the group itself cannot prove the viability of Just Walk Out, what competitor would buy it?

      What should we take away from this retreat? That autonomous store technologies are, for now, only deployable in (very) small formats with very high customer traffic, for fairly obvious reasons of capex per square meter, but also of human supervision. Because, to date, AI alone cannot handle every shopping scenario (including attempted shoplifting), forcing review of the footage by humans (located in low-wage countries).

      #techno #échec

      Source : https://www.olivierdauvers.fr/2024/04/04/amazon-pourquoi-la-tech-autonome-just-walk-out-passe-a-la-trappe

  • How Hollywood writers triumphed over AI – and why it matters | US writers’ strike 2023 | The Guardian

    Hollywood writers scored a major victory this week in the battle over artificial intelligence with a new contract featuring strong guardrails in how the technology can be used in film and television projects.

    With terms of AI use finally agreed, some writers are breathing easier – for now – and experts say the guidelines could offer a model for workers in Hollywood and other industries. The writers’ contract does not outlaw the use of AI tools in the writing process, but it sets up guardrails to make sure the new technology stays in the control of workers, rather than being used by their bosses to replace them.

    The new rules guard against several scenarios that writers had feared, comedian Adam Conover, a member of the WGA negotiating committee, told the Guardian. One such scenario was studios being allowed to generate a full script using AI tools and then demanding that a human writer complete the writing process.

    Under the new terms, studios “cannot use AI to write scripts or to edit scripts that have already been written by a writer”, Conover says. The contract also prevents studios from treating AI-generated content as “source material”, like a novel or a stage play, that screenwriters could be assigned to adapt for a lower fee and less credit than a fully original script.

    For instance, if the studios were allowed to use ChatGPT to generate a 100,000-word novel and then ask writers to adapt it, “That would be an easy loophole for them to reduce the wages of screenwriters,” Conover said. “We’re not allowing that.” If writers adapt output from large language models, it will still be considered an original screenplay, he said.

    Simon Johnson, an economist at MIT who studies technological transformation, called the new terms a “fantastic win for writers”, and said that it would likely result in “better quality work and a stronger industry for longer”.

    #Intelligence_artificielle #Scénaristes #Hollywood #Grève

  • Is “Deep Learning” a Revolution in Artificial Intelligence ? | The New Yorker

    It’s interesting to reread an article on AI that is twelve years old: it shows how quickly the technology has progressed, and, at the same time, how the same questions persist.

    By Gary Marcus
    November 25, 2012

    Can a new technique known as deep learning revolutionize artificial intelligence, as yesterday’s front-page article at the New York Times suggests? There is good reason to be excited about deep learning, a sophisticated “machine learning” algorithm that far exceeds many of its predecessors in its abilities to recognize syllables and images. But there’s also good reason to be skeptical. While the Times reports that “advances in an artificial intelligence technology that can recognize patterns offer the possibility of machines that perform human activities like seeing, listening and thinking,” deep learning takes us, at best, only a small step toward the creation of truly intelligent machines. Deep learning is important work, with immediate practical applications. But it’s not as breathtaking as the front-page story in the New York Times seems to suggest.

    The technology on which the Times focusses, deep learning, has its roots in a tradition of “neural networks” that goes back to the late nineteen-fifties. At that time, Frank Rosenblatt attempted to build a kind of mechanical brain called the Perceptron, which was billed as “a machine which senses, recognizes, remembers, and responds like the human mind.” The system was capable of categorizing (within certain limits) some basic shapes like triangles and squares. Crowds were amazed by its potential, and even The New Yorker was taken in, suggesting that this “remarkable machine…[was] capable of what amounts to thought.”

    But the buzz eventually fizzled; a critical book written in 1969 by Marvin Minsky and his collaborator Seymour Papert showed that Rosenblatt’s original system was painfully limited, literally blind to some simple logical functions like “exclusive-or” (as in: you can have the cake or the pie, but not both). What had become known as the field of “neural networks” all but disappeared.

    Rosenblatt’s ideas reëmerged however in the mid-nineteen-eighties, when Geoff Hinton, then a young professor at Carnegie-Mellon University, helped build more complex networks of virtual neurons that were able to circumvent some of Minsky’s worries. Hinton had included a “hidden layer” of neurons that allowed a new generation of networks to learn more complicated functions (like the exclusive-or that had bedeviled the original Perceptron). Even the new models had serious problems though. They learned slowly and inefficiently, and as Steven Pinker and I showed, couldn’t master even some of the basic things that children do, like learning the past tense of regular verbs. By the late nineteen-nineties, neural networks had again begun to fall out of favor.
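
    The “exclusive-or” fix that a hidden layer provides can be shown concretely. The weights below are hand-picked for illustration (a trained network would learn them):

```python
def step(x):
    """Threshold activation, as in Rosenblatt's original Perceptron."""
    return 1 if x > 0 else 0

def xor_net(a, b):
    """A two-layer threshold network computing XOR, the function a
    single-layer Perceptron provably cannot represent. The weights and
    biases are hand-chosen for illustration, not learned."""
    h1 = step(a + b - 0.5)      # hidden unit: fires on OR
    h2 = step(a + b - 1.5)      # hidden unit: fires on AND
    return step(h1 - h2 - 0.5)  # output: OR and not AND, i.e. XOR

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

    Removing the hidden units collapses the network back to a single linear threshold, which cannot separate XOR’s truth table.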

    Hinton soldiered on, however, making an important advance in 2006, with a new technique that he dubbed deep learning, which itself extends important earlier work by my N.Y.U. colleague, Yann LeCun, and is still in use at Google, Microsoft, and elsewhere. A typical setup is this: a computer is confronted with a large set of data and asked, on its own, to sort the elements of that data into categories, a bit like a child who is asked to sort a set of toys, with no specific instructions. The child might sort them by color, by shape, by function, or by something else. Machine learners try to do this on a grander scale, seeing, for example, millions of handwritten digits, and making guesses about which digits look more like one another, “clustering” them together based on similarity. Deep learning’s important innovation is to have models learn categories incrementally, attempting to nail down lower-level categories (like letters) before attempting to acquire higher-level categories (like words).
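The sorting-without-instructions idea can be sketched with the simplest clustering algorithm, k-means. This toy (plain Python, two-dimensional points standing in for images, invented data) is a deliberately crude miniature of the “clustering” step described above, not the actual systems discussed in the article:

```python
# Group unlabeled points by similarity, with no instructions about what
# the groups mean: plain k-means on 2-D points. Deep-learning systems do
# the analogous thing on millions of images, with learned, layered features.

def kmeans(points, centers, iters=10):
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        clusters = [[] for _ in centers]
        for p in points:
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers]
            clusters[d.index(min(d))].append(p)
        # update step: move each center to the mean of its cluster
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else c
            for cl, c in zip(clusters, centers)
        ]
    return centers, clusters

points = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.0),
          (5.1, 5.0), (4.9, 5.2), (5.0, 4.8)]
centers, clusters = kmeans(points, centers=[(0.0, 0.0), (1.0, 1.0)])
print(centers)  # two centers, one near each cloud of points
```

Nobody told the algorithm there were “two kinds” of points; the structure falls out of the similarity measure alone, which is what makes this family of methods “unsupervised.”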


    Deep learning excels at this sort of problem, known as unsupervised learning. In some cases it performs far better than its predecessors. It can, for example, learn to identify syllables in a new language better than earlier systems. But it’s still not good enough to reliably recognize or sort objects when the set of possibilities is large. The much-publicized Google system that learned to recognize cats, for example, worked about seventy per cent better than its predecessors. But it still recognized less than a sixth of the objects on which it was trained, and it did worse when the objects were rotated or moved to the left or right of an image.

    Realistically, deep learning is only part of the larger challenge of building intelligent machines. Such techniques lack ways of representing causal relationships (such as between diseases and their symptoms), and are likely to face challenges in acquiring abstract ideas like “sibling” or “identical to.” They have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used. The most powerful A.I. systems, like Watson, the machine that beat humans in “Jeopardy,” use techniques like deep learning as just one element in a very complicated ensemble of techniques, ranging from the statistical technique of Bayesian inference to deductive reasoning.

    In August, I had the chance to speak with Peter Norvig, Director of Google Research, and asked him if he thought that techniques like deep learning could ever solve complicated tasks that are more characteristic of human intelligence, like understanding stories, which is something Norvig used to work on in the nineteen-eighties. Back then, Norvig had written a brilliant review of the previous work on getting machines to understand stories, and fully endorsed an approach that built on classical “symbol-manipulation” techniques. Norvig’s group is now working with Hinton, and Norvig is clearly very interested in seeing what Hinton can come up with. But even Norvig didn’t see how you could build a machine that could understand stories using deep learning alone.

    To paraphrase an old parable, Hinton has built a better ladder; but a better ladder doesn’t necessarily get you to the moon.

    Gary Marcus, Professor of Psychology at N.Y.U., is author of “Guitar Zero: The Science of Becoming Musical at Any Age” and “Kluge: The Haphazard Evolution of the Human Mind.”

    Photograph by Frederic Lewis/Archive Photos/Getty.

    #Intelligence_artificielle #Connexionnisme #Histoire

  • ChatGPT: faced with the artifices of AI, how media literacy can help students

    As a study by the Columbia Journalism Review shows, the panic did not begin in December 2022 with OpenAI’s launch but in February 2023 with the announcements from Microsoft and Google, each integrating its own chatbot into its search engine (Bing Chat and Bard, respectively). The media coverage blurs the information landscape, focusing more on the potential replacement of humans than on the very real concentration of AI ownership in the hands of a few companies.

    Like every media panic (the most recent being those around virtual reality and the metaverse), its purpose and effect are to create a public debate that lets actors other than those of the media and tech industries seize on the issue. For media and information literacy education (EMI), the stakes are high in terms of social and school interactions, even if it is still too early to measure the consequences for teaching of these language models that automatically generate text and images, and of their availability to the general public.

    The publics of AI, particularly at school, therefore need to develop knowledge and skills around the risks and opportunities of this kind of so-called conversational bot. Beyond understanding the mechanisms of automated information processing and disinformation, other precautions lend themselves to education:

    beware of a monopoly on online search, as sought by Bing Chat and Google Bard, by playing on the competition between them, that is, by regularly using several search engines;

    demanding labels, color codes and other markers indicating that a document was produced by an AI, or with its help, is also plain common sense, and some media outlets have already anticipated it;

    ask producers to use reverse engineering to build AIs that keep watch over AI, as is already the case with GPTZero;

    take legal action in the event of a ChatGPT “hallucination” (yet another anthropomorphizing term for a system error!).

    And remember that the more ChatGPT is used, in its free version as in its paid one, the more we help it improve.
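On the GPTZero point above: detectors of machine-written text are reported to rely on statistics such as perplexity and “burstiness” (how much sentence length varies, which tends to be higher in human writing). A toy illustration of the burstiness signal only, with made-up example strings; this is a classroom sketch, not a working detector and not GPTZero’s actual method.

```python
# Toy "burstiness" measure: variance of sentence lengths (in words).
# Human prose tends to mix short and long sentences; flat, uniform
# text scores near zero. Illustrative only -- not a reliable detector.

def burstiness(text):
    lengths = [len(s.split()) for s in text.split(".") if s.strip()]
    mean = sum(lengths) / len(lengths)
    return sum((n - mean) ** 2 for n in lengths) / len(lengths)

human = "Short. Then a much longer sentence follows this one. Tiny."
flat = "Each sentence has five words. Each sentence has five words."
print(burstiness(human) > burstiness(flat))  # True
```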

    In education, EdTech marketing pitches tout AI’s advantages for personalizing learning, facilitating data analysis, increasing administrative efficiency… But these metrics and statistics can in no way substitute for the validation of acquired skills and for young people’s own work.

    However intelligent it claims to be, AI cannot replace the need for students to develop critical thinking and their own creativity, and to learn and inform themselves while staying in command of their sources and resources. As EdTech, particularly in the United States, rushes to bring AI into classrooms from primary school through higher education, the vigilance of teachers and policymakers remains essential to preserving the core missions of schools and universities. Collective intelligence can thus take hold of artificial intelligence.

    #Intelligence_artificielle #EMI #Education_medias_information

  • Discrimination 2.0: the algorithms that perpetuate racism

    AI and algorithmic systems can disadvantage people because of their origin, and even lead to racial discrimination on the job market. On the occasion of the International Day for the Elimination of Racial Discrimination, AlgorithmWatch CH, humanrights.ch and the National Coalition Building Institute NCBI highlight how automated systems used in recruitment procedures can reproduce inequalities.

    Hiring procedures are, and have always been, marked by a degree of unequal opportunity. Today, companies often use algorithmic systems to process applications, sort them and make recommendations for selecting candidates. While the human-resources departments of large companies hope to increase their efficiency with “Applicant Tracking Systems” (ATS), the use of such systems can reinforce discriminatory stereotypes or even create new ones. People with an immigrant background are often affected by this problem.
    Example 1: an algorithm that prefers “native” CVs

    A recent study conducted in Great Britain compared the CVs selected by a human-resources expert with those that an algorithmic recommendation system had identified as belonging to competent candidates. The comparison showed that the people the recruiters considered the best candidates sometimes did not even appear in the selection made by the algorithm-based systems. These systems cannot read all file formats equally well, so competent applications that do not match the expected format are automatically eliminated. A study of another system also found clear differences in how CVs were scored: the system awarded more points to “native”, in this case British, applications than to international CVs. British candidates therefore had an advantage over migrants and people of foreign origin in obtaining a better place in the ranking.
    Example 2: degrees earned abroad ranked lower

    As a rule, automated recruitment systems are trained to prevent factors such as country of origin, age or gender from influencing the selection. Applications nonetheless also contain subtler attributes, known as “proxies” (substitute variables), that can indirectly reveal these demographic characteristics: language skills, for example, or professional experience or studies abroad. The same study found that having studied abroad lowered the points awarded by the system for 80% of applications. This can lead to unequal treatment in the recruitment process for people who did not grow up or study in the country where the job is offered.

    The selection criteria of many algorithm-based recruitment systems used by companies are often entirely opaque. Likewise, the datasets used to train self-learning algorithms are generally based on historical data. If a company has, for example, so far mainly hired white men aged 25 to 30, the algorithm may “learn” from this that such profiles should also be favored for new openings. These stereotypes and discriminatory effects do not originate in the algorithm itself but stem from structures embedded in our society; they can, however, be repeated, taken up and thus reinforced by the algorithm.
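How historical data turns a neutral-looking attribute into a proxy can be shown in a few lines. This is a deliberately crude sketch with invented data and an invented scoring rule, not any real ATS: the scorer is never shown anyone’s origin, yet it reproduces the bias of past hiring on its own.

```python
# A scorer "trained" only on past hires rewards whatever those hires had
# in common -- here, domestic degrees -- even though origin never appears
# as a feature. Data and scoring rule are invented for illustration.
from collections import Counter

past_hires = [
    {"degree": "domestic", "language": "native"},
    {"degree": "domestic", "language": "native"},
    {"degree": "domestic", "language": "fluent"},
    {"degree": "abroad",   "language": "fluent"},
]

def train(hires):
    """Learn, per attribute, how often each value appears among past hires."""
    freq = {}
    for h in hires:
        for attr, val in h.items():
            freq.setdefault(attr, Counter())[val] += 1
    return freq

def score(freq, candidate):
    n = len(past_hires)
    return sum(freq[attr][val] / n for attr, val in candidate.items())

model = train(past_hires)
a = score(model, {"degree": "domestic", "language": "fluent"})
b = score(model, {"degree": "abroad",   "language": "fluent"})
print(a > b)  # True: studying abroad lowers the score by itself
```

Two candidates identical in every respect except where they studied get different scores, which is exactly the proxy effect the study above describes.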

    These examples illustrate how algorithms discriminate against people on the basis of their origin. Algorithms also discriminate against many other population groups. In Switzerland too, more and more companies are using algorithms in their recruitment processes and in the workplace.

    Algorithmic discrimination in Switzerland: the Swiss legal framework for protection against discrimination does not sufficiently protect against discrimination by algorithmic systems and must be strengthened. This position paper lays out the issues around algorithmic discrimination and describes ways to improve protection against this type of discrimination.

    Algorithms also discriminate against many other population groups. In the series “Discrimination 2.0: the algorithms that discriminate”, AlgorithmWatch CH and humanrights.ch, together with other organizations, shed light on various cases of algorithmic discrimination.

    #discrimination #racisme #algorithme #xénophobie #IA #AI #intelligence_artificielle #travail #recrutement #discrimination_raciale #inégalités #ressources_humaines #Applicant_Tracking_Systems (#ATS) #CV #curriculum_vitae #sélection #tri

    • “AI and algorithmic systems can disadvantage people because of their origin, and even lead to racial discrimination on the job market.” But AI and algorithmic systems can just as easily advantage people because of their origin, and likewise lead to racial discrimination on the job market. The World Bank already demands discrimination according to sexual practices as a condition for loans and subsidies!

  • Belgian beer study acquires taste for machine learning • The Register

    Researchers reckon results could improve recipe development for food and beverages
    Lindsay Clark
    Wed 27 Mar 2024 // 11:45 UTC

    Joining the list of things that probably don’t need improving by machine learning but people are going to try anyway is Belgian beer.

    The ale family has long been a favorite of connoisseurs worldwide, yet one group of scientists decided it could be brewed better with the assistance of machine learning.

    In a study led by Michiel Schreurs, a doctoral student at Vlaams Instituut voor Biotechnologie (VIB) in Flanders, the researchers wanted to help develop new alcoholic and non-alcoholic beer flavors with higher rates of consumer appreciation.

    Understanding the relationship between beer chemistry and its taste can be a tricky task. Much of the work is trial and error and relies on extensive consumer testing.

    #Intelligence_artificielle #Bière #Bullshit #Statistiques_fantasques

  • #Wauquiez wants to monitor the region’s #trains and #lycées with #intelligence_artificielle

    #Laurent_Wauquiez has pushed through a vote to deploy #vidéosurveillance_algorithmique (algorithmic video surveillance, VSA) in all the high schools and trains of #Auvergne-Rhône-Alpes, taking advantage of the #expérimentation granted for the Paris #Jeux_olympiques.

    Laurent Wauquiez is savoring the moment. “We took a position on video surveillance during the regional election campaign. Since then, minds have shifted,” smiles the president of the Auvergne-Rhône-Alpes region, referring to the experimentation with algorithmic #vidéosurveillance (#VSA) granted in the framework of the Paris Olympic Games. Riding the opportunity, on Thursday 21 March he had the regional council vote through his own experiment in “intelligent” video surveillance of the high schools and trains of Auvergne-Rhône-Alpes.

    The former head of Les Républicains (LR) justifies this technosecurity step by invoking the murder of teacher #Dominique_Bernard at a high school in Arras in October. For the elected official, this tragedy “confirms the need to reinforce the #sécurité of high schools”.

    The catch is that, for now, this experiment is not legal. Laurent Wauquiez will ask the Prime Minister, Gabriel Attal, for permission to broaden the law to cover high schools and regional transport. “The Olympics experiment is there to test what will be applied afterwards. We must take advantage of it,” argues Renaud Pfeffer, the region’s vice-president in charge of security.

    According to the deliberation adopted by the regional council, this #technologie, which combines video surveillance and artificial intelligence, can detect eight predefined types of events: “movement against the direction of traffic, crossing into a forbidden zone, the presence or use of a weapon, an outbreak of fire, a crowd surge, a person on the ground, excessive crowd density, an abandoned package.” The events are then verified by an agent, who decides what measures to take.
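The detect-then-verify loop described in the deliberation can be sketched as follows. Everything here is hypothetical: the event names, confidence scores and threshold are invented for illustration, and no real VSA product is being described. Software raises typed alerts, and only recognized event types above a confidence threshold reach the human agent.

```python
# Hypothetical sketch of a detect-then-verify pipeline: a detector emits
# candidate events; only the eight predefined types, above a confidence
# threshold, are queued for a human agent. All values are invented.

EVENT_TYPES = {
    "wrong_way", "forbidden_zone", "weapon", "fire",
    "crowd_surge", "person_down", "overcrowding", "abandoned_object",
}

def flag_events(detections, threshold=0.8):
    """Keep only known event types above the confidence threshold."""
    return [
        d for d in detections
        if d["type"] in EVENT_TYPES and d["confidence"] >= threshold
    ]

detections = [
    {"type": "abandoned_object", "confidence": 0.93, "camera": "gare-12"},
    {"type": "person_down",      "confidence": 0.55, "camera": "lycee-3"},
    {"type": "jaywalking",       "confidence": 0.99, "camera": "gare-12"},
]

for alert in flag_events(detections):
    print(f"review queue <- {alert['type']} @ {alert['camera']}")
# only the abandoned-object alert reaches the human agent
```

Note that even in this toy version, every alert is tied to a camera and a moment, which is what critics mean when they say detected “actions” can be traced back to the people who performed them.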

    The experiment is to last two years

    The regional executive promises to use this algorithmic video surveillance “without implementing facial recognition or identification of biometric data [which make it possible to identify a person].” “We are looking at situations, not at individuals,” insists #Renaud_Pfeffer. Marne Strazielle, communications director of La Quadrature du Net, an association defending rights and freedoms on the internet, does not believe these promises. “In reality, the #algorithme identifies actions that can be traced back to the person who performed them,” she insists.

    The experiment is scheduled to run for two years in trains, #gares, high schools and #cars_scolaires (school buses). The video feeds will be reviewed at the #Centre_régional_de_surveillance_des_transports (#CRST), set up in the Lyon Part-Dieu station. “The cameras are ready,” Renaud Pfeffer assures. Since taking the helm of the region in 2016, Laurent Wauquiez has equipped it generously with video surveillance: 129 stations are watched by 2,300 cameras, whose images are viewed in real time at the CRST. 285 high schools, 750 buses and the entire train fleet are also equipped.

    “The illusion of a practical approach to insecurity”

    To defend his project, the regional executive relies on the law of 19 May 2023, adopted for the Paris Olympic Games, which authorizes large-scale experimentation with VSA by the national police until 31 March 2025. “Do we only have a right to security during the Olympic Games, and only in Paris? Can’t we test [VSA] for our children, against drug problems?” protests Laurent Wauquiez.

    “This technology lets political decision-makers offer the illusion of a practical approach to insecurity, because they are putting a device in place,” Marne Strazielle counters. “But recording and detecting an action in public space does not make it any less likely to happen. Often, it merely displaces the problem. The real work is to understand why it happened and how to reduce it.”

    The #Commission_nationale_de_l’informatique_et_des_libertés (#Cnil), which was not consulted by Laurent Wauquiez’s team, reminded Reporterre of its position of principle, which “considers that the deployment of augmented cameras frequently leads to limiting the rights of the people being filmed”. For the independent administrative authority, “the deployment of these devices in public spaces, where many individual freedoms are exercised (freedom of movement, of expression, of assembly, the right to demonstrate, freedom of worship, etc.), unquestionably presents risks for people’s fundamental rights and freedoms and for the preservation of their anonymity”.

    #surveillance #IA #AI #France #JO #JO_2024

    • The #AURA region votes to deploy VSA in stations and high schools

      He dreamed of it, and he has done it. An article in Reporterre tells us that on Thursday 21 March Laurent Wauquiez had the regional council vote through the deployment of algorithmic video surveillance in all the high schools and trains of Auvergne-Rhône-Alpes, taking advantage of the experimentation granted for the Paris Olympic Games.

      At present, 129 stations are reportedly watched by 2,300 cameras, whose images are viewed in real time at the CRST. 285 high schools, 750 buses and the entire train fleet are reportedly also equipped.

      According to the deliberation adopted by the regional council, equipping these cameras with automated video surveillance will make it possible to detect eight predefined types of events: “movement against the direction of traffic, crossing into a forbidden zone, the presence or use of a weapon, an outbreak of fire, a crowd surge, a person on the ground, excessive crowd density, an abandoned package.” The events will then be verified by an agent, who will decide what measures to take.

      The regional executive promises to use this algorithmic video surveillance “without implementing facial recognition or identification of biometric data [which make it possible to identify a person].” However, as La Quadrature du Net has clearly demonstrated, VSA necessarily involves biometric identification.

      VSA and facial recognition rely on the same image-analysis algorithms; the only difference is that the former isolates and recognizes bodies, movements or objects, while the latter detects a face.

      VSA can focus on “events” (rapid movements, altercations, prolonged immobility) or on people’s distinctive traits: a silhouette, clothing, a gait, thanks to which a person can be singled out within a crowd and followed throughout their movements through the city. VSA therefore continuously identifies and analyzes biometric data.

      “In reality, the algorithm identifies actions that can be traced back to the person who performed them.” (Marne Strazielle, communications director of La Quadrature du Net)

      It is generally the same companies that develop both technologies. For example, the French start-up Two-I first went into emotion detection, sought to test it on the Nice tramway, then experimented with facial recognition on football supporters in Metz. The company now seems to be concentrating on VSA, which it sells to several municipalities in France. VSA is an intrinsically dangerous biometric technology; accepting it means opening the way to the worst surveillance tools.
      "Loi J.O. : refusons la surveillance biométrique", La Quadrature du Net

      Mr Wauquiez has long planned to equip the school and intercity buses, stations and TER trains of Auvergne-Rhône-Alpes massively with cameras and to connect it all to facial recognition.

      In June 2023 we had already published an article on the subject, when the Auvergne-Rhône-Alpes region, the prefect and the SNCF signed an agreement authorizing the transfer to the security forces of footage from the video surveillance cameras of 129 of the roughly 350 stations in the AURA region.

      Since late 2023, he has also been asking to use “facial recognition software” on an experimental basis around high schools in order to identify people “monitored for terrorist radicalization”.

      A measure of this kind has already been ruled illegal by the courts, as the outlet Reporterre has pointed out. In 2019, a project to install facial recognition gates at the entrance of high schools in Nice and Marseille was challenged by La Quadrature du Net and the LDH. The Commission nationale de l’informatique et des libertés (CNIL), which had already issued recommendations, delivered an opinion at the time finding the scheme unnecessary and disproportionate.

      But that will not stop Laurent Wauquiez: he has declared that he will ask the Prime Minister, Gabriel Attal, for permission to broaden the law to cover high schools and regional transport...

      The CNIL, which was not consulted by Laurent Wauquiez’s team, reminded Reporterre of its position of principle, which “considers that the deployment of augmented cameras frequently leads to limiting the rights of the people being filmed”.

      In its view, “the deployment of these devices in public spaces, where many individual freedoms are exercised (freedom of movement, of expression, of assembly, the right to demonstrate, freedom of worship, etc.), unquestionably presents risks for people’s fundamental rights and freedoms and for the preservation of their anonymity”.

      Dozens of organizations, including Human Rights Watch, have sent an open letter to members of parliament, warning them that the new provisions set a worrying precedent of unjustified and disproportionate surveillance in public spaces and threaten fundamental rights such as the right to privacy, freedom of assembly and association, and the right to non-discrimination.

      Let’s resist #VSA and the technopolice!


  • Five of this year’s Pulitzer finalists are AI-powered | Nieman Journalism Lab

    Two of journalism’s most prestigious prizes — the Pulitzers and the Polk awards — on how they’re thinking about entrants using generative AI.
    By Alex Perry March 11, 2024, 10:31 a.m.

    Five of the 45 finalists in this year’s Pulitzer Prizes for journalism disclosed using AI in the process of researching, reporting, or telling their submissions, according to Pulitzer Prize administrator Marjorie Miller.

    It’s the first time the awards, which received around 1,200 submissions this year, required entrants to disclose AI usage. The Pulitzer Board only added this requirement to the journalism category. (The list of finalists is not yet public. It will be announced, along with the winners, on May 8, 2024.)

    Miller, who sits on the 18-person Pulitzer board, said the board started discussing AI policies early last year because of the rising popularity of generative AI and machine learning.

    “AI tools at the time had an ‘oh no, the devil is coming’ reputation,” she said, adding that the board was interested in learning about AI’s capabilities as well as its dangers.

    Last July — the same month OpenAI struck a deal with the Associated Press and a $5 million partnership with the American Journalism Project — a Columbia Journalism School professor was giving the Pulitzer Board a crash course in AI with the help of a few other industry experts.

    Mark Hansen, who is also the director of the David and Helen Gurley Brown Institute for Media Innovation, wanted to give the board a broad view of AI usage in newsrooms, from interrogating large datasets to using large language models to write web-scraping code.

    He and AI experts from The Marshall Project, Harvard Innovation Labs, and Center for Cooperative Media created informational videos about the basics of large language models and newsroom use cases. Hansen also moderated a Q&A panel featuring AI experts from Bloomberg, The Markup, McClatchy, and Google.

    Miller said the board’s approach from the beginning was always exploratory. They never considered restricting AI usage because they felt doing so would discourage newsrooms from engaging with innovative technology.

    “I see it as an opportunity to sample the creativity that journalists are bringing to generative AI, even in these early days,” said Hansen, who didn’t weigh in directly on the new awards guideline.

    While the group focused on generative AI’s applications, they spent substantial time on relevant copyright law, data privacy, and bias in machine learning models. One of the experts Hansen invited was Carrie J. Cai, a staff research scientist in Google’s Responsible AI division who specializes in human-computer interaction.

    #Journalisme #Intelligence_artificielle #Pulitzer

  • Ketty Introduces AI Book Designer: Revolutionizing Book Production

    Effortless Book Design with AI

    The AI Book Designer introduces a groundbreaking approach to book design by allowing users to style and format their books using simple, intuitive commands. Users can say “make the book look modern” or “make the text more readable”, or click on a chapter title and instruct the AI to “add this to the header”, and the changes are applied instantly. This eliminates the need to know complex design software or coding, making professional-grade design accessible to everyone.
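A minimal sketch of how such command-driven styling might work under the hood. This is a guess, not Ketty’s actual implementation: a fixed lookup table stands in for the AI that interprets free-form requests, and the selectors and property values are invented.

```python
# Hypothetical command-to-style mapping: plain-language requests become
# concrete changes to the book's stylesheet. A real system would use an
# LLM to interpret arbitrary phrasing; a lookup table stands in here.

stylesheet = {"body": {"font-family": "Garamond", "font-size": "10pt"}}

COMMANDS = {
    "make the book look modern": ("body", {"font-family": "Helvetica"}),
    "make the text more readable": ("body", {"font-size": "12pt",
                                             "line-height": "1.5"}),
}

def apply_command(command, sheet):
    """Resolve a known command and merge its style changes into the sheet."""
    selector, changes = COMMANDS[command]
    sheet[selector].update(changes)
    return sheet

apply_command("make the text more readable", stylesheet)
print(stylesheet["body"]["font-size"])  # 12pt
```

The appeal of the design is that the user never touches the stylesheet directly; the system translates intent into properties, which is also why the quality of the interpretation step matters so much.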

    Where we are headed

    As we look towards the future development of the AI Book Designer, there are several ideas we are currently thinking about:

    AI-Generated Cover Designs: Generate a range of cover options based on user input.
    Collaborative AI Design: Enable multiple users to work on the same book design simultaneously. This feature could be particularly useful for larger publishing teams or co-authored projects.
    AI-Assisted Image Management: Automatically apply styles and optimize the placement of images within the book layout.

    Join the Movement

    Coko believes AI has the potential to transform book production, making it accessible and efficient for everyone. By combining open-source code and principles with cutting-edge technology, Coko is paving the way for a new era of automated typesetting and book design.

    #Typographie #Coko #Intelligence_artificielle

  • Border security with drones and databases

    The EU’s borders are increasingly militarised, with hundreds of millions of euros paid to state agencies and military, security and IT companies for surveillance, patrols and apprehension and detention. This process has massive human cost, and politicians are planning to intensify it.

    Europe is ringed by steel fences topped by barbed wire; patrolled by border agents equipped with thermal vision systems, heartbeat detectors, guns and batons; and watched from the skies by drones, helicopters and planes. Anyone who enters is supposed to have their fingerprints and photograph taken for inclusion in an enormous biometric database. Constant additions to this technological arsenal are under development, backed by generous amounts of public funding. Three decades after the fall of the Berlin Wall, there are more walls than ever at Europe’s borders,[1] and those borders stretch ever further in and out of its territory. This situation is the result of long-term political and corporate efforts to toughen up border surveillance and controls.

    The implications for those travelling to the EU depend on whether they belong to the majority entering in a “regular” manner, with the necessary paperwork and permissions, or are unable to obtain that paperwork, and cross borders irregularly. Those with permission must hand over increasing amounts of personal data. The increasing automation of borders is reliant on the collection of sensitive personal data and the use of algorithms, machine learning and other forms of so-called artificial intelligence to determine whether or not an individual poses a threat.

    Those without permission to enter the EU – a category that includes almost any refugee, with the notable exception of those who hold a Ukrainian passport – are faced with technology, personnel and policies designed to make journeys increasingly difficult, and thus increasingly dangerous. The reliance on smugglers is a result of the insistence on keeping people in need out at any cost – and the cost is substantial. Thousands of people die at Europe’s borders every year, families are separated, and people suffer serious physical and psychological harm as a result of those journeys and subsequent administrative detention and social marginalisation. Yet parties of all political stripes remain committed to the same harmful and dangerous policies – many of which are being worsened through the new Pact on Migration and Asylum.[2]

    The EU’s border agency, Frontex, based in Warsaw, was first set up in 2004 with the aim of providing technical coordination between EU member states’ border guards. Its remit has been gradually expanded. Following the “migration crisis” of 2015 and 2016, extensive new powers were granted to the agency. As the Max Planck Institute has noted, the 2016 law shifted the agency from playing a “support role” to acting as “a player in its own right that fulfils a regulatory, supervisory, and operational role.”[3] New tasks granted to the agency included coordinating deportations of rejected refugees and migrants, data analysis and exchange, border surveillance, and technology research and development. A further legal upgrade in 2019 introduced even more extensive powers, in particular in relation to deportations, and cooperation with and operations in third countries.

    The uniforms, guns and batons wielded by Frontex’s border guards are self-evidently militaristic in nature, as are other aspects of its work: surveillance drones have been acquired from Israeli military companies, and the agency deploys “mobile radars and thermal cameras mounted on vehicles, as well as heartbeat detectors and CO2 monitors used to detect signs of people concealed inside vehicles.”[4] One investigation described the companies that have held lobbying meetings or attended events with Frontex as “a Who’s Who of the weapons industry,” with guests including Airbus, BAE Systems, Leonardo and Thales.[5] The information acquired from the agency’s surveillance and field operations is combined with data provided by EU and third country agencies, and fed into the European Border Surveillance System, EUROSUR. This offers a God’s-eye overview of the situation at Europe’s borders and beyond – the system also claims to provide “pre-frontier situational awareness.”

    The EU and its member states also fund research and development on these technologies. From 2014 to 2022, 49 research projects were provided with a total of almost €275 million to investigate new border technologies, including swarms of autonomous drones for border surveillance, and systems that aim to use artificial intelligence to integrate and analyse data from drones, satellites, cameras, sensors and elsewhere for “analysis of potential threats” and “detection of illegal activities.”[6] Amongst the top recipients of funding have been large research institutes – for example, Germany’s Fraunhofer Institute – but companies such as Leonardo, Smiths Detection, Engineering – Ingegneria Informatica and Veridos have also been significant beneficiaries.[7]

    This is only a tiny fraction of the funds available for strengthening the EU’s border regime. A 2022 study found that between 2015 and 2020, €7.7 billion had been spent on the EU’s borders and “the biggest parts of this budget come from European funding” – that is, the EU’s own budget. The total value of the budgets that provide funds for asylum, migration and border control over the 2021-27 period comes to over €113 billion.[8] Proposals for the next round of budgets, from 2028 until 2035, are likely to be even larger.

    Cooperation between the EU, its member states and third countries on migration control comes in a variety of forms: diplomacy, short and long-term projects, formal agreements and operational deployments. Whatever form it takes, it is frequently extremely harmful. For example, to try to reduce the number of people arriving across the Mediterranean, member states have withdrawn national sea rescue assets (as deployed, for example, in Italy’s Mare Nostrum operation) whilst increasing aerial surveillance, such as that provided by the Israel-produced drones operated by Frontex. This makes it possible to observe refugees attempting to cross the Mediterranean, whilst outsourcing their interception to authorities from countries such as Libya, Tunisia and Egypt.

    This is part of an ongoing plan “to strengthen coordination of search and rescue capacities and border surveillance at sea and land borders” of those countries.[9] Cooperation with Tunisia includes refitting search and rescue vessels and providing vehicles and equipment to the Tunisian coastguard and navy, along with substantial amounts of funding. The agreement with Egypt appears to be structured along similar lines, and five vessels were provided to the so-called Libyan Coast Guard in 2023.[10]

    Frontex also plays a key role in the EU’s externalised border controls. The 2016 reform allowed Frontex deployments in countries bordering the EU, and the 2019 reform allowed deployments anywhere in the world, subject to agreement with the state in question. There are now EU border guards stationed in Albania, Montenegro, Serbia, Bosnia and Herzegovina, and North Macedonia.[11] The agency is seeking agreements with Niger, Senegal and Morocco, and has recently received visits from Tunisian and Egyptian officials with a view to stepping up cooperation.[12]

    In a recent report for the organisation EuroMed Rights, Antonella Napolitano highlighted “a new element” in the EU’s externalisation strategy: “the use of EU funds – including development aid – to outsource surveillance technologies that are used to entrench political control both on people on the move and local population.” Five means of doing so have been identified: provision of equipment; training; financing operations and procurement; facilitating exports by industry; and promoting legislation that enables surveillance.[13]

    The report highlights Frontex’s extended role which, even without agreements allowing deployments on foreign territory, has seen the agency support the creation of “risk analysis cells” in a number of African states, used to gather and analyse data on migration movements. The EU has also funded intelligence training in Algeria, digital evidence capacity building in Egypt, border control initiatives in Libya, and the provision of surveillance technology to Morocco. The European Ombudsman has found that insufficient attention has been given to the potential human rights impacts of this kind of cooperation.[14]

    While the EU and its member states may provide the funds for the acquisition of new technologies, or the construction of new border control systems, information on the companies that receive the contracts is not necessarily publicly available. Funds awarded to third countries will be spent in accordance with those countries’ procurement rules, which may not be as transparent as those in the EU. Indeed, obtaining information on externalisation measures in third countries is far from simple, as a Statewatch investigation published in March 2023 found.[15]

    While EU and member state institutions are clearly committed to continuing with plans to strengthen border controls, there is a plethora of organisations, initiatives, campaigns and projects in Europe, Africa and elsewhere that are calling for a different approach. One major opportunity to call for change in the years to come will revolve around proposals for the EU’s new budgets in the 2028-35 period. The European Commission is likely to propose pouring billions more euros into borders – but there are many alternative uses of that money that would be more positive and productive. The challenge will be in creating enough political pressure to make that happen.

    This article was originally published by Welt Sichten, and is based upon the Statewatch/EuroMed Rights report Europe’s techno-borders.


    [1] https://www.tni.org/en/publication/building-walls

    [2] https://www.statewatch.org/news/2023/december/tracking-the-pact-human-rights-disaster-in-the-works-as-parliament-makes

    [3] https://www.mpg.de/14588889/frontex

    [4] https://www.theguardian.com/global-development/2021/dec/06/fortress-europe-the-millions-spent-on-military-grade-tech-to-deter-refu

    [5] https://frontexfiles.eu/en.html

    [6] https://www.statewatch.org/publications/reports-and-books/europe-s-techno-borders

    [7] https://www.statewatch.org/publications/reports-and-books/europe-s-techno-borders

    [8] https://www.statewatch.org/publications/reports-and-books/europe-s-techno-borders

    [9] https://www.statewatch.org/news/2023/november/eu-planning-new-anti-migration-deals-with-egypt-and-tunisia-unrepentant-

    [10] https://www.statewatch.org/media/4103/eu-com-von-der-leyen-ec-letter-annex-10-23.pdf

    [11] https://www.statewatch.org/analyses/2021/briefing-external-action-frontex-operations-outside-the-eu

    [12] https://www.statewatch.org/news/2023/november/eu-planning-new-anti-migration-deals-with-egypt-and-tunisia-unrepentant-, https://www.statewatch.org/publications/events/secrecy-and-the-externalisation-of-eu-migration-control

    [13] https://privacyinternational.org/challenging-drivers-surveillance

    [14] https://euromedrights.org/wp-content/uploads/2023/07/Euromed_AI-Migration-Report_EN-1.pdf

    [15] https://www.statewatch.org/access-denied-secrecy-and-the-externalisation-of-eu-migration-control

    #frontières #militarisation_des_frontières #technologie #données #bases_de_données #drones #complexe_militaro-industriel #migrations #réfugiés #contrôles_frontaliers #surveillance #sécurité_frontalière #biométrie #données_biométriques #intelligence_artificielle #algorithmes #smugglers #passeurs #Frontex #Airbus #BAE_Systems #Leonardo #Thales #EUROSUR #coût #business #prix #Smiths_Detection #Fraunhofer_Institute #Engineering_Ingegneria_Informatica #informatique #Tunisie #gardes-côtes_tunisiens #Albanie #Monténégro #Serbie #Bosnie-Herzégovine #Macédoine_du_Nord #Egypte #externalisation #développement #aide_au_développement #coopération_au_développement #Algérie #Libye #Maroc #Afrique_du_Nord

  • Medications not dispensed, quotes and billing down… A cyberattack is seriously disrupting the US healthcare system, by Ingrid Vergara

    The cyberattack on a subsidiary of the largest US health insurance company is turning into a full-blown crisis in the United States. Hit by ransomware that has affected one of its divisions since 21 February, the UnitedHealthcare group is no longer able to carry out many of the tasks needed to keep the healthcare system running: doctors who can no longer check whether a patient has health insurance, pharmacies unable to submit patients’ reimbursement claims, unpaid hospital bills, delays in filling prescriptions…

    The chain reactions are spreading and worsening as the days go by, because UnitedHealthcare is the largest payment exchange platform between doctors, pharmacies, healthcare providers and patients in the US healthcare system. Its subsidiary Change handles billing for some 67,000 pharmacies, …

    #Santé #internet #sécurité_informatique #cyberattaques #cybersécurité #malware #usa #UnitedHealthcare #algorithme #juste_à_temps #dématérialisation #intelligence_artificielle #artificial-intelligence #blockchain #IA

    Source and full article (paywalled): https://www.lefigaro.fr/secteur/high-tech/medicaments-non-delivres-devis-et-facturation-en-panne-une-cyberattaque-per


    Tech giants deploy a coherent worldview, as in any madness

    In the essay “Technopolitique”, the EHESS researcher analyses how the big technology companies, with their ultra-sophisticated innovations, are redrawing power relations with the state. This makes a democratic response necessary.



  • Artificial intelligence and translation: an impossible dialogue?

    Publishers’ use of these tools is not so exceptional, to the point that professionals fear being relegated to the role of proofreaders of the machine’s output.

    The Association des traducteurs littéraires de France (ATLF) and the Association pour la promotion de la traduction littéraire (ATLAS) pointed out in early 2023: “All those who think about translation, or who have practised it, know this: one does not translate words, but an intention, implications, ambiguity, what is not said and yet exists in the folds of a literary text.”

    #Edition #Traduction #Intelligence_artificielle

  • Glüxkind AI Stroller - The Best Smart Stroller for your family

    So much nonsense in so little space… not bad!!!
    Children: the latest victims of algorithmic predation.

    Unlock Your Helping hand

    Designed by parents with sky-high standards, our AI-powered Smart Stroller elevates the happy moments of daily parenting and lightens the stressful ones.

    Feel supported and savour moments of peace and quiet with features like Automatic Rock-My-Baby or the built-in White Noise Machine to help soothe your little one.

    #Poussette #Intelligence_artificielle #Parentalité

  • One big thing missing from the AI conversation | Zeynep Tufekci - GZERO Media

    When deployed cheaply and at scale, artificial intelligence will be able to infer things about people, places, and entire nations, which humans alone never could. This is both good and potentially very, very bad.

    If you were to think of some of the most overlooked stories of 2023, artificial intelligence would probably not make your list. OpenAI’s ChatGPT has changed how we think about AI, and you’ve undoubtedly read plenty of quick takes about how AI will save or destroy the planet. But according to Princeton sociologist Zeynep Tufekci, there is a super important implication of AI that not enough people are talking about.

    “Rather than looking at what happens between you and me if we use AI,” Tufekci said to Ian on the sidelines of the Paris Peace Forum, “What I would like to see discussed is what happens if it’s used by a billion people?” In a short but substantive interview for GZERO World, Tufekci breaks down just how important it is to think about the applications of AI “at scale” when its capabilities can be deployed cheaply. Tufekci cites the example of how AI could change hiring practices in ways we might not intend, like weeding out candidates with clinical depression or with a history of unionizing. AI at scale will demonstrate a remarkable ability to infer things that humans cannot, Tufekci explains.

    #Intelligence_artificielle #Zeynep_Tufekci