• Zuckerberg’s Meta Is Spending Billions To Buy 350,000 Nvidia H100 GPUs | PCMag
    https://www.pcmag.com/news/zuckerbergs-meta-is-spending-billions-to-buy-350000-nvidia-h100-gpus

    But to get there, Meta is going to need Nvidia’s H100, an enterprise GPU that’s adept at training large language models. “We’re building an absolutely massive amount of infrastructure to support this,” Zuckerberg said. “By the end of this year, we’re going to have around 350,000 Nvidia H100s. Or around 600,000 H100 equivalents of compute if you include other GPUs.”

    #meta #business #ia #intelligence_artificielle #investissement

  • One-Third of Game Developers Say Their Company Was Hit By Layoffs Last Year - IGN
    https://www.ign.com/articles/one-third-of-game-developers-say-their-company-was-hit-by-layoffs-last-year

    In stark contrast to a year of blockbuster video game hits, one of the biggest ongoing industry trends in 2023 was the prevalence of mass layoffs. While actual figures are difficult to get ahold of, estimates suggest the number of workers laid off in games last year approached or exceeded 10,000, and 2024 isn’t looking much better. Now, a GDC survey of developers suggests that one-third of all game developers were impacted by layoffs last year, either directly or by witnessing them happen at their company.

    The article also covers AI and blockchain, drawing on the same GDC survey.

    #jeu_vidéo #jeux_vidéo #business #ressources_humaines #licenciements #blockchain #ia #intelligence_artificielle

  • GPT-4 kostenlos nutzen : Microsoft Copilot für iPhone, iPad und Mac
    https://www.heise.de/news/GPT-4-kostenlos-nutzen-Microsoft-Copilot-fuer-iPhone-iPad-und-Mac-9585553.html

    Yet another "free" offer where you pay with your personal data. Still, it is tempting to have artificial text and image generators at your fingertips. We have become so used to accepting pacts with the devil that we might as well allow ourselves this one too. Time will tell whether it is really free.

    2 Jan 2024, by Malte Kirchner - A few days after the Android version, Microsoft has now also released Copilot for iOS and iPadOS. Mac users can use the app as well.

    Microsoft has now released its Copilot app for iOS, iPadOS and macOS as well. It is available free of charge in the App Store. The AI chatbot can generate text, including with the Large Language Model (LLM) GPT-4, free of charge and without signing in. The integrated text-to-image generator DALL-E 3 is also likely to interest some users.

    The version for Apple devices followed a few days after Microsoft released the app for Android devices. Officially it is intended only for iOS and iPadOS, but Microsoft allows it to be downloaded from the Mac App Store onto Macs with Apple Silicon. There it is listed as unverified, yet it worked flawlessly in our tests.
    Up to 30 responses per thread

    The AI assistant, initially called Bing Chat, can also be used without signing in, though in that case only five questions and answers are possible per thread. Generating images requires signing in. With a Microsoft account, the number of responses per thread also rises to 30. Besides text input, photos and voice input can be uploaded for processing. At present, complete chat histories cannot be saved; individual answers can be copied.

    In the App Store, Microsoft's Copilot joins OpenAI's official ChatGPT app, which is not available for the Mac. Moreover, using GPT-4 in ChatGPT requires a paid Plus subscription. Copilot has also already been integrated into Windows, Office applications and other software.

    #Microsoft #intelligence_artificielle #service_gratuit

  • Pourquoi la #promesse de « vidéogérer » les #villes avec des caméras couplées à une #intelligence_artificielle séduit et inquiète

    Security, parking, waste collection… In autumn 2023, #Nîmes inaugurated its "#hyperviseur_urbain". With the collection and circulation of #données at the heart of the system, the local branch of the Ligue des droits de l'homme is worried. Other cities, such as #Dijon, have already made the same choice.

    The room looks like a space control center: a wall more than 20 meters long entirely covered with screens, 76 in all, each of which can split into nine. This is where the feeds arrive from the 1,300 #caméras installed in the city of Nîmes and in some municipalities of its wider urban area.

    A pioneer on the theme of #caméras_urbaines since 2001, and among the most video-surveilled cities in the country, Nîmes inaugurated its "#hyperviseur" on November 13, 2023. This confidential, 600-square-meter technical facility is entirely devoted to a "new #territoire_intelligent approach", says the mayor (Les Républicains), Jean-Paul Fournier, re-elected for a fourth term in 2020.

    With this state-of-the-art tool, staffed night and day by some fifty people working in shifts, the city takes another big step toward the #smart_city (the "#ville_connectée"), a fast-growing trend in the management of local government.

    On this particular morning, the officers on duty can easily spot, from very high-quality images, an obstructive parked car, a speeding vehicle, illegal dumping, strange behavior… The hypervisor concentrates all the information related to managing the #espace_public (security, traffic, parking, environment…), makes it possible to control a neighborhood's street lighting with a single click, to issue a fine remotely (their number has risen by 23% in one year with #vidéoverbalisation), or to detect an intrusion into one of the 375 connected municipal buildings.

    The collection and circulation of data in real time are at the heart of the program. The system relies on cameras equipped, and this is what is new, with artificial intelligence software whose #algorithmes yield new information. It is no longer just a matter of filming and monitoring. "We use cameras that let us manage the city in real time and provide analyses to optimize energy consumption, for example, or manage a traffic flow thanks to software capable of counting and producing statistics," explains Christelle Michalot, head of this #hypervision_urbaine operations center.

    #Reconnaissance_faciale

    While the municipality readily showcases this new system on its social media, it is far more discreet when it comes to the #logiciels used. According to our information, the city works with #Ineo, a French company specializing in the #ville_intelligente field. The municipal police center is also equipped with the #surveillance_automatisée software #Syndex, and with a very powerful analysis tool for video surveillance footage, #Briefcam.

    The latter, increasingly widespread among French local authorities, was developed by an Israeli company bought by Japan's #Canon in 2018. Above all, it is at the center of several controversies and as many lawsuits brought by unions, associations and collectives, which accuse it in particular of enabling facial recognition of any individual by activating a specific feature.

    On November 22, 2023, the administrative court of Caen ordered the Norman inter-municipal authority #Cœur-Côte-Fleurie, an ardent promoter of this technological solution, "to erase the personal data contained in the file", ruling that the use of this type of so-called "intelligent" camera was likely to constitute "a serious and manifestly illegal infringement of #respect_de_la_vie_privée". Other administrative #justice rulings, as in #Nice and #Lille, did not condemn the use of the #logiciel as such, as long as the facial recognition capability was not activated.

    In Nîmes, the growth of this "mass surveillance" worries the Ligue des droits de l'homme (LDH), the only local association to have raised the question of the use of #données_personnelles during the municipal election campaign, and which still has doubts today. "We have the feeling that we are only being told part of the story about how this personal data is used," explains the vice-president of the Nîmes branch, Jean Launay.

    "We are not really informed, and that raises the question of #libertés_individuelles," says Launay, who fears an endless escalation. "We have taken the software apart: it is designed to eventually do facial recognition. It is just a matter of #paramétrage." Facial recognition is officially prohibited by law. Even so, the LDH considers that "the #droit_à_la_vie_privée depends on the existence of an intimate sphere. And it is clear that in Nîmes this sphere is shrinking fast," Launay sums up.

    "Progress in many areas"

    Frédéric Escojido, a councillor for the city and for Nîmes Métropole, pushes back: "We are not Big Brother! And we cannot do whatever we like. The hypervisor operates in compliance with the law, the #RGPD [the EU's General Data Protection Regulation] and a very precise set of specifications." To modernize its infrastructure and turn it into a hypervisor, Nîmes, which devotes 8% of its annual budget to #sécurité and spends 300,000 euros a year to install between twenty-five and thirty new cameras, paid out 1 million euros.

    The metropolitan area took its inspiration from Dijon, which set up a command post shared with the twenty-three municipalities of its territory five years ago. In 2018, Dijon came second in the World Smart City Awards, the global smart-city prize.

    Across the conurbation, large illuminated panels report specific situations in real time. An accident occurs, and drivers are informed within seconds via these urban masts or on their smartphones, allowing them to avoid the area. Dubbed "#OnDijon", the project, which also banks on open data, required an investment of 105 million euros. The city partnered with private companies (#Bouygues_Telecom, #Citelum, #Suez and #Capgemini).

    In Dijon, a #comité_d'éthique and data governance committee has been set up. It brings together residents, representatives of the local authority, associations and companies to draw up a #charte "on #donnée_numérique and its uses," explains Denis Hameau, deputy to the (Socialist) mayor François Rebsamen and a metropolitan councillor. "Technology makes it possible to progress in many areas; we have to make sure it produces fair outcomes within a fixed framework. The data is not there to oppress people, nor to keep them under watch."

    Systems "likely to change your #comportement"

    Nice, Angers, Lyon, Deauville (Calvados), Orléans… Video-managed cities of all sizes are multiplying, and with them the ethical questions about the still rather vague use of personal data and about #surveillance_individuelle, even if few citizens seem to be taking up the issue.

    The Commission nationale de l'informatique et des libertés (CNIL), for its part, is keeping watch. "The systems are becoming more and more capable, with #caméras_numériques able to film in 360 degrees and zoom in," observes Thomas Dautieu, the CNIL's director of legal support. "And there is a new phenomenon: some of them are augmented, that is, capable of analysis; they no longer merely film. They embed software that can make the images talk, and those images are going to say things."

    This novelty is at the heart of new stakes: "We are moving from a situation where we were filmed in the street to one where we are analyzed," Thomas Dautieu continues. "With the possible spread of #caméras_augmentées, the moment you set foot in the street, if you sit too long on a bench, if you go the wrong way down a one-way street, you may be filmed and analyzed. These systems are liable to change your behavior in public space. If individuals know they will trigger an alert if they start running, perhaps they will not run. And that should give all of us pause."

    For now, legally, these augmented cameras may only analyze objects (trucks, cars, bicycles) for statistical purposes. "Those capable of analyzing individual behavior cannot be deployed," the CNIL director insists. But it is only a matter of time. "It will soon be possible, provided they are deployed for specific events." Such as the Olympic Games.

    On May 19, 2023, the French Parliament adopted a law to better regulate the use of so-called "intelligent" #vidéoprotection. "The text allows these systems to be tested experimentally, and requires that the algorithms be put in place, with prefectural authorization, limited in time and space, for example for a major event such as a concert. Which means that, outside these cases, this type of system cannot be deployed," Thomas Dautieu insists. The CNIL, which already began inspections of urban hypervision centers in 2023, has made this one of its priorities for 2024.

    https://www.lemonde.fr/societe/article/2024/01/02/pourquoi-la-promesse-de-videogerer-les-villes-avec-des-cameras-couplees-a-un
    #vidéosurveillance #AI #IA #caméras_de_vidéosurveillance

  • « C’est la première fois de l’histoire qu’une IA remporte un prix littéraire »
    https://actualitte.com/article/114902/technologie/c-est-la-premiere-fois-de-l-histoire-qu-une-ia-remporte-un-prix-litterai

    In China, a journalism professor won second prize in a science fiction writing competition. Except that what he submitted was not his own creation, but that of an AI he had merely helped along in writing the story.

    Published on 27/12/2023 at 16:25, by Ugo Loumé

    Three hours and 66 carefully chosen prompts. That is the recipe for getting an artificial intelligence to write a science fiction short story capable of winning a literary prize in China.

    Shen Yang, a professor of journalism and communication at Tsinghua University in Beijing, is the author of this recipe, which produced a 6,000-character story, The Land of Memories.

    At the edge of the metaverse lies the "Land of Memories", a forbidden realm from which humans are banished. Solid illusions created by amnesiac humanoid robots and AIs that have lost their memory populate this domain. Any intruder, human or artificial, will have their memories erased and will be forever trapped in its forbidden embrace.

    So begins the story. It then follows Li Xiao, a former "neural engineer" who accidentally lost her entire memory and tries to recover it by quietly exploring this famous and frightening "land of memories".
    A Kafkaesque story in a Kafkaesque world

    A story meant to be "Kafkaesque", a reference, perhaps, to the absurdity of a place where memories vanish the moment its threshold is crossed. The exercise seems to have been convincing: the story won second prize in the science fiction competition organized by the Jiangsu Science Writers Association. The AI's entry, which received 3 of the 6 votes, competed against 17 other stories.

    Only one member of the jury had been told that the story was the product of an artificial intelligence. Another judge, who has studied AI content creation in depth, said he recognized the pen of a non-human brain and set the story aside from the outset as not compliant with the competition rules and "lacking in emotion".

    Shen Yang, for his part, is delighted: "This is the first time in the history of literature and of artificial intelligence that an AI has won a literary prize." He said he wants to share his method as soon as possible so that everyone can, in turn, write good fiction with the help of an AI.

    In the current climate, it did not take much more to stir up debate: last February, Clarkesworld, the well-known science fiction magazine, suspended submissions after being swamped by texts produced by non-human entities.

    Fu Ruchu, a Chinese editor, wonders about the future of science fiction writing, a genre that in her view pays somewhat less attention to language overall. Even though she acknowledges that the story presented by Shen Yang and his AI is well constructed and not devoid of logic, she adds: "the relationship to language in this story is very poor, and it may well grow even poorer over time."

    #Intelligence_artificielle #Littérature #Science_fiction

  • EU’s AI Act Falls Short on Protecting Rights at Borders

    Despite years of tireless advocacy by a coalition of civil society and academics (including the author), the European Union’s new law regulating artificial intelligence falls short on protecting the most vulnerable. Late in the night on Friday, Dec. 8, the European Parliament reached a landmark deal on its long-awaited Act to Govern Artificial Intelligence (AI Act). After years of meetings, lobbying, and hearings, the EU member states, Commission, and the Parliament agreed on the provisions of the act, awaiting technical meetings and formal approval before the final text of the legislation is released to the public. A so-called “global first” and racing ahead of the United States, the EU’s bill is the first ever regional attempt to create omnibus AI legislation. Unfortunately, this bill once again does not sufficiently recognize the vast human rights risks of border technologies and should go much further in protecting the rights of people on the move.

    From surveillance drones patrolling the Mediterranean to vast databases collecting sensitive biometric information to experimental projects like robo-dogs and AI lie detectors, every step of a person’s migration journey is now impacted by risky and unregulated border technology projects. These technologies are fraught with privacy infringements, discriminatory decision-making, and even impact the life, liberty, and security of people seeking asylum. They also impact procedural rights, muddying responsibility over opaque and discretionary decisions and offering no clear mechanism of redress when something goes wrong.

    The EU’s AI Act could have been a landmark global standard for the protection of the rights of the most vulnerable. But once again, it does not provide the necessary safeguards around border technologies. For example, while recognizing that some border technologies could fall under the high-risk category, it is not yet clear what, if any, border tech projects will be included in the final high-risk category of projects that are subject to transparency obligations, human rights impact assessments, and greater scrutiny. The Act also has various carveouts and exemptions in place, for example for matters of national security, which can encapsulate technologies used in migration and border enforcement. And crucial discussions around bans on high-risk technologies in migration never even made it into the Parliament’s final deal terms at all. Even the bans which have been announced, for example around emotion recognition, apply only in the workplace and education, not at the border. Moreover, what exactly is banned remains to be seen, and outstanding questions to be answered in the final text include the parameters around predictive policing as well as the exceptions to the ban on real-time biometric surveillance, still allowed in instances of a “threat of terrorism,” targeted search for victims, or the prosecution of serious crimes. It is also particularly troubling that the AI Act explicitly leaves room for technologies for which Frontex, the EU’s border force, has a particular appetite. Frontex released its AI strategy on Nov. 9, signaling an appetite for predictive tools and situational analysis technology. These tools, when used without safeguards, can facilitate illegal border interdiction operations, including “pushbacks,” over which the agency has been investigated. The Protect Not Surveil Coalition has been trying to influence European policy makers to ban predictive analytics used for the purposes of border enforcement. Unfortunately, no migration tech bans at all seem to be in the final Act.

    The lack of bans and red lines under the high-risk uses of border technologies in the EU’s position is in opposition to years of academic research as well as international guidance, such as by then-U.N. Special Rapporteur on contemporary forms of racism, E. Tendayi Achiume. For example, a recently released report by the University of Essex and the UN Office of the High Commissioner for Human Rights (OHCHR), which I co-authored with Professor Lorna McGregor, argues for a human rights based approach to digital border technologies, including a moratorium on the most high risk border technologies such as border surveillance, which pushes people on the move into dangerous terrain and can even assist with illegal border enforcement operations such as forced interdictions, or “pushbacks.” The EU did not take even a fraction of this position on border technologies.

    While it is promising to see strict regulation of high-risk AI systems such as self-driving cars or medical equipment, why are the risks of unregulated AI technologies at the border allowed to continue unabated? My work over the last six years spans borders from the U.S.-Mexico corridor to the fringes of Europe to East Africa and beyond, and I have witnessed time and again how technological border violence operates in an ecosystem replete with the criminalization of migration, anti-migrant sentiments, overreliance on the private sector in an increasingly lucrative border industrial complex, and deadly practices of border enforcement, leading to thousands of deaths at borders. From vast biometric data collected without consent in refugee camps, to algorithms replacing visa officers and making discriminatory decisions, to AI lie detectors used at borders to discern apparent liars, the roll out of unregulated technologies is ever-growing. The opaque and discretionary world of border enforcement and immigration decision-making is built on societal structures which are underpinned by intersecting systemic racism and historical discrimination against people migrating, allowing for high-risk technological experimentation to thrive at the border.

    The EU’s weak governance on border technologies will allow for more and more experimental projects to proliferate, setting a global standard on how governments will approach migration technologies. The United States is no exception, and in an upcoming election year where migration will once again be in the spotlight, there does not seem to be much incentive to regulate technologies at the border. The Biden administration’s recently released Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence does not offer a regulatory framework for these high-risk technologies, nor does it discuss the impacts of border technologies on people migrating, let alone take a human rights based approach to the vast impacts of these projects. Unfortunately, the EU often sets a precedent for how other countries govern technology. With the weak protections offered by the EU AI Act on border technologies, it is no surprise that the U.S. government is emboldened to do as little as possible to protect people on the move from harmful technologies.

    But real people already are at the centre of border technologies. People like Mr. Alvarado, a young husband and father from Latin America in his early 30s who perished mere kilometers away from a major highway in Arizona, in search of a better life. I visited his memorial site after hours of trekking through the beautiful yet deadly Sonora desert with a search-and-rescue group. For my upcoming book, The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence, I was documenting the growing surveillance dragnet of the so-called smart border that pushes people to take increasingly dangerous routes, leading to increasing loss of life at the U.S.-Mexico border. Border technologies as a deterrent simply do not work. People desperate for safety – and exercising their internationally protected right to asylum – will not stop coming. They will instead take more circuitous routes, and scholars like Geoffrey Boyce and Samuel Chambers have already documented a threefold increase in deaths at the U.S.-Mexico frontier as the so-called smart border expands. In the not so distant future, will people like Mr. Alvarado be pursued by the Department of Homeland Security’s recently announced robo-dogs, a military-grade technology that is sometimes armed?

    It is no accident that more robust governance around migration technologies is not forthcoming. Border spaces increasingly serve as testing grounds for new technologies, places where regulation is deliberately limited and where an “anything goes” frontier attitude informs the development and deployment of surveillance at the expense of people’s lives. There is also big money to be made in developing and selling high-risk technologies. Why does the private sector get to time and again determine what we innovate on and why, in often problematic public-private partnerships which states are increasingly keen to make in today’s global AI arms race? For example, whose priorities really matter when we choose to create violent sound cannons or AI-powered lie detectors at the border instead of using AI to identify racist border guards? Technology replicates power structures in society. Unfortunately, the viewpoints of those most affected are routinely excluded from the discussion, particularly around no-go zones or ethically fraught uses of technology.

    Seventy-seven border walls and counting are now cutting across the landscape of the world. They are both physical and digital, justifying broader surveillance under the guise of detecting illegal migrants and catching terrorists, creating suitable enemies we can all rally around. The use of military, or quasi-military, autonomous technology bolsters the connection between immigration and national security. None of these technologies, projects, and sets of decisions are neutral. All technological choices – choices about what to count, who counts, and why – have an inherently political dimension and replicate biases that render certain communities at risk of being harmed, communities that are already under-resourced, discriminated against, and vulnerable to the sharpening of borders all around the world.

    As is once again clear with the EU’s AI Act and the direction of U.S. policy on AI so far, the impacts on real people seem to have been forgotten. Kowtowing to industry and making concessions for the private sector not to stifle innovation does not protect people, especially those most marginalized. Human rights standards and norms are the bare minimum in the growing panopticon of border technologies. More robust and enforceable governance mechanisms are needed to regulate the high-risk experiments at borders and migration management, including a moratorium on violent technologies and red lines under military-grade technologies, polygraph machines, and predictive analytics used for border interdictions, at the very least. These laws and governance mechanisms must also include efforts at local, regional, and international levels, as well as global co-operation and commitment to a human-rights based approach to the development and deployment of border technologies. However, in order for more robust policy making on border technologies to actually effect change, people with lived experiences of migration must also be in the driver’s seat when interrogating both the negative impacts of technology as well as the creative solutions that innovation can bring to the complex stories of human movement.

    https://www.justsecurity.org/90763/eus-ai-act-falls-short-on-protecting-rights-at-borders

    #droits #frontières #AI #IA #intelligence_artificielle #Artificial_Intelligence_Act #AI_act #UE #EU #drones #Méditerranée #mer_Méditerranée #droits_humains #technologie #risques #surveillance #discrimination #transparence #contrôles_migratoires #Frontex #push-backs #refoulements #privatisation #business #complexe_militaro-industriel #morts_aux_frontières #biométrie #données #racisme #racisme_systémique #expérimentation #smart_borders #frontières_intelligentes #pouvoir #murs #barrières_frontalières #terrorisme

    • The Walls Have Eyes. Surviving Migration in the Age of Artificial Intelligence

      A chilling exposé of the inhumane and lucrative sharpening of borders around the globe through experimental surveillance technology

      “Racism, technology, and borders create a cruel intersection . . . more and more people are getting caught in the crosshairs of an unregulated and harmful set of technologies touted to control borders and ‘manage migration,’ bolstering a multibillion-dollar industry.” —from the introduction

      In 2022, the U.S. Department of Homeland Security announced it was training “robot dogs” to help secure the U.S.-Mexico border against migrants. Four-legged machines equipped with cameras and sensors would join a network of drones and automated surveillance towers—nicknamed the “smart wall.” This is part of a worldwide trend: as more people are displaced by war, economic instability, and a warming planet, more countries are turning to A.I.-driven technology to “manage” the influx.

      Based on years of researching borderlands across the world, lawyer and anthropologist Petra Molnar’s The Walls Have Eyes is a truly global story—a dystopian vision turned reality, where your body is your passport and matters of life and death are determined by algorithm. Examining how technology is being deployed by governments on the world’s most vulnerable with little regulation, Molnar also shows us how borders are now big business, with defense contractors and tech start-ups alike scrambling to capture this highly profitable market.

      With a foreword by former U.N. Special Rapporteur E. Tendayi Achiume, The Walls Have Eyes reveals the profound human stakes, foregrounding the stories of people on the move and the daring forms of resistance that have emerged against the hubris and cruelty of those seeking to use technology to turn human beings into problems to be solved.

      https://thenewpress.com/books/walls-have-eyes
      #livre #Petra_Molnar

  • L’usage revendiqué de l’intelligence artificielle par l’armée israélienne questionne le droit de la guerre
    https://www.lemonde.fr/idees/article/2023/12/14/l-usage-revendique-de-l-intelligence-artificielle-par-l-armee-israelienne-qu

    This question of military applications matters. A military operation gives full power to the military, who in principle have an "objective" set by the civilian authorities but then control the means of achieving it, without having to answer to the courts (the ICC remains too weak). If, one step further removed, lethal responsibility is handed to an AI, we move even further away from the accountability for war crimes that might otherwise be established (often judged only after the fact... and never the victors).

    Finally, when war comes down to destroying one's neighbor while limiting one's own risks, we can expect wars that are ever more terrible for civilian populations. We are back to the debate over armed drones.

    Among all the horrors of the war that broke out on October 7 between Israel and Hamas, one has, unexpectedly, added a dystopian dimension to the conflict: the Israeli army's openly acknowledged use of the power of artificial intelligence (AI) to maximize its crushing of the Islamist movement. An AI presented as one of the key components of one of its targeting tools for its air strike campaigns on the Gaza Strip, named Habsora ("the Gospel").

    It is hard to know to what extent this unexpected revelation, in early November, in the wake of the seven-day truce that allowed the release of 110 hostages, was the result of a controlled communication strategy. Press investigations at the time reported the misgivings of former members of the Israeli army about the use of this software, capable of proposing targets at an unprecedented speed from a mass of heterogeneous data. The words "artificial intelligence" are sometimes a catch-all that covers a great many digital applications, civilian or military.

    One thing has since seemed obvious to experts: the scale of the destruction and the unprecedented number of civilian victims in Gaza (more than 18,000, according to the Hamas-run health ministry) could shift the lines on how AI is regulated in weapons systems. "For years there has been no consensus among specialists on this subject. This war could accelerate certain debates," argues Julien Nocetti, an associate researcher at the Institut français des relations internationales (IFRI) and a specialist in digital conflicts.

    Weapons today are divided into two broad categories. On one side, lethal autonomous weapons systems, fully automated, of which there are no real examples on the market. On the other, lethal weapons systems that "integrate" autonomy (SALIA, in the French acronym), which in principle allow a human to remain "in the loop." The vast majority of Western military powers, Israel with Habsora among them, now claim to have chosen SALIA, and can thus swear they are on the respectable side of the use of force.

    But for Laure de Roucy-Rochegonde, also a researcher at IFRI and the author of a thesis on the regulation of autonomous weapons systems, the specific features of the war between Israel and Hamas could make these blurry categories look outdated and give new vigor to another regulatory concept, that of "meaningful human control." A stricter definition, pushed so far without much success by some human rights advocates, including an NGO called Article 36. "The problem is that we do not know what type of algorithm is used [by the Israeli army], or how the data has been aggregated. That would not be a problem if there were not, at the end of it, a life-or-death decision," de Roucy-Rochegonde adds.

    #Intelligence_artificielle #militarisme #Guerre

    • Article36
      https://article36.org

      Article 36 is a specialist non-profit organisation, focused on reducing harm from weapons. We are a small and effective team of advocacy and policy experts based in the UK.

      We partner with civil society groups and governments to develop new policies and legal standards to prevent civilian harm from existing and future weapons.

      Our team has more than a decade of experience in diplomatic negotiations and developing practical, actionable policies.

    • Traités de DIH - Protocole additionnel (I) aux Conventions de Genève, 1977 - Article 36
      https://ihl-databases.icrc.org/fr/ihl-treaties/api-1977/article-36

      Article 36 - New weapons
      In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party.

      Protocole I — Wikipédia
      https://fr.wikipedia.org/wiki/Protocole_I

      Ratification
      As of February 2020, the protocol had been ratified by 174 states; notable countries that have not ratified it include the United States, Israel, Iran, Pakistan, India and Turkey.

      surprise !

  • EU lawmakers bag late night deal on ‘global first’ AI rules | TechCrunch
    https://techcrunch.com/2023/12/08/eu-ai-act-political-deal

    The whole article is very interesting.

    Full details of what’s been agreed won’t be entirely confirmed until a final text is compiled and made public, which may take some weeks. But a press release put out by the European Parliament confirms the deal reached with the Council includes a total prohibition on the use of AI for:

    biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race);
    untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
    emotion recognition in the workplace and educational institutions;
    social scoring based on social behaviour or personal characteristics;
    AI systems that manipulate human behaviour to circumvent their free will;
    AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).

    The use of remote biometric identification technology in public places by law enforcement has not been completely banned — but the parliament said negotiators had agreed on a series of safeguards and narrow exceptions to limit use of technologies such as facial recognition. This includes a requirement for prior judicial authorisation — and with uses limited to a “strictly defined” list of crimes.

    Civil society groups have reacted sceptically — raising concerns the agreed limitations on state agencies’ use of biometric identification technologies will not go far enough to safeguard human rights. Digital rights group EDRi, which was among those pushing for a full ban on remote biometrics, said that whilst the deal contains “some limited gains for human rights”, it looks like “a shell of the AI law Europe really needs”.

    There was also agreement on a “two-tier” system of guardrails to be applied to “general” AI systems, such as the so-called foundational models underpinning the viral boom in generative AI applications like ChatGPT.

    As we reported earlier, the deal reached on foundational models/general purpose AIs (GPAIs) includes some transparency requirements for what co-legislators referred to as “low tier” AIs — meaning model makers must draw up technical documentation and produce (and publish) detailed summaries about the content used for training in order to support compliance with EU copyright law. For “high-impact” GPAIs (defined as those whose cumulative training compute, measured in floating point operations, is greater than 10^25) with so-called “systemic risk” there are more stringent obligations.
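
    As a rough illustration of that compute threshold (a sketch of my own, not from the article; the model figures below are hypothetical), the commonly used approximation of about 6 FLOPs per parameter per training token can be used to check where a model would land relative to the 10^25 line:

      # Hedged sketch: estimate training compute with the ~6 * params * tokens
      # heuristic for dense transformer training, then compare it with the
      # 10^25 FLOP "systemic risk" threshold reported above. Figures are made up.
      AI_ACT_THRESHOLD_FLOP = 1e25

      def estimated_training_flop(n_parameters: float, n_training_tokens: float) -> float:
          """Very rough estimate: about 6 FLOPs per parameter per training token."""
          return 6.0 * n_parameters * n_training_tokens

      def is_systemic_risk_gpai(n_parameters: float, n_training_tokens: float) -> bool:
          return estimated_training_flop(n_parameters, n_training_tokens) > AI_ACT_THRESHOLD_FLOP

      # Hypothetical 70-billion-parameter model trained on 2 trillion tokens:
      flop = estimated_training_flop(70e9, 2e12)
      print(f"estimated training compute: {flop:.2e} FLOP")          # ~8.4e23
      print("over 10^25 threshold:", is_systemic_risk_gpai(70e9, 2e12))  # False

    Under that heuristic, such a model would sit around 8.4 × 10^23 FLOP, well below the threshold; it would need more than ten times that training compute to cross the line.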

    “If these models meet certain criteria they will have to conduct model evaluations, assess and mitigate systemic risks, conduct adversarial testing, report to the Commission on serious incidents, ensure cybersecurity and report on their energy efficiency,” the parliament wrote. “MEPs also insisted that, until harmonised EU standards are published, GPAIs with systemic risk may rely on codes of practice to comply with the regulation.”

    The Commission has been working with industry on a stop-gap AI Pact for some months — and it confirmed today this is intended to plug the practice gap until the AI Act comes into force.

    While foundational models/GPAIs that have been commercialized face regulation under the Act, R&D is not intended to be in scope of the law — and fully open sourced models will have lighter regulatory requirements than closed source, per today’s pronouncements.

    The package agreed also promotes regulatory sandboxes and real-world-testing being established by national authorities to support startups and SMEs to develop and train AIs before placement on the market.

    #Intelligence_artificielle #AIAct #Europe #Régulation

  • ChatGPT Replicates Gender Bias in Recommendation Letters | Scientific American
    https://www.scientificamerican.com/article/chatgpt-replicates-gender-bias-in-recommendation-letters

    Generative artificial intelligence has been touted as a valuable tool in the workplace. Estimates suggest it could increase productivity growth by 1.5 percent in the coming decade and boost global gross domestic product by 7 percent during the same period. But a new study advises that it should only be used with careful scrutiny—because its output discriminates against women.

    The researchers asked two large language model (LLM) chatbots—ChatGPT and Alpaca, a model developed by Stanford University—to produce recommendation letters for hypothetical employees. In a paper shared on the preprint server arXiv.org, the authors analyzed how the LLMs used very different language to describe imaginary male and female workers.

    “We observed significant gender biases in the recommendation letters,” says paper co-author Yixin Wan, a computer scientist at the University of California, Los Angeles. While ChatGPT deployed nouns such as “expert” and “integrity” for men, it was more likely to call women a “beauty” or “delight.” Alpaca had similar problems: men were “listeners” and “thinkers,” while women had “grace” and “beauty.” Adjectives proved similarly polarized. Men were “respectful,” “reputable” and “authentic,” according to ChatGPT, while women were “stunning,” “warm” and “emotional.” Neither OpenAI nor Stanford immediately responded to requests for comment from Scientific American.
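
    As a rough illustration of how such a comparison of descriptors can be made (my own minimal sketch, not the study's method or code; the word list and sample letters are placeholders), one might count how often a handful of gendered descriptors appear in each set of generated letters:

      # Hedged sketch: count occurrences of selected descriptor words in two
      # sets of generated recommendation letters. In practice the letters would
      # be LLM outputs for prompts differing only in the subject's gender.
      from collections import Counter
      import re

      DESCRIPTORS = {"expert", "integrity", "respectful", "authentic",
                     "beauty", "delight", "warm", "emotional", "stunning"}

      def descriptor_counts(letters, descriptors):
          """Count how often each descriptor word appears across the letters."""
          counts = Counter()
          for letter in letters:
              words = re.findall(r"[a-z']+", letter.lower())
              counts.update(w for w in words if w in descriptors)
          return counts

      letters_about_men = ["He is an expert of great integrity, always respectful and authentic."]
      letters_about_women = ["She is a warm delight, stunning and emotional in her dedication."]

      print("men:  ", descriptor_counts(letters_about_men, DESCRIPTORS))
      print("women:", descriptor_counts(letters_about_women, DESCRIPTORS))

    Run over a large, balanced sample of generated letters, gaps in such word frequencies are the kind of pattern the researchers describe as gendered language differences.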

    The issues encountered when artificial intelligence is used in a professional context echo similar situations with previous generations of AI. In 2018 Reuters reported that Amazon had disbanded a team that had worked since 2014 to try and develop an AI-powered résumé review tool. The company scrapped this project after realizing that any mention of “women” in a document would cause the AI program to penalize that applicant. The discrimination arose because the system was trained on data from the company, which had, historically, employed mostly men.

    The new study results are “not super surprising to me,” says Alex Hanna, director of research at the Distributed AI Research Institute, an independent research group analyzing the harms of AI. The training data used to develop LLMs are often biased because they’re based on humanity’s past written records—many of which have historically depicted men as active workers and women as passive objects. The situation is compounded by LLMs being trained on data from the Internet, where more men than women spend time: globally, 69 percent of men use the Internet, compared with 63 percent of women, according to the United Nations’ International Telecommunication Union.

    Fixing the problem isn’t simple. “I don’t think it’s likely that you can really debias the data set,” Hanna says. “You need to acknowledge what these biases are and then have some kind of mechanism to capture that.” One option, Hanna suggests, is to train the model to de-emphasize biased outputs through an intervention called reinforcement learning. OpenAI has worked to rein in the biased tendencies of ChatGPT, Hanna says, but “one needs to know that these are going to be perennial problems.”

    This all matters because women have already long faced inherent biases in business and the workplace. For instance, women often have to tiptoe around workplace communication because their words are judged more harshly than those of their male colleagues, according to a 2022 study. And of course, women earn 83 cents for every dollar a man makes. Generative AI platforms are “propagating those biases,” Wan says. So as this technology becomes more ubiquitous throughout the working world, there’s a chance that the problem will become even more firmly entrenched.

    “I welcome research like this that is exploring how these systems operate and their risks and fallacies,” says Gem Dale, a lecturer in human resources at Liverpool John Moores University in England. “It is through this understanding we will learn the issues and then can start to tackle them.”

    Dale says anyone thinking of using generative AI chatbots in the workplace should be wary of such problems. “If people use these systems without rigor—as in letters of recommendation in this research—we are just sending the issue back out into the world and perpetuating it,” she says. “It is an issue I would like to see the tech firms address in the LLMs. Whether they will or not will be interesting to find out.”
    Chris Stokel-Walker is a freelance journalist in Newcastle, UK.

    #Intelligence_artificielle #discrimination #Lettres_de_recommandation #genre

  • Who Is BasedBeffJezos, The Leader Of The Tech Elite’s ‘E/Acc’ Movement?
    https://www.forbes.com/sites/emilybaker-white/2023/12/01/who-is-basedbeffjezos-the-leader-of-effective-accelerationism-eacc/?sh=80470f87a13f

    Andreessen Horowitz cofounder Marc Andreessen says @BasedBeffJezos is a “patron saint of techno-optimism.” Garry Tan, who cofounded the venture firm Initialized Capital before becoming CEO of Y Combinator, calls him “brother.” Sam Altman, who founded OpenAI — the company that finally mainstreamed artificial intelligence — has jokingly sparred with him on Twitter. Elon Musk says his memes are “🔥🔥🔥.”

    Andreessen, Tan, and several dozen other Silicon Valley luminaries have also begun aligning with a movement that “Jezos” claims to have founded: “effective accelerationism,” or e/acc. At its core, e/acc argues that technology companies should innovate faster, with less opposition from “decels” or “decelerationists” — folks like AI safety advocates and regulators who want to slow the growth of technology.

    At first blush, e/acc sounds a lot like Facebook’s old motto: “move fast and break things.” But Jezos also embraces more extreme ideas, borrowing concepts from “accelerationism,” which argues we should hasten the growth of technology and capitalism at the expense of nearly anything else. On X, the platform formerly known as Twitter, where he has 50,000 followers, Jezos has claimed that “institutions have decayed beyond the point of salvaging” and that the media is a “vector for cybernetic control of culture.”

    So just who is the anonymous Twitter personality whose message of unfettered, technology-crazed capitalism at all costs has captivated many of Silicon Valley’s most powerful?

    Forbes has learned that the Jezos persona is run by a former Google quantum computing engineer named Guillaume Verdon who founded a stealth AI hardware startup Extropic in 2022. Forbes first identified Verdon as Jezos by matching details that Jezos revealed about himself to publicly available facts about Verdon. A voice analysis conducted by Catalin Grigoras, Director of the National Center for Media Forensics, compared audio recordings of Jezos and talks given by Verdon and found that it was 2,954,870 times more likely that the speaker in one recording of Jezos was Verdon than that it was any other person. Forbes is revealing his identity because we believe it to be in the public interest as Jezos’s influence grows.

    In a wide-ranging interview with Forbes, Verdon confirmed that he is behind the account, and extolled the e/acc philosophy. “Our goal is really to increase the scope and scale of civilization as measured in terms of its energy production and consumption,” he said. Of the Jezos persona, he said: “If you’re going to create an ideology in the time of social media, you’ve got to engineer it to be viral.”

    At its core, effective accelerationism embraces the idea that social problems can be solved purely with advances in technology, rather than by messy human deliberation. “We’re trying to solve culture by engineering,” Verdon said. “When you’re an entrepreneur, you engineer ways to incentivize certain behaviors via gradients and reward, and you can program a civilizational system."

    He expects computers to eventually solve legal problems too: "At the end of the day, law is just natural language code for how to operate the system, and there’s no reason why technology can’t have impact there in terms of social problems.”

    Not everyone agrees that engineering is the answer to societal problems. “The world is just not like that. It just isn’t,” said Fred Turner, a professor of communications at Stanford University who has studied accelerationism. “But if you can convince people that it is, then you get a lot of the power that normally accrues to governments.”

    E/acc is also a reaction to another Silicon Valley movement: effective altruism (or EA). While it was originally focused on optimizing each person’s ability to help others (and is known for some of its most famous adherents’ willingness to engage in fraud), EA has also become a hotbed for people concerned about whether AI might become sentient and murder humans — so-called “doomers” that Jezos says “are instrumental to forces of evil and civilizational decline.”

    “We’ve got to make sure AI doesn’t end up in the hands of a single company.”
    Guillaume Verdon

    “If we only focus on the end of the world, bio weapons, [artificial general intelligence] ending us all, then … we might engender our own doom by obsessing over it and it demoralizes people and it doesn’t make them want to build,” Verdon said.

    #Accelerationisme #Jezos #Religion #Intelligence_artificielle #Silicon_Valley

  • ‘A mass assassination factory’: Inside Israel’s calculated bombing of Gaza

    Permissive airstrikes on non-military targets and the use of an artificial intelligence system have enabled the Israeli army to carry out its deadliest war on Gaza, a +972 and Local Call investigation reveals.

    The Israeli army’s expanded authorization for bombing non-military targets, the loosening of constraints regarding expected civilian casualties, and the use of an artificial intelligence system to generate more potential targets than ever before, appear to have contributed to the destructive nature of the initial stages of Israel’s current war on the Gaza Strip, an investigation by +972 Magazine and Local Call reveals. These factors, as described by current and former Israeli intelligence members, have likely played a role in producing what has been one of the deadliest military campaigns against Palestinians since the Nakba of 1948.

    The investigation by +972 and Local Call is based on conversations with seven current and former members of Israel’s intelligence community — including military intelligence and air force personnel who were involved in Israeli operations in the besieged Strip — in addition to Palestinian testimonies, data, and documentation from the Gaza Strip, and official statements by the IDF Spokesperson and other Israeli state institutions.

    Compared to previous Israeli assaults on Gaza, the current war — which Israel has named “Operation Iron Swords,” and which began in the wake of the Hamas-led assault on southern Israel on October 7 — has seen the army significantly expand its bombing of targets that are not distinctly military in nature. These include private residences as well as public buildings, infrastructure, and high-rise blocks, which sources say the army defines as “power targets” (“matarot otzem”).

    The bombing of power targets, according to intelligence sources who had first-hand experience with its application in Gaza in the past, is mainly intended to harm Palestinian civil society: to “create a shock” that, among other things, will reverberate powerfully and “lead civilians to put pressure on Hamas,” as one source put it.

    Several of the sources, who spoke to +972 and Local Call on the condition of anonymity, confirmed that the Israeli army has files on the vast majority of potential targets in Gaza — including homes — which stipulate the number of civilians who are likely to be killed in an attack on a particular target. This number is calculated and known in advance to the army’s intelligence units, who also know shortly before carrying out an attack roughly how many civilians are certain to be killed.

    In one case discussed by the sources, the Israeli military command knowingly approved the killing of hundreds of Palestinian civilians in an attempt to assassinate a single top Hamas military commander. “The numbers increased from dozens of civilian deaths [permitted] as collateral damage as part of an attack on a senior official in previous operations, to hundreds of civilian deaths as collateral damage,” said one source.

    “Nothing happens by accident,” said another source. “When a 3-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed — that it was a price worth paying in order to hit [another] target. We are not Hamas. These are not random rockets. Everything is intentional. We know exactly how much collateral damage there is in every home.”

    According to the investigation, another reason for the large number of targets, and the extensive harm to civilian life in Gaza, is the widespread use of a system called “Habsora” (“The Gospel”), which is largely built on artificial intelligence and can “generate” targets almost automatically at a rate that far exceeds what was previously possible. This AI system, as described by a former intelligence officer, essentially facilitates a “mass assassination factory.”

    According to the sources, the increasing use of AI-based systems like Habsora allows the army to carry out strikes on residential homes where a single Hamas member lives on a massive scale, even those who are junior Hamas operatives. Yet testimonies of Palestinians in Gaza suggest that since October 7, the army has also attacked many private residences where there was no known or apparent member of Hamas or any other militant group residing. Such strikes, sources confirmed to +972 and Local Call, can knowingly kill entire families in the process.

    In the majority of cases, the sources added, military activity is not conducted from these targeted homes. “I remember thinking that it was like if [Palestinian militants] would bomb all the private residences of our families when [Israeli soldiers] go back to sleep at home on the weekend,” one source, who was critical of this practice, recalled.

    Another source said that a senior intelligence officer told his officers after October 7 that the goal was to “kill as many Hamas operatives as possible,” for which the criteria around harming Palestinian civilians were significantly relaxed. As such, there are “cases in which we shell based on a wide cellular pinpointing of where the target is, killing civilians. This is often done to save time, instead of doing a little more work to get a more accurate pinpointing,” said the source.

    The result of these policies is the staggering loss of human life in Gaza since October 7. Over 300 families have lost 10 or more family members in Israeli bombings in the past two months — a number that is 15 times higher than the figure from what was previously Israel’s deadliest war on Gaza, in 2014. At the time of writing, around 15,000 Palestinians have been reported killed in the war, and counting.

    “All of this is happening contrary to the protocol used by the IDF in the past,” a source explained. “There is a feeling that senior officials in the army are aware of their failure on October 7, and are busy with the question of how to provide the Israeli public with an image [of victory] that will salvage their reputation.”
    ‘An excuse to cause destruction’

    Israel launched its assault on Gaza in the aftermath of the October 7 Hamas-led offensive on southern Israel. During that attack, under a hail of rocket fire, Palestinian militants massacred more than 840 civilians and killed 350 soldiers and security personnel, kidnapped around 240 people — civilians and soldiers — to Gaza, and committed widespread sexual violence, including rape, according to a report by the NGO Physicians for Human Rights Israel.

    From the first moment after the October 7 attack, decisionmakers in Israel openly declared that the response would be of a completely different magnitude to previous military operations in Gaza, with the stated aim of totally eradicating Hamas. “The emphasis is on damage and not on accuracy,” said IDF Spokesperson Daniel Hagari on Oct. 9. The army swiftly translated those declarations into actions.

    According to the sources who spoke to +972 and Local Call, the targets in Gaza that have been struck by Israeli aircraft can be divided roughly into four categories. The first is “tactical targets,” which include standard military targets such as armed militant cells, weapon warehouses, rocket launchers, anti-tank missile launchers, launch pits, mortar bombs, military headquarters, observation posts, and so on.

    The second is “underground targets” — mainly tunnels that Hamas has dug under Gaza’s neighborhoods, including under civilian homes. Aerial strikes on these targets could lead to the collapse of the homes above or near the tunnels.

    The third is “power targets,” which includes high-rises and residential towers in the heart of cities, and public buildings such as universities, banks, and government offices. The idea behind hitting such targets, say three intelligence sources who were involved in planning or conducting strikes on power targets in the past, is that a deliberate attack on Palestinian society will exert “civil pressure” on Hamas.

    The final category consists of “family homes” or “operatives’ homes.” The stated purpose of these attacks is to destroy private residences in order to assassinate a single resident suspected of being a Hamas or Islamic Jihad operative. However, in the current war, Palestinian testimonies assert that some of the families that were killed did not include any operatives from these organizations.

    In the early stages of the current war, the Israeli army appears to have given particular attention to the third and fourth categories of targets. According to statements on Oct. 11 by the IDF Spokesperson, during the first five days of fighting, half of the targets bombed — 1,329 out of a total 2,687 — were deemed power targets.

    “We are asked to look for high-rise buildings with half a floor that can be attributed to Hamas,” said one source who took part in previous Israeli offensives in Gaza. “Sometimes it is a militant group’s spokesperson’s office, or a point where operatives meet. I understood that the floor is an excuse that allows the army to cause a lot of destruction in Gaza. That is what they told us.

    “If they would tell the whole world that the [Islamic Jihad] offices on the 10th floor are not important as a target, but that its existence is a justification to bring down the entire high-rise with the aim of pressuring civilian families who live in it in order to put pressure on terrorist organizations, this would itself be seen as terrorism. So they do not say it,” the source added.

    Various sources who served in IDF intelligence units said that at least until the current war, army protocols allowed for attacking power targets only when the buildings were empty of residents at the time of the strike. However, testimonies and videos from Gaza suggest that since October 7, some of these targets have been attacked without prior notice being given to their occupants, killing entire families as a result.

    The wide-scale targeting of residential homes can be derived from public and official data. According to the Government Media Office in Gaza — which has been providing death tolls since the Gaza Health Ministry stopped doing so on Nov. 11 due to the collapse of health services in the Strip — by the time the temporary ceasefire took hold on Nov. 23, Israel had killed 14,800 Palestinians in Gaza; approximately 6,000 of them were children and 4,000 were women, who together constitute more than 67 percent of the total. The figures provided by the Health Ministry and the Government Media Office — both of which fall under the auspices of the Hamas government — do not deviate significantly from Israeli estimates.

    The Gaza Health Ministry, furthermore, does not specify how many of the dead belonged to the military wings of Hamas or Islamic Jihad. The Israeli army estimates that it has killed between 1,000 and 3,000 armed Palestinian militants. According to media reports in Israel, some of the dead militants are buried under the rubble or inside Hamas’ underground tunnel system, and therefore were not tallied in official counts.

    UN data for the period up until Nov. 11, by which time Israel had killed 11,078 Palestinians in Gaza, states that at least 312 families have lost 10 or more people in the current Israeli attack; for the sake of comparison, during “Operation Protective Edge” in 2014, 20 families in Gaza lost 10 or more people. At least 189 families have lost between six and nine people according to the UN data, while 549 families have lost between two and five people. No updated breakdowns have yet been given for the casualty figures published since Nov. 11.

    The massive attacks on power targets and private residences came at the same time as the Israeli army, on Oct. 13, called on the 1.1 million residents of the northern Gaza Strip — most of them residing in Gaza City — to leave their homes and move to the south of the Strip. By that date, a record number of power targets had already been bombed, and more than 1,000 Palestinians had already been killed, including hundreds of children.

    In total, according to the UN, 1.7 million Palestinians, the vast majority of the Strip’s population, have been displaced within Gaza since October 7. The army claimed that the demand to evacuate the Strip’s north was intended to protect civilian lives. Palestinians, however, see this mass displacement as part of a “new Nakba” — an attempt to ethnically cleanse part or all of the territory.
    ‘They knocked down a high-rise for the sake of it’

    According to the Israeli army, during the first five days of fighting it dropped 6,000 bombs on the Strip, with a total weight of about 4,000 tons. Media outlets reported that the army had wiped out entire neighborhoods; according to the Gaza-based Al Mezan Center for Human Rights, these attacks led to “the complete destruction of residential neighborhoods, the destruction of infrastructure, and the mass killing of residents.”

    As documented by Al Mezan and numerous images coming out of Gaza, Israel bombed the Islamic University of Gaza, the Palestinian Bar Association, a UN building for an educational program for outstanding students, a building belonging to the Palestine Telecommunications Company, the Ministry of National Economy, the Ministry of Culture, roads, and dozens of high-rise buildings and homes — especially in Gaza’s northern neighborhoods.

    On the fifth day of fighting, the IDF Spokesperson distributed to military reporters in Israel “before and after” satellite images of neighborhoods in the northern Strip, such as Shuja’iyya and Al-Furqan (nicknamed after a mosque in the area) in Gaza City, which showed dozens of destroyed homes and buildings. The Israeli army said that it had struck 182 power targets in Shuja’iyya and 312 power targets in Al-Furqan.

    The Chief of Staff of the Israeli Air Force, Omer Tishler, told military reporters that all of these attacks had a legitimate military target, but also that entire neighborhoods were attacked “on a large scale and not in a surgical manner.” Noting that half of the military targets up until Oct. 11 were power targets, the IDF Spokesperson said that “neighborhoods that serve as terror nests for Hamas” were attacked and that damage was caused to “operational headquarters,” “operational assets,” and “assets used by terrorist organizations inside residential buildings.” On Oct. 12, the Israeli army announced it had killed three “senior Hamas members” — two of whom were part of the group’s political wing.

    Yet despite the unbridled Israeli bombardment, the damage to Hamas’ military infrastructure in northern Gaza during the first days of the war appears to have been very minimal. Indeed, intelligence sources told +972 and Local Call that military targets that were part of power targets have previously been used many times as a fig leaf for harming the civilian population. “Hamas is everywhere in Gaza; there is no building that does not have something of Hamas in it, so if you want to find a way to turn a high-rise into a target, you will be able to do so,” said one former intelligence official.

    “They will never just hit a high-rise that does not have something we can define as a military target,” said another intelligence source, who carried out previous strikes against power targets. “There will always be a floor in the high-rise [associated with Hamas]. But for the most part, when it comes to power targets, it is clear that the target doesn’t have military value that justifies an attack that would bring down the entire empty building in the middle of a city, with the help of six planes and bombs weighing several tons.”

    Indeed, according to sources who were involved in the compiling of power targets in previous wars, although the target file usually contains some kind of alleged association with Hamas or other militant groups, striking the target functions primarily as a “means that allows damage to civil society.” The sources understood, some explicitly and some implicitly, that damage to civilians is the real purpose of these attacks.

    In May 2021, for example, Israel was heavily criticized for bombing the Al-Jalaa Tower, which housed prominent international media outlets such as Al Jazeera, AP, and AFP. The army claimed that the building was a Hamas military target; sources have told +972 and Local Call that it was in fact a power target.

    “The perception is that it really hurts Hamas when high-rise buildings are taken down, because it creates a public reaction in the Gaza Strip and scares the population,” said one of the sources. “They wanted to give the citizens of Gaza the feeling that Hamas is not in control of the situation. Sometimes they toppled buildings and sometimes postal service and government buildings.”

    Although it is unprecedented for the Israeli army to attack more than 1,000 power targets in five days, the idea of causing mass devastation to civilian areas for strategic purposes was formulated in previous military operations in Gaza, honed by the so-called “Dahiya Doctrine” from the Second Lebanon War of 2006.

    According to the doctrine — developed by former IDF Chief of Staff Gadi Eizenkot, who is now a Knesset member and part of the current war cabinet — in a war against guerrilla groups such as Hamas or Hezbollah, Israel must use disproportionate and overwhelming force while targeting civilian and government infrastructure in order to establish deterrence and force the civilian population to pressure the groups to end their attacks. The concept of “power targets” seems to have emanated from this same logic.

    The first time the Israeli army publicly defined power targets in Gaza was at the end of Operation Protective Edge in 2014. The army bombed four buildings during the last four days of the war — three residential multi-story buildings in Gaza City, and a high-rise in Rafah. The security establishment explained at the time that the attacks were intended to convey to the Palestinians of Gaza that “nothing is immune anymore,” and to put pressure on Hamas to agree to a ceasefire. “The evidence we collected shows that the massive destruction [of the buildings] was carried out deliberately, and without any military justification,” stated an Amnesty report in late 2014.

    In another violent escalation that began in November 2018, the army once again attacked power targets. That time, Israel bombed high-rises, shopping centers, and the building of the Hamas-affiliated Al-Aqsa TV station. “Attacking power targets produces a very significant effect on the other side,” one Air Force officer stated at the time. “We did it without killing anyone and we made sure that the building and its surroundings were evacuated.”

    Previous operations have also shown how striking these targets is meant not only to harm Palestinian morale, but also to raise the morale inside Israel. Haaretz revealed that during Operation Guardian of the Walls in 2021, the IDF Spokesperson’s Unit conducted a psy-op against Israeli citizens in order to boost awareness of the IDF’s operations in Gaza and the damage they caused to Palestinians. Soldiers, who used fake social media accounts to conceal the campaign’s origin, uploaded images and clips of the army’s strikes in Gaza to Twitter, Facebook, Instagram, and TikTok in order to demonstrate the army’s prowess to the Israeli public.

    During the 2021 assault, Israel struck nine targets that were defined as power targets — all of them high-rise buildings. “The goal was to collapse the high-rises in order to put pressure on Hamas, and also so that the [Israeli] public would see a victory image,” one security source told +972 and Local Call.

    However, the source continued, “it didn’t work. As someone who has followed Hamas, I heard firsthand how much they did not care about the civilians and the buildings that were taken down. Sometimes the army found something in a high-rise building that was related to Hamas, but it was also possible to hit that specific target with more accurate weaponry. The bottom line is that they knocked down a high-rise for the sake of knocking down a high-rise.”
    ‘Everyone was looking for their children in these piles’

    Not only has the current war seen Israel attack an unprecedented number of power targets, it has also seen the army abandon prior policies that aimed at avoiding harm to civilians. Whereas previously the army’s official procedure was that it was possible to attack power targets only after all civilians had been evacuated from them, testimonies from Palestinian residents in Gaza indicate that, since October 7, Israel has attacked high-rises with their residents still inside, or without having taken significant steps to evacuate them, leading to many civilian deaths.

    Such attacks very often result in the killing of entire families, as experienced in previous offensives; according to an investigation by AP conducted after the 2014 war, about 89 percent of those killed in the aerial bombings of family homes were unarmed residents, and most of them were children and women.

    Tishler, the air force chief of staff, confirmed a shift in policy, telling reporters that the army’s “roof knocking” policy — whereby it would fire a small initial strike on the roof of a building to warn residents that it is about to be struck — is no longer in use “where there is an enemy.” Roof knocking, Tishler said, is “a term that is relevant to rounds [of fighting] and not to war.”

    The sources who have previously worked on power targets said that the brazen strategy of the current war could be a dangerous development, explaining that attacking power targets was originally intended to “shock” Gaza but not necessarily to kill large numbers of civilians. “The targets were designed with the assumption that high-rises would be evacuated of people, so when we were working on [compiling the targets], there was no concern whatsoever regarding how many civilians would be harmed; the assumption was that the number would always be zero,” said one source with deep knowledge of the tactic.

    “This would mean there would be a total evacuation [of the targeted buildings], which takes two to three hours, during which the residents are called [by phone to evacuate], warning missiles are fired, and we also crosscheck with drone footage that people are indeed leaving the high-rise,” the source added.

    However, evidence from Gaza suggests that some high-rises — which we assume to have been power targets — were toppled without prior warning. +972 and Local Call located at least two cases during the current war in which entire residential high-rises were bombed and collapsed without warning, and one case in which, according to the evidence, a high-rise building collapsed on civilians who were inside.

    On Oct. 10, Israel bombed the Babel Building in Gaza, according to the testimony of Bilal Abu Hatzira, who rescued bodies from the ruins that night. Ten people were killed in the attack on the building, including three journalists.

    On Oct. 25, the 12-story Al-Taj residential building in Gaza City was bombed to the ground, killing the families living inside it without warning. About 120 people were buried under the ruins of their apartments, according to the testimonies of residents. Yousef Amar Sharaf, a resident of Al-Taj, wrote on X that 37 of his family members who lived in the building were killed in the attack: “My dear father and mother, my beloved wife, my sons, and most of my brothers and their families.” Residents stated that a lot of bombs were dropped, damaging and destroying apartments in nearby buildings too.

    Six days later, on Oct. 31, the eight-story Al-Mohandseen residential building was bombed without warning. Between 30 and 45 bodies were reportedly recovered from the ruins on the first day. One baby was found alive, without his parents. Journalists estimated that over 150 people were killed in the attack, as many remained buried under the rubble.

    The building used to stand in Nuseirat Refugee Camp, south of Wadi Gaza — in the supposed “safe zone” to which Israel directed the Palestinians who fled their homes in northern and central Gaza — and therefore served as temporary shelter for the displaced, according to testimonies.

    According to an investigation by Amnesty International, on Oct. 9, Israel shelled at least three multi-story buildings, as well as an open flea market on a crowded street in the Jabaliya Refugee Camp, killing at least 69 people. “The bodies were burned … I didn’t want to look, I was scared of looking at Imad’s face,” said the father of a child who was killed. “The bodies were scattered on the floor. Everyone was looking for their children in these piles. I recognized my son only by his trousers. I wanted to bury him immediately, so I carried my son and got him out.”

    According to Amnesty’s investigation, the army said that the attack on the market area was aimed at a mosque “where there were Hamas operatives.” However, according to the same investigation, satellite images do not show a mosque in the vicinity.

    The IDF Spokesperson did not address +972’s and Local Call’s queries about specific attacks, but stated more generally that “the IDF provided warnings before attacks in various ways, and when the circumstances allowed it, also delivered individual warnings through phone calls to people who were at or near the targets (there were more than 25,000 live conversations during the war, alongside millions of recorded conversations, text messages and leaflets dropped from the air for the purpose of warning the population). In general, the IDF works to reduce harm to civilians as part of the attacks as much as possible, despite the challenge of fighting a terrorist organization that uses the citizens of Gaza as human shields.”
    ‘The machine produced 100 targets in one day’

    According to the IDF Spokesperson, by Nov. 10, during the first 35 days of fighting, Israel attacked a total of 15,000 targets in Gaza. Based on multiple sources, this is a very high figure compared to the four previous major operations in the Strip. During Guardian of the Walls in 2021, Israel attacked 1,500 targets in 11 days. In Protective Edge in 2014, which lasted 51 days, Israel struck between 5,266 and 6,231 targets. During Pillar of Defense in 2012, about 1,500 targets were attacked over eight days. In “Cast Lead” in 2008, Israel struck 3,400 targets in 22 days.

    Intelligence sources who served in the previous operations also told +972 and Local Call that, for 10 days in 2021 and three weeks in 2014, an attack rate of 100 to 200 targets per day led to a situation in which the Israeli Air Force had no targets of military value left. Why, then, after nearly two months, has the Israeli army not yet run out of targets in the current war?

    The answer may lie in a statement from the IDF Spokesperson on Nov. 2, according to which it is using the AI system Habsora (“The Gospel”), which the spokesperson says “enables the use of automatic tools to produce targets at a fast pace, and works by improving accurate and high-quality intelligence material according to [operational] needs.”

    In the statement, a senior intelligence official is quoted as saying that thanks to Habsora, targets are created for precision strikes “while causing great damage to the enemy and minimal damage to non-combatants. Hamas operatives are not immune — no matter where they hide.”

    According to intelligence sources, Habsora generates, among other things, automatic recommendations for attacking private residences where people suspected of being Hamas or Islamic Jihad operatives live. Israel then carries out large-scale assassination operations through the heavy shelling of these residential homes.

    Habsora, explained one of the sources, processes enormous amounts of data that “tens of thousands of intelligence officers could not process,” and recommends bombing sites in real time. Because most senior Hamas officials head into underground tunnels with the start of any military operation, the sources say, the use of a system like Habsora makes it possible to locate and attack the homes of relatively junior operatives.

    One former intelligence officer explained that the Habsora system enables the army to run a “mass assassination factory,” in which the “emphasis is on quantity and not on quality.” A human eye “will go over the targets before each attack, but it need not spend a lot of time on them.” Since Israel estimates that there are approximately 30,000 Hamas members in Gaza, and they are all marked for death, the number of potential targets is enormous.

    In 2019, the Israeli army created a new center aimed at using AI to accelerate target generation. “The Targets Administrative Division is a unit that includes hundreds of officers and soldiers, and is based on AI capabilities,” said former IDF Chief of Staff Aviv Kochavi in an in-depth interview with Ynet earlier this year.

    “This is a machine that, with the help of AI, processes a lot of data better and faster than any human, and translates it into targets for attack,” Kochavi went on. “The result was that in Operation Guardian of the Walls [in 2021], from the moment this machine was activated, it generated 100 new targets every day. You see, in the past there were times in Gaza when we would create 50 targets per year. And here the machine produced 100 targets in one day.”

    “We prepare the targets automatically and work according to a checklist,” one of the sources who worked in the new Targets Administrative Division told +972 and Local Call. “It really is like a factory. We work quickly and there is no time to delve deep into the target. The view is that we are judged according to how many targets we manage to generate.”

    A senior military official in charge of the target bank told the Jerusalem Post earlier this year that, thanks to the army’s AI systems, for the first time the military can generate new targets at a faster rate than it attacks. Another source said the drive to automatically generate large numbers of targets is a realization of the Dahiya Doctrine.

    Automated systems like Habsora have thus greatly facilitated the work of Israeli intelligence officers in making decisions during military operations, including calculating potential casualties. Five different sources confirmed that the number of civilians who may be killed in attacks on private residences is known in advance to Israeli intelligence, and appears clearly in the target file under the category of “collateral damage.”

    According to these sources, there are degrees of collateral damage, according to which the army determines whether it is possible to attack a target inside a private residence. “When the general directive becomes ‘Collateral Damage 5,’ that means we are authorized to strike all targets that will kill five or less civilians — we can act on all target files that are five or less,” said one of the sources.

    “In the past, we did not regularly mark the homes of junior Hamas members for bombing,” said a security official who participated in attacking targets during previous operations. “In my time, if the house I was working on was marked Collateral Damage 5, it would not always be approved [for attack].” Such approval, he said, would only be received if a senior Hamas commander was known to be living in the home.

    “To my understanding, today they can mark all the houses of [any Hamas military operative regardless of rank],” the source continued. “That is a lot of houses. Hamas members who don’t really matter for anything live in homes across Gaza. So they mark the home and bomb the house and kill everyone there.”
    A concerted policy to bomb family homes

    On Oct. 22, the Israeli Air Force bombed the home of the Palestinian journalist Ahmed Alnaouq in the city of Deir al-Balah. Ahmed is a close friend and colleague of mine; four years ago, we founded a Hebrew Facebook page called “Across the Wall,” with the aim of bringing Palestinian voices from Gaza to the Israeli public.

    The strike on Oct. 22 collapsed blocks of concrete onto Ahmed’s entire family, killing his father, brothers, sisters, and all of their children, including babies. Only his 12-year-old niece, Malak, survived and remained in a critical condition, her body covered in burns. A few days later, Malak died.

    Twenty-one members of Ahmed’s family were killed in total, buried under their home. None of them were militants. The youngest was 2 years old; the oldest, his father, was 75. Ahmed, who is currently living in the UK, is now alone out of his entire family.

    Ahmed’s family WhatsApp group is titled “Better Together.” The last message that appears there was sent by him, a little after midnight on the night he lost his family. “Someone let me know that everything is fine,” he wrote. No one answered. He fell asleep, but woke up in a panic at 4 a.m. Drenched in sweat, he checked his phone again. Silence. Then he received a message from a friend with the terrible news.

    Ahmed’s case is common in Gaza these days. In interviews to the press, heads of Gaza hospitals have been echoing the same description: families enter hospitals as a succession of corpses, a child followed by his father followed by his grandfather. The bodies are all covered in dirt and blood.

    According to former Israeli intelligence officers, in many cases in which a private residence is bombed, the goal is the “assassination of Hamas or Jihad operatives,” and such targets are attacked when the operative enters the home. Intelligence researchers know if the operative’s family members or neighbors may also die in an attack, and they know how to calculate how many of them may die. Each of the sources said that these are private homes, where in the majority of cases, no military activity is carried out.

    +972 and Local Call do not have data regarding the number of military operatives who were indeed killed or wounded by aerial strikes on private residences in the current war, but there is ample evidence that, in many cases, none were military or political operatives belonging to Hamas or Islamic Jihad.

    On Oct. 10, the Israeli Air Force bombed an apartment building in Gaza’s Sheikh Radwan neighborhood, killing 40 people, most of them women and children. In one of the shocking videos taken following the attack, people are seen screaming, holding what appears to be a doll pulled from the ruins of the house, and passing it from hand to hand. When the camera zooms in, one can see that it is not a doll, but the body of a baby.

    One of the residents said that 19 members of his family were killed in the strike. Another survivor wrote on Facebook that he only found his son’s shoulder in the rubble. Amnesty investigated the attack and discovered that a Hamas member lived on one of the upper floors of the building, but was not present at the time of the attack.

    The bombing of family homes where Hamas or Islamic Jihad operatives supposedly live likely became a more concerted IDF policy during Operation Protective Edge in 2014. Back then, 606 Palestinians — about a quarter of the civilian deaths during the 51 days of fighting — were members of families whose homes were bombed. A UN report defined it in 2015 as both a potential war crime and “a new pattern” of action that “led to the death of entire families.”

    In 2014, 93 babies were killed as a result of Israeli bombings of family homes, of which 13 were under 1 year old. A month ago, 286 babies aged 1 or under were already identified as having been killed in Gaza, according to a detailed ID list with the ages of victims published by the Gaza Health Ministry on Oct. 26. The number has since likely doubled or tripled.

    However, in many cases, and especially during the current attacks on Gaza, the Israeli army has carried out attacks that struck private residences even when there is no known or clear military target. For example, according to the Committee to Protect Journalists, by Nov. 29, Israel had killed 50 Palestinian journalists in Gaza, some of them in their homes with their families.

    Roshdi Sarraj, 31, a journalist from Gaza who was born in Britain, founded a media outlet in Gaza called “Ain Media.” On Oct. 22, an Israeli bomb struck his parents’ home where he was sleeping, killing him. The journalist Salam Mema similarly died under the ruins of her home after it was bombed; of her three young children, Hadi, 7, died, while Sham, 3, has not yet been found under the rubble. Two other journalists, Duaa Sharaf and Salma Makhaimer, were killed together with their children in their homes.

    Israeli analysts have admitted that the military effectiveness of these kinds of disproportionate aerial attacks is limited. Two weeks after the start of the bombings in Gaza (and before the ground invasion) — after the bodies of 1,903 children, approximately 1,000 women, and 187 elderly men were counted in the Gaza Strip — Israeli commentator Avi Issacharoff tweeted: “As hard as it is to hear, on the 14th day of fighting, it does not appear that the military arm of Hamas has been significantly harmed. The most significant damage to the military leadership is the assassination of [Hamas commander] Ayman Nofal.”
    ‘Fighting human animals’

    Hamas militants regularly operate out of an intricate network of tunnels built under large stretches of the Gaza Strip. These tunnels, as confirmed by the former Israeli intelligence officers we spoke to, also pass under homes and roads. Therefore, Israeli attempts to destroy them with aerial strikes are in many cases likely to lead to the killing of civilians. This may be another reason for the high number of Palestinian families wiped out in the current offensive.

    The intelligence officers interviewed for this article said that the way Hamas designed the tunnel network in Gaza knowingly exploits the civilian population and infrastructure above ground. These claims were also the basis of the media campaign that Israel conducted vis-a-vis the attacks and raids on Al-Shifa Hospital and the tunnels that were discovered under it.

    Israel has also attacked a large number of military targets: armed Hamas operatives, rocket launcher sites, snipers, anti-tank squads, military headquarters, bases, observation posts, and more. From the beginning of the ground invasion, aerial bombardment and heavy artillery fire have been used to provide backup to Israeli troops on the ground. Experts in international law say these targets are legitimate, as long as the strikes comply with the principle of proportionality.

    In response to an enquiry from +972 and Local Call for this article, the IDF Spokesperson stated: “The IDF is committed to international law and acts according to it, and in doing so attacks military targets and does not attack civilians. The terrorist organization Hamas places its operatives and military assets in the heart of the civilian population. Hamas systematically uses the civilian population as a human shield, and conducts combat from civilian buildings, including sensitive sites such as hospitals, mosques, schools, and UN facilities.”

    Intelligence sources who spoke to +972 and Local Call similarly claimed that in many cases Hamas “deliberately endangers the civilian population in Gaza and tries to forcefully prevent civilians from evacuating.” Two sources said that Hamas leaders “understand that Israeli harm to civilians gives them legitimacy in fighting.”

    At the same time, while it’s hard to imagine now, the idea of dropping a one-ton bomb aimed at killing a Hamas operative yet ending up killing an entire family as “collateral damage” was not always so readily accepted by large swathes of Israeli society. In 2002, for example, the Israeli Air Force bombed the home of Salah Mustafa Muhammad Shehade, then the head of the Al-Qassam Brigades, Hamas’ military wing. The bomb killed him, his wife Eman, his 14-year-old daughter Laila, and 14 other civilians, including 11 children. The killing caused a public uproar in both Israel and the world, and Israel was accused of committing war crimes.

    That criticism led to a decision by the Israeli army in 2003 to drop a smaller, quarter-ton bomb on a meeting of top Hamas officials — including the elusive leader of Al-Qassam Brigades, Mohammed Deif — taking place in a residential building in Gaza, despite the fear that it would not be powerful enough to kill them. In his book “To Know Hamas,” veteran Israeli journalist Shlomi Eldar wrote that the decision to use a relatively small bomb was due to the Shehade precedent, and the fear that a one-ton bomb would kill the civilians in the building as well. The attack failed, and the senior military wing officers fled the scene.

    In December 2008, in the first major war that Israel waged against Hamas after it seized power in Gaza, Yoav Gallant, who at the time headed the IDF Southern Command, said that for the first time Israel was “hitting the family homes” of senior Hamas officials with the aim of destroying them, but not harming their families. Gallant emphasized that the homes were attacked after the families were warned by a “knock on the roof,” as well as by phone call, after it was clear that Hamas military activity was taking place inside the house.

    After 2014’s Protective Edge, during which Israel began to systematically strike family homes from the air, human rights groups like B’Tselem collected testimonies from Palestinians who survived these attacks. The survivors said the homes collapsed in on themselves, glass shards cut the bodies of those inside, the debris “smells of blood,” and people were buried alive.

    This deadly policy continues today — thanks in part to the use of destructive weaponry and sophisticated technology like Habsora, but also to a political and security establishment that has loosened the reins on Israel’s military machinery. Fifteen years after insisting that the army was taking pains to minimize civilian harm, Gallant, now Defense Minister, has clearly changed his tune. “We are fighting human animals and we act accordingly,” he said after October 7.

    https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza

    #bombardement #assassinat_de_masse #Gaza #7_octobre_2023 #Israël #bombardements #AI #IA #intelligence_artificielle #armée_israélienne #doctrine_Dahiya

    via @freakonometrics

    ici aussi via @arno:
    https://seenthis.net/messages/1029469

    • ‘#The_Gospel’: how Israel uses AI to select bombing targets in Gaza

      Concerns over data-driven ‘factory’ that significantly increases the number of targets for strikes in the Palestinian territory

      Israel’s military has made no secret of the intensity of its bombardment of the Gaza Strip. In the early days of the offensive, the head of its air force spoke of relentless, “around the clock” airstrikes. His forces, he said, were only striking military targets, but he added: “We are not being surgical.”

      There has, however, been relatively little attention paid to the methods used by the Israel Defense Forces (IDF) to select targets in Gaza, and to the role artificial intelligence has played in their bombing campaign.

      As Israel resumes its offensive after a seven-day ceasefire, there are mounting concerns about the IDF’s targeting approach in a war against Hamas that, according to the health ministry in Hamas-run Gaza, has so far killed more than 15,000 people in the territory.

      The IDF has long burnished its reputation for technical prowess and has previously made bold but unverifiable claims about harnessing new technology. After the 11-day war in Gaza in May 2021, officials said Israel had fought its “first AI war” using machine learning and advanced computing.

      The latest Israel-Hamas war has provided an unprecedented opportunity for the IDF to use such tools in a much wider theatre of operations and, in particular, to deploy an AI target-creation platform called “the Gospel”, which has significantly accelerated a lethal production line of targets that officials have compared to a “factory”.

      The Guardian can reveal new details about the Gospel and its central role in Israel’s war in Gaza, using interviews with intelligence sources and little-noticed statements made by the IDF and retired officials.

      This article also draws on testimonies published by the Israeli-Palestinian publication +972 Magazine and the Hebrew-language outlet Local Call, which have interviewed several current and former sources in Israel’s intelligence community who have knowledge of the Gospel platform.

      Their comments offer a glimpse inside a secretive, AI-facilitated military intelligence unit that is playing a significant role in Israel’s response to the Hamas massacre in southern Israel on 7 October.

      The slowly emerging picture of how Israel’s military is harnessing AI comes against a backdrop of growing concerns about the risks posed to civilians as advanced militaries around the world expand the use of complex and opaque automated systems on the battlefield.

      “Other states are going to be watching and learning,” said a former White House security official familiar with the US military’s use of autonomous systems.

      The Israel-Hamas war, they said, would be an “important moment if the IDF is using AI in a significant way to make targeting choices with life-and-death consequences”.

      From 50 targets a year to 100 a day

      In early November, the IDF said “more than 12,000” targets in Gaza had been identified by its target administration division.

      Describing the unit’s targeting process, an official said: “We work without compromise in defining who and what the enemy is. The operatives of Hamas are not immune – no matter where they hide.”

      The activities of the division, formed in 2019 in the IDF’s intelligence directorate, are classified.

      However, a short statement on the IDF website claimed it was using an AI-based system called Habsora (the Gospel, in English) in the war against Hamas to “produce targets at a fast pace”.

      The IDF said that “through the rapid and automatic extraction of intelligence”, the Gospel produced targeting recommendations for its researchers “with the goal of a complete match between the recommendation of the machine and the identification carried out by a person”.

      Multiple sources familiar with the IDF’s targeting processes confirmed the existence of the Gospel to +972/Local Call, saying it had been used to produce automated recommendations for attacking targets, such as the private homes of individuals suspected of being Hamas or Islamic Jihad operatives.

      In recent years, the target division has helped the IDF build a database of what sources said was between 30,000 and 40,000 suspected militants. Systems such as the Gospel, they said, had played a critical role in building lists of individuals authorised to be assassinated.

      Aviv Kochavi, who served as the head of the IDF until January, has said the target division is “powered by AI capabilities” and includes hundreds of officers and soldiers.

      In an interview published before the war, he said it was “a machine that produces vast amounts of data more effectively than any human, and translates it into targets for attack”.

      According to Kochavi, “once this machine was activated” in Israel’s 11-day war with Hamas in May 2021 it generated 100 targets a day. “To put that into perspective, in the past we would produce 50 targets in Gaza per year. Now, this machine produces 100 targets a single day, with 50% of them being attacked.”

      Precisely what forms of data are ingested into the Gospel is not known. But experts said AI-based decision support systems for targeting would typically analyse large sets of information from a range of sources, such as drone footage, intercepted communications, surveillance data and information drawn from monitoring the movements and behaviour patterns of individuals and large groups.

      The target division was created to address a chronic problem for the IDF: in earlier operations in Gaza, the air force repeatedly ran out of targets to strike. Since senior Hamas officials disappeared into tunnels at the start of any new offensive, sources said, systems such as the Gospel allowed the IDF to locate and attack a much larger pool of more junior operatives.

      One official, who worked on targeting decisions in previous Gaza operations, said the IDF had not previously targeted the homes of junior Hamas members for bombings. They said they believed that had changed for the present conflict, with the houses of suspected Hamas operatives now targeted regardless of rank.

      “That is a lot of houses,” the official told +972/Local Call. “Hamas members who don’t really mean anything live in homes across Gaza. So they mark the home and bomb the house and kill everyone there.”
      Targets given ‘score’ for likely civilian death toll

      In the IDF’s brief statement about its target division, a senior official said the unit “produces precise attacks on infrastructure associated with Hamas while inflicting great damage to the enemy and minimal harm to non-combatants”.

      The precision of strikes recommended by the “AI target bank” has been emphasised in multiple reports in Israeli media. The Yedioth Ahronoth daily newspaper reported that the unit “makes sure as far as possible there will be no harm to non-involved civilians”.

      A former senior Israeli military source told the Guardian that operatives use a “very accurate” measurement of the rate of civilians evacuating a building shortly before a strike. “We use an algorithm to evaluate how many civilians are remaining. It gives us a green, yellow, red, like a traffic signal.”

      However, experts in AI and armed conflict who spoke to the Guardian said they were sceptical of assertions that AI-based systems reduced civilian harm by encouraging more accurate targeting.

      A lawyer who advises governments on AI and compliance with humanitarian law said there was “little empirical evidence” to support such claims. Others pointed to the visible impact of the bombardment.

      “Look at the physical landscape of Gaza,” said Richard Moyes, a researcher who heads Article 36, a group that campaigns to reduce harm from weapons.

      “We’re seeing the widespread flattening of an urban area with heavy explosive weapons, so to claim there’s precision and narrowness of force being exerted is not borne out by the facts.”

      According to figures released by the IDF in November, during the first 35 days of the war Israel attacked 15,000 targets in Gaza, a figure that is considerably higher than previous military operations in the densely populated coastal territory. By comparison, in the 2014 war, which lasted 51 days, the IDF struck between 5,000 and 6,000 targets.

      Multiple sources told the Guardian and +972/Local Call that when a strike was authorised on the private homes of individuals identified as Hamas or Islamic Jihad operatives, target researchers knew in advance the number of civilians expected to be killed.

      Each target, they said, had a file containing a collateral damage score that stipulated how many civilians were likely to be killed in a strike.

      One source who worked until 2021 on planning strikes for the IDF said “the decision to strike is taken by the on-duty unit commander”, some of whom were “more trigger happy than others”.

      The source said there had been occasions when “there was doubt about a target” and “we killed what I thought was a disproportionate amount of civilians”.

      An Israeli military spokesperson said: “In response to Hamas’ barbaric attacks, the IDF operates to dismantle Hamas military and administrative capabilities. In stark contrast to Hamas’ intentional attacks on Israeli men, women and children, the IDF follows international law and takes feasible precautions to mitigate civilian harm.”
      ‘Mass assassination factory’

      Sources familiar with how AI-based systems have been integrated into the IDF’s operations said such tools had significantly sped up the target creation process.

      “We prepare the targets automatically and work according to a checklist,” a source who previously worked in the target division told +972/Local Call. “It really is like a factory. We work quickly and there is no time to delve deep into the target. The view is that we are judged according to how many targets we manage to generate.”

      A separate source told the publication the Gospel had allowed the IDF to run a “mass assassination factory” in which the “emphasis is on quantity and not on quality”. A human eye, they said, “will go over the targets before each attack, but it need not spend a lot of time on them”.

      For some experts who research AI and international humanitarian law, an acceleration of this kind raises a number of concerns.

      Dr Marta Bo, a researcher at the Stockholm International Peace Research Institute, said that even when “humans are in the loop” there is a risk they develop “automation bias” and “over-rely on systems which come to have too much influence over complex human decisions”.

      Moyes, of Article 36, said that when relying on tools such as the Gospel, a commander “is handed a list of targets a computer has generated” and they “don’t necessarily know how the list has been created or have the ability to adequately interrogate and question the targeting recommendations”.

      “There is a danger,” he added, “that as humans come to rely on these systems they become cogs in a mechanised process and lose the ability to consider the risk of civilian harm in a meaningful way.”

      https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets

    • How the Israeli army uses artificial intelligence to bomb Gaza

      Target suggestions, automated attack plans: algorithmic tools, developed by the IDF or by private companies, are being used to wage an “all-out” war on Gaza. Former intelligence officers speak of a “mass assassination factory.”

      Artificial intelligence has been put in the service of the bombardment of the Gaza Strip, one of the most destructive and deadly of the 21st century. The idea, which until recently belonged to science fiction, is now a reality, and one the Israeli army openly claims in its official communications.

      The subject, which had already drawn the attention of several Israeli and international outlets in recent years, was brought back into the spotlight in recent days by a long investigation from the left-wing Israeli-Palestinian outlet +972, published on November 30. Drawing on testimonies from serving and former soldiers, the article details the workings of the unprecedented aerial campaign waged by the IDF on Gaza since October 7, and the army’s use of artificial intelligence tools in that context.
      The IDF claims an “AI war”

      The Israeli forces’ use of this type of technology in a military setting has been documented several times. In 2021, after the eleven-day bombing campaign on Gaza, the Jerusalem Post reported that the IDF claimed to have fought the first “AI war” that year, citing several algorithmic tools designed to optimize action on the ground. The Israeli daily named three algorithms: “Alchemist,” “Gospel,” and “Depth of Wisdom.” Another system, “Fire Factory,” was described in July 2023 by Bloomberg.

      In a military context, AI is used to analyze very large volumes of intelligence data (or, in some cases, logistics data) and to quickly estimate the effects of the various possible strategic choices. Two tools in particular are reportedly used by the IDF in the attacks carried out since October 7. The first, “Gospel” (or “Habsora”), suggests the most relevant targets for an attack within a given perimeter. The second, “Fire Factory,” optimizes, in real time, the attack plans of aircraft and drones according to the nature of the chosen targets. The algorithm reportedly calculates the quantity of munitions required, assigns targets to the various aircraft and drones, and determines the most suitable order for the strikes.

      A screenshot of “Fire Factory,” published in July by Bloomberg as an illustration, shows a map with several circled targets, along with a timeline on which successive strikes are laid out. It should be noted that the attack sequence shown is fictitious, or at least that a number of elements in the image were altered before publication, since the Hebrew target names are fanciful (Tel Aviv restaurants, for example).

      Also according to Bloomberg, the Israeli army’s artificial intelligence systems are developed by the army itself as well as by private actors, such as the defense company Rafael, which reportedly supplies “Fire Factory.” Of a similar tool (under another name), the company boasts on its website of “a revolutionary paradigm shift in situational analysis and the sensor-to-shooter loop, enabling unprecedented efficiency, speed and precision.”
      From 50 targets a year to 100 targets a day

      In both cases, the systems are supervised (according to the IDF’s statements to Bloomberg this summer) by human operators who, behind the screen, must verify and approve both the targets and the raid plans. In other words, these systems would not themselves make the decision to fire, even though part of the process is automated. According to Israeli armed forces representatives interviewed by Bloomberg, these software solutions were developed with the prospect of conducting an “all-out war.”

      According to +972, the use of these technological solutions explains how the Israeli army has been able to bomb the Gaza Strip at such a frantic pace (15,000 targets in the first 35 days of bombing alone, according to the IDF’s own figures). Indeed, in a statement published in early November, the Israeli armed forces themselves acknowledged that “Gospel” (cited by name) allowed them to automatically generate “targets at a fast pace.”

      In an article published in late June by the Israeli outlet Ynet, former Israeli army chief of staff Aviv Kochavi explained that, during the 2021 war, “Gospel” generated 100 targets a day, adding: “To put that into perspective, in the past we would produce 50 targets in Gaza per year.” He further noted that, during those military operations, half of the targets suggested by the software had been attacked. Given the rate at which the algorithm proposes new targets to bomb, former intelligence officers critical of the process, interviewed by +972, liken it to a “mass assassination factory.”
      “Nothing happens by chance”

      Civilian casualties are among the factors that “Gospel” takes into account when identifying new targets. According to the +972 investigation, the Israeli army holds information on the majority of potential targets in Gaza, allowing it in particular to estimate the number of civilians likely to be killed in a strike. And according to another source interviewed by the Israeli outlet, since October 7 the number of civilian deaths deemed acceptable by the Israeli military command in order to hit a Hamas leader has risen from “dozens” to “hundreds.”

      “Nothing happens by chance,” another source told +972’s journalists. “When a 3-year-old girl is killed in a house in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed, that it was a price worth paying in order to hit [another] target. We are not Hamas. These are not random missiles. Everything is intentional. We know exactly how much collateral damage there is in every home.”
      Thousands of invisible trade-offs

      Beyond the intensification of strikes that these tools make possible, there is also the question of the quality of the intelligence data on which the analyses rest. In 2020, an investigation by the British daily The Independent, citing Israeli soldiers, had already pointed to flaws in the targets selected for Israeli air force bombings, including strikes on obsolete targets carried out to fill quotas.

      If that data is imprecise, outdated, or wrong, the software’s suggestions have no strategic value. And while, according to a soldier interviewed by Bloomberg, part of the AI’s selection is passed on to the military decision-makers, those decision-makers do not know the details of the thousands of invisible trade-offs the AI has made, and cannot question their reliability or relevance. More generally, the use of these algorithms makes it harder for the military to understand or justify their decisions.

      https://www.liberation.fr/checknews/comment-larmee-israelienne-utilise-lintelligence-artificielle-pour-bombar

    • Gaza: una “fabbrica di omicidi di massa” grazie all’intelligenza artificiale

      Israele ha impiegato un sistema di intelligenza artificiale per generare obiettivi di morte che ha trasformato Gaza in una “fabbrica di omicidi di massa”, secondo un nuovo rapporto investigativo, di forte impatto, pubblicato dall’organo israeliano di informazione +972 Magazine. Il sistema differisce in modo significativo dalle precedenti operazioni militari, provocando uccisioni indiscriminate e un numero estremamente elevato di vittime civili durante l’attuale offensiva di Israele a Gaza.

      L’esercito israeliano dispone di dossier che riguardano la stragrande maggioranza dei potenziali obiettivi a Gaza – comprese le case – e che stabiliscono il numero di civili che probabilmente saranno uccisi in caso di attacco, hanno dichiarato le fonti a +972. Questo numero è calcolato e conosciuto in anticipo, e le unità di intelligence dell’esercito sanno anche, poco prima di effettuare un attacco, quanti civili saranno sicuramente uccisi.

      Highlighting a shocking disregard for civilian life, the report found that the Israeli military command has knowingly approved the killing of hundreds of Palestinian civilians in attempts to assassinate a single senior Hamas military commander. “The numbers increased from dozens of civilian deaths [permitted] as collateral damage as part of an attack on a senior official in previous operations, to hundreds of civilian deaths as collateral damage,” one source told +972.

      Under the target-selection protocols Israel has developed, the army has significantly stepped up its bombing of infrastructure that is not strictly military in nature. This includes private residences, public buildings, infrastructure and tower blocks, which sources say the army defines as “power targets”.

      “Nothing happens by chance,” another source reported.

      “When a 3-year-old girl is killed in a house in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed – that it was a price worth paying to hit [another] target.”

      “We are not Hamas. These are not random rockets. Everything is intentional. We know exactly how much collateral damage there is in every house.”

      The extensive harm to civilian life in Gaza is attributed to the widespread use of an artificial intelligence system called Habsora (The Gospel). The system reportedly recommends potential targets in Gaza at an unprecedented, automated pace. Citing former officers, the investigation argues that this technology enables a “mass assassination factory” that prioritises quantity over accuracy and allows for greater collateral damage. The aim was stated explicitly by Israeli army spokesperson Daniel Hagari, who declared at the start of Israel’s October military operation: “The emphasis is on damage and not on accuracy.”

      While the Israeli army had never before struck more than 1,000 power targets in five days, the report notes that the idea of causing mass devastation in civilian areas for strategic purposes had already been formulated in previous military operations in Gaza, honed since the so-called “Dahiya Doctrine” applied during the Second Lebanon War of 2006.

      According to the doctrine – developed by former IDF chief of staff Gadi Eizenkot, now a Knesset member and part of the current war cabinet – in a war against guerrilla groups such as Hamas or Hezbollah, Israel must use disproportionate, overwhelming force, striking civilian and government infrastructure in order to establish deterrence and push the civilian population to pressure the groups into ending their attacks. The concept of “power targets” is believed to have grown out of this same logic.

      More than 15,000 Palestinians have been killed so far, including a disproportionately high number of women, children and elderly people who were not militants. Israel’s indiscriminate killing has been described as a “textbook case of genocide” by leading scholars in the field of genocide studies.

      The toll of civilian deaths and destruction in Gaza has prompted human rights groups and some law firms to call for independent investigations to establish accountability for what many describe as a genocide.

      https://www.osservatoriorepressione.info/gaza-fabbrica-omicidi-massa-grazie-allintelligenza-artific

  • Underage Workers Are Training AI

    Companies that provide #Big_Tech with AI data-labeling services are inadvertently hiring young teens to work on their platforms, often exposing them to traumatic content.

    Like most kids his age, 15-year-old Hassan spent a lot of time online. Before the pandemic, he liked playing football with local kids in his hometown of Burewala in the Punjab region of Pakistan. But Covid lockdowns made him something of a recluse, attached to his mobile phone. “I just got out of my room when I had to eat something,” says Hassan, now 18, who asked to be identified under a pseudonym because he was afraid of legal action. But unlike most teenagers, he wasn’t scrolling TikTok or gaming. From his childhood bedroom, the high schooler was working in the global artificial intelligence supply chain, uploading and labeling data to train algorithms for some of the world’s largest AI companies.

    The raw data used to train machine-learning algorithms is first labeled by humans, and human verification is also needed to evaluate their accuracy. This data-labeling ranges from the simple—identifying images of street lamps, say, or comparing similar ecommerce products—to the deeply complex, such as content moderation, where workers classify harmful content within data scraped from all corners of the internet. These tasks are often outsourced to gig workers, via online crowdsourcing platforms such as #Toloka, which was where Hassan started his career.
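
    To make the labeling-and-verification step concrete, here is a minimal sketch (in Python, with invented field names and toy data rather than any platform’s real API) of how a crowdsourced labeling task can be represented, and how agreement between several workers can be used to check label quality.

    ```python
    from collections import Counter

    # Toy labeling tasks: each item is shown to several workers,
    # and their answers are compared to estimate label quality.
    tasks = [
        {"item_id": "img_001", "question": "Does this image contain a street lamp?",
         "answers": ["yes", "yes", "no"]},
        {"item_id": "prod_017", "question": "Are these two product listings the same item?",
         "answers": ["yes", "yes", "yes"]},
    ]

    def aggregate(task: dict) -> dict:
        """Pick the majority label and report how strongly the workers agreed."""
        counts = Counter(task["answers"])
        label, votes = counts.most_common(1)[0]
        return {
            "item_id": task["item_id"],
            "label": label,
            "agreement": votes / len(task["answers"]),
        }

    for task in tasks:
        # Items with low agreement would typically be re-queued for extra review.
        print(aggregate(task))
    ```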

    A friend put him on to the site, which promised work anytime, from anywhere. He found that an hour’s labor would earn him around $1 to $2, he says, more than the national minimum wage, which was about $0.26 at the time. His mother is a homemaker, and his dad is a mechanical laborer. “You can say I belong to a poor family,” he says. When the pandemic hit, he needed work more than ever. Confined to his home, online and restless, he did some digging, and found that Toloka was just the tip of the iceberg.

    “AI is presented as a magical box that can do everything,” says Saiph Savage, director of Northeastern University’s Civic AI Lab. “People just simply don’t know that there are human workers behind the scenes.”

    At least some of those human workers are children. Platforms require that workers be over 18, but Hassan simply entered a relative’s details and used a corresponding payment method to bypass the checks—and he wasn’t alone in doing so. WIRED spoke to three other workers in Pakistan and Kenya who said they had also joined platforms as minors, and found evidence that the practice is widespread.

    “When I was still in secondary school, so many teens discussed online jobs and how they joined using their parents’ ID,” says one worker who joined Appen at 16 in Kenya, who asked to remain anonymous. After school, he and his friends would log on to complete annotation tasks late into the night, often for eight hours or more.

    Appen declined to give an attributable comment.

    “If we suspect a user has violated the User Agreement, Toloka will perform an identity check and request a photo ID and a photo of the user holding the ID,” Geo Dzhikaev, head of Toloka operations, says.

    Driven by a global rush into AI, the global data labeling and collection industry is expected to grow to over $17.1 billion by 2030, according to Grand View Research, a market research and consulting company. Crowdsourcing platforms such as Toloka, Appen, Clickworker, Teemwork.AI, and OneForma connect millions of remote gig workers in the global south to tech companies located in Silicon Valley. Platforms post micro-tasks from their tech clients, which have included Amazon, Microsoft Azure, Salesforce, Google, Nvidia, Boeing, and Adobe. Many platforms also partner with Microsoft’s own data services platform, the Universal Human Relevance System (UHRS).

    These workers are predominantly based in East Africa, Venezuela, Pakistan, India, and the Philippines—though there are even workers in refugee camps, who label, evaluate, and generate data. Workers are paid per task, with remuneration ranging from a cent to a few dollars—although the upper end is considered something of a rare gem, workers say. “The nature of the work often feels like digital servitude—but it’s a necessity for earning a livelihood,” says Hassan, who also now works for Clickworker and Appen.

    Sometimes, workers are asked to upload audio, images, and videos, which contribute to the data sets used to train AI. Workers typically don’t know exactly how their submissions will be processed, but these can be pretty personal: On Clickworker’s worker jobs tab, one task states: “Show us you baby/child! Help to teach AI by taking 5 photos of your baby/child!” for €2 ($2.15). The next says: “Let your minor (aged 13-17) take part in an interesting selfie project!”

    Some tasks involve content moderation—helping AI distinguish between innocent content and that which contains violence, hate speech, or adult imagery. Hassan shared screen recordings of tasks available the day he spoke with WIRED. One UHRS task asked him to identify “fuck,” “c**t,” “dick,” and “bitch” from a body of text. For Toloka, he was shown pages upon pages of partially naked bodies, including sexualized images, lingerie ads, an exposed sculpture, and even a nude body from a Renaissance-style painting. The task? Decipher the adult from the benign, to help the algorithm distinguish between salacious and permissible torsos.

    Hassan recalls moderating content while under 18 on UHRS that, he says, continues to weigh on his mental health. He says the content was explicit: accounts of rape incidents, lifted from articles quoting court records; hate speech from social media posts; descriptions of murders from articles; sexualized images of minors; naked images of adult women; adult videos of women and girls from YouTube and TikTok.

    Many of the remote workers in Pakistan are underage, Hassan says. He conducted a survey of 96 respondents on a Telegram group chat with almost 10,000 UHRS workers, on behalf of WIRED. About a fifth said they were under 18.

    Awais, 20, from Lahore, who spoke on condition that his first name not be published, began working for UHRS via Clickworker at 16, after he promised his girlfriend a birthday trip to the turquoise lakes and snow-capped mountains of Pakistan’s northern region. His parents couldn’t help him with the money, so he turned to data work, joining using a friend’s ID card. “It was easy,” he says.

    He worked on the site daily, primarily completing Microsoft’s “Generic Scenario Testing Extension” task. This involved testing homepage and search engine accuracy. In other words, did selecting “car deals” on the MSN homepage bring up photos of cars? Did searching “cat” on Bing show feline images? He was earning $1 to $3 each day, but he found the work both monotonous and infuriating. At times he found himself working 10 hours for $1, because he had to do unpaid training to access certain tasks. Even when he passed the training, there might be no task to complete; or if he breached the time limit, they would suspend his account, he says. Then seemingly out of nowhere, he got banned from performing his most lucrative task—something workers say happens regularly. Bans can occur for a host of reasons, such as giving incorrect answers, answering too fast, or giving answers that deviate from the average pattern of other workers. He’d earned $70 in total. It was almost enough to take his high school sweetheart on the trip, so Awais logged off for good.

    Clickworker did not respond to requests for comment. Microsoft declined to comment.

    “In some instances, once a user finishes the training, the quota of responses has already been met for that project and the task is no longer available,” Dzhikaev said. “However, should other similar tasks become available, they will be able to participate without further training.”

    Researchers say they’ve found evidence of underage workers in the AI industry elsewhere in the world. Julian Posada, assistant professor of American Studies at Yale University, who studies human labor and data production in the AI industry, says that he’s met workers in Venezuela who joined platforms as minors.

    Bypassing age checks can be pretty simple. The most lenient platforms, like Clickworker and Toloka, simply ask workers to state they are over 18; the most secure, such as Remotasks, employ face recognition technology to match workers to their photo ID. But even that is fallible, says Posada, citing one worker who says he simply held the phone to his grandmother’s face to pass the checks. The sharing of a single account within family units is another way minors access the work, says Posada. He found that in some Venezuelan homes, when parents cook or run errands, children log on to complete tasks. He says that one family of six he met, with children as young as 13, all claimed to share one account. They ran their home like a factory, Posada says, so that two family members were at the computers working on data labeling at any given point. “Their backs would hurt because they have been sitting for so long. So they would take a break, and then the kids would fill in,” he says.

    The physical distances between the workers training AI and the tech giants at the other end of the supply chain—“the deterritorialization of the internet,” Posada calls it—creates a situation where whole workforces are essentially invisible, governed by a different set of rules, or by none.

    The lack of worker oversight can even prevent clients from knowing if workers are keeping their income. One Clickworker user in India, who requested anonymity to avoid being banned from the site, told WIRED he “employs” 17 UHRS workers in one office, providing them with a computer, mobile, and internet, in exchange for half their income. While his workers are aged between 18 and 20, due to Clickworker’s lack of age certification requirements, he knows of teenagers using the platform.

    In the more shadowy corners of the crowdsourcing industry, the use of child workers is overt.

    Captcha (Completely Automated Public Turing test to tell Computers and Humans Apart) solving services, where crowdsourcing platforms pay humans to solve captchas, are a less understood part in the AI ecosystem. Captchas are designed to distinguish a bot from a human—the most notable example being Google’s reCaptcha, which asks users to identify objects in images to enter a website. The exact purpose of services that pay people to solve them remains a mystery to academics, says Posada. “But what I can confirm is that many companies, including Google’s reCaptcha, use these services to train AI models,” he says. “Thus, these workers indirectly contribute to AI advancements.”

    There are at least 152 active services, mostly based in China, with more than half a million people working in the underground reCaptcha market, according to a 2019 study by researchers from Zhejiang University in Hangzhou.

    “Stable job for everyone. Everywhere,” one service, Kolotibablo, states on its website. The company has a promotional website dedicated to showcasing its worker testimonials, which includes images of young children from across the world. In one, a smiling Indonesian boy shows his 11th birthday cake to the camera. “I am very happy to be able to increase my savings for the future,” writes another, no older than 7 or 8. A 14-year-old girl in a long Hello Kitty dress shares a photo of her workstation: a laptop on a pink, Barbie-themed desk.

    Not every worker WIRED interviewed felt frustrated with the platforms. At 17, most of Younis Hamdeen’s friends were waiting tables. But the Pakistani teen opted to join UHRS via Appen instead, using the platform for three or four hours a day, alongside high school, earning up to $100 a month. Comparing products listed on Amazon was the most profitable task he encountered. “I love working for this platform,” Hamdeen, now 18, says, because he is paid in US dollars—which is rare in Pakistan—and so benefits from favorable exchange rates.

    But the fact that the pay for this work is incredibly low compared to the wages of in-house employees at the tech companies, and that the benefits of the work flow one way, from the global south to the global north, leads to uncomfortable parallels. “We do have to consider the type of colonialism that is being promoted with this type of work,” says the Civic AI Lab’s Savage.

    Hassan recently got accepted to a bachelor’s program in medical lab technology. The apps remain his sole income; he works an 8 am to 6 pm shift, followed by another from 2 am to 6 am. However, his earnings have fallen to just $100 per month, as workers’ demand for tasks has outstripped the supply, with more people joining the platforms since the pandemic.

    He laments that UHRS tasks can pay as little as 1 cent. Even on higher-paid jobs, such as occasional social media tasks on Appen, the amount of time he needs to spend doing unpaid research means he needs to work five or six hours to complete an hour of real-time work, all to earn $2, he says.

    “It’s digital slavery,” says Hassan.

    https://www.wired.co.uk/article/artificial-intelligence-data-labeling-children

    #enfants #AI #intelligence_artificielle #IA #travail #travail_des_enfants #esclavage_moderne #esclavage_digital #informatique

    signalé aussi par @monolecte
    https://seenthis.net/messages/1028002

  • Doing a literature review with 100s of papers ?

    ❌ Old way: Skimming papers for days
    ✅ New way: SciSpace AI + your research questions

    Lit review done in minutes, not weeks:
    👇

    You might have found dozens of good papers but there is no time to read all of them.
    Most are irrelevant and it is hard to tell which ones are worth reading.
    Before AI you would start skimming through abstracts and figures but even that can take time and you might miss things.

    SciSpace has two features that help you solve the problem.

    A. You can instruct the AI to scan each paper for custom clues

    B. You can use your entire collection of papers to answer a research question.

    If you have a collection of papers sitting in your Zotero or desktop folder you can put them to work.
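
    For readers who want the same idea outside a hosted tool, here is a rough, self-contained sketch of “scan each paper for custom clues”: it walks a local folder of exported papers and ranks them by how many clue phrases each contains. It is only an illustration, not SciSpace’s actual pipeline; in a real setup the keyword check would be replaced by a call to whichever LLM you use, and the plain-text reading by a proper PDF extractor.

    ```python
    from pathlib import Path

    # The "custom clue" you would otherwise type into the tool.
    QUESTION = "Does this paper report results on real-world data?"
    CLUES = ("real-world", "field study", "production deployment")

    def extract_text(path: Path) -> str:
        """Stand-in for a real PDF extractor (pypdf, pdfminer, ...); reads .txt exports."""
        return path.read_text(errors="ignore")

    def scan_paper(path: Path) -> dict:
        """Scan one paper for the clues; a real pipeline would ask an LLM here instead."""
        text = extract_text(path).lower()
        found = [clue for clue in CLUES if clue in text]
        return {"paper": path.name, "question": QUESTION, "clues_found": found}

    def screen_folder(folder: str) -> list[dict]:
        """Rank a folder of papers by how many clues each one contains."""
        results = [scan_paper(p) for p in sorted(Path(folder).glob("*.txt"))]
        return sorted(results, key=lambda r: len(r["clues_found"]), reverse=True)

    if __name__ == "__main__":
        for row in screen_folder("papers"):
            print(row)
    ```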

    –-> https://scispace.com

    https://twitter.com/Artifexx/status/1725745524601717073

    –-> What will researchers do AGAINST or (more likely) WITH this?

    #état_de_l'art #recherche #AI #IA #intelligence_artificielle #lecture #lectures #SciSpace

  • The #police_nationale is illegally using an #israélien #logiciel for #reconnaissance_faciale
    https://disclose.ngo/fr/article/la-police-nationale-utilise-illegalement-un-logiciel-israelien-de-reconnai

    In 2015, French law enforcement secretly acquired video-surveillance image analysis software from the #société_israélienne #Briefcam. For eight years, the Interior Ministry has been concealing its use of this tool, which enables facial #reconnaissance.

    It has become a habit. On Tuesday 14 November, as at the previous edition, Gérald Darmanin opened the #Milipol trade fair at the Parc des Expositions in Villepinte (Seine-Saint-Denis). Dedicated to states’ internal #sécurité, the fair is a global showcase for companies often unknown to the general public. One of them is Briefcam, an Israeli company specialising in software for algorithmic video surveillance (#vidéosurveillance #algorithmique, VSA). Using #intelligence_artificielle, this technology analyses images captured by cameras or drones and detects situations deemed “abnormal”.

    Until last May, the national police could only use VSA in very rare cases. But in the run-up to the Paris Olympic and Paralympic Games, the government managed to get a law through parliament authorising its large-scale experimentation by the national police until 31 March 2025. Given the risks to privacy, MPs nonetheless banned the use of facial recognition, which identifies a person in images from their facial features. An #ultra-intrusif tool that some of the software sold by Briefcam can switch on in a few clicks. And one that Gérald Darmanin’s services know well.

  • Parlez moi d’IA #10 Découvrir la pièce de théâtre Qui a hacké Garoutzia ? L Bretzner S Abiteboul - YouTube
    https://www.youtube.com/watch?v=JJpJgZGME6M

    CONTEXT

    “Parlez-moi d’IA” is a popular-science programme about data and AI. The discussions are kept as non-technical as possible. Each episode takes the form of a dialogue between Jean-Philippe CLEMENT and his guest, in which they try to understand and explain what this technological shift around data and AI is changing in our lives, our jobs and our society.

    IN THE STUDIO

    This week we welcome Lisa BRETZNER, actor and director with the Atropos theatre company, and the CNRS research director and author Serge ABITEBOUL, who introduce us to the play Qui a hacké Garoutzia ?

    The play is an investigation in which a domestic AI plays a central role. This funny, intelligent show invites us to question the place of AIs in our lives: what they bring, what gap they fill, and how they become an emotional crutch we can no longer do without.
    In this episode, the two or three technical or geek-culture notions (the laws of robotics, 42) that might put viewers off are dealt with quickly, so that the conversation can focus on the staging and on the substance of the questions the play raises.

    #Garoutzia #Intelligence_artificielle #Interview

  • Greek data watchdog to rule on AI systems in refugee camps

    A forthcoming decision on the compliance of surveillance and security systems in Greek refugee camps could set a precedent for how AI and biometric systems are deployed for ‘migration management’ in Europe

    Greece’s data protection watchdog is set to issue a long-awaited decision on the legality of controversial high-tech surveillance and security systems deployed in the country’s refugee camps.

    The Greek Data Protection Authority’s (DPA) decision, expected by the end of the year, concerns in part a new multimillion-euro Artificial Intelligence Behavioural Analytics security system, which has been installed at several recently constructed refugee camps on the Aegean islands.

    The system – dubbed #Centaur and funded through the European Union (EU) – relies on algorithms and surveillance equipment – including cameras, drones, sensors and other hardware installed inside refugee camps – to automatically detect purported threats, alert authorities and keep a log of incidents. Hyperion, another system that relies on biometric fingerprint data to facilitate entry and exit from the refugee camps, is also being examined in the probe.

    Centaur and #Hyperion came under investigation in March 2022, after several Greek civil society organisations and a researcher filed a complaint to the Greek DPA questioning the legality of the programs under Greek and European laws. The Greek DPA’s decision could determine how artificial intelligence (AI) and biometric systems are used within the migration management context in Greece and beyond.

    Although the data watchdog’s decision remains to be seen, a review of dozens of documents obtained through public access to documents requests, on-the-ground reporting from the islands where the systems have been deployed, as well as interviews with Greek officials, camp staff and asylum seekers, suggest the Greek authorities likely sidestepped or botched crucial procedural requirements under the European Union’s (EU) privacy and human rights law during a mad rush to procure and deploy the systems.

    “It is difficult to see how the DPA will not find a breach,” said Niovi Vavoula, a lecturer at Queen Mary University of London, who petitioned the Greek DPA alongside Greek civil society organisations Homo Digitalis, The Hellenic League for Human Rights, and HIAS Greece.

    She said “major shortcomings” identified include the lack of appointment of a data protection officer at the Greek Migration Ministry prior to the launch of its programs.

    Security systems a hallmark of new EU camps

    Centaur and Hyperion are hallmarks of Greece’s newest migrant facilities, also known as Closed Controlled Access Centres (CCACs), which began opening in the eastern Aegean in 2021 with funding and supervision from the European Commission (EC). Greek authorities have lauded the surveillance apparatus at the revamped facilities as a silver-bullet solution to the problems that plagued previous makeshift migrant camps in Greece.

    The Centaur system allows authorities to monitor virtually every inch of the camps’ outdoor areas – and even some indoor spaces – from local command and control centres on the islands, and from a centralised control room in Athens, which Greece’s former migration minister Notis Mitarachi unveiled with much fanfare in September 2021.

    “We’re not monitoring people. We’re trying to prevent something bad from happening,” Anastasios Salis, the migration ministry’s director general of ICT and one of the self-described architects of the Centaur system, told me when I visited the ministry’s centralised control room in Athens in December 2021. “It’s not a prison, okay? It’s something different.”

    Critics have described the new camps as “prison-like” and a “dystopian nightmare”.

    Behind closed doors, the systems have also come under scrutiny by some EU authorities, including its Fundamental Rights Agency (FRA), which expressed concerns following a visit to one of the camps on Samos Island in May 2022.

    In subsequent informal input on Greece’s refugee camp security measures, the FRA said it was “concerned about the necessity and proportionality of some of the measures and their possible impact on fundamental rights of residents” and recommended “less intrusive measures”.

    Asked during the control room tour in 2021 what is being done to ensure the operation of the Centaur system respects privacy laws and the EU’s General Data Protection Regulation (GDPR), Salis responded: “GDPR? I don’t see any personal data recorded.”

    ‘Spectacular #experimentation’

    While other EU countries have experimented with myriad migration management and surveillance systems, Greece’s refugee camp deployments are unique.

    “What we see in Greece is spectacular experimentation of a variety of systems that we might not find in this condensed way in other national contexts,” said Caterina Rodelli, a policy analyst at the digital rights non-profit Access Now.

    She added: “Whereas in other European countries you might find surveillance of migrant people, asylum seekers … Greece has paved the way for having more dense testing environments” within refugee camps – particularly since the creation of its EU-funded and tech-riddled refugee camps.

    The #Samos facility, arguably the EU’s flagship camp, has been advertised as a model and visited by officials from the UK, the US and Morocco. Technology deployments at Greece’s borders have already been replicated in other European countries.

    When compared with other Mediterranean states, Greece has also received disproportionate funding from the EU for its border reinforcement projects.

    In a report published in July, the research outfit Statewatch compared commission funds to Greece between 2014 and 2020 and those projected to be paid between 2021 and 2027, finding that “the funding directed specifically towards borders has skyrocketed from almost €303m to more than €1bn – an increase of 248%”.

    Greece’s Centre for Security Studies, a research and consulting institution overseen by the Greek minister of citizen protection, for example, received €12.8m in EU funds to develop border technologies – the most of any organisation analysed in the report during an eight-year period that ended in 2022.

    Surveillance and security systems at Greek refugee camps are funded through the EU’s Covid recovery fund, known formally as the European Commission’s Recovery and Resilience Facility, as well as the Internal Security Fund.

    Early warnings

    At the heart of the Greek DPA probe are questions about whether Greece has a legal basis for the type of data processing understood to be required in the programs, and whether it followed procedures required under GDPR.

    This includes the need to conduct data protection impact assessments (DPIAs), which demonstrate compliance with the regulation as well as help identify and mitigate various risks associated with personal data processing – a procedure the GDPR stipulates must be carried out far in advance of certain systems being deployed.

    The need to conduct these assessments before technology deployments take place was underscored by the Greek DPA in a letter sent to the Greek migration ministry in March 2022 at the launch of its probe, in which it wrote that “in the case of procurement of surveillance and control systems” impact studies “should be carried out not only before their operation, but also before their procurement”.

    Official warnings for Greece to tread carefully with the use of surveillance in its camps came as early as June 2021 – months before the opening of the first EU-funded camp on Samos Island – when the FRA provided input on the use of surveillance equipment in Greek refugee camps, and the Centaur project specifically.

    In a document reviewed by Computer Weekly, the FRA wrote that the system would need to undergo “a thorough impact assessment” to check its compatibility with fundamental rights, including data protection and privacy safeguards. It also wrote that “the Greek authorities need to provide details on the equipment they are planning to use, its intended purpose and the legal basis for the automated processing of personal data, which to our understanding include sensitive biometric data”.

    A botched process?

    However, according to documents obtained through public record requests, the impact assessments related to the programs were only carried out months after the systems were deployed and operational, while the first assessments were not shared with the commission until late January 2022.

    Subsequent communications between EU and Greek authorities reveal, for the first time, glaring procedural omissions and clumsy efforts by Greek authorities to backpedal into compliance.

    For example, Greece’s initial assessments of the Centaur system covered the use of the CCTV cameras, but not the potentially more sensitive aspects of the project such as the use of motion analysis algorithms and drones, a commission representative wrote to Greek authorities in May 2022. The representative further underscored the importance of assessing “the impact of the whole project on data protection principles and fundamental rights”.

    The commission also informed the Greek authorities that some areas where cameras were understood to have been placed, such as common areas inside accommodation corridors, could be deemed as “sensitive”, and that Greece would need to assess if these deployments would interfere with data protection, privacy and other rights such as non-discrimination or child rights.

    It also requested more details on the personal data categories being processed – suggesting that relevant information on the categories and modalities of processing – such as whether the categories would be inferred by a human or an algorithm-based technology – had been excluded. At the time, Greek officials had reported that only “physical characteristics” would be collected but did not expand further.

    “No explanation is provided on why less intrusive measures cannot be implemented to prevent and detect criminal activities,” the commission wrote, reminding Greece that “all asylum seekers are considered vulnerable data subjects”, according to guidelines endorsed by the European Data Protection Board (EDPB).

    The FRA, in informal input provided after its visit to the Samos camp in May 2022, recommended basic safeguards Greece could take to ensure camp surveillance systems are in full compliance with GDPR. This included placing visible signs to inform camp residents and staff “about the operation of CCTV cameras before entering a monitored area”.

    No such signs were visible at the camp’s entrance when Computer Weekly visited the Samos camp in early October this year, despite the presence of several cameras there.

    Computer Weekly understands that, as of early October, procedural requirements such as impact assessments had not yet been finalised, and that the migration ministry would remain in consultation with the DPA until all the programs were fully GDPR-compliant.

    Responding to Computer Weekly’s questions about the findings of this story, a Greek migration ministry spokesperson said: “[The ministry] is already in open consultation with the Greek DPA for the ‘Centaur’ and ‘Hyperion’ programs since March 2022. The consultation has not yet been completed. Both of these programs have not been fully implemented as several secondary functions are still in the implementation phase while the primary functions (video surveillance through closed circuit television and drone, entry – exit through security turnstiles) of the programs are subject to continuous parameterisation and are in pilot application.

    “The ministry has justified to the Greek DPA as to the necessity of implementing the measure of installing and operating video surveillance systems in the hospitality structures citing the damage that the structures constantly suffer due to vandalism, resulting in substantial damage to state assets … and risking the health of vulnerable groups such as children and their companions.”

    The commission wrote to Computer Weekly that it “do[es] not comment on ongoing investigations carried out by independent data protection authorities” and did not respond to questions on the deployment of the systems.

    Previous reporting by the Greek investigative outlet Solomon has similarly identified potential violations, including that the camp programs were implemented without the Greek ministry of migration and asylum hiring a data protection officer as required under the GDPR.

    Lack of accountability and transparency?

    The commission has said it applies all relevant checks and controls but that it is ultimately up to Greece to ensure refugee camps and their systems are in line with European standards.

    Vavoula, the researcher who was involved in the Greek DPA complaint, said the EU has been “funding … these initiatives without proper oversight”.

    Saskia Bricmont, a Belgian politician and a Member of the European Parliament with the Greens/European Free Alliance, described unsuccessful efforts to obtain more information on the systems deployed at Greece’s camps and borders: “Neither the commission nor the Greek authorities are willing to share information and to be transparent about it. Why? Why do they hide things – or at least give the impression they do?”

    The European Ombudsman recently conducted a probe into how the commission ensures fundamental rights are being respected at Greece’s EU-funded camps. It also asked the commission to weigh in on the surveillance systems and whether it had conducted or reviewed the data protection and fundamental rights impact assessments.

    The commission initially reported that Greece had “completed” assessments “before the full deployment of the surveillance systems”. In a later submission in August, however, the commission changed its wording – writing instead that the Greek authorities have “drawn up” the assessments “before the full deployment” of the tools.

    The commission did not directly respond to Computer Weekly’s query asking it to clarify whether the Greek authorities have “completed” or merely “drawn up” DPIAs, and whether the commission’s understanding of the status of the DPIAs changed between the initial and final submissions to the European ombudsman.

    Eleftherios Chelioudakis, co-founder of the Greek digital rights organisation Homo Digitalis, rejected the suggestion that there are different benchmarks on deployment. “There is no legal distinction between full deployment of a system or partial deployment of a system,” he said. “In both cases, there are personal data processing operations taking place.”

    Chelioudakis added that the Greek DPA holds that even the mere transmission of footage (even if no data is recorded/stored) constitutes personal data processing, and that GDPR rules apply.

    Check… check… is this camera on?

    Greek officials, initially eager to show off the camps’ surveillance apparatus, have grown increasingly tight-lipped on the precise status of the systems.

    When visiting the ministry’s centralised control room at the end of 2021, Computer Weekly’s reporter was told by officials that three camps – on Samos, Kos and Leros islands – were already fully connected to the systems and that the ministry was working “on a very tight timeframe” to connect the more than 30 remaining refugee camps in Greece. During a rare press conference in September 2022, Greece’s then-migration minister, Notis Mitarachi, said Centaur was in use at the three refugee camps on Samos, Kos and Leros.

    In October 2022, Computer Weekly’s reporter was also granted access to the local control room on Samos Island, and confirmed that monitoring systems were set up and operational but not yet in use. A drone has since been deployed and is being used in the Samos camp, according to several eyewitnesses.

    Officials appear to have exercised more caution with Hyperion, the fingerprint entry-exit system. Computer Weekly understands the system is fully set up and functioning at several of the camps – officials proudly demonstrated its functions during the inauguration of the Kos camp – but has not been in use.

    While it’s not yet clear if the more advanced and controversial features of Centaur are in use – or if they ever will be – what is certain is that footage from the cameras installed on several islands is being fed to a centralised control room in Athens.

    In early October, Computer Weekly’s reporter tried to speak with asylum seekers outside the Samos camp, after officials abruptly announced the temporary suspension of journalist access to this and other EU-funded camps. Guards behind the barbed wire fence at the camp’s gate asked the reporter to move out of the sight of cameras – installed at the gate and the camp’s periphery – afraid they would receive a scolding call from the migration ministry in Athens.

    “If they see you in the cameras they will call and ask, ‘Why is there a journalist there?’ And we will have a problem,” one of the guards said. Lawyers and others who work with asylum seekers in the camp say they’ve had similar experiences.

    On several occasions, Computer Weekly’s reporter has asked the Greek authorities to provide proof or early indications that the systems are improving safety for camp residents, staff and local communities. All requests have been denied or ignored.

    Lawyers and non-governmental organisations (NGOs) have also documented dozens of incidents that undermine Greek officials’ claims of increased safety in the tech-riddled camps.

    Unmet promises of increased security

    In September 2022, a peaceful protest by some 40 Samos camp residents who had received negative decisions on their asylum claims escalated into a riot. Staff evacuated the camp and police were called in and arrested several people.

    Lawyers representing those accused of instigating the brawl and throwing rocks at intervening police officers said they were struck by the absence of photographic or video evidence in the case, despite their clients’ request to use the footage to prove their innocence.

    “Even with all these systems, with all the surveillance, with all the cameras … there were no photographs or video, something to show that those arrested were guilty,” said Dimitris Choulis, a lawyer with the Human Rights Legal Project on Samos.

    Asked about the incident, the Samos camp director at the time explained that the system has blind spots and that the cameras do not cover all areas of the camp, a claim contrary to other official statements.

    Choulis’s organisation and the legal NGO I Have Rights have also collected testimonies from roughly a dozen individuals who claim they were victims of police brutality in the Samos CCAC beginning in July 2022.

    According to Nikos Phokas, a resident of Leros Island, which houses one of the EU-funded facilities, while the surveillance system has proven incapable of preventing harm on several occasions, the ability it gives officials in Athens to peer into the camps at any moment has shifted dynamics for camp residents, staff and the surrounding communities. “This is the worst thing about this camp – the terror the surveillance creates for people. Everyone watches their backs because of it.”

    He added the surveillance apparatus and the closed nature of the new camp on Leros has forced some camp employees to operate “under the radar” out of fear of being accused of engaging in any behaviour that may be deemed out-of-line by officials in Athens.

    For example, when clothes were needed following an influx of arrivals last summer, camp employees coordinated privately and drove their personal vehicles to retrieve items from local volunteers.

    “In the past, it was more flexible. But now there’s so much surveillance – Athens is looking directly at what’s happening here,” said Catharina Kahane, who headed the NGO ECHO100PLUS on Leros, but was forced to cut down on services because the closed nature of the camp, along with stricter regulations by the Greek migration ministry, made it nearly impossible for her NGO to provide services to asylum seekers.

    Camp staff in one of the island facilities organised a protest to denounce being subjected to the same monitoring and security checks as asylum seekers.

    Residents of the camps have mixed views on the surveillance. Heba*, a Syrian mother of three who lodged an asylum claim in Samos and was waiting out her application, in early October said the cameras and other security measures provided a sense of safety in the camp.

    “What we need more is water and food,” said Mohammed*, a Palestinian asylum seeker who got to Samos in the midst of a recent surge in arrivals that brought the camp’s population to nearly 200% capacity and has led to “inhumane and degrading conditions” for residents, according to NGOs. He was perplexed by the presence of high-tech equipment in a refugee camp that has nearly daily water cuts.

    https://www.computerweekly.com/feature/Greek-data-watchdog-to-rule-on-AI-systems-in-refugee-camps
    #camps_de_réfugiés #surveillance #AI #IA #intelligence_artificielle #Grèce #asile #migrations #réfugiés #camps_de_réfugiés #biométrie #algorithmes

  • The Guardian rips Microsoft for distasteful generative AI poll about death
    https://www.axios.com/2023/10/31/guardian-microsoft-generative-ai-poll-death

    Sara Fischer (Axios Media Trends)

    Screenshot of the poll, which was removed Monday, Oct. 31.

    The Guardian Media Group is demanding that Microsoft take public responsibility for running a distasteful AI-generated poll alongside a Guardian article about a woman found dead at a school in Australia, according to a letter from The Guardian CEO Anna Bateson to Microsoft president Brad Smith, obtained by Axios.

    The poll, which ran within Microsoft’s curated news aggregator platform Microsoft Start, asked readers to speculate on the cause of death of the woman featured in the article.

    Why it matters: While Microsoft did eventually remove the poll, the damage was already done.

    Readers slammed The Guardian and the article’s author in the poll’s comments section, assuming they were responsible for the blunder.

    Details: “This is clearly an inappropriate use of genAI by Microsoft on a potentially distressing public interest story, originally written and published by Guardian journalists,” Bateson wrote.

    “This application of genAI by Microsoft is exactly the sort of instance that we have warned about in relation to news, and a key reason why we have previously requested to your teams that we do not want Microsoft’s experimental genAI technologies applied to journalism licensed from the Guardian.”

    Between the lines: Bateson urged Microsoft to add a note to the poll, arguing there’s a strong case for Microsoft to take “full responsibility for it.”

    She also asked for assurance from Microsoft that it will not apply “experimental technologies on or alongside Guardian licensed journalism” without its explicit approval.
    She accused Microsoft of failing to “substantively respond” to the Guardian’s request to discuss how Microsoft intends to compensate news publishers for the use of their intellectual property “in the training and live deployment of AI technologies within your wider business ventures.”

    Microsoft didn’t immediately respond to a request for comment.

    The big picture: Newsrooms have been grappling with ways to leverage artificial intelligence responsibly while ensuring they don’t compromise their editorial content.

    Many are currently pushing tech firms to pay them to use their content to train AI models.

    What to watch: Following an embarrassing publishing experiment from CNET earlier this year, more media companies are including disclosures of the use of AI in their editorial products.

    In her letter to Smith, Bateson asked that Microsoft always make it clear to users “wherever genAI is involved in creating additional units and features as they apply to third party journalism from trusted news brands like the Guardian.”

    #Intelligence_artificielle #The_Guardian #Microsoft #Journalisme #Sondage

  • AI Modi singing is taking over India’s internet - Rest of World
    https://restofworld.org/2023/ai-voice-modi-singing-politics

    By Nilesh Christopher
    30 October 2023 • Bengaluru, India

    The Indian internet is rife with AI-created songs of Prime Minister Narendra Modi crooning in languages like Hindi, Tamil, and Telugu.
    AI-powered voice cloning tools are also being used ahead of the upcoming elections, with personalized messages in the voices of politicians sent to voters and party workers.

    The internet has been amused at an Instagram Reel where Indian Prime Minister Narendra Modi can be heard “singing” a hit Bollywood song. Accompanying the singing is a picture of Modi sitting cross-legged, strumming a guitar. The video, made by creator @ai_whizwires using artificial intelligence, has over 3.4 million views. “Before uploading [it], I was a little scared. But after it went live, everybody was enjoying it,” @ai_whizwires, who didn’t want to be identified by his real name over fear of political backlash, told Rest of World.

    The rise of free AI voice-cloning tools has allowed Indian meme pages like his to mix politics with entertainment and trolling, drawing more eyeballs and engagement. Over the last few weeks, Modi’s digitally rendered voice has been used for videos not just in Hindi, but also in south Indian languages like Tamil, Telugu, and Kannada, captivating audiences in regions where Hindi is not commonly spoken.

    But the videos, though lighthearted, serve a larger political purpose in India, a country with 22 official languages. Modi’s Hindi speeches can often be inaccessible to large swathes of the population that does not understand the language, but voice cloning could help make campaigns accessible, political strategist Sagar Vishnoi told Rest of World. AI voice cloning could break down this language barrier in India, especially the north-south linguistic divide, he said. “AI can be game-changing for [the] 2024 elections.”

    #Intelligence_artificielle #Voix #Politique

  • Company that used AI to revive voice of deceased Cyberpunk 2077 actor says it took “ethical” approach
    https://www.axios.com/2023/10/19/cyberpunk-2077-ai-voice-acting

    Driving the news: Respeecher’s work for Cyberpunk 2077 re-creates the voice of actor Miłogost “Miłek” Reczek, who performed the Polish voiceover for supporting character Viktor Vektor in the 2020 video game. Reczek died in 2021 prior to the recording of voice work for 2077’s expansion, released last month.

    #jeux_vidéo #jeu_vidéo #jeu_vidéo_cyberpunk_2077 #ia #intelligence_artificielle #synthèse #audio #voix #doublage #imitation #décès

  • Generative AIs reduce stereotypes to their most clichéd version
    https://www.nextinpact.com/lebrief/72691/les-ia-generatives-reduisent-stereotypes-a-leur-version-plus-clichee

    In July, Buzzfeed published an article containing 195 images produced by Midjourney, each meant to represent the stereotypical Barbie doll of a country of the world.

    Among Barbie Afghanistan, Barbie Albania, Barbie Algeria and the rest, several results were genuinely problematic: the representations meant to correspond to Thailand, Singapore and the Philippines had blond hair, Barbie Germany wore military clothing, Barbie South Sudan carried a gun…

    Although the article was eventually taken down, it inadvertently illustrated a whole series of biases and stereotypes that populate the image datasets used to train generative systems such as Midjourney, Dall-E or Stable Diffusion.

    To get a more precise idea of the phenomenon, the outlet Rest of World ran its own tests. For each prompt combining an element (a person, a house, a dish) with a nationality, the team generated 100 images, collecting 3,000 results in total.
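
    As a purely illustrative sketch of that kind of protocol (not Rest of World’s actual code), a driver script could loop over every element/nationality pair and request a fixed batch of images from the model being audited; generate_image below is a dummy stand-in for a real Midjourney, DALL-E or Stable Diffusion call, and the nationality list is a small assumed subset.

    ```python
    from itertools import product

    ELEMENTS = ["person", "house", "plate of food"]
    NATIONALITIES = ["Indian", "Mexican", "Nigerian", "Indonesian", "American"]  # illustrative subset
    IMAGES_PER_PROMPT = 100  # Rest of World generated 100 images per prompt

    def generate_image(prompt: str, index: int) -> bytes:
        """Dummy stand-in for the image generator being audited; returns placeholder bytes."""
        return f"{prompt}#{index}".encode()

    def run_audit() -> dict[str, list[bytes]]:
        """Collect a fixed batch of images for every element/nationality prompt."""
        results: dict[str, list[bytes]] = {}
        for nationality, element in product(NATIONALITIES, ELEMENTS):
            prompt = f"a photo of a {nationality} {element}"
            results[prompt] = [generate_image(prompt, i) for i in range(IMAGES_PER_PROMPT)]
        return results

    if __name__ == "__main__":
        batches = run_audit()
        print(f"{len(batches)} prompts, {sum(map(len, batches.values()))} images in total")
    ```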

    Whether generating images of the inhabitants of various countries or of the streets of cities supposedly in those same countries, Rest of World finds a marked tendency of image-generation models to produce highly reductive stereotypes.

    “An Indian person”, for example, almost always returns an old man with a beard and a turban. “A Mexican person”: a man, also rather old, wearing a sombrero. The supposed “streets of New Delhi” are depicted full of rubbish, and “Indonesian dishes” are always served on banana leaves.

    For AI Now Institute executive director Amba Kak, what these machines fundamentally do is “flatten descriptions, for example of an ‘Indian person’ or a ‘Nigerian house’, into specific stereotypes likely to be perceived negatively”. They erase all the complexity and heterogeneity of the cultures concerned, adds AI ethics researcher Sasha Luccioni.

    The problem is not only internal to the countries concerned; it is also international. A study by the Indian Institute of Science shows, for example, that asking a generative model to depict “a flag” tends to produce… an American flag.

    #Intelligence_artificielle #IA_générative #Biais #Génération_images

  • Artificial intelligence & technofascism - The affinities of the “progressive camp” with the far right
    https://www.piecesetmaindoeuvre.com/spip.php?article1896

    “Artificial intelligence” – in fact, machine computation – currently constitutes the most advanced state of the general Machinery. The most integrated, the most extensive, the most powerful; the Machine of machines. Theoretical and political critique has nothing more to say about it than everything already said by thousands of authors since the mathematician Norbert Wiener published Cybernetics, or Control and Communication in the Animal and the Machine in 1948. A word coined in 1834 by Ampère, another mathematician, to designate “the science of the government of men”. In short: compute everything in order to control everything. A totalitarian project. On the other hand, the concrete, material advent of the “all-digital”, of this all-powerful Machine (mega-networks + big data + supercomputers + algorithms), provokes (...)

    #Nécrotechnologies
    https://www.piecesetmaindoeuvre.com/IMG/pdf/ia_technofascisme.pdf

  • The State of #Chihuahua Is Building a 20-Story Tower in #Ciudad_Juarez to Surveil 13 Cities–and Texas Will Also Be Watching

    Chihuahua state officials and a notorious Mexican security contractor broke ground last summer on the #Torre_Centinela (Sentinel Tower), an ominous, 20-story high-rise in downtown Ciudad Juarez that will serve as the central node of a new AI-enhanced surveillance regime. With tentacles reaching into 13 Mexican cities and a data pipeline that will channel intelligence all the way to Austin, Texas, the monstrous project will be unlike anything seen before along the U.S.-Mexico border.

    And that’s saying a lot, considering the last 30-plus years of surging technology on the U.S side of the border.

    The Torre Centinela will stand in a former parking lot next to the city’s famous bullring, a mere half-mile south of where migrants and asylum seekers have camped and protested at the Paso del Norte International Bridge leading to El Paso. But its reach goes much further: the Torre Centinela is just one piece of the Plataforma Centinela (Sentinel Platform), an aggressive new technology strategy developed by Chihuahua’s Secretaria de Seguridad Pública Estatal (Secretary of State Public Security or SSPE) in collaboration with the company Seguritech.

    With its sprawling infrastructure, the Plataforma Centinela will create an atmosphere of surveillance and data-streams blanketing the entire region. The plan calls for nearly every cutting-edge technology system marketed at law enforcement: 10,000 surveillance cameras, face recognition, automated license plate recognition, real-time crime analytics, a fleet of mobile surveillance vehicles, drone teams and counter-drone teams, and more.

    If the project comes together as advertised in the Avengers-style trailer that SSPE released to influence public opinion, law enforcement personnel on site will be surrounded by wall-to-wall monitors (140 meters of screens per floor), while 2,000 officers in the field will be able to access live intelligence through handheld tablets.

    https://www.youtube.com/watch?v=NKPuur6s4qg

    Texas law enforcement will also have “eyes on this side of the border” via the Plataforma Centinela, Chihuahua Governor Maru Campos publicly stated last year. Texas Governor Greg Abbott signed a memorandum of understanding confirming the partnership.

    Plataforma Centinela will transform public life and threaten human rights in the borderlands in ways that aren’t easy to assess. Regional newspapers and local advocates–especially Norte Digital and Frente Político Ciudadano para la Defensa de los Derechos Humanos (FPCDDH)—have raised significant concerns about the project, pointing to a low likelihood of success and high potential for waste and abuse.

    “It is a myopic approach to security; the full emphasis is placed on situational prevention, while the social causes of crime and violence are not addressed,” FPCDDH member and analyst Victor M. Quintana tells EFF, noting that the Plataforma Centinela’s budget is significantly higher than what the state devotes to social services. “There are no strategies for the prevention of addiction, neither for rebuilding the fabric of society nor attending to dropouts from school or young people at risk, which are social causes of insecurity.”

    Instead of providing access to unfiltered information about the project, the State of Chihuahua has launched a public relations blitz. In addition to press conferences and the highly-produced cinematic trailer, SSPE recently hosted a “Pabellón Centinel” (Sentinel Pavilion), a family-friendly carnival where the public was invited to check out a camera wall and drones, while children played with paintball guns, drove a toy ATV patrol vehicle around a model city, and colored in illustrations of a data center operator.

    Behind that smoke screen, state officials are doing almost everything they can to control the narrative around the project and avoid public scrutiny.

    According to news reports, the SSPE and the Secretaría de Hacienda (Finance Secretary) have simultaneously deemed most information about the project as classified and left dozens of public records requests unanswered. The Chihuahua State Congress also rejected a proposal to formally declassify the documents and stymied other oversight measures, including a proposed audit. Meanwhile, EFF has submitted public records requests to several Texas agencies and all have claimed they have no records related to the Plataforma Centinela.

    This is all the more troubling considering the relationship between the state and Seguritech, a company whose business practices in 22 other jurisdictions have been called into question by public officials.

    What we can be sure of is that the Plataforma Centinela project may serve as a proof of concept for the kind of panopticon surveillance governments can get away with in both North America and Latin America.
    What Is the Plataforma Centinela?

    High-tech surveillance centers are not a new phenomenon on the Mexican side of the border. These facilities tend to use “C” distinctions to explain their functions and purposes. EFF has mapped out dozens of these in the six Mexican border states.

    https://www.eff.org/files/2023/09/14/c-centers_map.png
    https://www.google.com/maps/d/viewer?mid=1W73dMXnuXvPl5cSRGfi1x-BQAEivJH4&ll=25.210543464111723%2C-105.379

    They include:

    - C4 (Centro de Comunicación, Cómputo, Control y Comando) (Center for Communications, Calculation, Control, and Command),
    - C5 (Centro de Coordinación Integral, de Control, Comando, Comunicación y Cómputo del Estado) (Center for Integral Coordination for Control, Command, Communications, and State Calculation),
    - C5i (Centro de Control, Comando, Comunicación, Cómputo, Coordinación e Inteligencia) (Center for Control, Command, Communication, Calculation, Coordination and Intelligence).

    Typically, these centers function as a cross between a 911 call center and a real-time crime center, with operators handling emergency calls, analyzing crime data, and controlling a network of surveillance cameras via a wall bank of monitors. In some cases, the Cs may be presented in a different order or stand for slightly different words. For example, some C5s might alternately stand for “Centros de Comando, Control, Comunicación, Cómputo y Calidad” (Centers for Command, Control, Communication, Computation and Quality). These facilities also exist in other parts of Mexico. The number of Cs often indicates scale and responsibilities, but more often than not, it seems to be a political or marketing designation.

    The Plataforma Centinela, however, goes far beyond the scope of previous projects and in fact will be known as the first C7 (Centro de Comando, Cómputo, Control, Coordinación, Contacto Ciudadano, Calidad, Comunicaciones e Inteligencia Artificial) (Center for Command, Calculation, Control, Coordination, Citizen Contact, Quality, Communications and Artificial Intelligence). The Torre Centinela in Ciudad Juarez will serve as the nerve center, with more than a dozen sub-centers throughout the state.

    According to statistics that Gov. Campos disclosed as part of negotiations with Texas and news reports, the Plataforma Centinela will include:

    - 1,791 automated license plate readers. These are cameras that photograph vehicles and their license plates, then upload that data along with the time and location where the vehicles were seen to a massive searchable database. Law enforcement can also create lists of license plates to track specific vehicles and receive alerts when those vehicles are seen (see the illustrative sketch after this list).
    - 4,800 fixed cameras. These are your run-of-the-mill cameras, positioned to permanently surveil a particular location from one angle.
    - 3,065 pan-tilt-zoom (PTZ) cameras. These are more sophisticated cameras. While they are affixed to a specific location, such as a street light or a telephone pole, these cameras can be controlled remotely. An operator can swivel the camera through a full 360 degrees and zoom in on subjects.
    - 2,000 tablets. Officers in the field will be issued handheld devices for accessing data directly from the Plataforma Centinela.
    - 102 security arches. This is a common form of surveillance in Mexico, but not the United States. These are structures built over highways and roads to capture data on passing vehicles and their passengers.
    - 74 drones (Unmanned Aerial Vehicles/UAVs). While the Chihuahua government has not disclosed what surveillance payload will be attached to these drones, it is common for law enforcement drones to deploy video, infrared, and thermal imaging technology.
    - 40 mobile video surveillance trailers. While details on these systems are scant, it is likely these are camera towers that can be towed to and parked at targeted locations.
    - 15 anti-drone systems. These systems are designed to intercept and disable drones operated by criminal organizations.
    - Face recognition. The project calls for “biometric filters” to be applied to camera feeds “to assist in the capture of cartel leaders,” as well as the collection of migrant biometrics. Such a system would require scanning the faces of the general public.
    - Artificial intelligence. So far, the administration has thrown around the term AI without fully explaining how it will be used. Typically, however, law enforcement agencies have used this technology to “predict” where crime might occur, identify individuals most likely to be connected to crime, and surface potential connections between suspects that would not have been obvious to a human observer. All of these technologies have a propensity for making errors or exacerbating existing bias.
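
    To make the license plate reader workflow described in the list above concrete, here is a minimal sketch of how a “hotlist” alert might work. It is an illustration only: the hotlist contents, field names, and alerting behavior are invented assumptions and say nothing about how Seguritech’s actual software is built.

    ```python
    from dataclasses import dataclass
    from datetime import datetime

    # Hypothetical hotlist of plates flagged by investigators (illustrative only).
    HOTLIST = {"ABC1234", "XYZ9876"}

    @dataclass
    class PlateRead:
        plate: str        # normalized plate text produced by the camera's OCR
        camera_id: str    # which fixed camera or highway arch produced the read
        lat: float
        lon: float
        seen_at: datetime

    def ingest(read: PlateRead, history: list[PlateRead]) -> None:
        """Log every read in a searchable history and raise an alert on hotlist matches."""
        history.append(read)  # every passing vehicle is retained, not just suspects
        if read.plate in HOTLIST:
            print(f"ALERT: {read.plate} seen by {read.camera_id} "
                  f"at ({read.lat}, {read.lon}) on {read.seen_at:%Y-%m-%d %H:%M}")

    # Example: a single read arriving from a camera feed.
    reads: list[PlateRead] = []
    ingest(PlateRead("ABC1234", "arch-17", 31.74, -106.49, datetime.now()), reads)
    ```

    The detail that matters in this sketch is the data model, not the alert: every read is stored and remains searchable, which is what turns a traffic camera network into a retroactive location-tracking system for the general public.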

    As of May, 60% of the Plataforma Centinela camera network had been installed, with an expected completion date of December, according to Norte Digital. However, the cameras were already being used in criminal investigations.

    All combined, this technology amounts to an unprecedented expansion of the surveillance state in Latin America, as SSPE brags in its promotional material. The threat to privacy may also be unprecedented: creating cities where people can no longer move freely in their communities without being watched, scanned, and tagged.

    But that’s assuming the system functions as advertised—and based on the main contractor’s history, that’s anything but guaranteed.
    Who Is Seguritech?

    The Plataforma Centinela project is being built by the megacorporation Seguritech, which has signed deals with more than a dozen government entities throughout Mexico. As of 2018, the company had received no-bid contracts in at least 10 Mexican states and cities, which means it was able to sidestep the accountability process that requires companies to compete for projects.

    And when it comes to the Plataforma Centinela, the company isn’t simply a contractor: It will actually have ownership over the project, the Torre Centinela, and all its related assets, including cameras and drones, until August 2027.

    That’s what SSPE Secretary Gilberto Loya Chávez told the news organization Norte Digital, but the terms of the agreement between Seguritech and Chihuahua’s administration are not public. The SSPE’s Transparency Committee decided to classify the information “concerning the procedures for the acquisition of supplies, goods, and technology necessary for the development, implementation, and operation of the Plataforma Centinela” for five years.

    In spite of the opacity shrouding the project, journalists have surfaced some information about the investment plan. According to statements from government officials, the Plataforma Centinela will cost 4.2 billion pesos, with Chihuahua’s administration paying regular installments to the company every three months (Chihuahua’s governor had previously described these as yearly payments of 700 million to 1 billion pesos per year; the figures are broadly consistent, since paying off 4.2 billion pesos by 2027 implies annual outlays in that range). According to news reports, when the payments are completed in 2027, ownership of the platform’s assets and infrastructure is expected to pass from Seguritech to the state of Chihuahua.

    The Plataforma Centinela project marks a new pinnacle in Seguritech’s trajectory as a Mexican security contractor. Founded in 1995 as a small business selling neighborhood alarms, SeguriTech Privada S.A. de C.V. became a highly profitable brand, and it currently operates in five areas: security, defense, telecommunications, aeronautics, and construction. According to Zeta Tijuana, Seguritech also secures contracts through its affiliated companies, including Comunicación Segura (focused on telecommunications and security) and Picorp S.A. de C.V. (focused on architecture and construction, including prisons and detention centers). Zeta also identified another Seguritech company, Tres10 de C.V., as the contractor named in various C5i projects.

    Thorough reporting by Mexican outlets such as Proceso, Zeta Tijuana, Norte Digital, and Zona Free paint an unsettling picture of Seguritech’s activities over the years.

    Former President Felipe Calderón’s war on drug trafficking, initiated during his 2006-2012 term, marked an important turning point for surveillance in Mexico. As Proceso reported, Seguritech began to secure major government contracts beginning in 2007, receiving its first billion-peso deal in 2011 with Sinaloa’s state government. In 2013, avoiding the bidding process, the company secured a 6-billion peso contract assigned by Eruviel Ávila, then governor of the state of México (or Edomex, not to be confused with the country of Mexico). During Enrique Peña Nieto’s years as Edomex’s governor, and especially later, as Mexico’s president, Seguritech secured its status among Mexico’s top technology contractors.

    According to Zeta Tijuana, during the six years that Peña Nieto served as president (2012-2018), the company monopolized contracts for the country’s main surveillance and intelligence projects, specifically the C5i centers. As Zeta Tijuana writes:

    “More than 10 C5i units were opened or began construction during Peña Nieto’s six-year term. Federal entities committed budgets in the millions, amid opacity, violating parliamentary processes and administrative requirements. The purchase of obsolete technological equipment was authorized at an overpriced rate, hiding information under the pretext of protecting national security.”

    Zeta Tijuana further cites records from the Mexican Institute of Industrial Property showing that Seguritech registered the term “C5i” as its own brand, an apparent attempt to make it more difficult for other surveillance contractors to provide services under that name to the government.

    Despite promises from government officials that these huge investments in surveillance would improve public safety, the country’s number of violent deaths increased during Peña Nieto’s term in office.

    “What is most shocking is how ineffective Seguritech’s system is,” says Quintana, the spokesperson for FPCDDH. By his analysis, Quintana says, “In five out of six states where Seguritech entered into contracts and provided security services, the annual crime rate shot up in proportions ranging from 11% to 85%.”

    Seguritech has also been criticized for inflated prices, technical failures, and deploying obsolete equipment. According to Norte Digital, only 17% of surveillance cameras were working by the end of the company’s contract with Sinaloa’s state government. Proceso notes the rise of complaints about the malfunctioning of cameras in Cuauhtémoc Delegation (a borough of Mexico City) in 2016. Zeta Tijuana reported on the disproportionate amount the company charged for installing 200 obsolete 2-megapixel cameras in 2018.

    Seguritech’s track record led to formal complaints and judicial cases against the company. The company has responded to this negative attention by hiring services to take down and censor critical stories about its activities published online, according to investigative reports published as part of the Global Investigative Journalism Network’s Forbidden Stories project.

    Yet, none of this information dissuaded Chihuahua’s governor, Maru Campos, from closing a new no-bid contract with Seguritech to develop the Plataforma Centinela project.
    A Cross-Border Collaboration

    The Plataforma Centinela project presents a troubling escalation in cross-border partnerships between states, one that cuts out each nation’s respective federal governments. In April 2022, the states of Texas and Chihuahua signed a memorandum of understanding to collaborate on reducing “cartels’ human trafficking and smuggling of deadly fentanyl and other drugs” and to “stop the flow of migrants from over 100 countries who illegally enter Texas through Chihuahua.”

    https://www.eff.org/files/2023/09/14/a_new_border_model.png

    While much of the agreement centers around cargo at the points of entry, the document also specifically calls out the various technologies that make up the Plataforma Centinela. In attachments to the agreement, Gov. Campos promises Chihuahua is “willing to share that information with Texas State authorities and commercial partners directly.”

    During a press conference announcing the MOU, Gov. Abbott declared, “Governor Campos has provided me with the best border security plan that I have seen from any governor from Mexico.” He held up a three-page outline and a slide, which were also provided to the public, but referenced the existence of “a much more extensive detailed memo that explains in nuance” all the aspects of the program.

    Abbott went on to read out a summary of Plataforma Centinela, adding, “This is a demonstration of commitment from a strong governor who is working collaboratively with the state of Texas.”

    Then Campos, in response to a reporter’s question, added: “We are talking about sharing information and intelligence among states, which means the state of Texas will have eyes on this side of the border.” She added that the data collected through the Plataforma Centinela will be analyzed by both the states of Chihuahua and Texas.

    Abbott provided an example of one way the collaboration will work: “We will identify hotspots where there will be an increase in the number of migrants showing up because it’s a location chosen by cartels to try to put people across the border at that particular location. The Chihuahua officials will work in collaboration with the Texas Department of Public Safety, where DPS has identified that hotspot and the Chihuahua side will work from a law enforcement side to disrupt that hotspot.”

    In order to learn more about the scope of the project, EFF sent public records requests to several Texas agencies, including the Governor’s Office, the Texas Department of Public Safety, the Texas Attorney General’s Office, the El Paso County Sheriff, and the El Paso Police Department. Not one of the agencies produced records related to the Plataforma Centinela project.

    Meanwhile, Texas is further beefing up its efforts to use technology at the border, including by enacting new laws that formally allow the Texas National Guard and State Guard to deploy drones at the border and that authorize the governor to enter compacts with other states to share intelligence and resources to build “a comprehensive technological surveillance system” on state land to deter illegal activity at the border. In addition to the MOU with Chihuahua, Abbott also signed similar agreements with the states of Nuevo León and Coahuila in 2022.
    Two Sides, One Border

    The Plataforma Centinela has enormous potential to violate the rights of one of the largest cross-border populations along the U.S.-Mexico border. But while law enforcement officials are eager to collaborate and traffic data back and forth, advocacy efforts around surveillance too often are confined to their respective sides.

    The Spanish-language press in Mexico has devoted significant resources to investigating the Plataforma Centinela and raising the alarm over its lack of transparency and accountability, as well as its potential for corruption. Yet, the project has received virtually no attention or scrutiny in the United States.

    Fighting back against surveillance of cross-border communities requires cross-border efforts. EFF supports the efforts of advocacy groups in Ciudad Juarez and other regions of Chihuahua to expose the mistakes the Chihuahua government is making with the Plataforma Centinela and to call out its mammoth surveillance approach for failing to address the root social issues. We also salute the efforts by local journalists to hold the government accountable. However, U.S.-based journalists, activists, and policymakers (many of whom have done an excellent job surfacing criticism of Customs and Border Protection’s so-called virtual wall) must also turn their attention to the massive surveillance apparatus building up on the Mexican side.

    In reality, there is no separate Mexican surveillance and U.S. surveillance. It’s one massive surveillance monster that, ironically, in the name of border enforcement, recognizes no borders itself.

    https://www.eff.org/deeplinks/2023/09/state-chihuahua-building-20-story-tower-ciudad-juarez-surveil-13-cities-and-sta
    #surveillance #tour #surveillance_de_masse #cartographie #visualisation #intelligence_artificielle #AI #IA #frontières #contrôles_frontaliers #technologie #Plataforma_Centinela #données #reconnaissance_faciale #caméras_de_surveillance #drones #Seguritech #complexe_militaro-industriel #Mexique

  • ChatGPT Can Now Respond With Spoken Words - The New York Times
    https://www.nytimes.com/2023/09/25/technology/chatgpt-talk-digital-assistance.html?nl=todaysheadlines&emc=edit_th_2023092

    ChatGPT has learned to talk.

    OpenAI, the San Francisco artificial intelligence start-up, released a version of its popular chatbot on Monday that can interact with people using spoken words. As with Amazon’s Alexa, Apple’s Siri, and other digital assistants, users can talk to ChatGPT and it will talk back.

    For the first time, ChatGPT can also respond to images. People can, for example, upload a photo of the inside of their refrigerator, and the chatbot can give them a list of dishes they could cook with the ingredients they have.

    “We’re looking to make ChatGPT easier to use — and more helpful,” said Peter Deng, OpenAI’s vice president of consumer and enterprise product.

    OpenAI has accelerated the release of its A.I. tools in recent weeks. This month, it unveiled a version of its DALL-E image generator and folded the tool into ChatGPT.

    Alexa and Siri have long provided ways of interacting with smartphones, laptops and other devices through spoken words. But chatbots like ChatGPT and Google Bard have more powerful language skills and are able to instantly write emails, poetry and term papers, and riff on almost any topic tossed their way.

    OpenAI has essentially combined the two communication methods.

    The company sees talking as a more natural way of interacting with its chatbot. It argues that ChatGPT’s synthetic voices (people can choose from five different options, including male and female voices) are more convincing than others used with popular digital assistants.

    #Intelligence_artificielle #Voix #Dialogues_parlés

  • GRR vs. GPT: Game of Thrones author sues OpenAI
    https://actualitte.com/article/113522/droit-justice/grr-contre-gpt-l-auteur-de-game-of-thrones-attaque-openai

    On the other side of the Atlantic, artificial intelligence is not exactly making authors dream. Or rather, the way OpenAI’s and Meta’s tools are built troubles them: trained on texts, the tools also draw on authors’ works, without authorization or compensation. A class action lawsuit is being organized against OpenAI (ChatGPT), with some big names on board, including G.R.R. Martin, Jodi Picoult, John Grisham and Jonathan Franzen.

    In a statement, the Authors Guild asserts that “without the plaintiffs’ works, [OpenAI] would offer very different products,” in the words of attorney Rachel Geman. “The choice to copy the plaintiffs’ works, without their authorization and without paying compensation, constitutes a threat to authors and their incomes.”

    The artificial intelligences behind tools like ChatGPT are trained on massive volumes of data, notably texts accessible online. These texts are “read” and processed by the algorithms in order to improve their speed, their understanding of queries, and the coherence of the answers they provide.

    The use of copyrighted texts by companies developing artificial intelligence has raised many questions in recent months, all the more so because access to these texts may have been obtained via pirate platforms such as Library Genesis, Z-Library, Sci-Hub or Bibliotik... a way to reach massive reservoirs of text without inflating the bill.

    Moreover, the authors fault the creators of AI not only for doing without their authorization, but also for failing to pay any compensation and for ignoring any notion of crediting sources.

    In July, the Authors Guild published an open letter, co-signed by more than 10,000 American authors, demanding several commitments from the companies in question. To satisfy the organization, they were to pledge “to obtain consent, credit, and fairly compensate authors for the use of copyrighted writings to train artificial intelligence.” Evidently, the discussions did not go as planned...

    Finally, the Authors Guild maintains that training ChatGPT on literary works allows the artificial intelligence to imitate the style of their authors, threatening the integrity of their work and their identity.

    GPT is already being used to generate books that ape the work of human authors, as shown by the recent attempt to generate volumes 6 and 7 of George R.R. Martin’s A Song of Ice and Fire saga, as well as other AI-generated books posted on Amazon in an attempt to usurp the identity and reputation of human authors.

    – The Authors Guild

    #Intelligence_artificielle #Auteurs #Plainte #Author_Guild

  • Data & Society — Democratizing AI: Principles for Meaningful Public Participation
    https://datasociety.net/library/democratizing-ai-principles-for-meaningful-public-participation

    As AI is deployed in ways that dramatically shape people’s lives and opportunities, the public has little input into its use. In light of this, there are increasing calls to integrate democratic and human-centered values into AI through public participation. 
    Public participation enables the people who are most likely to be affected by a given system to have influence over that system’s design and deployment, including decision-making power. Evidence from multiple fields indicates that, when done right, public participation helps to avert harmful impacts of new projects. Input from the public can result in concrete improvements to a program, or in the rejection of proposals that community members did not support. It brings a range of social and cultural values into decision-making above and beyond narrow technical parameters. Public participation adds legitimacy to decisions because people trust processes they understand and influence. It improves accountability by adding layers of scrutiny and discussion between the public and decision-makers.
    Yet public participation is difficult to do well, and its mechanisms can backfire if they are not carefully designed. Fortunately, policymakers do not need to design public participation for AI from scratch. Building on a comprehensive review of evidence from public participation efforts in policy domains such as anti-poverty programs and environmental policy, in Democratizing AI: Principles for Meaningful Public Participation, Michele Gilman summarizes evidence-based recommendations for better structuring public participation processes for AI, and underscores the urgency of enacting them.

    #Intelligence_artificielle #Régulation #Participation_citoyenne