• EU’s AI Act Falls Short on Protecting Rights at Borders

    Despite years of tireless advocacy by a coalition of civil society groups and academics (including the author), the European Union’s new law regulating artificial intelligence falls short on protecting the most vulnerable. Late in the night on Friday, Dec. 8, the European Parliament reached a landmark deal on the long-awaited Artificial Intelligence Act (AI Act). After years of meetings, lobbying, and hearings, the EU member states, the Commission, and the Parliament agreed on the provisions of the act, with technical meetings and formal approval still to come before the final text of the legislation is released to the public. A so-called “global first” that races ahead of the United States, the EU’s bill is the first regional attempt to create omnibus AI legislation. Unfortunately, it once again does not sufficiently recognize the vast human rights risks of border technologies and should go much further in protecting the rights of people on the move.

    From surveillance drones patrolling the Mediterranean to vast databases collecting sensitive biometric information to experimental projects like robo-dogs and AI lie detectors, every step of a person’s migration journey is now impacted by risky and unregulated border technology projects. These technologies are fraught with privacy infringements and discriminatory decision-making, and can even impact the life, liberty, and security of people seeking asylum. They also undermine procedural rights, muddying responsibility for opaque and discretionary decisions and lacking clear mechanisms of redress when something goes wrong.

    The EU’s AI Act could have been a landmark global standard for the protection of the rights of the most vulnerable. But once again, it does not provide the necessary safeguards around border technologies. For example, while recognizing that some border technologies could fall under the high-risk category, it is not yet clear what, if any, border tech projects will be included in the final high-risk category of projects that are subject to transparency obligations, human rights impact assessments, and greater scrutiny. The Act also has various carveouts and exemptions in place, for example for matters of national security, which can encapsulate technologies used in migration and border enforcement. And crucial discussions around bans on high-risk technologies in migration never even made it into the Parliament’s final deal terms at all. Even the bans which have been announced, for example around emotion recognition, apply only in the workplace and in education, not at the border. Moreover, what exactly is banned remains to be seen, and outstanding questions to be answered in the final text include the parameters around predictive policing as well as the exceptions to the ban on real-time biometric surveillance, still allowed in instances of a “threat of terrorism,” a targeted search for victims, or the prosecution of serious crimes. It is also particularly troubling that the AI Act explicitly leaves room for technologies for which Frontex, the EU’s border force, has a particular appetite. Frontex released its AI strategy on Nov. 9, signaling interest in predictive tools and situational analysis technology. These tools, when used without safeguards, can facilitate illegal border interdiction operations, including the “pushbacks” for which the agency has been investigated. The Protect Not Surveil Coalition has been trying to influence European policymakers to ban predictive analytics used for the purposes of border enforcement. Unfortunately, no migration tech bans at all seem to be in the final Act.

    The lack of bans and red lines under the high-risk uses of border technologies in the EU’s position stands in opposition to years of academic research as well as international guidance, such as that by then-U.N. Special Rapporteur on contemporary forms of racism, E. Tendayi Achiume. For example, a recently released report by the University of Essex and the Office of the U.N. High Commissioner for Human Rights (OHCHR), which I co-authored with Professor Lorna McGregor, argues for a human rights-based approach to digital border technologies, including a moratorium on the most high-risk border technologies such as border surveillance, which pushes people on the move into dangerous terrain and can even assist with illegal border enforcement operations such as forced interdictions, or “pushbacks.” The EU did not adopt even a fraction of this position on border technologies.

    While it is promising to see strict regulation of high-risk AI systems such as self-driving cars or medical equipment, why are the risks of unregulated AI technologies at the border allowed to continue unabated? My work over the last six years spans borders from the U.S.-Mexico corridor to the fringes of Europe to East Africa and beyond, and I have witnessed time and again how technological border violence operates in an ecosystem replete with the criminalization of migration, anti-migrant sentiment, overreliance on the private sector in an increasingly lucrative border-industrial complex, and deadly practices of border enforcement that lead to thousands of deaths at borders. From vast biometric data collected without consent in refugee camps, to algorithms replacing visa officers and making discriminatory decisions, to AI lie detectors used at borders to discern apparent liars, the rollout of unregulated technologies is ever-growing. The opaque and discretionary world of border enforcement and immigration decision-making is built on societal structures underpinned by intersecting systemic racism and historical discrimination against people migrating, allowing high-risk technological experimentation to thrive at the border.

    The EU’s weak governance of border technologies will allow more and more experimental projects to proliferate, setting a global standard for how governments approach migration technologies. The United States is no exception, and in an upcoming election year in which migration will once again be in the spotlight, there does not seem to be much incentive to regulate technologies at the border. The Biden administration’s recently released Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence does not offer a regulatory framework for these high-risk technologies, nor does it discuss their vast impacts on people migrating or take a human rights-based approach to them. Unfortunately, the EU often sets a precedent for how other countries govern technology. With the weak protections the EU AI Act offers on border technologies, it is no surprise that the U.S. government is emboldened to do as little as possible to protect people on the move from harmful technologies.

    But real people are already at the centre of border technologies. People like Mr. Alvarado, a young husband and father from Latin America in his early 30s, who perished mere kilometers away from a major highway in Arizona, in search of a better life. I visited his memorial site after hours of trekking through the beautiful yet deadly Sonora desert with a search-and-rescue group. For my upcoming book, The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence, I was documenting the growing surveillance dragnet of the so-called smart border that pushes people to take increasingly dangerous routes, leading to increasing loss of life at the U.S.-Mexico border. Border technologies as a deterrent simply do not work. People desperate for safety – and exercising their internationally protected right to asylum – will not stop coming. They will instead take more circuitous routes, and scholars like Geoffrey Boyce and Samuel Chambers have already documented a threefold increase in deaths at the U.S.-Mexico frontier as the so-called smart border expands. In the not-so-distant future, will people like Mr. Alvarado be pursued by the Department of Homeland Security’s recently announced robo-dogs, a military-grade technology that is sometimes armed?

    It is no accident that more robust governance of migration technologies is not forthcoming. Border spaces increasingly serve as testing grounds for new technologies, places where regulation is deliberately limited and where an “anything goes” frontier attitude informs the development and deployment of surveillance at the expense of people’s lives. There is also big money to be made in developing and selling high-risk technologies. Why does the private sector get to determine, time and again, what we innovate on and why, in often problematic public-private partnerships which states are increasingly keen to strike in today’s global AI arms race? For example, whose priorities really matter when we choose to create violent sound cannons or AI-powered lie detectors at the border instead of using AI to identify racist border guards? Technology replicates power structures in society. Unfortunately, the viewpoints of those most affected are routinely excluded from the discussion, particularly around no-go zones and ethically fraught uses of technology.

    Seventy-seven border walls and counting are now cutting across the landscape of the world. They are both physical and digital, justifying broader surveillance under the guise of detecting illegal migrants and catching terrorists, creating suitable enemies we can all rally around. The use of military, or quasi-military, autonomous technology bolsters the connection between immigration and national security. None of these technologies, projects, and sets of decisions are neutral. All technological choices – choices about what to count, who counts, and why – have an inherently political dimension and replicate biases that render certain communities at risk of being harmed, communities that are already under-resourced, discriminated against, and vulnerable to the sharpening of borders all around the world.

    As is once again clear with the EU’s AI Act and the direction of U.S. policy on AI so far, the impacts on real people seem to have been forgotten. Kowtowing to industry and making concessions so as not to stifle private-sector innovation does not protect people, especially those most marginalized. Human rights standards and norms are the bare minimum in the growing panopticon of border technologies. More robust and enforceable governance mechanisms are needed to regulate the high-risk experiments at borders and in migration management, including, at the very least, a moratorium on violent technologies and red lines under military-grade technologies, polygraph machines, and predictive analytics used for border interdictions. These laws and governance mechanisms must also include efforts at local, regional, and international levels, as well as global co-operation and commitment to a human rights-based approach to the development and deployment of border technologies. However, for more robust policymaking on border technologies to actually effect change, people with lived experiences of migration must also be in the driver’s seat when interrogating both the negative impacts of technology and the creative solutions that innovation can bring to the complex stories of human movement.

    https://www.justsecurity.org/90763/eus-ai-act-falls-short-on-protecting-rights-at-borders

    #droits #frontières #AI #IA #intelligence_artificielle #Artificial_Intelligence_Act #AI_act #UE #EU #drones #Méditerranée #mer_Méditerranée #droits_humains #technologie #risques #surveillance #discrimination #transparence #contrôles_migratoires #Frontex #push-backs #refoulements #privatisation #business #complexe_militaro-industriel #morts_aux_frontières #biométrie #données #racisme #racisme_systémique #expérimentation #smart_borders #frontières_intelligentes #pouvoir #murs #barrières_frontalières #terrorisme

    • The Walls Have Eyes. Surviving Migration in the Age of Artificial Intelligence

      A chilling exposé of the inhumane and lucrative sharpening of borders around the globe through experimental surveillance technology

      “Racism, technology, and borders create a cruel intersection . . . more and more people are getting caught in the crosshairs of an unregulated and harmful set of technologies touted to control borders and ‘manage migration,’ bolstering a multibillion-dollar industry.” —from the introduction

      In 2022, the U.S. Department of Homeland Security announced it was training “robot dogs” to help secure the U.S.-Mexico border against migrants. Four-legged machines equipped with cameras and sensors would join a network of drones and automated surveillance towers—nicknamed the “smart wall.” This is part of a worldwide trend: as more people are displaced by war, economic instability, and a warming planet, more countries are turning to A.I.-driven technology to “manage” the influx.

      Based on years of researching borderlands across the world, lawyer and anthropologist Petra Molnar’s The Walls Have Eyes is a truly global story—a dystopian vision turned reality, where your body is your passport and matters of life and death are determined by algorithm. Examining how technology is being deployed by governments on the world’s most vulnerable with little regulation, Molnar also shows us how borders are now big business, with defense contractors and tech start-ups alike scrambling to capture this highly profitable market.

      With a foreword by former U.N. Special Rapporteur E. Tendayi Achiume, The Walls Have Eyes reveals the profound human stakes, foregrounding the stories of people on the move and the daring forms of resistance that have emerged against the hubris and cruelty of those seeking to use technology to turn human beings into problems to be solved.

      https://thenewpress.com/books/walls-have-eyes
      #livre #Petra_Molnar

  • Google wrongly labels photos a father emailed to a doctor, at the doctor’s request, as child abuse

    https://www.nytimes.com/2022/08/21/technology/google-surveillance-toddler-photo.html

    The nurse said to send photos so the doctor could review them in advance.

    Mark’s wife grabbed her husband’s phone and texted a few high-quality close-ups of their son’s groin area to her iPhone so she could upload them to the health care provider’s messaging system. In one, Mark’s hand was visible, helping to better display the swelling. Mark and his wife gave no thought to the tech giants that made this quick capture and exchange of digital data possible, or what those giants might think of the images.

    [...]

    Two days after taking the photos of his son, Mark’s phone made a blooping notification noise: His account had been disabled because of “harmful content” that was “a severe violation of Google’s policies and might be illegal.” A “learn more” link led to a list of possible reasons, including “child sexual abuse & exploitation.”

    The photos were automatically uploaded from his phone to his Google account...

    [...]

    A human content moderator for Google would have reviewed the photos after they were flagged by the artificial intelligence to confirm they met the federal definition of child sexual abuse material. When Google makes such a discovery, it locks the user’s account, searches for other exploitative material and, as required by federal law, makes a report to the CyberTipline at the National Center for Missing and Exploited Children.

    Even after the police cleared him, Google has not returned his account, resulting in the loss of more than 10 years of data: contacts, emails, photos. Google has given no statement or explanation.

    #privacy
    #artificial_intelligence

  • Airbus Artificial Intelligence Challenges
    AI Gym
    https://aigym-v.airbus.com/contest/5bc834b8ba7add0027f3ac5a

    Open: 18 Oct 2018 | Closed: 01 Jun 2019

    Interested parties ranging from established companies, start-ups, research labs, and schools to individuals can express their interest in registering for the challenge by email to timeserieschallenge.request@airbus.com at any time until the end of 2018.

    CONTEXT
    Technologies at the intersection of #Artificial_Intelligence and #Internet_of_Things / #Big_Data are pushing the boundaries of the state of the art in #Time_Series_Analysis and #Predictive_Maintenance.

    #AIRBUS is launching this scientific challenge on anomaly detection in time series data in order to:
    ● scout for top players in the field of Time Series Analysis
    ● encourage the research community to tackle the specific issues related to the data generated by the aerospace industry during tests and in operations.

    OVERVIEW
    Data collected from our platforms is mostly considered normal. Due to the high quality of our products and of the aerospace context, faults and failures are very rare, and we cannot afford to wait until hundreds of examples of each new fault type have accumulated before identifying and anticipating them. We are interested in detecting unexpected changes in the behavior of the systems we monitor, and in reacting rapidly when analyzing suspect behavior.

    TECHNICAL SCOPE
    We set up a three-stage challenge to benchmark unsupervised detection algorithms, based on three use cases:

    1) Business Domain : Helicopters // number of input sensors : 1 // Sampling Frequency : 1000Hz // expected output : classify sequence as OK / KO

    2) Business Domain : Satellites // number of input sensors : 30 // Sampling Frequency : 1000Hz // expected output : classify sequence as OK / KO

    3) Business Domain : Commercial Aircraft // number of context sensors: 81 // number of sensors for anomaly detection: 9 // Sampling Frequency: 8Hz // expected output : identify anomalous time windows on sensors of interest

    We welcome any and all working technical approaches, ranging from statistics (e.g., SCP) to more established machine learning techniques (e.g., Isolation Forest) to modern AI (e.g., deep learning with LSTMs).
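
    Conceptually, the unsupervised setup described above can be sketched with a simple statistical baseline: estimate what “normal” looks like from the (mostly normal) data, then flag windows that deviate strongly. The sketch below is only an illustration of that idea, not a reference solution for the challenge; the window size and threshold are invented for the example.

```python
import statistics

def anomalous_windows(series, window=50, z_thresh=4.0):
    """Flag non-overlapping windows whose mean deviates strongly from
    the series-wide baseline (assumed to be mostly normal data)."""
    mu = statistics.mean(series)
    sigma = statistics.stdev(series)
    flagged = []
    for start in range(0, len(series) - window + 1, window):
        w = series[start:start + window]
        # the mean of a window of n i.i.d. samples has standard
        # deviation sigma / sqrt(n), so the z-score is scaled accordingly
        z = abs(statistics.mean(w) - mu) / (sigma / window ** 0.5)
        if z > z_thresh:
            flagged.append((start, start + window))
    return flagged
```

    A detector like this would classify a helicopter or satellite sequence as KO whenever any of its windows is flagged; stronger entries would replace the z-score with the Isolation Forests or LSTMs named above.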

    TIMELINE
    The challenge will officially start at the beginning of 2019, with a first training phase in Q1 2019. The second phase will be a shorter evaluation in Q2 2019. A restitution workshop will be organised in June 2019.

    #IA
    #AI #IoT

  • Beginning Artificial Intelligence with the Raspberry Pi

    Gain a gentle introduction to the world of Artificial Intelligence (AI) using the Raspberry Pi as the computing platform. Most of the major AI topics will be explored, including expert systems, machine learning (both shallow and deep), fuzzy logic control, and more!

    AI in action will be demonstrated using the Python language on the Raspberry Pi. The Prolog language will also be introduced and used to demonstrate fundamental AI concepts. In addition, the Wolfram language will be used as part of the deep machine learning demonstrations.

    A series of projects will walk you through how to implement AI concepts with the Raspberry Pi. Minimal expense is needed for the projects as only a few sensors and actuators will be required. Beginners and hobbyists can jump right in to creating AI projects with the Raspberry Pi using this book.

    What You’ll Learn
    What AI is and, as importantly, what it is not
    Inference and expert systems
    Machine learning, both shallow and deep
    Fuzzy logic and how to apply it to an actual control system
    When AI might be appropriate to include in a system
    Constraints and limitations of the Raspberry Pi AI implementation
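
    As a taste of the fuzzy logic control mentioned above, here is a minimal, hypothetical fan-speed controller in Python. The membership ranges and rule outputs are invented for this illustration and are not taken from the book.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temp_c):
    """Toy fuzzy controller: weigh the 'warm' and 'hot' rules by their
    membership degrees, then defuzzify with a weighted average."""
    warm = tri(temp_c, 15, 25, 35)   # rule: warm -> fan speed 40
    hot = tri(temp_c, 25, 40, 55)    # rule: hot  -> fan speed 90
    total = warm + hot
    return 0.0 if total == 0 else (warm * 40 + hot * 90) / total
```

    A temperature of 30 °C is partly “warm” and partly “hot”, so the controller blends the two rule outputs instead of switching abruptly between them.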

    Who This Book Is For
    Hobbyists, makers, engineers involved in designing autonomous systems and wanting to gain an education in fundamental AI concepts, and non-technical readers who want to understand what AI is and how it might affect their lives.

    Table of Contents
    Chapter 1: Introduction to Artificial Intelligence
    Chapter 2: Basic AI Concepts
    Chapter 3: Expert System Demonstrations
    Chapter 4: Games
    Chapter 5: Fuzzy Logic System
    Chapter 6: Machine Learning
    Chapter 7: Machine Learning: Artificial Neural Networks
    Chapter 8: Machine Learning: Deep Learning
    Chapter 9: Machine Learning: Practical ANN Demonstrations
    Chapter 10: Evolutionary Computing
    Chapter 11: Behavior-Based Robotics
    Appendix A: Build Instructions for the Alfie Robot Car

    https://www.amazon.fr/Beginning-Artificial-Intelligence-Raspberry-Pi/dp/1484227425

    #book #livre
    #AI #IA #artificial_intelligence #intelligence_artificielle
    #Raspberry_Pi #Python

  • How Big data mines personal info to manipulate voters and craft fake news
    (June 2017, Nina Burleigh)

    #Facebook, #Cambridge_Analytica, #artificial_intelligence #big_data #psychographics #OCEAN #surveillance

    http://www.newsweek.com/2017/06/16/big-data-mines-personal-info-manipulate-voters-623131.html

    “It’s my privilege to speak to you today about the power of Big Data and psychographics in the electoral process,” he [Alexander Nix] began. As he clicked through slides, he explained how Cambridge Analytica can appeal directly to people’s emotions, bypassing cognitive roadblocks, thanks to the oceans of data it can access on every man and woman in the country.

    After describing Big Data, Nix talked about how Cambridge was mining it for political purposes, to identify “mean personality” and then segment personality types into yet more specific subgroups, using other variables, to create ever smaller groups susceptible to precisely targeted messages.

    [...]

    Big Data, artificial intelligence and algorithms designed and manipulated by strategists like the folks at Cambridge have turned our world into a Panopticon

    [...]

    it made tens of millions of “friends” by first employing low-wage tech-workers to hand over their Facebook profiles: It spiders through Facebook posts, friends and likes, and, within a matter of seconds, spits out a personality profile, including the so-called OCEAN psychological tendencies test score (openness, conscientiousness, extraversion, agreeableness and neuroticism)

    [...]

    Facebook was even more useful for Trump, with its online behavioral data on nearly 2 billion people around the world, each of whom is precisely accessible to strategists and marketers who can afford to pay for the peek. Team Trump created a 220 million–person database, nicknamed Project Alamo, using voter registration records, gun ownership records, credit card purchase histories and the monolithic data vaults Experian PLC, Datalogix, Epsilon and Axiom Corporation.

    [...]

    Among the tools Facebook offers advertisers is its Lookalike Audiences program. An advertiser (or a political campaign manager) can come to Facebook with a small group of known customers or supporters and ask Facebook to expand it. Using its access to billions of posts and pictures, likes and contacts, Facebook can create groups of people who are “like” that initial group, and then target advertising made specifically to influence them.
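
    The expansion step described here can be illustrated with a toy set-similarity ranking: score every user in a population by how closely their likes overlap a seed group’s, then keep the top matches. The data and scoring below are invented for illustration; Facebook’s actual Lookalike algorithm is proprietary and certainly far more elaborate.

```python
def jaccard(a, b):
    """Overlap of two sets of page likes, from 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def lookalikes(seed_groups, population, top_k=2):
    """Rank users by their best similarity to any seed set of likes."""
    scores = {
        user: max(jaccard(likes, seed) for seed in seed_groups)
        for user, likes in population.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

    In practice the features would be behavioral signals rather than raw like-sets, but the shape is the same: a small seed, a similarity measure, and a ranked expansion.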

    [...]

    By 2012, there had been huge advances in what Big Data, social media and AI could do together. That year, Facebook conducted a happy-sad emotional manipulation experiment, splitting a million people into two groups and manipulating the posts so that one group received happy updates from friends and another received sad ones. They then ran the effects through algorithms and proved—surprise—that they were able to affect people’s moods. (Facebook, which has the greatest storehouse of personal behavior data ever amassed, is still conducting behavioral research, mostly, again, in the service of advertising and making money.)

    [...]

    Psychographic algorithms allow strategists to target not just angry racists but also the most intellectually gullible individuals, people who make decisions emotionally rather than cognitively. For Trump, such voters were the equivalent of diamonds in a dark mine. Cambridge apparently helped with that too. A few weeks before the election, in a Sky News report on the company, an employee was actually shown on camera poring over a paper on “The Need for Cognition Scale,” which, like the OCEAN test, can be applied to personal data, and which measures the relative importance of thinking versus feeling in an individual’s decision-making.

    [...]

    Big Data technology has so far outpaced legal and regulatory frameworks that discussions about the ethics of its use for political purposes are still rare. No senior member of Congress or administration official in Washington has placed a very high priority on asking what psychographic data mining means for privacy, nor about the ethics of political messaging based on evading cognition or rational thinking, nor about the AI role in mainstreaming racist and other previously verboten speech.

    [...]

    After months of investigations and increasingly critical articles in the British press (especially by The Guardian’s Carole Cadwalladr, who has called Cambridge Analytica’s work the framework for an authoritarian surveillance state, and whose reporting Cambridge has since legally challenged), the British Information Commissioner’s Office (ICO), an independent agency that monitors privacy rights and adherence to the U.K.’s strict laws, announced May 17 that it is looking into Cambridge and SCL for their work in the Brexit vote and other elections.

    [...]

    Now in the White House, Kushner heads the administration’s Office of Technology and Innovation. It will focus on “technology and data,” the administration stated. Kushner said he plans to use it to help run government like a business, and to treat American citizens “like customers.”

  • JPMorgan Software Does in Seconds What Took Lawyers 360,000 Hours

    The program, called COIN, for Contract Intelligence, does in seconds and without errors the mind-numbing job of interpreting commercial-loan agreements, something that consumed 360,000 hours of work each year by lawyers and loan officers.

    https://www.bloomberg.com/news/articles/2017-02-28/jpmorgan-marshals-an-army-of-developers-to-automate-high-finance

    Another program called X-Connect, which went into use in January, examines e-mails to help employees find colleagues who have the closest relationships with potential prospects and can arrange introductions.

    [...]

    While growing numbers of people in the industry worry such advancements might someday take their jobs, many Wall Street personnel are more focused on benefits. A survey of more than 3,200 financial professionals by recruiting firm Options Group last year found a majority expect new technology will improve their careers, for example by improving workplace performance.

    [...]

    the company keeps tabs on 2,000 technology ventures, using about 100 in pilot programs that will eventually join the firm’s growing ecosystem of partners. For instance, the bank’s machine-learning software was built with Cloudera Inc., a software firm that JPMorgan first encountered in 2009.

    #artificial_intelligence #intelligence_artificielle #AI #IA
    #finance

  • AI Predicts Autism From Infant Brain Scans

    Scientists at the University of North Carolina have developed an algorithm that can predict autism in babies between 6 and 12 months old. Based on brain scans, the algorithm’s prediction appears to be right about 81 percent of the time.

    http://spectrum.ieee.org/the-human-os/biomedical/imaging/ai-predicts-autism-from-infant-brain-scans

    #AI #IA #artificial_intelligence #intelligence_artificielle
    #deep_learning
    #autism
    #brain

  • Facebook open-sources its AI software for image recognition

    https://research.facebook.com/blog/learning-to-segment

    The main new algorithms driving our advances are the DeepMask segmentation framework coupled with our new SharpMask segment refinement module. Together, they have enabled FAIR’s [Facebook AI Research] machine vision systems to detect and precisely delineate every object in an image. The final stage of our recognition pipeline uses a specialised convolutional net, which we call MultiPathNet, to label each object mask with the object type it contains (e.g. person, dog, sheep). We will return to the details shortly.

    We’re making the code for DeepMask+SharpMask as well as MultiPathNet — along with our research papers and demos related to them — open and accessible to all, with the hope that they’ll help rapidly advance the field of machine vision

    [...]

    In addition, our next challenge will be to apply these techniques to video, where objects are moving, interacting, and changing over time. We’ve already made some progress with computer vision techniques to watch videos and understand and classify what’s in them in real time. Real-time classification could help surface relevant and important Live videos on Facebook, while applying more refined techniques to detect scenes, objects, and actions over space and time could one day allow for real-time narration. We’re excited to continue pushing the state of the art and providing better experiences on Facebook for everyone.
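
    The overall shape of the pipeline described above (propose a mask for each object, then label each mask) can be mimicked with a toy connected-components pass over a binary image. The real systems use convolutional nets at every stage, so the sketch below, with a made-up size-based “classifier”, only illustrates the structure, not the method.

```python
def segment_and_label(grid):
    """Toy stand-in for a propose-then-label pipeline: find connected
    foreground regions (the 'masks'), then attach a label to each one.
    Here the 'classifier' is just a size rule, purely for illustration."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    results = []
    for i in range(h):
        for j in range(w):
            if grid[i][j] and not seen[i][j]:
                stack, mask = [(i, j)], []
                seen[i][j] = True
                while stack:  # flood-fill one region
                    y, x = stack.pop()
                    mask.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                label = "large" if len(mask) > 3 else "small"
                results.append((label, sorted(mask)))
    return results
```

    In the real pipeline, DeepMask-style proposals replace the flood fill, SharpMask refines each mask’s boundary, and a convolutional classifier replaces the size rule.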

    DeepMask: Learning to Segment Object Candidates.
    Pedro O. Pinheiro, Ronan Collobert, Piotr Dollár (NIPS 2015)

    https://arxiv.org/pdf/1506.06204v2.pdf
    https://arxiv.org/abs/1506.06204

    SharpMask: Learning to Refine Object Segments.
    Pedro O. Pinheiro, Tsung-Yi Lin, Ronan Collobert, Piotr Dollár (ECCV 2016)

    https://arxiv.org/pdf/1603.08695v2.pdf
    https://arxiv.org/abs/1603.08695

    MultiPathNet: A Multipath Network for Object Detection.
    Sergey Zagoruyko, Adam Lerer, Tsung-Yi Lin, Pedro O. Pinheiro, Sam Gross, Soumith Chintala, Piotr Dollár (BMVC 2016)

    https://arxiv.org/pdf/1604.02135v2.pdf
    https://arxiv.org/abs/1604.02135

    #AI #Artificial_Intelligence
    #machine_vision #image_recognition

  • Machine Learning algorithm fed by Instagram reveals predictive markers of depression

    http://www.digitaltrends.com/social-media/ai-program-uses-instagram-to-diagnose-depression

    A new artificial intelligence program can pick up on the early signs of depression before humans (even general practitioners) can, just by using Instagram. A team of researchers from Harvard and the University of Vermont recently developed a machine learning program that correctly identified which Instagram users were clinically depressed with 70 percent accuracy.

    The study by Andrew G. Reece and Christopher M. Danforth:

    http://arxiv.org/pdf/1608.03282v2.pdf

    Abstract: Using Instagram data from 166 individuals, we applied machine learning tools to successfully identify markers of depression. Statistical features were computationally extracted from 43,950 participant Instagram photos, using color analysis, metadata components, and algorithmic face detection. Resulting models outperformed general practitioners’ average diagnostic success rate for depression. These results held even when the analysis was restricted to posts made before depressed individuals were first diagnosed. Photos posted by depressed individuals were more likely to be bluer, grayer, and darker. Human ratings of photo attributes (happy, sad, etc.) were weaker predictors of depression, and were uncorrelated with computationally generated features. These findings suggest new avenues for early screening and detection of mental illness.
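
    The color features the abstract relies on (hue, saturation, brightness) can be approximated with Python’s standard colorsys module. This is a minimal sketch of the feature-extraction idea, assuming per-pixel RGB input; it is not the authors’ code.

```python
import colorsys

def mean_hsv(pixels):
    """Average hue, saturation, value over a list of 0-255 RGB pixels.
    Bluer photos have a mean hue near 2/3; darker photos a lower value.
    (Naively averaging hue ignores its circularity; fine for a sketch.)"""
    h = s = v = 0.0
    for r, g, b in pixels:
        hh, ss, vv = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        h += hh
        s += ss
        v += vv
    n = len(pixels)
    return h / n, s / n, v / n
```

    Features like these, computed per photo, would then feed a classifier alongside metadata components and face-detection counts.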

    #machine_learning #artificial_intelligence #AI #face_detection
    #depression

  • Seymour Papert, computer scientist, born 29 February 1928 ; died 31 July 2016 | The Guardian
    https://www.theguardian.com/education/2016/aug/03/seymour-papert-obituary

    Child’s play had been considered largely inconsequential, but Piaget saw that it was an essential part of a child’s cognitive development. Children were “learning by doing”. Today’s educational toy industry started from there.

    Papert understood that mathematics was abstract and theoretical, and that was how it was taught to children. That was why most of them did not understand it. The answer, he thought, was to give children a physical way to think of mathematical ideas.
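    The physical way of thinking Papert arrived at became the Logo turtle: a child steers a drawing robot with body-centric commands, and geometric figures emerge from repetition. A minimal sketch of that turtle geometry (a toy re-implementation, not Logo itself):

```python
import math

# Toy version of Logo-style turtle geometry: the turtle has a position
# and a heading, and is driven by forward/right commands.
class Turtle:
    def __init__(self):
        self.x, self.y, self.heading = 0.0, 0.0, 0.0  # heading in degrees

    def forward(self, dist):
        rad = math.radians(self.heading)
        self.x += dist * math.cos(rad)
        self.y += dist * math.sin(rad)

    def right(self, angle):
        self.heading = (self.heading - angle) % 360

t = Turtle()
for _ in range(4):      # the classic Logo square: repeat 4 [forward 100 right 90]
    t.forward(100)
    t.right(90)

print(round(t.x), round(t.y))  # back at the start: 0 0
```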

    #jeu #enseignement #informatique #interactivité #matérialisation #pionniers #logo #lego

    (I’d welcome a more interesting text)

  • #Facebook is using AI to make detailed maps of where people live | The Verge
    http://www.theverge.com/2016/2/22/11075456/facebook-population-density-maps-internet-org

    The project is part of Facebook’s Connectivity Lab, the technical arm of its #Internet.org initiative that deals with #drones, #satellites, and lasers for delivering #internet to rural areas and developing countries. With better maps, the company is able to determine whether Wi-Fi hotspots or cellular technologies are better for bringing people online — and helping them sign up for Facebook naturally.

    FACEBOOK ANALYZED 20 COUNTRIES COVERING 21.6 MILLION SQUARE KILOMETERS

    To generate the maps, Connectivity Lab worked with Facebook’s data science division, infrastructure unit, and #machine_learning and #artificial_intelligence groups.

    OK.

  • #RIP Marvin Minsky, dead at 88

    https://en.wikipedia.org/wiki/Marvin_Minsky

    “Intelligence is not the product of any singular mechanism, but comes from the managed interaction of a diverse variety of resourceful agents.”

    (Minsky in The Society of Mind)

    His HTML 1.0 website:

    http://web.media.mit.edu/~minsky

    In 1952 he also invented what he called The Most Useless Machine Ever.
    https://www.youtube.com/watch?v=Z86V_ICUCD4

    In French:

    http://www.lemonde.fr/disparitions/article/2016/01/26/marvin-minsky-pionnier-de-l-intelligence-artificielle-est-mort_4854155_3382.

    #artificial_intelligence #intelligence_artificielle
    #GOFAI
    #Marvin_Minsky
    #MIT

  • Google uses artificial intelligence for the 15% of queries it has never received before

    In early 2014, Google acquired #DeepMind, a company specializing in artificial intelligence, for €365M. It was a talent acquisition. That deal brought in DeepMind’s co-founder, Demis Hassabis, who about a year later announced a collaboration with the University of Oxford to deepen their machine-learning algorithms.

    Their research bears the name #RankBrain.
    When Google does not recognize a word or phrase, RankBrain tries to guess or predict which word or phrase might have a similar meaning. In this way the search engine is better able to answer a brand-new query, as well as ambiguous questions.

    http://www.bloomberg.com/news/articles/2015-10-26/google-turning-its-lucrative-web-search-over-to-ai-machines

    For the past few months, a “very large fraction” of the millions of queries a second that people type into the company’s search engine have been interpreted by an artificial intelligence system, nicknamed RankBrain, said Greg Corrado, a senior research scientist with the company, outlining for the first time the emerging role of AI in search.

    RankBrain uses artificial intelligence to embed vast amounts of written language into mathematical entities — called vectors — that the computer can understand. If RankBrain sees a word or phrase it isn’t familiar with, the machine can make a guess as to what words or phrases might have a similar meaning and filter the result accordingly, making it more effective at handling never-before-seen search queries.

    [...]

    The addition of RankBrain to search is part of a half-decade-long push by Google into AI, as the company seeks to embed the technology into every aspect of its business. “Machine learning is a core transformative way by which we are rethinking everything we are doing,”
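    The “vectors” idea in the quoted passage can be illustrated with a toy sketch. The vocabulary and vector values below are invented for illustration only; real systems learn hundreds of dimensions from large text corpora, and RankBrain’s actual model is proprietary:

```python
import math

# Toy word vectors (invented). Nearby vectors stand for similar meanings,
# which is what lets a search engine guess at never-before-seen terms.
VECTORS = {
    "car":        [0.90, 0.10, 0.00],
    "automobile": [0.85, 0.15, 0.05],
    "banana":     [0.00, 0.90, 0.30],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def most_similar(word):
    """Known word whose vector is closest to `word`'s vector."""
    return max((w for w in VECTORS if w != word),
               key=lambda w: cosine(VECTORS[word], VECTORS[w]))

print(most_similar("car"))  # automobile
```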

    #machine_learning
    #artificial_intelligence #Intelligence_artificielle

  • Will Advances in Technology Create a Jobless Future? | MIT Technology Review
    http://www.technologyreview.com/featuredstory/538401/who-will-own-the-robots

    We’re in the midst of a jobs crisis, and rapid advances in AI and other technologies may be one culprit. How can we get better at sharing the wealth that technology creates?

    #technologie #emplois #robots

  • Alan Turing: The Imitation Game (2014)

    (the film contains a few too many “Turing” clichés for my taste, but it’s still worth seeing)

    Trailer:
    https://www.youtube.com/watch?v=S5CjKEFb-sM

    “Turing, l’homme qui cassait les codes” (Turing, the man who cracked codes):
    http://www.lexpress.fr/actualite/sciences/turing-l-homme-qui-cassait-les-codes_1638747.html

    But why this site [#Bletchley_Park] rather than another? “This ordinarily dreary little town sits at the geometric center of intellectual England, where the railway line from London branches off toward Oxford and Cambridge,” answers Andrew Hodges as he guides us through the museum. He speaks as an expert. Dean of Wadham College, Oxford, he came over as a neighbor, by train. He wrote a biography with a title smacking of wordplay, Alan Turing: The Enigma, a text that has just been translated into French in its entirety.

    And also this ARTE documentary from June 2014:
    “La Drôle de guerre d’Alan Turing, ou comment les maths ont vaincu Hitler” (Alan Turing’s Phoney War, or How Maths Defeated Hitler)
    https://www.youtube.com/watch?v=9b7wAdVyCV0

    What if the Normandy landings had only been possible thanks to an anti-militarist, non-conformist mathematician whose dream was to build an artificial brain? The dreamer in question was Alan Turing, and his field of study was the most fundamental branch of mathematics: logic. Far removed, in principle, from any concrete application. How could this eccentric scientist have contributed to the Allied victory? The answer lies in the small town of Bletchley Park, in the outer suburbs of London. It was here, during the Second World War, that a vast chess game played out whose stake was the decryption of the German army’s secret communications. And the key piece in that game was none other than Alan Turing, the inventor of what was not yet called the computer. Despite his brilliant mind, Turing was treated odiously after the war: his homosexuality brought him criminal prosecution, and he took his own life in 1954 after being forced to undergo chemical castration…

    …and which attempts, among other things, to explain Turing’s Universal Machine.

    #intelligence_artificielle #artificial_intelligence

    • That the arrival of the main supporting character is a bit romanticized, for commercial reasons, fine, let it pass.

      But the manipulations and conspiracy plots are a problem (in the film Turing is outright a traitor, complicit with a pro-Soviet spy); and above all, giving the Bombe the first name Christopher really seems to me to distort the story. The film’s central thread then becomes a sort of hypothesis that the quest for artificial intelligence is essentially driven by the fantasy of resurrecting his childhood love.

    • In this month’s Les Lettres françaises:

      No supreme savior

      Imitation Game is a film destined for great success. It has real qualities, both in direction and in the actors’ performances, which will probably allow it to triumph with audiences and critics alike. It also carries great philosophical ambitions on weighty subjects with important stakes for our contemporary societies in the era of the digital deluge.
      The film depicts some of the circumstances in which the research teams mobilized by the British intelligence services during the Second World War cracked the secret of enemy army communications. We discover in particular the role of the mathematician Alan Turing, who set about breaking the encryption produced by the German Enigma machine. His contribution took the form of another machine designed to automate the processing of coded messages. That contribution, long unknown (the term “ultra secret” was coined in part to cover this domain), was only revealed in the early 1990s, nearly forty years after Turing’s tragic death. On that occasion, the thesis of a war significantly “shortened” thanks to the results of the teams assembled at Bletchley Park was widely circulated.
      Unfortunately, no doubt to heighten the drama, the film introduces numerous shortcuts, biases, even historical untruths, which specialists in military history, and in the history of technology generally, have not failed to point out. Some meetings never took place, some characters are caricatured, some events are hypothetical. Turing’s psychology in particular seems to oscillate between detachment and arrogance so as to make him a larger-than-life character, when in fact his work at the time could only have had an impact insofar as it was integrated into the collective effort mobilized within a much broader strategy.
      But beyond these liberties taken with historical fact, acceptable insofar as the film presents itself as a “fictionalization” of those facts, there is in its overall argument a far more problematic dimension that no longer stems merely from scriptwriting efficiency. The film suggests that the machine Turing built during the war was a computer, or more precisely a first physical realization of the “Turing machine” he had described on paper in 1936. Yet while Turing brought all his science to bear on breaking the Enigma code, he did so precisely not by building a universal “Turing machine,” and thus a computer, but a machine that, however ingenious, was dedicated solely to that particular task. A true first computer would only be assembled in the second half of the 1940s, once the war was over. Thus the link drawn between the Allied victory, and by extension that of democracy and the free world, and the advent of computers is a thesis that proves historically more than debatable and ideologically highly contestable. Especially at a time when the myriad applications, for anything and everything, that flow from the proliferation of Turing machines take, among other forms, that of the totalitarian surveillance apparatus at the NSA and GCHQ recently exposed by Edward Snowden.

    • “a sort of hypothesis that the quest for artificial intelligence is essentially driven by the fantasy of resurrecting his childhood love.”

      For Jean Lassègue, Turing was instead seeking answers to existential questions about his own identity, notably his sexual identity.

      The famous “Turing test” notably describes a “calibration” of the experimental setup in which the point is to no longer be able to tell a man from a woman...

    • Returning to the text of the article in which the Turing test is described by its author, Lassègue draws attention to oddities in its formulation that went unnoticed before him, the standard account of the test having flattened out its curiosities. As Turing conceived it, his test, which would fix the (historical) moment at which a machine could be called “intelligent” in the sense that a human being is intelligent (as opposed to an “intelligent” animal), is a variant of the following game. Three people are gathered: a man, a woman, and a third person of either sex: the player. The player’s task is to guess which of the two interlocutors is the man and which the woman. The difficulty lies in the fact that the player communicates with the two accomplices, hidden from his immediate perception, solely through messages exchanged by teletype or, to update the problem harmlessly, by email. The player wins if he guesses his interlocutors’ sexual identity, and loses otherwise.

      Lassègue makes a number of very judicious remarks about this initial game. He observes first that over the long run (a certain number of rounds), the player only truly wins if his success rate differs significantly from 50%, the rate he would obtain simply by “flipping a coin” for each round (pp. 153-154). He also notes, and this is a crucial element of his psychobiographical reading, that Turing seems to take for granted that the man’s prototypical strategy will be to lie, while the woman’s will be to tell the truth (p. 159).

      I would add, because it matters once the game of sexual difference is transformed into a “test of artificial intelligence,” that the outcome of the game depends on the combined skill of all three actors. The player can win through his own ability, but just as well because the man gives himself away (he lies badly), or because the woman is clumsy (she lacks conviction even though she is telling the truth).

      The Turing test is in principle a variant of the game of sexual difference, except that the man is replaced by a computer. What does that mean? Turing is so hasty with his example (to which, it must be stressed, he does not grant the critical significance philosophers would later attribute to it) that he does not specify which of the two distinct games his new definition allows actually constitutes the test. In the first, the player knows that of the two accomplices facing him, one is a woman and the other a computer (this is the “classical” interpretation of the Turing test: the computer proves its [human] intelligence by being unmasked no more often than chance; the woman here stands in for the entire human race). In the second game, the man has been replaced by a computer without the player’s knowledge: the player believes he faces a man and a (flesh-and-blood) woman, that is, believes he is playing the game of sexual difference.

      The essential difference between the two games possible under Turing’s definition becomes clear when one examines the case in which the player loses. In the first definition of the game, the accomplices win (the player’s lack of skill aside) either because the computer managed to hide its machine nature, or because the woman managed to pass herself off convincingly as a computer (I leave to the imagination of readers fond of Forbidden Planet, 2001: A Space Odyssey, Blade Runner, etc., the means of pulling off this subterfuge). In the second definition of the game turned into a test, the accomplices triumph because the player has taken the woman for a man and the computer for a woman.

      As one easily sees, simply transposed as Turing does it, the new game, in either of its two possible avatars, is devoid of interest, if not downright stupid. This is what led Lassègue to highlight the incoherences of the supposed Turing test and (particularly in his 1996 article in English) to insist that the test is unworkable. Which is indeed the case if, as we have just seen, one takes literally the idea of the test as a simple transposition of the game. It is nevertheless possible, with a few corrections, to redefine it so that it corresponds to a perfectly workable test of artificial intelligence. To do so, one must first be in the second scenario: the one in which the man’s role is played by a machine without the player’s knowledge (and ideally without the knowledge of the woman accomplice as well). One must also shift the perspective of interpretation: this time, the player of the test is no longer the player of the game of sexual difference; the true player is the computer. Indeed, whether the player of the initial game “loses” (takes the woman for a man and the machine for a woman) or “wins” (recognizes the woman as a woman and takes the machine for a man), it is the true player of the test, the computer, that will have passed. The only authentic victory of the player over the machine, the one signaling that the machine has failed the test by being seen through, consists in foiling the stratagem by stepping entirely outside the frame of the game and declaring (with indignation, on a “meta-ludic” level): “B is a woman, whereas A is a machine passing itself off as a human being!”

      Lassègue is interested in the psychological implications, for Turing, of his supposition that the woman’s prototypical strategy is to tell the truth and the man’s to lie. Turing would find himself dragged shamefully before the courts for homosexuality, sentenced to a humiliating medical treatment, and barred from working, as he had until then, on projects tied to British national defense (on the view, common at the time, that homosexuals were too easy a prey for blackmail); no surprise, then, if he considered that, like the talents he had had to deploy in the period preceding his indictment, the essence of man (as opposed to woman) lies in his capacity for dissimulation.

      http://lhomme.revues.org/18

    • Also:

      http://rue89.nouvelobs.com/rue89-culture/2015/01/28/imitation-game-alan-turing-decus-decus-decus-257376

      “The Imitation Game” thus pulls off the feat of staging a homosexual character while making an ultra-feminine star his romantic partner throughout the film. And while carefully avoiding showing him kiss a man.

      It is not very daring, but above all strange, since the question of his homosexuality is one of the things that make Alan Turing’s life so sadly novelistic. For if Turing died a martyr, it is because at the trial that convicted him of “indecency” (in fact, of homosexuality), he could not invoke his status as a war hero in his defense: the research he had led still had to remain secret.

      Moreover, his death was largely conditioned by that conviction. Rather than go to prison, he chose chemical castration, which not only left him impotent but also made him gain weight, to the point that he had to give up running, at which he excelled.

  • Ray Kurzweil, Google’s cyber-prophet

    http://www.france4.fr/emissions/l-autre-jt/les-sujets/ray-kurzweil-le-cyber-prophete-de-google-une-story-de-france-swimberge_28062

    https://www.youtube.com/watch?v=7wWcEusQGEQ

    Google’s number three is a prophet. A guru of transhumanism, he foresees the advent of an artificial intelligence that will surpass human intelligence. Bill Gates calls him the best futurologist of his era. France Swimberge met him for an exceptional interview.

    Personally, the more I hear him talk, the more I roll my eyes; he has been recycling the same message(s) for nearly 20 years. Also, it seems the old wave of artificial intelligence and its frightening fantasies is resurfacing these days as if it were something new. I see nothing but reheated leftovers. And his ideas left Hans Moravec and Jaron Lanier cold.

    But his Singularity, we now know, has a first name: Lucy.

    (And I don’t see what’s so exceptional about this interview for L’autre JT either, by the way)

    Ray Kurzweil’s Slippery Futurism

    A good article that helps you assess his visionary statements with plain common-sense criticism. Many of them are just obvious extrapolations of what was happening around us at the time.

    http://spectrum.ieee.org/computing/software/ray-kurzweils-slippery-futurism

    His stunning prophecies have earned him a reputation as a tech visionary, but many of them don’t look so good on close inspection.

    [...]

    Some facts make his predictions look less impressive:

    The first is that to see, in 1990, a society using networked computers for everyday tasks, you didn’t need to be prophetic. You just needed to be French. France’s government began issuing dumb terminals to telephone subscribers for free in 1981 to encourage use of the paid Minitel online information, or videotex, service. Minitel allowed users to look up phone numbers, purchase train and airline tickets, use message boards and databases, and purchase items through mail order.

    But also:
    “Le cerveau, des lignes de code ? Le transhumaniste Kurzweil se plante” (The brain as lines of code? Transhumanist Kurzweil gets it wrong)
    http://rue89.nouvelobs.com/2014/05/09/cerveau-lignes-codes-transhumaniste-kurzweil-plante-252052
    http://seenthis.net/messages/255285
    (via @bug_in, @xporte)

    About his 2010 film “The Singularity is Near”: (meh, that one too)
    http://seenthis.net/messages/98641 (@de_quels_droits_)

    PS: I don’t deny his genius in the concrete invention of plenty of neat things useful to society.

    #artificial_intelligence #intelligence_artificielle
    #singularity #singularité
    #transhumanism #transhumanisme
    #NBIC
    #kurzweil