• What does #AI_reasoning mean for #Global_health?
    https://redasadki.me/2025/07/16/what-does-ai-reasoning-mean-for-global-health

    When epidemiologists investigate a disease outbreak, they do not just match symptoms to known pathogens. They work through complex chains of evidence, test hypotheses, reconsider assumptions when data does not fit, and sometimes completely change their approach based on new information. This deeply human process of systematic #reasoning is what #Artificial_intelligence systems are now learning to do. This capability represents a fundamental shift from AI that recognizes patterns to AI that can work through complex problems the way a skilled professional would. For those working in #global_health and education, understanding this transformation is essential. The difference between answering and reasoning: To understand this revolution, consider how most AI works today versus how reasoning (...)

    #Artificial_Intelligence

  • #language as #AI’s universal interface: What it means and why it matters
    https://redasadki.me/2025/07/16/language-as-ais-universal-interface-what-it-means-and-why-it-matters

    Imagine if you could control every device, system, and process in the world simply by talking to it in plain English—or any language you speak. No special commands to memorize. No programming skills required. No technical manuals to study. Just explain what you want in your own words, and it happens. This is the transformation #Eric_Schmidt described when he spoke about language becoming the “universal interface” for #Artificial_intelligence. To understand why this matters, we need to step back and see how radically this changes everything. The old way: A tower of Babel. Today, interacting with technology requires learning its language, not the other way around. Consider what you need to know: each system speaks its own language. Humans must constantly translate their intentions into forms (...)

    #Artificial_Intelligence #RAISE_Summit

  • Why #peer_learning is critical to survive the Age of #Artificial_intelligence
    https://redasadki.me/2025/07/16/why-peer-learning-is-critical-to-survive-the-age-of-artificial-intelligenc

    María, a pediatrician in Argentina, works with an AI diagnostic system that can identify rare diseases, suggest treatment protocols, and draft reports in perfect medical Spanish. But something crucial is missing. The AI provides brilliant medical insights, yet María struggles to translate them into action in her community. Then she discovers the missing piece. Through a peer learning network—where health workers develop projects addressing real challenges, review each other’s work, and engage in facilitated dialogue—she connects with other health professionals across Latin America who are learning to work with AI as a collaborative partner. Together, they discover that AI becomes far more useful when combined with their understanding of local contexts, cultural practices, and community (...)

    #Global_health #Artificial_Intelligence #San_Francisco_Consensus

  • #Eric_Schmidt’s #San_Francisco_Consensus about the impact of #Artificial_intelligence
    https://redasadki.me/2025/07/16/eric-schmidts-san-francisco-consensus-about-the-impact-of-artificial-intel

    “We are at the beginning of a new epoch,” Eric Schmidt declared at the RAISE Summit in Paris on 9 July 2025. The former Google CEO’s message carries unusual weight—not necessarily because of his past role leading one of tech’s giants, but because of his current one: advising heads of state and industry on #Artificial_Intelligence. “When I talk to governments, what I tell them is, one, ChatGPT is great, but that was two years ago. Everything’s changed again. You’re not prepared for it. And two, you better get organized around it—the good and the bad.” At the Paris summit, he shared what he calls the “San Francisco Consensus”—a convergence of belief among Silicon Valley’s leaders that within three to six years AI will fundamentally transform every aspect of human activity. Whether one views this (...)

  • The #agentic_AI revolution: what does it mean for #workforce_development?
    https://redasadki.me/2025/07/16/the-agentic-ai-revolution-what-does-it-mean-for-workforce-development

    Imagine hiring an assistant who never sleeps, never forgets, can work on a thousand tasks simultaneously, and communicates with you in your own language. Now imagine having not just one such assistant, but an entire team of them, each specialized in different areas, all coordinating seamlessly to achieve your goals. This is the “agentic revolution”—a transformation where AI systems become agents that can understand objectives, remember context, plan actions, and work together to complete complex tasks. It represents a shift from AI as a tool you use to AI as a workforce that you collaborate with. Understanding AI agents: More than chatbots. When most people think of AI today, they think of ChatGPT or similar systems—you ask a question, you get an answer. That interaction ends, and the (...)

    #Artificial_intelligence #Eric_Schmidt #RAISE_Summit

  • The business of #Artificial_intelligence and the #equity challenge
    https://redasadki.me/2025/06/13/the-business-of-artificial-intelligence-and-the-equity-challenge

    Since 2019, when #The_Geneva_Learning_Foundation (TGLF) launched its first AI pilot project, we have been exploring how the #Second_Machine_Age is reshaping learning. Ahead of the release of the first framework for AI in global health, I had a chance to sit down with a group of Swiss business leaders at the PanoramAI conference in Lausanne on 5 June 2025 to share TGLF’s insights about the significance and potential of #Artificial_Intelligence for global health and humanitarian response. Here is the article posted by the conference to recap a few of the takeaways. The Global Equity Challenge: At the PanoramAI Summit, Reda Sadki, leader of The Geneva Learning Foundation, delivered provocative insights about AI’s impact on global equity and the future of human work. Drawing from (...)

    #Raphaël_Briner

  • Chicago Sun-Times Prints #AI-Generated Summer #Reading_List With #Books That Don’t Exist
    https://www.404media.co/chicago-sun-times-prints-ai-generated-summer-reading-list-with-books-that-d

    The Chicago Sun-Times newspaper’s “Best of Summer” section published over the weekend contains a guide to summer reads that features real authors alongside fake books they did not write. The list was partially generated by #artificial_intelligence, the person who generated it told 404 Media.

    #AI_slop

    Chicago newspaper prints a summer reading list. The problem? The books don’t exist | CBC News
    https://www.cbc.ca/news/world/chicago-sun-times-ai-book-list-1.7539016

    That’s because, while the authors may be real, the books don’t actually exist. And the Chicago Sun-Times is being roasted online for publishing the AI-generated list. The paper initially couldn’t explain how the piece was published.

  • Why #YouTube is obsolete: From linear #video content to AI-mediated multimodal knowledge
    https://redasadki.me/2025/04/06/why-youtube-is-obsolete-from-linear-video-content-to-ai-mediated-multimoda

    Does the educational purpose of video change with AI? The purpose of video in education is undergoing a fundamental transformation in the age of #Artificial_Intelligence. This medium, long established in digital #Learning environments, is changing not just in how we consume it, but in its very role within the learning process. Video has always been a problem in education, presenting significant challenges in educational contexts. Its linear format makes it difficult to skim or scan content. Unlike text, which allows learners to quickly jump between sections, glance at headings, or scan for key information, video requires sequential consumption. This constraint has long been problematic for effective learning. Furthermore, in many regions where our learners are based, (...)

    #Global_health #knowledge_construction #knowledge_consumption #knowledge_theory #learning_theory #linear_content

  • A #Global_health framework for #Artificial_Intelligence as co-worker to support networked learning and #local_action
    https://redasadki.me/2025/01/24/a-global-health-framework-for-artificial-intelligence-as-co-worker-to-supp

    The theme of International Education Day 2025, “AI and education: Preserving human agency in a world of automation,” invites critical examination of how artificial intelligence might enhance rather than replace human capabilities in learning and leadership. #global_health education offers a compelling context for exploring this question, as mounting challenges from climate change to persistent inequities demand new approaches to building collective capability. The promise of connected communities Recent experiences like the Teach to Reach initiative demonstrate the potential of structured peer learning networks. The platform has connected over 60,000 health workers, primarily government workers from districts and facilities across 82 countries, including those serving in conflict zones, (...)

    #AI_agents #climate_change_and_health #immunization #learning_strategy #neglected_tropical_diseases #NTDs

  • EU’s AI Act Falls Short on Protecting Rights at Borders

    Despite years of tireless advocacy by a coalition of civil society and academics (including the author), the European Union’s new law regulating artificial intelligence falls short on protecting the most vulnerable. Late in the night on Friday, Dec. 8, the European Parliament reached a landmark deal on its long-awaited Act to Govern Artificial Intelligence (AI Act). After years of meetings, lobbying, and hearings, the EU member states, Commission, and Parliament agreed on the provisions of the act, awaiting technical meetings and formal approval before the final text of the legislation is released to the public. A so-called “global first” racing ahead of the United States, the EU’s bill is the first regional attempt to create omnibus AI legislation. Unfortunately, the bill once again does not sufficiently recognize the vast human rights risks of border technologies and should go much further in protecting the rights of people on the move.

    From surveillance drones patrolling the Mediterranean to vast databases collecting sensitive biometric information to experimental projects like robo-dogs and AI lie detectors, every step of a person’s migration journey is now impacted by risky and unregulated border technology projects. These technologies are fraught with privacy infringements and discriminatory decision-making, and even impact the life, liberty, and security of persons seeking asylum. They also undermine procedural rights, muddying responsibility for opaque and discretionary decisions and lacking clear mechanisms of redress when something goes wrong.

    The EU’s AI Act could have been a landmark global standard for the protection of the rights of the most vulnerable. But once again, it does not provide the necessary safeguards around border technologies. For example, while the Act recognizes that some border technologies could fall under the high-risk category, it is not yet clear which border tech projects, if any, will end up in the final high-risk category subject to transparency obligations, human rights impact assessments, and greater scrutiny. The Act also has various carveouts and exemptions, for example for matters of national security, which can encapsulate technologies used in migration and border enforcement. And crucial discussions around bans on high-risk technologies in migration never even made it into the Parliament’s final deal terms. Even the bans that have been announced, for example around emotion recognition, apply only in the workplace and education, not at the border. Moreover, what exactly is banned remains to be seen; outstanding questions to be answered in the final text include the parameters around predictive policing as well as the exceptions to the ban on real-time biometric surveillance, which is still allowed in instances of a “threat of terrorism,” a targeted search for victims, or the prosecution of serious crimes. It is also particularly troubling that the AI Act explicitly leaves room for technologies for which Frontex, the EU’s border force, has shown particular appetite. Frontex released its AI strategy on Nov. 9, signaling interest in predictive tools and situational analysis technology. These tools, when used without safeguards, can facilitate illegal border interdiction operations, including “pushbacks,” over which the agency has been investigated. The Protect Not Surveil Coalition has been trying to influence European policymakers to ban predictive analytics used for the purposes of border enforcement. Unfortunately, no migration tech bans at all appear in the final Act.

    The lack of bans and red lines on high-risk uses of border technologies in the EU’s position stands in opposition to years of academic research as well as international guidance, such as that of then-U.N. Special Rapporteur on contemporary forms of racism, E. Tendayi Achiume. For example, a recently released report by the University of Essex and the UN Office of the High Commissioner for Human Rights (OHCHR), which I co-authored with Professor Lorna McGregor, argues for a human rights-based approach to digital border technologies, including a moratorium on the highest-risk border technologies, such as border surveillance, which pushes people on the move into dangerous terrain and can even assist with illegal border enforcement operations such as forced interdictions, or “pushbacks.” The EU did not take even a fraction of this position on border technologies.

    While it is promising to see strict regulation of high-risk AI systems such as self-driving cars or medical equipment, why are the risks of unregulated AI technologies at the border allowed to continue unabated? My work over the last six years spans borders from the U.S.-Mexico corridor to the fringes of Europe to East Africa and beyond, and I have witnessed time and again how technological border violence operates in an ecosystem replete with the criminalization of migration, anti-migrant sentiments, overreliance on the private sector in an increasingly lucrative border industrial complex, and deadly practices of border enforcement, leading to thousands of deaths at borders. From vast biometric data collected without consent in refugee camps, to algorithms replacing visa officers and making discriminatory decisions, to AI lie detectors used at borders to discern apparent liars, the rollout of unregulated technologies is ever-growing. The opaque and discretionary world of border enforcement and immigration decision-making is built on societal structures which are underpinned by intersecting systemic racism and historical discrimination against people migrating, allowing high-risk technological experimentation to thrive at the border.

    The EU’s weak governance of border technologies will allow more and more experimental projects to proliferate, setting a global standard for how governments approach migration technologies. The United States is no exception, and in an upcoming election year in which migration will once again be in the spotlight, there does not seem to be much incentive to regulate technologies at the border. The Biden administration’s recently released Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence does not offer a regulatory framework for these high-risk technologies, nor does it discuss their impacts on people migrating or take a human rights-based approach to these projects. Unfortunately, the EU often sets a precedent for how other countries govern technology. With the weak protections offered by the EU AI Act on border technologies, it is no surprise that the U.S. government feels emboldened to do as little as possible to protect people on the move from harmful technologies.

    But real people are already at the centre of border technologies. People like Mr. Alvarado, a young husband and father from Latin America in his early 30s, who perished mere kilometers away from a major highway in Arizona, in search of a better life. I visited his memorial site after hours of trekking through the beautiful yet deadly Sonora desert with a search-and-rescue group. For my upcoming book, The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence, I was documenting the growing surveillance dragnet of the so-called smart border that pushes people to take increasingly dangerous routes, leading to increasing loss of life at the U.S.-Mexico border. Border technologies as a deterrent simply do not work. People desperate for safety – and exercising their internationally protected right to asylum – will not stop coming. They will instead take more circuitous routes, and scholars like Geoffrey Boyce and Samuel Chambers have already documented a threefold increase in deaths at the U.S.-Mexico frontier as the so-called smart border expands. In the not-so-distant future, will people like Mr. Alvarado be pursued by the Department of Homeland Security’s recently announced robo-dogs, a military-grade technology that is sometimes armed?

    It is no accident that more robust governance around migration technologies is not forthcoming. Border spaces increasingly serve as testing grounds for new technologies, places where regulation is deliberately limited and where an “anything goes” frontier attitude informs the development and deployment of surveillance at the expense of people’s lives. There is also big money to be made in developing and selling high-risk technologies. Why does the private sector get to determine, time and again, what we innovate on and why, in often problematic public-private partnerships which states are increasingly keen to make in today’s global AI arms race? For example, whose priorities really matter when we choose to create violent sound cannons or AI-powered lie detectors at the border instead of using AI to identify racist border guards? Technology replicates power structures in society. Unfortunately, the viewpoints of those most affected are routinely excluded from the discussion, particularly around no-go zones or ethically fraught uses of technology.

    Seventy-seven border walls and counting are now cutting across the landscape of the world. They are both physical and digital, justifying broader surveillance under the guise of detecting illegal migrants and catching terrorists, creating suitable enemies we can all rally around. The use of military, or quasi-military, autonomous technology bolsters the connection between immigration and national security. None of these technologies, projects, and sets of decisions are neutral. All technological choices – choices about what to count, who counts, and why – have an inherently political dimension and replicate biases that render certain communities at risk of being harmed, communities that are already under-resourced, discriminated against, and vulnerable to the sharpening of borders all around the world.

    As is once again clear with the EU’s AI Act and the direction of U.S. policy on AI so far, the impacts on real people seem to have been forgotten. Kowtowing to industry and making concessions to the private sector in the name of not stifling innovation does not protect people, especially those most marginalized. Human rights standards and norms are the bare minimum in the growing panopticon of border technologies. More robust and enforceable governance mechanisms are needed to regulate high-risk experiments in borders and migration management, including, at the very least, a moratorium on violent technologies and red lines around military-grade technologies, polygraph machines, and predictive analytics used for border interdictions. These laws and governance mechanisms must also include efforts at local, regional, and international levels, as well as global cooperation and commitment to a human rights-based approach to the development and deployment of border technologies. However, for more robust policymaking on border technologies to actually effect change, people with lived experience of migration must also be in the driver’s seat when interrogating both the negative impacts of technology and the creative solutions that innovation can bring to the complex stories of human movement.

    https://www.justsecurity.org/90763/eus-ai-act-falls-short-on-protecting-rights-at-borders

    #droits #frontières #AI #IA #intelligence_artificielle #Artificial_Intelligence_Act #AI_act #UE #EU #drones #Méditerranée #mer_Méditerranée #droits_humains #technologie #risques #surveillance #discrimination #transparence #contrôles_migratoires #Frontex #push-backs #refoulements #privatisation #business #complexe_militaro-industriel #morts_aux_frontières #biométrie #données #racisme #racisme_systémique #expérimentation #smart_borders #frontières_intelligentes #pouvoir #murs #barrières_frontalières #terrorisme

    • The Walls Have Eyes. Surviving Migration in the Age of Artificial Intelligence

      A chilling exposé of the inhumane and lucrative sharpening of borders around the globe through experimental surveillance technology

      “Racism, technology, and borders create a cruel intersection . . . more and more people are getting caught in the crosshairs of an unregulated and harmful set of technologies touted to control borders and ‘manage migration,’ bolstering a multibillion-dollar industry.” —from the introduction

      In 2022, the U.S. Department of Homeland Security announced it was training “robot dogs” to help secure the U.S.-Mexico border against migrants. Four-legged machines equipped with cameras and sensors would join a network of drones and automated surveillance towers—nicknamed the “smart wall.” This is part of a worldwide trend: as more people are displaced by war, economic instability, and a warming planet, more countries are turning to A.I.-driven technology to “manage” the influx.

      Based on years of researching borderlands across the world, lawyer and anthropologist Petra Molnar’s The Walls Have Eyes is a truly global story—a dystopian vision turned reality, where your body is your passport and matters of life and death are determined by algorithm. Examining how technology is being deployed by governments on the world’s most vulnerable with little regulation, Molnar also shows us how borders are now big business, with defense contractors and tech start-ups alike scrambling to capture this highly profitable market.

      With a foreword by former U.N. Special Rapporteur E. Tendayi Achiume, The Walls Have Eyes reveals the profound human stakes, foregrounding the stories of people on the move and the daring forms of resistance that have emerged against the hubris and cruelty of those seeking to use technology to turn human beings into problems to be solved.

      https://thenewpress.com/books/walls-have-eyes
      #livre #Petra_Molnar

  • Google wrongly labels as child abuse photos that father emails to doctor on request

    https://www.nytimes.com/2022/08/21/technology/google-surveillance-toddler-photo.html

    The nurse said to send photos so the doctor could review them in advance.

    Mark’s wife grabbed her husband’s phone and texted a few high-quality close-ups of their son’s groin area to her iPhone so she could upload them to the health care provider’s messaging system. In one, Mark’s hand was visible, helping to better display the swelling. Mark and his wife gave no thought to the tech giants that made this quick capture and exchange of digital data possible, or what those giants might think of the images.

    [...]

    Two days after taking the photos of his son, Mark’s phone made a blooping notification noise: His account had been disabled because of “harmful content” that was “a severe violation of Google’s policies and might be illegal.” A “learn more” link led to a list of possible reasons, including “child sexual abuse & exploitation.”

    The photos were automatically uploaded from his phone to his Google account...

    [...]

    A human content moderator for Google would have reviewed the photos after they were flagged by the artificial intelligence to confirm they met the federal definition of child sexual abuse material. When Google makes such a discovery, it locks the user’s account, searches for other exploitative material and, as required by federal law, makes a report to the CyberTipline at the National Center for Missing and Exploited Children.

    Even after the police cleared him, Google has not returned his account, resulting in the loss of more than 10 years of data: contacts, emails, and photos. Google has not provided a statement or explanation.

    #privacy
    #artificial_intelligence

  • Airbus Artificial Intelligence Challenges
    AI Gym
    https://aigym-v.airbus.com/contest/5bc834b8ba7add0027f3ac5a

    Open: 18 Oct 2018 | Closed: 01 Jun 2019

    Interested parties, ranging from established companies, start-ups, research labs, and schools to individuals, can express their interest in registering for the challenge by email to timeserieschallenge.request@airbus.com any time until the end of 2018.

    CONTEXT
    Technologies at the intersection of #Artificial_Intelligence and #Internet_of_Things / #Big_Data are pushing the boundaries of the state of the art in #Time_Series_Analysis and #Predictive_Maintenance.

    #AIRBUS is launching this scientific challenge on anomaly detection in time series data in order to:
    ● scout for top players in the field of Time Series Analysis
    ● encourage the research community to tackle the specific issues related to the data generated by the aerospace industry during tests and in operations.

    OVERVIEW
    Data collected from our platforms is mostly considered normal. Due to the high quality of our products and the aerospace context, faults and failures are very rare, and we cannot afford to wait until hundreds of examples of new fault types have accumulated before identifying and anticipating them. We are interested in detecting unexpected changes in the behavior of the systems we monitor, with a rapid reaction time in analyzing suspect behavior.

    TECHNICAL SCOPE
    We set up a three stage challenge to benchmark unsupervised detection algorithms, based on three use cases:

    1) Business Domain: Helicopters // number of input sensors: 1 // Sampling Frequency: 1000 Hz // expected output: classify sequence as OK/KO

    2) Business Domain: Satellites // number of input sensors: 30 // Sampling Frequency: 1000 Hz // expected output: classify sequence as OK/KO

    3) Business Domain: Commercial Aircraft // number of context sensors: 81 // number of sensors for anomaly detection: 9 // Sampling Frequency: 8 Hz // expected output: identify anomalous time windows on sensors of interest

    We welcome any and all working technical approaches, ranging from statistics (e.g. SCP) to more established machine learning techniques (e.g. Isolation Forest) to modern AI (e.g. deep learning with LSTMs).
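    As an illustration of what an unsupervised OK/KO detector for the single-sensor helicopter use case might look like, here is a minimal pure-Python sketch (a toy of my own, not an Airbus baseline): it learns the spread of simple per-sequence features from normal training data and flags any sequence whose features deviate beyond k sigma.

```python
import statistics

def fit_normal_profile(sequences):
    """Learn mean/spread of simple per-sequence features from normal data."""
    feats = [(statistics.mean(s), statistics.pstdev(s)) for s in sequences]
    means = [m for m, _ in feats]
    stds = [sd for _, sd in feats]
    return {
        "mean": (statistics.mean(means), statistics.pstdev(means) or 1e-9),
        "std": (statistics.mean(stds), statistics.pstdev(stds) or 1e-9),
    }

def classify(sequence, profile, k=3.0):
    """Return 'KO' if any feature deviates more than k sigma from normal."""
    feats = {"mean": statistics.mean(sequence), "std": statistics.pstdev(sequence)}
    for name, value in feats.items():
        mu, sigma = profile[name]
        if abs(value - mu) / sigma > k:
            return "KO"
    return "OK"

# Train on "normal" periodic signals, then test on a drifted one.
normal = [[10 + 0.1 * ((i * 7) % 5) for i in range(100)] for _ in range(20)]
profile = fit_normal_profile(normal)
print(classify(normal[0], profile))     # prints: OK
print(classify([25.0] * 100, profile))  # prints: KO
```

Real entries would use far richer features (spectral content, autocorrelation) or sequence models, but the shape of the problem, fitting on normal data only and scoring deviations, is the same.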

    TIMELINE
    The challenge will officially start at the beginning of 2019, with a first training phase in Q1 2019. The second phase will be a shorter evaluation in Q2 2019. A restitution workshop will be organised in June 2019.

    #IA
    #AI #IoT

  • Beginning Artificial Intelligence with the Raspberry Pi

    Gain a gentle introduction to the world of Artificial Intelligence (AI) using the Raspberry Pi as the computing platform. Most of the major AI topics will be explored, including expert systems, machine learning both shallow and deep, fuzzy logic control, and more!

    AI in action will be demonstrated using the Python language on the Raspberry Pi. The Prolog language will also be introduced and used to demonstrate fundamental AI concepts. In addition, the Wolfram language will be used as part of the deep machine learning demonstrations.

    A series of projects will walk you through how to implement AI concepts with the Raspberry Pi. Minimal expense is needed for the projects, as only a few sensors and actuators are required. Beginners and hobbyists can jump right in to creating AI projects with the Raspberry Pi using this book.

    What You’ll Learn
    What AI is and―as importantly―what it is not
    Inference and expert systems
    Machine learning both shallow and deep
    Fuzzy logic and how to apply it to an actual control system
    When AI might be appropriate to include in a system
    Constraints and limitations of the Raspberry Pi AI implementation
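    The fuzzy logic topic above can be illustrated with a tiny, self-contained Python sketch (a hypothetical fan controller of my own, not an example from the book): fuzzify a crisp temperature into membership degrees, apply rules, and defuzzify by weighted average.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 at a and c, 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temp_c):
    """Fuzzy controller: map a temperature in Celsius to a fan speed (0-100%)."""
    # Fuzzify: degree of membership in each linguistic temperature set.
    cold = tri(temp_c, -10, 5, 18)
    warm = tri(temp_c, 15, 22, 30)
    hot = tri(temp_c, 26, 35, 50)
    # Rules pair each membership degree with an output speed;
    # defuzzify as the membership-weighted average of the outputs.
    rules = [(cold, 0.0), (warm, 40.0), (hot, 100.0)]
    total = sum(w for w, _ in rules)
    return sum(w * s for w, s in rules) / total if total else 0.0

print(fan_speed(22))  # prints: 40.0 (fully "warm")
```

Temperatures between sets (e.g. 28 °C, partly warm and partly hot) produce smoothly interpolated speeds, which is the practical appeal of fuzzy control over hard thresholds.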

    Who This Book Is For
    Hobbyists, makers, engineers involved in designing autonomous systems and wanting to gain an education in fundamental AI concepts, and non-technical readers who want to understand what AI is and how it might affect their lives.

    Table of Contents
    Chapter 1: Introduction to Artificial Intelligence
    Chapter 2: Basic AI Concepts
    Chapter 3: Expert System Demonstrations
    Chapter 4: Games
    Chapter 5: Fuzzy Logic System
    Chapter 6: Machine Learning
    Chapter 7: Machine Learning: Artificial Neural Networks
    Chapter 8: Machine Learning: Deep Learning
    Chapter 9: Machine Learning: Practical ANN Demonstrations
    Chapter 10: Evolutionary Computing
    Chapter 11: Behavior-Based Robotics
    Appendix A: Build Instructions for the Alfie Robot Car

    https://www.amazon.fr/Beginning-Artificial-Intelligence-Raspberry-Pi/dp/1484227425

    #book #livre
    #AI #IA #artificial_intelligence #intelligence_artificielle
    #Raspberry_Pi #Python

  • How Big data mines personal info to manipulate voters and craft fake news
    (June 2017, Nina Burleigh)

    #Facebook, #Cambridge_Analytica, #artificial_intelligence #big_data #psychographics #OCEAN #surveillance

    http://www.newsweek.com/2017/06/16/big-data-mines-personal-info-manipulate-voters-623131.html

    “It’s my ([Alexander Nix]) privilege to speak to you today about the power of Big Data and psychographics in the electoral process,” he began. As he clicked through slides, he explained how Cambridge Analytica can appeal directly to people’s emotions, bypassing cognitive roadblocks, thanks to the oceans of data it can access on every man and woman in the country.

    After describing Big Data, Nix talked about how Cambridge was mining it for political purposes, to identify “mean personality” and then segment personality types into yet more specific subgroups, using other variables, to create ever smaller groups susceptible to precisely targeted messages.

    [...]

    Big Data, artificial intelligence and algorithms designed and manipulated by strategists like the folks at Cambridge have turned our world into a Panopticon

    [...]

    Cambridge made tens of millions of “friends” by first employing low-wage tech workers to hand over their Facebook profiles: its software spiders through Facebook posts, friends, and likes and, within a matter of seconds, spits out a personality profile, including a score on the so-called OCEAN psychological tendencies test (openness, conscientiousness, extraversion, agreeableness and neuroticism)

    [...]

    Facebook was even more useful for Trump, with its online behavioral data on nearly 2 billion people around the world, each of whom is precisely accessible to strategists and marketers who can afford to pay for the peek. Team Trump created a 220 million–person database, nicknamed Project Alamo, using voter registration records, gun ownership records, credit card purchase histories, and the monolithic data vaults of Experian PLC, Datalogix, Epsilon, and Acxiom Corporation.

    [...]

    Among the tools Facebook offers advertisers is its Lookalike Audiences program. An advertiser (or a political campaign manager) can come to Facebook with a small group of known customers or supporters, and ask Facebook to expand it. Using its access to billions of posts and pictures, likes and contacts, Facebook can create groups of people who are “like” that initial group, and then target advertising made specifically to influence it.
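    The mechanism described above — expand a small seed group by ranking everyone else by similarity to it — can be sketched with set similarity over interests. All names and data here are invented; Facebook's actual system is proprietary:

```python
# Illustrative "lookalike" expansion: rank non-seed users by their
# average Jaccard similarity to the seed group's interest sets.

def jaccard(a, b):
    """Jaccard similarity between two sets of interests."""
    return len(a & b) / len(a | b) if a | b else 0.0

def lookalikes(seed, population, k=2):
    """Return the k users whose interests are most similar to the seed."""
    scored = []
    for name, interests in population.items():
        if name in seed:
            continue
        avg = sum(jaccard(interests, seed[s]) for s in seed) / len(seed)
        scored.append((avg, name))
    return [name for _, name in sorted(scored, reverse=True)[:k]]

seed = {"ann": {"guns", "trucks"}, "bob": {"guns", "fishing"}}
population = {**seed,
              "carol": {"guns", "trucks", "fishing"},
              "dan": {"opera", "yoga"}}
print(lookalikes(seed, population, k=1))  # carol ranks above dan
```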

    [...]

    By 2012, there had been huge advances in what Big Data, social media and AI could do together. That year, Facebook conducted a happy-sad emotional manipulation experiment, splitting a million people into two groups and manipulating the posts so that one group received happy updates from friends and another received sad ones. They then ran the effects through algorithms and proved—surprise—that they were able to affect people’s moods. (Facebook, which has the greatest storehouse of personal behavior data ever amassed, is still conducting behavioral research, mostly, again, in the service of advertising and making money.)

    [...]

    Psychographic algorithms allow strategists to target not just angry racists but also the most intellectually gullible individuals, people who make decisions emotionally rather than cognitively. For Trump, such voters were the equivalent of diamonds in a dark mine. Cambridge apparently helped with that too. A few weeks before the election, in a Sky News report on the company, an employee was actually shown on camera poring over a paper on “The Need for Cognition Scale,” which, like the OCEAN test, can be applied to personal data, and which measures the relative importance of thinking versus feeling in an individual’s decision-making.

    [...]

    Big Data technology has so far outpaced legal and regulatory frameworks that discussions about the ethics of its use for political purposes are still rare. No senior member of Congress or administration official in Washington has placed a very high priority on asking what psychographic data mining means for privacy, nor about the ethics of political messaging based on evading cognition or rational thinking, nor about the AI role in mainstreaming racist and other previously verboten speech.

    [...]

    After months of investigations and increasingly critical articles in the British press (especially by The Guardian’s Carole Cadwalladr, who has called Cambridge Analytica’s work the framework for an authoritarian surveillance state, and whose reporting Cambridge has since legally challenged), the British Information Commissioner’s Office (ICO), an independent agency that monitors privacy rights and adherence to the U.K.’s strict laws, announced May 17 that it is looking into Cambridge and SCL for their work in the Brexit vote and other elections.

    [...]

    Now in the White House, Kushner heads the administration’s Office of Technology and Innovation. It will focus on “technology and data,” the administration stated. Kushner said he plans to use it to help run government like a business, and to treat American citizens “like customers.”

  • JPMorgan Software Does in Seconds What Took Lawyers 360,000 Hours

    The program, called COIN, for Contract Intelligence, does in seconds and without errors the mind-numbing job of interpreting commercial-loan agreements, something that consumed 360,000 hours of work each year by lawyers and loan officers.

    https://www.bloomberg.com/news/articles/2017-02-28/jpmorgan-marshals-an-army-of-developers-to-automate-high-finance

    Another program called X-Connect, which went into use in January, examines e-mails to help employees find colleagues who have the closest relationships with potential prospects and can arrange introductions.

    [...]

    While growing numbers of people in the industry worry such advancements might someday take their jobs, many Wall Street personnel are more focused on benefits. A survey of more than 3,200 financial professionals by recruiting firm Options Group last year found a majority expect new technology will improve their careers, for example by improving workplace performance.

    [...]

    the company keeps tabs on 2,000 technology ventures, using about 100 in pilot programs that will eventually join the firm’s growing ecosystem of partners. For instance, the bank’s machine-learning software was built with Cloudera Inc., a software firm that JPMorgan first encountered in 2009.

    #artificial_intelligence #intelligence_artificielle #AI #IA
    #finance

  • AI Predicts Autism From Infant Brain Scans

    Scientists at the University of North Carolina have developed an algorithm that predicts autism from brain scans of babies between 6 and 12 months old. The algorithm’s predictions proved correct about 81 percent of the time.

    http://spectrum.ieee.org/the-human-os/biomedical/imaging/ai-predicts-autism-from-infant-brain-scans

    #AI #IA #artificial_intelligence #intelligence_artificielle
    #deep_learning
    #autism
    #brain

  • Facebook open-sources its AI software for image recognition

    https://research.facebook.com/blog/learning-to-segment

    The main new algorithms driving our advances are the DeepMask segmentation framework coupled with our new SharpMask segment refinement module. Together, they have enabled FAIR’s [Facebook AI Research] machine vision systems to detect and precisely delineate every object in an image. The final stage of our recognition pipeline uses a specialised convolutional net, which we call MultiPathNet, to label each object mask with the object type it contains (e.g. person, dog, sheep). We will return to the details shortly.

    We’re making the code for DeepMask+SharpMask as well as MultiPathNet — along with our research papers and demos related to them — open and accessible to all, with the hope that they’ll help rapidly advance the field of machine vision
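    The pipeline described above has three stages: propose candidate masks (DeepMask), refine their boundaries (SharpMask), then classify each refined mask (MultiPathNet). A structural sketch of that flow, with toy stand-in functions in place of the real convolutional networks:

```python
# Structural sketch of the propose -> refine -> classify pipeline.
# The functions here are placeholders, not the released models.

def propose_masks(image):
    """DeepMask stage: emit coarse object-mask candidates."""
    return [{"pixels": region} for region in image["regions"]]

def refine_mask(mask):
    """SharpMask stage: sharpen the mask boundary (no-op placeholder)."""
    return {**mask, "refined": True}

def classify_mask(mask):
    """MultiPathNet stage: label the object in the mask (toy lookup)."""
    labels = {"fluffy_region": "sheep", "upright_region": "person"}
    return labels.get(mask["pixels"], "unknown")

def segment(image):
    """Run every proposed mask through refinement and classification."""
    results = []
    for mask in propose_masks(image):
        refined = refine_mask(mask)
        results.append((refined, classify_mask(refined)))
    return results

image = {"regions": ["fluffy_region", "upright_region"]}
for mask, label in segment(image):
    print(label)  # sheep, then person
```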

    [...]

    In addition, our next challenge will be to apply these techniques to video, where objects are moving, interacting, and changing over time. We’ve already made some progress with computer vision techniques to watch videos and understand and classify what’s in them in real time. Real-time classification could help surface relevant and important Live videos on Facebook, while applying more refined techniques to detect scenes, objects, and actions over space and time could one day allow for real-time narration. We’re excited to continue pushing the state of the art and providing better experiences on Facebook for everyone.

    DeepMask: Learning to Segment Object Candidates.
    Pedro O. Pinheiro, Ronan Collobert, Piotr Dollár (NIPS 2015)

    https://arxiv.org/pdf/1506.06204v2.pdf
    https://arxiv.org/abs/1506.06204

    SharpMask: Learning to Refine Object Segments.
    Pedro O. Pinheiro, Tsung-Yi Lin, Ronan Collobert, Piotr Dollár (ECCV 2016)

    https://arxiv.org/pdf/1603.08695v2.pdf
    https://arxiv.org/abs/1603.08695

    MultiPathNet: A Multipath Network for Object Detection.
    Sergey Zagoruyko, Adam Lerer, Tsung-Yi Lin, Pedro O. Pinheiro, Sam Gross, Soumith Chintala, Piotr Dollár (BMVC 2016)

    https://arxiv.org/pdf/1604.02135v2.pdf
    https://arxiv.org/abs/1604.02135

    #AI #Artificial_Intelligence
    #machine_vision #image_recognition

  • Machine Learning algorithm fed by Instagram reveals predictive markers of depression

    http://www.digitaltrends.com/social-media/ai-program-uses-instagram-to-diagnose-depression

    A new artificial intelligence program can pick up on the early signs of depression before humans (and even humans who are general practitioners) can — and just by using Instagram. A team of researchers from Harvard and the University of Vermont recently developed a machine learning program that correctly identified which Instagram users were clinically depressed with 70 percent accuracy.

    The study by Andrew G. Reece and Christopher M. Danforth:

    http://arxiv.org/pdf/1608.03282v2.pdf

    Abstract: Using Instagram data from 166 individuals, we applied machine learning tools to successfully identify markers of depression. Statistical features were computationally extracted from 43,950 participant Instagram photos, using color analysis, metadata components, and algorithmic face detection.
    Resulting models outperformed general practitioners’ average diagnostic success rate for depression. These results held even when the analysis was restricted to posts made before depressed individuals were first diagnosed. Photos posted by depressed individuals were more likely to be bluer, grayer, and darker. Human ratings of photo attributes (happy, sad, etc.) were weaker predictors of depression, and were uncorrelated with computationally generated features. These findings suggest new avenues for early screening and detection of mental illness.
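    The color features the abstract mentions (bluer, grayer, darker photos) can be computed directly from pixel values. A minimal sketch, using invented sample pixels; the actual study also used metadata and face detection, and fed such features to a statistical classifier:

```python
# Toy color-feature extraction from raw RGB pixels, illustrating the
# kind of photo statistics the study describes.

def color_features(pixels):
    """pixels: list of (r, g, b) tuples with components in 0..255."""
    n = len(pixels)
    mean_r = sum(p[0] for p in pixels) / n
    mean_g = sum(p[1] for p in pixels) / n
    mean_b = sum(p[2] for p in pixels) / n
    brightness = (mean_r + mean_g + mean_b) / 3      # low => "darker"
    blueness = mean_b - (mean_r + mean_g) / 2        # high => "bluer"
    spread = max(mean_r, mean_g, mean_b) - min(mean_r, mean_g, mean_b)
    return {"brightness": brightness, "blueness": blueness,
            "color_spread": spread}                  # low spread => "grayer"

dark_blue_photo = [(10, 20, 80), (20, 30, 90)]
print(color_features(dark_blue_photo))
```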

    #machine_learning #artificial_intelligence #AI #face_detection
    #depression

  • Seymour Papert, computer scientist, born 29 February 1928 ; died 31 July 2016 | The Guardian
    https://www.theguardian.com/education/2016/aug/03/seymour-papert-obituary

    Child’s play had been considered largely inconsequential, but Piaget saw that it was an essential part of a child’s cognitive development. Children were “learning by doing”. Today’s educational toy industry started from there.

    Papert understood that mathematics was abstract and theoretical, and that was how it was taught to children. That was why most of them did not understand it. The answer, he thought, was to give children a physical way to think of mathematical ideas.

    #jeu #enseignement #informatique #interactivité #matérialisation #pionniers #logo #lego

    (I would welcome a more interesting text.)

  • #Facebook is using AI to make detailed maps of where people live | The Verge
    http://www.theverge.com/2016/2/22/11075456/facebook-population-density-maps-internet-org

    The project is part of Facebook’s Connectivity Lab, the technical arm of its #Internet.org initiative that deals with #drones, #satellites, and lasers for delivering #internet to rural areas and developing countries. With better maps, the company is able to determine whether Wi-Fi hotspots or cellular technologies are better for bringing people online — and helping them sign up for Facebook naturally.

    Facebook analyzed 20 countries covering 21.6 million square kilometers.

    To generate the maps, Connectivity Lab worked with Facebook’s data science division, infrastructure unit, and #machine_learning and #artificial_intelligence groups.

    OK.

  • #RIP Marvin Minsky, dead at 88

    https://en.wikipedia.org/wiki/Marvin_Minsky

    “Intelligence is not the product of any singular mechanism, but comes from the managed interaction of a diverse variety of resourceful agents.”

    (Minsky in The Society of Mind)

    His HTML 1.0 website:

    http://web.media.mit.edu/~minsky

    In 1952 he also invented what he called The Most Useless Machine Ever.
    https://www.youtube.com/watch?v=Z86V_ICUCD4

    In French:

    http://www.lemonde.fr/disparitions/article/2016/01/26/marvin-minsky-pionnier-de-l-intelligence-artificielle-est-mort_4854155_3382.

    #artificial_intelligence #intelligence_artificielle
    #GOFAI
    #Marvin_Minsky
    #MIT

  • Google uses artificial intelligence for the 15% of queries it has never seen before

    In early 2014, Google acquired #DeepMind, a company specializing in artificial intelligence, for €365M. It was essentially a talent acquisition. That is where Google’s technology lead, Demis Hassabis, comes from; nearly a year later he announced a cooperation with the University of Oxford to deepen their machine-learning algorithms.

    The fruit of their research is called #RankBrain.
    When Google does not recognize a word or phrase, RankBrain tries to guess which word or phrase might have a similar meaning. This way, the search engine is better able to answer entirely new queries, as well as ambiguous questions.

    http://www.bloomberg.com/news/articles/2015-10-26/google-turning-its-lucrative-web-search-over-to-ai-machines

    For the past few months, a “very large fraction” of the millions of queries a second that people type into the company’s search engine have been interpreted by an artificial intelligence system, nicknamed RankBrain, said Greg Corrado, a senior research scientist with the company, outlining for the first time the emerging role of AI in search.

    RankBrain uses artificial intelligence to embed vast amounts of written language into mathematical entities — called vectors — that the computer can understand. If RankBrain sees a word or phrase it isn’t familiar with, the machine can make a guess as to what words or phrases might have a similar meaning and filter the result accordingly, making it more effective at handling never-before-seen search queries.
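    The vector idea described above — embed terms as vectors, then guess which known term an unfamiliar one resembles — can be illustrated with cosine similarity. The embeddings below are invented toy values; real systems learn them from massive text corpora:

```python
# Toy illustration of embedding similarity: find the known term whose
# vector is closest (by cosine) to a given term's vector.
import math

EMBEDDINGS = {
    "car":        [0.9, 0.1, 0.0],
    "automobile": [0.85, 0.15, 0.05],
    "banana":     [0.0, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def most_similar(word):
    """Return the known term closest to `word` in embedding space."""
    vec = EMBEDDINGS[word]
    others = {w: v for w, v in EMBEDDINGS.items() if w != word}
    return max(others, key=lambda w: cosine(vec, others[w]))

print(most_similar("automobile"))  # -> car
```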

    [...]

    The addition of RankBrain to search is part of a half-decade-long push by Google into AI, as the company seeks to embed the technology into every aspect of its business. “Machine learning is a core transformative way by which we are rethinking everything we are doing,”

    #machine_learning
    #artificial_intelligence #Intelligence_artificielle

  • Will Advances in Technology Create a Jobless Future? | MIT Technology Review
    http://www.technologyreview.com/featuredstory/538401/who-will-own-the-robots

    We’re in the midst of a jobs crisis, and rapid advances in AI and other technologies may be one culprit. How can we get better at sharing the wealth that technology creates?

    #technologie #emplois #robots