• What AI still can’t do - MIT Technology Review
    https://www.technologyreview.com/s/615189/what-ai-still-cant-do

    In less than a decade, computers have become extremely good at diagnosing diseases, translating languages, and transcribing speech. They can outplay humans at complicated strategy games, create photorealistic images, and suggest useful replies to your emails.

    Yet despite these impressive achievements, artificial intelligence has glaring weaknesses.

    Machine-learning systems can be duped or confounded by situations they haven’t seen before. A self-driving car gets flummoxed by a scenario that a human driver could handle easily. An AI system laboriously trained to carry out one task (identifying cats, say) has to be taught all over again to do something else (identifying dogs). In the process, it’s liable to lose some of the expertise it had in the original task. Computer scientists call this problem “catastrophic forgetting.”

    These shortcomings have something in common: they exist because AI systems don’t understand causation. They see that some events are associated with other events, but they don’t ascertain which things directly make other things happen. It’s as if you knew that the presence of clouds made rain likelier, but you didn’t know clouds caused rain.
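
    To make the article’s distinction concrete, here is a toy sketch in plain Python (it is not from the article, and every probability is invented): in a small structural causal model where clouds cause both rain and a falling barometer, conditioning on an observed low barometer is very different from intervening to force the barometer down.

    ```python
    import random

    def sample(intervene_barometer=None):
        """One draw from a toy structural causal model:
        clouds -> rain and clouds -> barometer; the barometer never causes rain."""
        clouds = random.random() < 0.4
        rain = random.random() < (0.8 if clouds else 0.1)
        if intervene_barometer is None:
            barometer_low = random.random() < (0.9 if clouds else 0.2)
        else:
            barometer_low = intervene_barometer  # do(barometer = low): its normal causes are cut
        return rain, barometer_low

    N = 100_000
    obs = [sample() for _ in range(N)]
    p_obs = sum(r for r, b in obs if b) / sum(1 for _, b in obs if b)  # P(rain | barometer low)
    do = [sample(intervene_barometer=True) for _ in range(N)]
    p_do = sum(r for r, _ in do) / N                                   # P(rain | do(barometer low))
    print(f"P(rain | barometer low)     = {p_obs:.2f}")  # about 0.62: the reading predicts rain
    print(f"P(rain | do(barometer low)) = {p_do:.2f}")   # about 0.38: forcing the reading changes nothing
    ```

    The observed association is informative only because both variables share a cause; a system that learns the first number but has no notion of the second is in exactly the position the article describes.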

    But there’s a growing consensus that progress in AI will stall if computers don’t get better at wrestling with causation. If machines could grasp that certain things lead to other things, they wouldn’t have to learn everything anew all the time—they could take what they had learned in one domain and apply it to another. And if machines could use common sense we’d be able to put more trust in them to take actions on their own, knowing that they aren’t likely to make dumb errors.

    Pearl’s work has also led to the development of causal Bayesian networks—software that sifts through large amounts of data to detect which variables appear to have the most influence on other variables. For example, GNS Healthcare, a company in Cambridge, Massachusetts, uses these techniques to advise researchers about experiments that look promising.

    In one project, GNS worked with researchers who study multiple myeloma, a kind of blood cancer. The researchers wanted to know why some patients with the disease live longer than others after getting stem-cell transplants, a common form of treatment. The software churned through data with 30,000 variables and pointed to a few that seemed especially likely to be causal. Biostatisticians and experts in the disease zeroed in on one in particular: the level of a certain protein in patients’ bodies. Researchers could then run a targeted clinical trial to see whether patients with the protein did indeed benefit more from the treatment. “It’s way faster than poking here and there in the lab,” says GNS cofounder Iya Khalil.

    Nonetheless, the improvements that Pearl and other scholars have achieved in causal theory haven’t yet made many inroads in deep learning, which identifies correlations without too much worry about causation. Bareinboim is working to take the next step: making computers more useful tools for human causal explorations.

    Getting people to think more carefully about causation isn’t necessarily much easier than teaching it to machines, he says. Researchers in a wide range of disciplines, from molecular biology to public policy, are sometimes content to unearth correlations that are not actually rooted in causal relationships. For instance, some studies suggest drinking alcohol will kill you early, while others indicate that moderate consumption is fine and even beneficial, and still other research has found that heavy drinkers outlive nondrinkers. This phenomenon, known as the “reproducibility crisis,” crops up not only in medicine and nutrition but also in psychology and economics. “You can see the fragility of all these inferences,” says Bareinboim. “We’re flipping results every couple of years.”

    Still, the piece remains firmly within the register of technological fascination.

    Bareinboim described this vision while we were sitting in the lobby of MIT’s Sloan School of Management, after a talk he gave last fall. “We have a building here at MIT with, I don’t know, 200 people,” he said. How do those social scientists, or any scientists anywhere, decide which experiments to pursue and which data points to gather? By following their intuition: “They are trying to see where things will lead, based on their current understanding.”

    That’s an inherently limited approach, he said, because human scientists designing an experiment can consider only a handful of variables in their minds at once. A computer, on the other hand, can see the interplay of hundreds or thousands of variables. Encoded with “the basic principles” of Pearl’s causal calculus and able to calculate what might happen with new sets of variables, an automated scientist could suggest exactly which experiments the human researchers should spend their time on.

    #Intelligence_artificielle #Causalité #Connaissance #Pragmatique #Machine_learning

  • Hackers can trick a Tesla into accelerating by 50 miles per hour - MIT Technology Review
    https://www.technologyreview.com/s/615244/hackers-can-trick-a-tesla-into-accelerating-by-50-miles-per-hour

    Hackers have manipulated multiple Tesla cars into speeding up by 50 miles per hour. The researchers fooled the car’s MobilEye EyeQ3 camera system by subtly altering a speed limit sign on the side of a road in a way that a person driving by would almost never notice.

    This demonstration from the cybersecurity firm McAfee is the latest indication that adversarial machine learning can potentially wreck autonomous driving systems, presenting a security challenge to those hoping to commercialize the technology.

    MobilEye EyeQ3 camera systems read speed limit signs and feed that information into autonomous driving features like Tesla’s automatic cruise control, said Steve Povolny and Shivangee Trivedi from McAfee’s Advanced Threat Research team.

    The researchers stuck a tiny and nearly imperceptible sticker on a speed limit sign. The camera read the sign as 85 instead of 35 and, in testing, both the 2016 Tesla Model X and that year’s Model S sped up 50 miles per hour.

    [Image: The modified speed limit sign reads as 85 on the Tesla’s heads-up display. Credit: McAfee]

    [Image: The Tesla, reading the modified 35 as 85, is tricked into accelerating. Credit: McAfee]

    This is the latest in an increasing mountain of research showing how machine learning systems can be attacked and fooled in life-threatening situations.
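
    McAfee’s physical sticker attack is not reproduced here, but the family of attacks it belongs to can be illustrated with the classic fast gradient sign method (FGSM), which nudges each pixel slightly in the direction that most increases a classifier’s loss. This is a generic sketch in PyTorch, not the researchers’ actual technique, and the model named in the usage comment is just a placeholder.

    ```python
    import torch
    import torch.nn.functional as F

    def fgsm(model, images, true_labels, epsilon=0.03):
        """Fast Gradient Sign Method: shift every pixel by +/- epsilon in the
        direction that most increases the classifier's loss on the true labels."""
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), true_labels)
        loss.backward()
        adversarial = images + epsilon * images.grad.sign()
        return adversarial.clamp(0, 1).detach()

    # Usage sketch (any differentiable image classifier will do; names are placeholders):
    #   model = torchvision.models.resnet18(weights="DEFAULT").eval()
    #   adv = fgsm(model, batch_of_images, labels)
    #   model(adv).argmax(dim=1) often disagrees with model(batch_of_images).argmax(dim=1)
    ```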

    “Why we’re studying this in advance is because you have intelligent systems that at some point in the future are going to be doing tasks that are now handled by humans,” Povolny said. “If we are not very prescient about what the attacks are and very careful about how the systems are designed, you then have a rolling fleet of interconnected computers which are one of the most impactful and enticing attack surfaces out there.”

    As autonomous systems proliferate, the issue extends to machine learning algorithms far beyond vehicles: A March 2019 study showed medical machine-learning systems fooled into giving bad diagnoses.

    A Mobileye spokesperson downplayed the research by suggesting the modified sign would even fool a human into reading 85 instead of 35. The company doesn’t consider tricking the camera to be an attack and says that, despite the role the camera plays in Tesla’s cruise control, the camera wasn’t designed for autonomous driving.

    “Autonomous vehicle technology will not rely on sensing alone, but will also be supported by various other technologies and data, such as crowdsourced mapping, to ensure the reliability of the information received from the camera sensors and offer more robust redundancies and safety,” the Mobileye spokesperson said in a statement.

    While looking for tags, I told myself that “#cyberattaque” wasn’t the right term, since the attack doesn’t go through anything digital but consists of sticking a sticker onto a physical sign. Nor is it a destructive attack: it simply “fools” the guidance system, which doesn’t “understand” the situation. MobilEye’s response is interesting: an autonomous vehicle cannot rely on its “perception” alone; it has to cross-check that information against other sources.

    #Machine_learning #Véhicules_autonomes #Tesla #Panneau_routiers #Intelligence_artificielle

  • AI bias creep is a problem that’s hard to fix | Biometric Update
    https://www.biometricupdate.com/202002/__trashed-6

    On the heels of a National Institute of Standards and Technology (NIST) study on demographic differentials of biometric facial recognition accuracy, Karen Hao, an artificial intelligence authority and reporter for MIT Technology Review, recently explained that “bias can creep in at many stages of the [AI] deep-learning process” because “the standard practices in computer science aren’t designed to detect it.”

    “Fixing discrimination in algorithmic systems is not something that can be solved easily,” explained Andrew Selbst, a postdoctoral scholar at the Data & Society Research Institute, and lead author of the recent paper, Fairness and Abstraction in Sociotechnical Systems.

    “A key goal of the fair-ML community is to develop machine-learning based systems that, once introduced into a social context, can achieve social and legal outcomes such as fairness, justice, and due process,” the paper’s authors, which include Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi, noted, adding that “(b)edrock concepts in computer science – such as abstraction and modular design – are used to define notions of fairness and discrimination, to produce fairness-aware learning algorithms, and to intervene at different stages of a decision-making pipeline to produce ‘fair’ outcomes.”

    Consequently, just recently a broad coalition of more than 100 civil rights, digital justice, and community-based organizations issued a joint statement of civil rights concerns, highlighting the risks of adopting algorithmic-based decision-making tools.

    Explaining why “AI bias is hard to fix,” Hao cited as an example “unknown unknowns. The introduction of bias isn’t always obvious during a model’s construction because you may not realize the downstream impacts of your data and choices until much later. Once you do, it’s hard to retroactively identify where that bias came from and then figure out how to get rid of it.”

    Hao also blames “lack of social context,” meaning “the way in which computer scientists are taught to frame problems often isn’t compatible with the best way to think about social problems.”

    Then there are the definitions of fairness where it’s not at all “clear what the absence of bias should look like,” Hao argued, noting, “this isn’t true just in computer science – this question has a long history of debate in philosophy, social science, and law. What’s different about computer science is that the concept of fairness has to be defined in mathematical terms, like balancing the false positive and false negative rates of a prediction system. But as researchers have discovered, there are many different mathematical definitions of fairness that are also mutually exclusive.”
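
    A small, entirely invented example of that mutual exclusivity: with different base rates across groups, the same set of predictions can satisfy demographic parity (equal selection rates) while violating equalized odds (equal false-positive rates).

    ```python
    from collections import defaultdict

    def rates_by_group(records):
        """records: (group, y_true, y_pred) triples with 0/1 labels. Returns each
        group's selection rate and false-positive rate for the same classifier."""
        stats = defaultdict(lambda: {"n": 0, "pred_pos": 0, "neg": 0, "fp": 0})
        for group, y_true, y_pred in records:
            s = stats[group]
            s["n"] += 1
            s["pred_pos"] += y_pred
            if y_true == 0:
                s["neg"] += 1
                s["fp"] += y_pred
        return {g: {"selection_rate": s["pred_pos"] / s["n"],           # demographic parity compares these
                    "false_positive_rate": s["fp"] / max(s["neg"], 1)}  # equalized odds compares these
                for g, s in stats.items()}

    # Invented data: group A has more truly positive cases than group B.
    data = [("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 0),
            ("B", 0, 1), ("B", 0, 1), ("B", 0, 0), ("B", 0, 0)]
    print(rates_by_group(data))
    # Both groups are selected at the same rate (0.5), yet the false-positive rates
    # are 0.0 versus 0.5: once base rates differ, equalizing one criterion breaks the other.
    ```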

    “A very important aspect of ethical behavior is to avoid (intended, perceived, or accidental) bias,” which they said “occurs when the data distribution is not representative enough of the natural phenomenon one wants to model and reason about. The possibly biased behavior of a service is hard to detect and handle if the AI service is merely being used and not developed from scratch since the training data set is not available.”

    #Machine_learning #Intelligence_artificielle #Société #Sciences_sociales

  • « Le terme IA est tellement sexy qu’il fait prendre des calculs pour de l’intelligence »
    https://www.lemonde.fr/idees/article/2020/02/07/le-terme-ia-est-tellement-sexy-qu-il-fait-prendre-des-calculs-pour-de-l-inte

    Believing that artificial intelligence has anything to do with human intelligence is an illusion, argues computer scientist Vincent Bérenger in an op-ed for Le Monde. Op-ed. It was a leading figure of artificial intelligence (AI), Yann Le Cun, who pointed out that AI’s feats demonstrate the intellectual limits of humans far more than the intelligence of AI’s own achievements. We are poor calculators, we do not know how to churn through large quantities of information, (...)

    #algorithme #technologisme

  • Washington Must Bet Big on AI or Lose Its Global Clout | WIRED
    https://www.wired.com/story/washington-bet-big-ai-or-lose-global-clout

    The report, from the Center for a New American Security (CNAS), is the latest to highlight the importance of AI to the future of the US. It argues that the technology will define economic, military, and geopolitical power in coming decades.

    Advanced technologies, including AI, 5G wireless services, and quantum computing, are already at the center of an emerging technological cold war between the US and China. The Trump administration has declared AI a national priority, and it has enacted policies, such as technology export controls, designed to limit China’s progress in AI and related areas.

    The CNAS report calls for a broader national AI strategy and a level of commitment reminiscent of the Apollo program. “If the United States wants to continue to be the world leader, not just in technology but in political power and being able to promote democracy and human rights, that calls for this type of effort,” says Martijn Rasser, a senior fellow at CNAS and the lead author of the report.

    Rasser and his coauthors believe AI will be as pervasive and transformative as software itself has been. This means it will be of critical importance to economic success as well as military might and global influence. Rasser argues that $25 billion over five years is achievable, and notes that it would constitute less than 19 percent of total federal R&D in the 2020 budget.

    “We’re back in an era of great power competition, and technology is at the center,” Rasser says. “And the nation that leads, not just artificial intelligence but technology across the board, will truly dominate the 21st century.”

    “Both the Russians and the Chinese have concluded that the way to leapfrog the US is with AI,” says Bob Work, a distinguished senior fellow at CNAS who served as deputy secretary of defense under Presidents Obama and Trump. Work says the US needs to convince the public that it doesn’t intend to develop lethal autonomous weapons, only technology that would counter the work Russia and China are doing.

    In addition to calling for new funding, the CNAS report argues that a different attitude towards international talent is needed. It recommends that the US attract and retain more foreign scientists by raising the number of H-1B visas and removing the cap for people with advanced degrees. “You want these people to live, work, and stay in the United States,” Rasser says. The report suggests early vetting of applications at foreign embassies to identify potential security risks.

    #Intelligence_artificielle #Guerre_technologique #Géopolitique

  • How Apple personalizes Siri without hoovering up your data - MIT Technology Review
    https://www.technologyreview.com/s/614900/apple-ai-personalizes-siri-federated-learning

    Instead, it relies primarily on a technique called federated learning, Apple’s head of privacy, Julien Freudiger, told an audience at the Neural Information Processing Systems conference on December 8. Federated learning is a privacy-preserving machine-learning method that was first introduced by Google in 2017. It allows Apple to train different copies of a speaker recognition model across all its users’ devices, using only the audio data available locally. It then sends just the updated models back to a central server to be combined into a master model. In this way, raw audio of users’ Siri requests never leaves their iPhones and iPads, but the assistant continuously gets better at identifying the right speaker.
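
    Apple has not published its implementation, but the federated-averaging recipe this paragraph describes can be sketched roughly as follows (NumPy, a toy linear model, invented data): each device takes a training step on data that stays local, and the server only averages the weights it gets back.

    ```python
    import numpy as np

    def local_update(global_weights, local_data, lr=0.1):
        """One on-device training step (here a least-squares gradient step)
        computed on data that never leaves the device."""
        X, y = local_data
        grad = X.T @ (X @ global_weights - y) / len(y)
        return global_weights - lr * grad

    def federated_round(global_weights, devices):
        """The server averages the models sent back by the devices, weighted by
        how much data each one holds; raw data is never uploaded."""
        updates = [local_update(global_weights, d) for d in devices]
        sizes = np.array([len(d[1]) for d in devices], dtype=float)
        return np.average(np.stack(updates), axis=0, weights=sizes)

    # Toy run: three "devices", each holding a private regression dataset.
    rng = np.random.default_rng(0)
    devices = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
    weights = np.zeros(3)
    for _ in range(50):
        weights = federated_round(weights, devices)  # only model weights cross the network
    ```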

    In addition to federated learning, Apple also uses something called differential privacy to add a further layer of protection. The technique injects a small amount of noise into any raw data before it is fed into a local machine-learning model. The additional step makes it exceedingly difficult for malicious actors to reverse-engineer the original audio files from the trained model.
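
    The noise-injection step can likewise be sketched with the standard Laplace mechanism from the differential-privacy literature; the sensitivity and epsilon values below are purely illustrative, not Apple’s parameters.

    ```python
    import numpy as np

    def laplace_mechanism(values, sensitivity, epsilon, rng=None):
        """Add Laplace noise calibrated to the query's sensitivity and the privacy
        budget epsilon: a smaller epsilon means more noise and stronger privacy."""
        rng = rng or np.random.default_rng()
        return values + rng.laplace(scale=sensitivity / epsilon, size=np.shape(values))

    # Perturb a feature vector before it reaches the local model (numbers are illustrative).
    noisy = laplace_mechanism(np.array([0.42, 1.7, -0.3]), sensitivity=1.0, epsilon=0.5)
    ```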

    In the past year, federated learning has become increasingly popular within the AI research community as concerns about data privacy have grown. In March, Google released a new set of tools to make it easier for developers to implement their own federated learning models. Among many other uses, researchers hope it will help overcome privacy challenges in the application of AI to health care. Companies including Owkin, Doc.ai, and Nvidia are interested in using it in this way.

    While the technique is still relatively new and needs further refinement, Apple’s latest adoption offers another case study for how it can be applied at scale. It also marks a fundamental shift in the trade-off the tech industry has traditionally assumed between privacy and utility: in fact, it’s now possible to achieve both. Let’s hope other companies quickly catch on.

    #Intelligence_artificielle #Siri #Federated_learning

  • YouTube : un bug a entraîné des dizaines de faux signalements pour droits d’auteur
    https://www.numerama.com/tech/577538-youtube-un-bug-a-entraine-des-dizaines-de-faux-signalements-pour-dr

    Streamers received multiple copyright claims from one and the same rights holder. YouTube says it was a mistake and explains that there will be no consequences for the people affected. Video creators are clearly not done fighting with Content ID, the system YouTube uses to enforce copyright and to claim a share of their revenue. On Wednesday, December 4, several streamers complained about claims (...)

    #Google #streaming #YouTube #ContentID #copyright #erreur #algorithme


  • On the Measure of Intelligence - François Chollet
    https://arxiv.org/abs/1911.01547v2

    To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to “buy” arbitrary levels of skills for a system, in a way that masks the system’s own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.

    #intelligence_artificielle

  • AI For Good Is Often Bad. Trying to solve poverty, crime, and disease with (often biased) technology doesn’t address their root causes.

    After speaking at an MIT conference on emerging #AI technology earlier this year, I entered a lobby full of industry vendors and noticed an open doorway leading to tall grass and shrubbery recreating a slice of the African plains. I had stumbled onto TrailGuard AI, Intel’s flagship AI for Good project, which the chip company describes as an artificial intelligence solution to the crime of wildlife poaching. Walking through the faux flora and sounds of the savannah, I emerged in front of a digital screen displaying a choppy video of my trek. The AI system had detected my movements and captured digital photos of my face, framed by a rectangle with the label “poacher” highlighted in red.

    I was handed a printout with my blurry image next to a picture of an elephant, along with text explaining that the TrailGuard AI camera alerts rangers to capture poachers before more of the 35,000 elephants killed each year are lost. Despite these good intentions, I couldn’t help but wonder: What if this happened to me in the wild? Would local authorities come to arrest me now that I had been labeled a criminal? How would I prove my innocence against the AI? Was the false positive a result of a tool like facial recognition, notoriously bad with darker skin tones, or was it something else about me? Is everyone a poacher in the eyes of Intel’s computer vision?

    Intel isn’t alone. Within the last few years, a number of tech companies, from Google to Huawei, have launched their own programs under the AI for Good banner. They deploy technologies like machine-learning algorithms to address critical issues like crime, poverty, hunger, and disease. In May, French president Emmanuel Macron invited about 60 leaders of AI-driven companies, like Facebook’s Mark Zuckerberg, to a Tech for Good Summit in Paris. The same month, the United Nations in Geneva hosted its third annual AI for Good Global Summit sponsored by XPrize. (Disclosure: I have spoken at it twice.) A recent McKinsey report on AI for Social Good provides an analysis of 160 current cases claiming to use AI to address the world’s most pressing and intractable problems.

    While AI for good programs often warrant genuine excitement, they should also invite increased scrutiny. Good intentions are not enough when it comes to deploying AI for those in greatest need. In fact, the fanfare around these projects smacks of tech solutionism, which can mask root causes and the risks of experimenting with AI on vulnerable people without appropriate safeguards.

    Tech companies that set out to develop a tool for the common good, not only their self-interest, soon face a dilemma: They lack the expertise in the intractable social and humanitarian issues facing much of the world. That’s why companies like Intel have partnered with National Geographic and the Leonardo DiCaprio Foundation on wildlife trafficking. And why Facebook partnered with the Red Cross to find missing people after disasters. IBM’s social-good program alone boasts 19 partnerships with NGOs and government agencies. Partnerships are smart. The last thing society needs is for engineers in enclaves like Silicon Valley to deploy AI tools for global problems they know little about.

    The deeper issue is that no massive social problem can be reduced to the solution offered by the smartest corporate technologists partnering with the most venerable international organizations. When I reached out to the head of Intel’s AI for Good program for comment, I was told that the “poacher” label I received at the TrailGuard installation was in error—the public demonstration didn’t match the reality. The real AI system, Intel assured me, only detects humans or vehicles in the vicinity of endangered elephants and leaves it to the park rangers to identify them as poachers. Despite this nuance, the AI camera still won’t detect the likely causes of poaching: corruption, disregarding the rule of law, poverty, smuggling, and the recalcitrant demand for ivory. Those who still cling to technological solutionism are operating under the false assumption that because a company’s AI application might work in one narrow area, it will work on a broad political and social problem that has vexed society for ages.

    Sometimes, a company’s pro-bono projects collide with its commercial interests. Earlier this year Palantir and the World Food Programme announced a $45M partnership to use data analytics to improve food delivery in humanitarian crises. A backlash quickly ensued, led by civil society organizations concerned over issues like data privacy and surveillance, which stem from Palantir’s contracts with the military. Despite Palantir’s project helping the humanitarian organization Mercy Corps aid refugees in Jordan, protesters and even some Palantir employees have demanded the company stop helping Immigration and Customs Enforcement (ICE) detain migrants and separate families at the US border.

    Even when a company’s intentions seem coherent, the reality is that for many AI applications, the current state of the art is pretty bad when applied to global populations. Researchers have found that facial recognition software, in particular, is often biased against people of color, especially those who are women. This has led to calls for a global moratorium on facial recognition and prompted cities like San Francisco to effectively ban it. AI systems built on limited training data create inaccurate predictive models that lead to unfair outcomes. AI for good projects often amount to pilot beta testing with unproven technologies. It’s unacceptable to experiment in the real world on vulnerable people, especially without their meaningful consent. And the AI field has yet to figure out who is culpable when these systems fail and people are hurt as a result.

    This is not to say tech companies should not work to serve the common good. With AI poised to impact much of our lives, they have more of a responsibility to do so. To start, companies and their partners need to move from good intentions to accountable actions that mitigate risk. They should be transparent about both the benefits and harms these AI tools may have in the long run. Their publicity around the tools should reflect the reality, not the hype. To Intel’s credit, the company promised to fix that demo to avoid future confusion. They should involve local people closest to the problem in the design process and conduct independent human rights assessments to determine if a project should move forward. Overall, companies should approach any complex global problem with the humility of knowing that an AI tool won’t solve it.

    https://www.wired.com/story/opinion-ai-for-good-is-often-bad/?mbid=social_twitter
    #IA #intelligence_artificielle #pauvreté #développement #technologie #root_causes #API #braconnage #wildlife #éléphants #droits_humains

  • Les abeilles derrière les fenêtres | Lise Gaignard et Aline Torterat
    https://www.jefklak.org/les-abeilles-derriere-les-fenetres

    When you try to get help on a website, you sometimes end up “chatting” with a drawn face bearing a borrowed first name, which answers beside the point and leaves you more confused. To ease the irritation that takes hold of you, it can be useful to know what is happening on the other side of the screen. At the risk of turning that irritation into dismay. Source: Jef Klak

  • Comment un algorithme a empêché des millions d’Afro-Américains de bénéficier de soins de santé optimaux - Société - Numerama
    https://www.numerama.com/politique/564260-comment-un-algorithme-a-empeche-des-millions-dafro-americains-de-be

    In 2018, the National Center for Health Statistics reported that 9% of the US population was living without health coverage. Health care is particularly hard for them to access, which leads many people to forgo it altogether.

    Among those 9%, the study explains, Black patients are over-represented: because of their skin color they face discrimination and are more exposed to precarity. The algorithm called into question by the journal Science apparently registered only that they spent less money on health care, without taking the reasons into account. For this kind of tool, “less money spent” simply equals “fewer health problems”.

    According to the study, 17.7% of Black American patients currently receive what is called “complementary care”, reserved for at-risk patients. Science estimates that if the algorithm were correctly designed, that figure should be 46.5%. In the long run, this can create major health problems.
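
    A simulation with entirely made-up numbers illustrates the mechanism described above: if two groups have the same distribution of health needs but one systematically spends less on care, ranking patients by cost flags far fewer of that group’s high-need patients.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    barrier_group = rng.random(n) < 0.3             # invented share of patients facing barriers to care
    need = rng.gamma(shape=2.0, scale=1.0, size=n)  # "true" health need, same distribution in both groups
    # Spending tracks need, but is systematically lower for the group facing barriers:
    cost = need * np.where(barrier_group, 0.6, 1.0) + rng.normal(0.0, 0.1, n)

    flagged = cost >= np.quantile(cost, 0.90)       # the deployed rule: flag the costliest 10%
    high_need = need >= np.quantile(need, 0.90)     # who actually needs the extra care

    for name, mask in [("barrier group", barrier_group), ("others", ~barrier_group)]:
        share = flagged[mask & high_need].mean()
        print(f"high-need patients flagged for extra care, {name}: {share:.0%}")
    # Same needs, lower spending: the cost proxy flags far fewer high-need patients
    # from the group that spends less, for reasons unrelated to their health.
    ```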

    The study does not name the algorithm’s creators. It explains that several firms, insurers in particular, use similar systems, which let them avoid costs deemed unnecessary by assessing patients’ needs as precisely as possible.

    #Santé #Intelligence_artificielle #Racisme #Algorithmes

    • I don’t understand, @nestor, whether you are questioning the relevance of treating people as statistical items, or whether it’s just a way of saying you couldn’t care less about the consequences of racism and prejudice, because Black people enjoy the great privilege of seeing one person in 100,000 pay for their studies through sport, or one in a million earn a fortune.

  • Yodo1’s AI-driven whale hunt is a bad look for the games industry | Opinion | GamesIndustry.biz
    https://www.gamesindustry.biz/articles/2019-10-21-yodo1s-ai-driven-whale-hunt-is-a-bad-look-for-the-games-in

    We’ve learned a lot about making money from games, and making and managing good games. About a year ago I decided, what if I could teach AI how to do all of this? What if I could teach #AI how to make money from this? What if I could teach AI how to find whales inside of games? What if I could teach AI how to moderate a game community of millions of players?

    (note: whale = a player who pays; one of them spent $150,000 in a single game, which uses #intelligence_artificielle techniques to shape its commercial offers and tune the gameplay so as to extract the maximum amount of money from the suckers)

    #extractivisme #addiction #jeu_vidéo

  • Zeynep Tufekci : Get a red team to ensure AI is ethical | Verdict
    https://www.verdict.co.uk/zeynep-tufekci-ai-red-team

    In cybersecurity, red team professionals are tasked with finding vulnerabilities before they become a problem. In artificial intelligence, flaws such as bias often become apparent only once they are deployed.

    One way to catch these AI flaws early is for organisations to apply the red team concept when developing new systems, according to techno-sociologist and academic Zeynep Tufekci.

    “Get a red team, get people in the room, wherever you’re working, who think about what could go wrong,” she said, speaking at Hitachi Vantara’s Next conference in Las Vegas, US, last week. “Because thinking about what could go wrong before it does is the best way to make sure it doesn’t go wrong.”

    Referencing Hitachi CEO and president Toshiaki Higashihara’s description of digitalisation as having “lights and shadows”, Tufekci warned of the risks associated with letting the shadowy side go unchecked.
    AI shadows

    One of these “shadows” is when complex AI systems become black boxes, making it difficult even for the AI’s creators to explain how it made its decision.

    Tufekci also cited the example of YouTube’s recommendation algorithm pushing people towards extremism. For example, a teenager could innocently search ‘is there a male feminism’ and then be nudged towards misogynistic videos because such controversial videos have received more engagement.

    And while data can be used for good, it can also be used by authoritarian governments to repress their citizens, or by election consultancies to manipulate our votes.

    Then there are the many instances of human bias finding their way into algorithms. These include AI in recruitment reflecting the sexism of human employers or facial recognition not working for people with darker skin.

    “If the data can be used to fire you, or to figure out protesters or to use for social control, or not hire people prone to depression, people are going to be like: ‘we do not want this’,” said Tufekci, who is an associate professor at the UNC School of Information and Library Science.

    “What would be much better is to say, what are the guidelines?”
    Using a red team to enforce AI ethics guidelines

    Some guidelines already exist. In April 2019, the European Union’s High-Level Expert Group on AI presented seven key requirements for trustworthy AI.

    These requirements include human oversight, accountability and technical robustness and safety. But what Tufekci suggests is having a team of people dedicated to ensuring AI ethics are adhered to.

    “You need people in the room, who are going to say there’s light and there are shadows in this technology, and how do we figure out to bring more light into the shadowy side, so that we’re not blindsided, so that we’re not just sort of shocked by the ethical challenges when they hit us,” she explained.

    “So we think about it ahead of time.”

    However, technology companies often push back against regulation, usually warning that too much will stifle innovation.

    “Very often when a technology is this new, and this powerful, and this promising, the people who keep talking about what could go wrong – which is what I do a lot – are seen as these spoilsport people,” said Tufekci.

    “And I’m kind of like no – it’s because we want it to be better.”

    #Intelligence_artificielle #Zeynep_Tufekci #Cybersécurité #Biais #Big_data

  • Nouveau récapitulatif automnal : six mois d’inscriptions murales | Yves Pagès
    http://www.archyves.net/html/Blog/?p=7825

    A blow to the eternal convergence of goals (if not of struggles): last week the well-meaning organizers of Extinction/Rébellion were seen busily scrubbing tags and graffiti off the occupied Pont au Change, near Châtelet, with acetone. As if they had to stage collective disobedience while at the same time erasing every trace of textual contamination from it, apart from the registered trademark of their own logo. Source: Pense-bête

  • Sur le plancher des vaches (IV/I)
    Symboles (et plus si affinités)

    Natalie

    https://lavoiedujaguar.net/Sur-le-plancher-des-vaches-IV-I-Symboles-et-plus-si-affinites

    Paris, October 7, 2019
    Friends,

    The inaugural “Plancher des vaches” played with a few pseudo-truths about what we have called “technontology”. The main question it raised was this: might our kind of human not have a certain propensity to endlessly recycle the divine One? If that were the case, God would not be dead; but then where the devil is He hiding?

    The field of investigation proposed for answering this question is the world of work. “Le plancher des vaches II” sketched in broad strokes a few structuring mechanisms put in place at the global level since the 1980s, mechanisms which, as argued in “Le plancher des vaches III”, trace a progressive movement toward the thingification of the living.

    This movement is not recent, but the hypothesis made here is that after the seizure of bodies carried out by the scientific division of labor, and then the replacement of many bodies by machines, the present era is that of the seizure of minds. We have reduced this to the single term normalization (the name proposed for the tables of the law), that is, a state of normality, which some might find reassuring. But in this term, beyond the norm, there is the character of a procedure, a proactivity and, underlying it, the need to verify said normality. (...)

    #Dieu #normalisation #loi #Florence_Parly #intelligence_artificielle #symbole #cercle #Terre #religion #flèches #projet #développement_durable #trinité #génome #borroméen #plan #parousie #entreprise #objectif #stratégie #Hannah_Arendt

  • Think only authoritarian regimes spy on their citizens?

    Use of AI surveillance technology is becoming the global norm, even in liberal democracies.

    Almost half the world’s countries now deploy AI surveillance systems. So says a new report, The Global Expansion of AI Surveillance, from the #Carnegie_Endowment_for_International_Peace (https://carnegieendowment.org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847). Such technologies vary from “#smart_city” projects, which use real-time data on residents to aid delivery of public services and enhance policing, to facial recognition systems, to border security, to governments spying on political dissidents.

    The main driver is China. The tech company Huawei alone is responsible for providing AI surveillance technology to at least 50 countries. But it’s not just Beijing pushing such technology. Western companies, from IBM to Palantir, are deeply involved. In Saudi Arabia, for instance, Huawei is helping create smart cities, Google and Amazon are building cloud computing servers for government surveillance and the UK arms firm BAE is providing mass monitoring systems.

    While authoritarian countries are investing heavily in such technology, it is most widespread in democracies. “Liberal democratic governments,” the report observes, “are aggressively using AI tools to police borders, apprehend potential criminals, monitor citizens for bad behaviour and pull out suspected terrorists from crowds.” Projects range from Baltimore’s secret use of drones for daily surveillance of the city’s residents, to Marseille’s mass monitoring project, built largely by the Chinese firm ZTE and given the very Orwellian name of Big Data of Public Tranquility, to the array of advanced surveillance techniques being deployed on the US-Mexico border.

    The technologies raise major ethical issues and questions about civil liberties. Yet even before we’ve begun to ask such questions, the technology has become so ubiquitous as to render the debate almost redundant. That should be as worrying as the technology itself.

    https://www.theguardian.com/commentisfree/2019/sep/22/think-only-authoritarian-regimes-spy-on-their-citizens
    #surveillance #démocratie #intelligence_artificielle #big_data #index #Chine #Huawei #IBM #Palantir #Google #Amazon #BAE #drones #Baltimore #Marseille #ZTE #Big_data_of_public_tranquility

  • Un prix littéraire décerné par une intelligence artificielle : une idée aussi stupide qu’il y paraît ?
    https://medium.com/@story_nerd/un-prix-litt%C3%A9raire-d%C3%A9cern%C3%A9-par-une-intelligence-artificielle-

    On its website, QualiFiction claims that LiSA is an artificial intelligence capable of detecting bestsellers (no less). In 60 seconds flat, the algorithm analyzes a text several hundred pages long, extracts its data and offers an analysis built around four pillars:

    topic analysis: is the text a thriller or a love story? What relevant information does it contain?
    sentiment analysis: is it a light romance or a horror novel that will give you nightmares? Does the text end happily or in catastrophe? What is its overall tone?
    style: how complex is the text? Is the style accessible to the ordinary reader, or of a higher literary complexity?
    prediction: LiSA identifies the text’s target audience and assesses whether it can reach the largest possible readership or is aimed more at a niche.

    As QualiFiction explains, LiSA was trained by ingesting the contents of thousands of books and correlating them with the history of their success (or lack thereof). By applying the same recipe to unpublished manuscripts, the company hopes to predict the fate of a forthcoming novel. It currently claims a 78% success rate, which is already enormous, and asserts that the algorithm will improve further in the near future as its analyses are refined and the AI accumulates more data.
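
    QualiFiction does not disclose how LiSA works internally; as a rough illustration of the general recipe described here (features extracted from the text, a model fitted to past sales outcomes), a minimal scikit-learn sketch with invented data might look like this.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical training corpus: manuscript texts paired with a past sales outcome.
    texts = ["once upon a time, in a quiet village ...",
             "the detective stared at the body and lit a cigarette ...",
             "a sweeping saga of love and betrayal across three generations ..."]
    sold_well = [0, 1, 1]  # invented labels

    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), max_features=20_000),  # crude topic/style signal
        LogisticRegression(max_iter=1000),
    )
    model.fit(texts, sold_well)

    # Scoring a new manuscript yields a probability of commercial success,
    # not a judgment of literary value.
    print(model.predict_proba(["an unpublished manuscript about ..."])[0, 1])
    ```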

    I’m not convinced by the rest of the post... What the AI does is detect regular patterns and compare them with the patterns of the market. That can only work for high-demand books, in segments where the recognizable patterns and social regularities can be assessed statistically. And it will detect a book’s “sellability”, not in any way its intrinsic value.

    For the record, Minuit sold only 150 copies of “En attendant Godot” in its first year...

    #Intelligence_artificielle #Littérature #Edition

  • Google reiterates exit from Project Maven — kind of - TechSpot
    https://www.techspot.com/news/79003-google-reiterates-exit-project-maven-kind.html

    Google’s controversial contract connected to Project Maven will expire in March 2019, and while Google has pledged to not renew it, an unnamed technology company will take up the work started by Google. Furthermore, Google will support the unnamed contractor with “basic” cloud services, rather than Google’s Cloud AI services. Google also appears to be trying to straddle the line between maintaining its early mantra of “don’t be evil” and pursuing lucrative defense contracts, as Microsoft and Amazon do.

    Last year, when news broke that Google had been awarded a military contract to develop AI for Project Maven, it stirred up no shortage of controversy — some of which is still coming to light. This led to many employees questioning the ethical and moral implications of such work, spurring many to resign, and many more to protest. In the end, Google conceded to the demands of its employees and has grappled with something of an identity crisis since.

    Recently, in an email obtained by The Intercept, Google appeared to reiterate its commitment not to renew its contract with the Pentagon. The email was penned by Kent Walker, Google’s senior vice president for global affairs. “Last June, we announced we would not be renewing our image-recognition contract with the US Department of Defense connected with Project Maven,” Walker wrote.

    However, Walker added a caveat of sorts, in that an unnamed contractor will take up the work Google started and use “off-the-shelf Google Cloud Platform (basic compute service, rather than Cloud AI or other Cloud Services) to support some workloads.”

    It’s presently unclear what compensation Google will obtain, or what specific Project Maven workloads will be processed by Google’s Cloud services. The Intercept reached out for comment, but received no further clarification. Walker’s email also mentioned that the company was working closely with the Department of Defense to “make the transition in a way that is consistent with our AI Principles and contractual commitments.”

    Google’s Project Maven contract is set to expire next month, and while Google will not renew it, the company also won’t rule out future military work, as Walker notes in his email.

    We continue to explore work across the public sector, including the military, in a wide range of areas, such as cybersecurity, search and rescue, training and health care, in ways consistent with our AI Principles.

    #Google #USA #armement #intelligence_artificielle #project_maven

  • The world’s top deepfake artist is wrestling with the monster he created - MIT Technology Review
    https://www.technologyreview.com/s/614083/the-worlds-top-deepfake-artist-is-wrestling-with-the-monster-he-cr

    Misinformation has long been a popular tool of geopolitical sabotage, but social media has injected rocket fuel into the spread of fake news. When fake video footage is as easy to make as fake news articles, it is a virtual guarantee that it will be weaponized. Want to sway an election, ruin the career and reputation of an enemy, or spark ethnic violence? It’s hard to imagine a more effective vehicle than a clip that looks authentic, spreading like wildfire through Facebook, WhatsApp, or Twitter, faster than people can figure out they’ve been duped.

    As a pioneer of digital fakery, Li worries that deepfakes are only the beginning. Despite having helped usher in an era when our eyes cannot always be trusted, he wants to use his skills to do something about the looming problem of ubiquitous, near-perfect video deception.

    Li isn’t your typical deepfaker. He doesn’t lurk on Reddit posting fake porn or reshoots of famous movies modified to star Nicolas Cage. He’s spent his career developing cutting-edge techniques to forge faces more easily and convincingly. He has also messed with some of the most famous faces in the world for modern blockbusters, fooling millions of people into believing in a smile or a wink that was never actually there. Talking over Skype from his office in Los Angeles one afternoon, he casually mentions that Will Smith stopped in recently, for a movie he’s working on.

    Actors often come to Li’s lab at the University of Southern California (USC) to have their likeness digitally scanned. They are put inside a spherical array of lights and machine vision cameras to capture the shape of their face, facial expressions, and skin tone and texture down to the level of individual pores. A special-effects team working on a movie can then manipulate scenes that have already been shot, or even add an actor to a new one in post-production.

    Shortly after joining USC, Li created facial tracking technology used to make a digital version of the late actor Paul Walker for the action movie Furious 7. It was a big achievement, since Walker, who died in a car accident halfway through shooting, had not been scanned beforehand, and his character needed to appear in so many scenes. Li’s technology was used to paste Walker’s face onto the bodies of his two brothers, who took turns acting in his place in more than 200 scenes.

    The movie, which grossed $1.5 billion at the box office, was the first to depend so heavily on a digitally re-created star. Li mentions Walker’s virtual role when talking about how good video trickery is becoming. “Even I can’t tell which ones are fake,” he says with a shake of his head.

    The wave of repentants keeps widening... But this way of seeing things is interesting: as long as the technologies are in the hands of a few well-intentioned people, they are magical; once they become accessible to everyone, the problems start. We have seen this discourse repeat itself ever since the internet opened up to the public.

    Underneath the digital silliness, though, is an important trend: AI is rapidly making advanced image manipulation the province of the smartphone rather than the desktop. FaceApp, developed by a company in Saint Petersburg, Russia, has drawn millions of users, and recent controversy, by offering a one-click way to change a face on your phone. You can add a smile to a photo, remove blemishes, or mess with your age or gender (or someone else’s). Dozens more apps offer similar manipulations at the click of a button.

    Not everyone is excited about the prospect of this technology becoming ubiquitous. Li and others are “basically trying to make one-image, mobile, and real-time deepfakes,” says Sam Gregory, director of Witness, a nonprofit focused on video and human rights. “That’s the threat level that worries me, when it [becomes] something that’s less easily controlled and more accessible to a range of actors.”

    Fortunately, most deepfakes still look a bit off. A flickering face, a wonky eye, or an odd skin tone make them easy enough to spot. But just as an expert can remove such flaws, advances in AI promise to smooth them out automatically, making the fake videos both simpler to create and harder to detect.

    Even as Li races ahead with digital fakery, he is also troubled by the potential for harm. “We’re sitting in front of a problem,” he says.

    (MediFor: a DARPA program for detecting deepfakes)

    Earlier this year, Matt Turek, DARPA program manager for MediFor, asked Li to demonstrate his fakes to the MediFor researchers. This led to a collaboration with Hany Farid, a professor at UC Berkeley and one of the world’s foremost authorities on digital forensics. The pair are now engaged in a digital game of cat-and-mouse, with Li developing deepfakes for Farid to catch, and then refining them to evade detection.

    Farid, Li, and others recently released a paper outlining a new, more powerful way to spot deepfakes. It hinges on training a machine-learning algorithm to recognize the quirks of a specific individual’s facial expressions and head movements. If you simply paste someone’s likeness onto another face, those features won’t be carried over. It would require a lot of computer power and training data—i.e., images or video of the person—to make a deepfake that incorporates these characteristics. But one day it will be possible. “Technical solutions will continue to improve on the defensive side,” says Turek. “But will that be perfect? I doubt it.”
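
    The paper itself is not reproduced here, but the idea of modeling one person’s characteristic facial dynamics and flagging clips that deviate from them can be sketched roughly as follows. The feature extraction (for instance per-frame facial action-unit intensities and head-pose angles from an external face tracker) is assumed and not shown, and the one-class SVM parameters are illustrative.

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    def clip_signature(per_frame_features):
        """per_frame_features: (n_frames, n_features) array, e.g. facial action-unit
        intensities and head-pose angles over one short clip. The clip's signature
        is the upper triangle of the feature-to-feature correlation matrix."""
        corr = np.corrcoef(per_frame_features, rowvar=False)
        return corr[np.triu_indices_from(corr, k=1)]

    def fit_person_model(genuine_clips):
        """Fit a one-class model on clips known to be authentic footage of one person."""
        signatures = np.stack([clip_signature(c) for c in genuine_clips])
        return OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(signatures)

    def looks_fake(person_model, clip):
        """A clip whose dynamics fall outside the person's usual mannerisms is flagged."""
        return person_model.predict(clip_signature(clip).reshape(1, -1))[0] == -1
    ```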

    #Fake_news #deepfakes #Intelligence_artificielle #Art #Cinéma

  • These companies claim to provide “fair-trade” data work. Do they? - MIT Technology Review
    https://www.technologyreview.com/s/614070/cloudfactory-ddd-samasource-imerit-impact-sourcing-companies-for-d

    A lot of human labor goes into building artificial-intelligence systems. Much of it is in cleaning, categorizing, and labeling data before AIs ingest it to look for patterns. The AI Now Institute, an ethics body, refers to this work as the “hidden labor” of the AI pipeline, “providing the invisible human work that often backstops claims of AI ‘magic’ once these systems are deployed in products and services.”

    By contrast, most people doing data annotation don’t work in Manhattan offices but from their homes in places such as India, Kenya, Malaysia, and the Philippines. They log in to online platforms for anywhere from a few minutes to several hours a day, perhaps distinguishing between bunches of green onions and stalks of celery or between cat-eye and aviator-style sunglasses. As detailed in the recent book Ghost Work by Mary Gray and Siddharth Suri, most of them are gig workers with low pay, insecure employment, and no opportunity for career advancement.

    A small group of data annotation firms aims to rewrite that narrative. But these firms aiming to “do well by doing good” in AI data services are finding the path to enterprise enlightenment can be a rocky one.

    “It is really a race to the bottom,” says Daniel Kaelin, director of customer success at Alegion, a data annotation services company in Austin, Texas. “This whole industry is very, very competitive; everybody tries to find that little cheaper labor force somewhere else in the world.”
    What does “impact” really mean?

    Alegion is one of several platforms, including CloudFactory, Digital Divide Data (DDD), iMerit, and Samasource, that say they want to make AI data work dignified. They call themselves “impact” companies and claim to offer ethically sourced data labor, with better working conditions and career prospects than most firms in the industry. It’s like fair-trade coffee beans, but for enormous data sets.

    However, there are no regulations and only weak industry standards for what ethical sourcing means. And the companies’ own definitions of it vary widely.

    Troy Stringfield, who took over as Alegion’s global impact director in 2018, defends the “impact” label—which the seven-year-old company has adopted only in the past year or so—by saying impact means creating work that improves people’s lives. “It’s going in and saying, ‘What is a livable wage? What is getting them better than where they’re at?’” he says.

    But Sara Enright, project director at the Global Impact Sourcing Coalition (GISC), a member-funded industry body, says it’s doubtful that such work should be called impact sourcing: “If it is solely gig work in which an individual is accessing part-time wages through an hour a day here and there, that is not impact employment, because it does not actually lead to career development and ultimately poverty alleviation.”

    Getting into the US market

    In their bid to expand, companies like Alegion and iMerit are also trying to build a pool of data workers in the US, drawing on underprivileged and marginalized populations there. That gives them lucrative access to government, financial, and health care clients that demand stringent security measures, work with regulated medical and financial data, or need the work done in the US for other legal reasons.

    To recruit those US workers, the impact firms can go through companies like Daivergent, which serves as a conduit to organizations such as the Autism Society and Autism Speaks. (That’s where Campbell, whom we met earlier drawing boxes around cars, works.) Alegion also did a trial using workers provided through IAM23, a support group for military veterans.

    Unlike with fair-trade goods, there is little public pressure on the companies to be honest, because they provide their services to businesses, not directly to consumers. “Consumers can value ethical sourcing—for example, at Patagonia and various consumer brands—and you kind of buy into that as a consumer,” says iMerit’s Natarajan. But “it remains to be seen what ethical sourcing means in the b-to-b sector.” As a 2014 issue of Pulse, an outsourcing industry magazine, noted, companies would have to make a choice to use impact-conscious labor providers. Without laws or public pressure it’s not clear what can impel them to make such a choice, and without standards and accountability, it’s not clear how they should evaluate the providers.

    In the end it may be only regulation that changes labor practices. “There is no way to change employment from the inside of markets. Yes, they’re doing everything they can, and that’s like saying I’ve got a bicycle with no pedals, and I’m doing everything I can to ride it as quickly as this thing is built to go,” says Gray, the Ghost Work coauthor. “There is no organizing of the rights of workers and fair employment without involving civil society, and we haven’t done that yet.”

    #Digital_labor #Travail #Intelligence_artificielle #Ethique

    • I find it admirable that, while the discourse of the promoters of artificial intelligence and of all-out computerization so often insists that this computerization and digitalization will free humankind from the most repetitive and least satisfying tasks, in fact, if you scratch a little, you discover the opposite: it is humans who tediously clean the data so that the computer can do the fun job of deriving interesting statistics from that clean data.

  • Amazon Helps ICE Deport Migrants Using AI Technology: Report | News | teleSUR English
    https://www.telesurenglish.net/news/united-states-amazon-ice-migrants-deportation-family-separation-dhs-

    Published 17 July 2019

    Amazon is helping ICE carry out raids by pitching its Rekognition facial identification technology for “deportations without due process.”

    Amazon is enabling Immigration and Customs Enforcement (ICE) to detain and deport migrants from the United States, according to a report by Al-Jazeera published Wednesday.

    According to the activist group Mijente, which demands that the multinational company “stop powering ICE,” the Department of Homeland Security (DHS) has contracts with Amazon.

    The DHS uses Palantir software to track would-be deportees. Amazon Web Services hosts the database, and Palantir provides the programs that organize the data.

    Palantir, a data analytics firm, can be described as a mix between Google and the CIA: it supplies algorithms to government agencies for counterterrorism and immigration enforcement and is paid with taxpayer money.

    Activists have pointed out that the firm sells “mission-critical” tools used by ICE to plan raids. The contract between the two is worth US$51 million.

    “Amazon and Palantir have secured a role as the backbone for the federal government’s immigration and law enforcement dragnet, allowing them to pursue multibillion-dollar government contracts in various agencies at every single level of law enforcement,” says a petition on Mijente’s website.

    This week, while ICE agents were rounding up immigrants, Mijente delivered 270,000 petitions to the New York residence of Amazon CEO Jeff Bezos demanding he cut ties with the immigration authorities.

    Over the weekend, the immigration authorities launched small-scale operations to arrest undocumented immigrants, in an apparent start to the mass deportation round-ups that President Donald Trump had vowed to carry out across the country.

    The operation, which Trump revealed on Twitter last month, was expected to target hundreds of recently arrived families whose deportation had been ordered by an immigration judge, in about 10 cities.

    The removal operations are meant to deter a surge in Central American families fleeing poverty and gang violence in their home countries, with many seeking asylum in the United States.

    On Monday, Trump said the raids were “very successful,” even though immigration activists and lawyers said that only a few arrests had taken place. Nonetheless, the crackdown is not over: ICE said that more arrests would be made later this week.

    Jennifer Lee of the American Civil Liberties Union said during a rally in front of the company’s headquarters in Seattle that Amazon is helping the authorities carry out the raids by pitching its Rekognition facial recognition technology, which could result in “deportations without due process.”

    Last week, activists interrupted the Amazon Web Services Summit in New York by playing recordings of migrant families being separated while Amazon Chief Technology Officer Werner Vogels was giving a keynote speech. Activists have also called for a boycott of Amazon products such as Prime Video, Whole Foods, and Kindle.

    “Companies and government organizations need to use existing and new technology responsibly and lawfully. There is clearly a need for more clarity from governments on what is acceptable use of AI and ramifications for its misuse, and we’ve provided a proposed legislative framework for this,” said an Amazon spokesperson in a statement responding to the accusations.

    “We remain eager for the government to provide this additional clarity and legislation,” continued the statement, “and will continue to offer our ideas and specific suggestions.”

    #migration #réfugiés #cloud #surveillance #intelligence_artificielle

  • Disappointed by its autonomous shuttle, La Défense ends the experiment. Jean-Bernard Litzler - 15 July 2019 - Le Figaro
    https://immobilier.lefigaro.fr/article/decue-par-sa-navette-autonome-la-defense-arrete-l-experience_21e

    After two years of experimentation, the trial did not pan out. The autonomous shuttle that ran through the business district of the Paris region carried fewer than 12,000 passengers over the past 12 months, and its “overall results are not satisfactory.”

    After driving a share of the electric scooters that had been swarming the district off its esplanade, La Défense is now preparing to part with the autonomous electric shuttle that had been running through the Paris-region business district for two years. The experiment had nonetheless generated real enthusiasm when it launched in July 2017, and the public warmly welcomed the friendly-looking minibus, which could carry up to 15 people (11 seated and 4 standing). While recalling that “during the first six months the shuttle met with great success among users, with more than 30,000 passengers, 97% of whom were satisfied and 88% intending to use it again,” Paris La Défense, the body that manages the district, concludes that the test was ultimately not conclusive. Indeed, after a six-month shutdown following a technical incident, interest had faded so much that only 11,865 passengers rode the shuttle during its second year of operation.

    How did it come to this? “Overall, operating the service proved complex because of #connexion difficulties (an ‘urban canyon’ effect at La Défense caused by the height of the towers),” Paris La Défense notes in a press release, while explaining that the constant changes to the urban environment brought about by various events (a Christmas market, construction work, food-truck installations...) were difficult to manage, as were the volume and variety of traffic flows on the esplanade (pedestrians, cyclists, scooters, maintenance vehicles).

    The result: the shuttle, tested in partnership with Ile-de-France Mobilités, #keolis and the manufacturer #navya, seems to have reached its current limits, according to Paris La Défense. It is criticized in particular for not having been able to increase its driving speed, which was supposed to “make the service attractive.” Moreover, the goal of switching to “fully autonomous” mode, that is, running without an operator on board, was never reached. The verdict is final: “Paris La Défense does not wish to renew the experiment.” Even if this shuttle, built in the Lyon region, failed to win over the capital, it is currently showing off its strengths in Lyon itself as well as in Tokyo. And who knows: once its teething problems have been ironed out, perhaps it will be back on the capital’s roads?

    #voiture #transport #automobile #mobilité #transports #surveillance #voiture_autonome #voiture_autopilotée #voitures_autonomes #autopilote #robotisation #algorithme #intelligence_artificielle #echec of two-bit #innovation, not to mention the complete absence of any information about the #pollution generated by these vehicles.

  • These #microtravailleurs (microworkers) in the shadows | CNRS Le journal
    https://lejournal.cnrs.fr/articles/ces-microtravailleurs-de-lombre

    What is the typical profile of the microworker?
    A. C.: Our survey reveals a #géographie_sociale (social geography) marked by #précarité (precariousness), some aspects of which are quite alarming. The typical microworker is first of all a woman, often with a #famille (family) to care for and a main #emploi (job) on the side. 56% of microworkers in France are indeed #femmes (women); 63% are between 25 and 44 years old, and 64% hold a main job. They work in the health and education sectors, or in public services… and use microwork as a supplementary source of #revenu (income).

    Women’s investment in microwork, which is quite substantial in some cases, points to a shift toward the “triple day”: activity on microwork platforms comes on top of a full-time job and of household and family duties. Note that 22% of microworkers live below the poverty line, which confirms a real problem of economic precariousness in our country. Finally, and this is rather surprising for tasks said to require no qualifications, microworkers are more highly educated than the population average: 43% hold a degree beyond Bac+2 (two years of post-secondary study). Their main motivation for microworking is above all the money, but also the flexibility it offers: you can log on at any hour and spend as much or as little time as you like, since you are generally paid by the piece.

    #travail #informatique #intelligence_artificielle #droit_du_travail

    https://diplab.eu