• MIT apologizes, permanently pulls offline huge dataset that taught AI systems to use racist, misogynistic slurs • The Register

    The dataset holds more than 79,300,000 images, scraped from Google Images, arranged in 75,000-odd categories. A smaller version, with 2.2 million images, could be searched and perused online from the website of MIT’s Computer Science and Artificial Intelligence Lab (CSAIL). This visualization, along with the full downloadable database, was removed on Monday from the CSAIL website after El Reg alerted the dataset’s creators to the work done by Prabhu and Birhane.

    The key problem is that the dataset includes, for example, pictures of Black people and monkeys labeled with the N-word; women in bikinis, or holding their children, labeled whores; parts of the anatomy labeled with crude terms; and so on – needlessly linking everyday imagery to slurs and offensive language, and baking prejudice and bias into future AI models.
    Screenshot from the MIT AI training dataset

    A screenshot of the 2.2m dataset visualization before it was taken offline this week. It shows some of the dataset’s examples for the label ’whore’, which we’ve pixelated for legal and decency reasons. The images ranged from a headshot photo of a woman and a mother holding her baby with Santa to porn actresses and a woman in a bikini ... Click to enlarge

    Antonio Torralba, a professor of electrical engineering and computer science at CSAIL, said the lab wasn’t aware these offensive images and labels were present within the dataset at all. “It is clear that we should have manually screened them,” he told The Register. “For this, we sincerely apologize. Indeed, we have taken the dataset offline so that the offending images and categories can be removed.”

    In a statement on its website, however, CSAIL said the dataset will be permanently pulled offline because the images were too small for manual inspection and filtering by hand. The lab also admitted it automatically obtained the images from the internet without checking whether any offensive pics or language were ingested into the library, and it urged people to delete their copies of the data.

    “The dataset contains 53,464 different nouns, directly copied over from WordNet," Prof Torralba said referring to Princeton University’s database of English words grouped into related sets. “These were then used to automatically download images of the corresponding noun from internet search engines at the time, using the available filters at the time, to collect the 80 million images.”

    WordNet was built in the mid-1980s at Princeton’s Cognitive Science Laboratory under George Armitage Miller, one of the founders of cognitive psychology. “Miller was obsessed with the relationships between words,” Prabhu told us. “The database essentially maps how words are associated with one another.”

    For example, the words cat and dog are more closely related than cat and umbrella. Unfortunately, some of the nouns in WordNet are racist slang and insults. Now, decades later, with academics and developers using the database as a convenient silo of English words, those terms haunt modern machine learning.
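    WordNet’s notion of relatedness can be pictured as path distance through a hypernym (“is-a”) hierarchy: the fewer hops between two nouns via their closest shared ancestor, the more related they are. A toy sketch of that idea, using a tiny hand-made hierarchy rather than the real WordNet database:

```python
# Toy illustration of WordNet-style relatedness (not the real WordNet API):
# each noun maps to its hypernym (broader term), and two nouns are "closer"
# the shorter the path between them through the hierarchy.
HYPERNYMS = {
    "cat": "carnivore", "dog": "carnivore",
    "carnivore": "animal", "animal": "entity",
    "umbrella": "canopy", "canopy": "artifact", "artifact": "entity",
}

def path_to_root(word):
    """Walk the hypernym chain from a noun up to the root concept."""
    path = [word]
    while word in HYPERNYMS:
        word = HYPERNYMS[word]
        path.append(word)
    return path

def distance(a, b):
    """Number of edges between two nouns via their closest shared ancestor."""
    pa, pb = path_to_root(a), path_to_root(b)
    for i, node in enumerate(pa):
        if node in pb:
            return i + pb.index(node)
    return None

print(distance("cat", "dog"))       # 2: cat -> carnivore <- dog
print(distance("cat", "umbrella"))  # 6: shared ancestor only at "entity"
```

The real WordNet has tens of thousands of synsets and several similarity measures, but the principle is the same: proximity in the hierarchy stands in for semantic relatedness.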

    “When you are building huge datasets, you need some sort of structure,” Birhane told El Reg. “That’s why WordNet is effective. It provides a way for computer-vision researchers to categorize and label their images. Why do that yourself when you could just use WordNet?”

    WordNet may not be so harmful on its own, as a list of words, though when combined with images and AI algorithms, it can have upsetting consequences. “The very aim of that [WordNet] project was to map words that are close to each other,” said Birhane. "But when you begin associating images with those words, you are putting a photograph of a real actual person and associating them with harmful words that perpetuate stereotypes.”

    The fraction of problematic images and labels in these giant datasets is small, and it’s easy to brush them off as anomalies. Yet this material can lead to real harm if it’s used to train machine-learning models that are deployed in the real world, Prabhu and Birhane argued.

    “The absence of critical engagement with canonical datasets disproportionately negatively impacts women, racial and ethnic minorities, and vulnerable individuals and communities at the margins of society,” they wrote in their paper.

    #Intelligence_artificielle #Images #Reconnaissance_image #WordNet #Tiny_images #Deep_learning

  • EU pays for surveillance in Gulf of Tunis

    A new monitoring system for Tunisian coasts should counter irregular migration across the Mediterranean. The German Ministry of the Interior is also active in the country. A similar project in Libya has now been completed. Human rights organisations see it as an aid to „#pull_backs“ contrary to international law.

    In order to control and prevent migration, the European Union is supporting North African states in border surveillance. The central Mediterranean Sea off Malta and Italy, through which asylum seekers from Libya and Tunisia want to reach Europe, plays a special role. The EU conducts various operations in and off these countries, including the military mission „#Irini“ and the #Frontex mission „#Themis“. It is becoming increasingly rare for shipwrecked refugees to be rescued by EU Member States. Instead, EU states assist the coast guards in Libya and Tunisia in bringing the people back. Human rights groups, rescue organisations and lawyers consider this assistance for „pull backs“ to be in violation of international law.

    With several measures, the EU and its member states want to improve the surveillance off North Africa. Together with Switzerland, the EU Commission has financed a two-part „#Integrated_Border_Management Project“ in Tunisia. It is part of the reform of the security sector which was begun a few years after the fall of former head of state Ben Ali in 2011. With one pillar of this programme, the EU wants to „prevent criminal networks from operating“ and enable the authorities in the Gulf of Tunis to „save lives at sea“.

    System for military and border police

    The new installation is entitled „#Integrated_System_for_Maritime_Surveillance“ (#ISMariS) and, according to the Commission (https://www.europarl.europa.eu/doceo/document/E-9-2020-000891-ASW_EN.html), is intended to bring together as much information as possible from all authorities involved in maritime and coastal security tasks. These include the Ministry of Defence with the Navy, the Coast Guard under the Ministry of the Interior, the National Guard, and IT management and telecommunications authorities. The money comes from the #EU_Emergency_Trust_Fund_for_Africa, which was established at the Valletta Migration Summit in 2015. „ISMariS“ is implemented by the Italian Ministry of the Interior and follows on from an earlier Italian initiative. The EU is financing similar projects with „#EU4BorderSecurity“ not only in Tunisia but also for other Mediterranean countries.

    An institute based in Vienna is responsible for border control projects in Tunisia. Although this #International_Centre_for_Migration_Policy_Development (ICMPD) was founded in 1993 by Austria and Switzerland, it is not a governmental organisation. The German Foreign Office has also supported projects in Tunisia within the framework of the #ICMPD, including the establishment of border stations and the training of border guards. Last month Germany finally joined the Institute itself (https://www.andrej-hunko.de/start/download/dokumente/1493-deutscher-beitritt-zum-international-centre-for-migration-policy-development/file). For an annual contribution of 210,000 euro, the Ministry of the Interior not only obtains decision-making privileges for organizing ICMPD projects, but also gives German police authorities the right to evaluate any of the Institute’s analyses for their own purposes.

    It is possible that in the future bilateral German projects for monitoring Tunisian maritime borders will also be carried out via the ICMPD. Last year, the German government supplied the local coast guard with equipment for a boat workshop. In the fourth quarter of 2019 alone (http://dipbt.bundestag.de/doc/btd/19/194/1919467.pdf), the Federal Police carried out 14 training sessions for the national guard, border police and coast guard, including instruction in operating „control boats“. Tunisia previously received patrol boats from Italy and the USA (https://migration-control.info/en/wiki/tunisia).

    Vessel tracking and coastal surveillance

    It is unclear which company produced and installed the „ISMariS“ surveillance system for Tunisia on behalf of the ICMPD. Similar facilities for tracking and displaying ship movements (#Vessel_Tracking_System) are marketed by all major European defence companies, including #Airbus, #Leonardo in Italy, #Thales in France and #Indra in Spain. However, Italian project management will probably prefer local companies such as Leonardo. The company and its spin-off #e-GEOS have a broad portfolio of maritime surveillance systems (https://www.leonardocompany.com/en/sea/maritime-domain-awareness/coastal-surveillance-systems).

    It is also possible to integrate satellite reconnaissance, but for this the governments must conclude further contracts with the companies. However, „ISMariS“ will not only be installed as a Vessel Tracking System, it should also enable monitoring of the entire coast. Manufacturers promote such #Coastal_Surveillance_Systems as a technology against irregular migration, piracy, terrorism and smuggling. The government in Tunisia has defined „priority coastal areas“ for this purpose, which will be integrated into the maritime surveillance framework.

    Maritime „#Big_Data“

    „ISMariS“ is intended to be compatible with the components already in place at the Tunisian authorities, including coastguard command and control systems, #radar, position transponders and receivers, night vision equipment and thermal and optical sensors. Part of the project is a three-year maintenance contract with the company installing the „ISMariS“.

    Perhaps the most important component of „ISMariS“ for the EU is a communication system, which is also included. It is designed to improve „operational cooperation“ between the Tunisian Coast Guard and Navy with Italy and other EU Member States. The project description mentions Frontex and EUROSUR, the pan-European surveillance system of the EU Border Agency, as possible participants. Frontex already monitors the coastal regions off Libya and Tunisia (https://insitu.copernicus.eu/FactSheets/CSS_Border_Surveillance) using #satellites (https://www.europarl.europa.eu/doceo/document/E-8-2018-003212-ASW_EN.html) and an aerial service (https://digit.site36.net/2020/06/26/frontex-air-service-reconnaissance-for-the-so-called-libyan-coast-guar).

    #EUROSUR is now also being upgraded, Frontex is spending 2.6 million Euro (https://ted.europa.eu/udl?uri=TED:NOTICE:109760-2020:TEXT:EN:HTML) on a new application based on artificial intelligence. It is to process so-called „Big Data“, including not only ship movements but also data from ship and port registers, information on ship owners and shipping companies, a multi-year record of previous routes of large ships and other maritime information from public sources on the Internet. The contract is initially concluded for one year and can be extended up to three times.

    Cooperation with Libya

    To connect North African coastguards to EU systems, the EU Commission had started the „#Seahorse_Mediterranean“ project two years after the fall of North African despots. To combat irregular migration, from 2013 onwards Spain, Italy and Malta have trained a total of 141 members of the Libyan coast guard for sea rescue. In this way, „Seahorse Mediterranean“ has complemented similar training measures that Frontex is conducting for the Coastal Police within the framework of the EU mission #EUBAM_Libya and the military mission #EUNAVFOR_MED for the Coast Guard of the Tripoli government.

    The Commission puts the budget for „#Seahorse_Mediterranean“ at 5.5 million Euro (https://www.europarl.europa.eu/doceo/document/E-9-2020-000892-ASW_EN.html); the project was completed in January 2019. According to the German Foreign Office (http://dipbt.bundestag.de/doc/btd/19/196/1919625.pdf), Libya has signed a partnership declaration for participation in a future common communication platform for surveillance of the Mediterranean. Tunisia, Algeria and Egypt are also to be persuaded to participate. So far, however, the governments have preferred unilateral EU support for equipping and training their coastguards and navies, without having to make commitments in projects like „Seahorse“, such as stopping migration and smuggling on the high seas.


    #Golfe_de_Tunis #surveillance #Méditerranée #asile #migrations #réfugiés #militarisation_des_frontières #surveillance_des_frontières #Tunisie #externalisation #complexe_militaro-industriel #Algérie #Egypte #Suisse #EU #UE #Union_européenne #Trust_Fund #Emergency_Trust_Fund_for_Africa #Allemagne #Italie #gardes-côtes #gardes-côtes_tunisiens #intelligence_artificielle #IA #données #Espagne #Malte #business

    ping @reka @isskein @_kg_ @rhoumour @karine4


    Added to this meta-list on the externalisation of borders:

    And to this one on the link between development and border controls:

  • Are we making spacecraft too autonomous? | MIT Technology Review

    Wasn’t the Neil Armstrong syndrome enough for them?

    When SpaceX’s Crew Dragon took NASA astronauts to the ISS near the end of May, the launch brought back a familiar sight. For the first time since the space shuttle was retired, American rockets were launching from American soil to take Americans into space.

    Inside the vehicle, however, things couldn’t have looked more different. Gone was the sprawling dashboard of lights and switches and knobs that once dominated the space shuttle’s interior. All of it was replaced with a futuristic console of multiple large touch screens that cycle through a variety of displays. Behind those screens, the vehicle is run by software that’s designed to get into space and navigate to the space station completely autonomously.

    “Growing up as a pilot, my whole career, having a certain way to control a vehicle—this is certainly different,” Doug Hurley told NASA TV viewers shortly before the SpaceX mission. Instead of calling for a hand on the control stick, navigation is now a series of predetermined inputs. The SpaceX astronauts may still be involved in decision-making at critical junctures, but much of that function has moved out of their hands.

    But overrelying on software and autonomous systems in spaceflight creates new opportunities for problems to arise. That’s especially a concern for many of the space industry’s new contenders, who aren’t necessarily used to the kind of aggressive and comprehensive testing needed to weed out problems in software and are still trying to strike a good balance between automation and manual control.

    Nowadays, a few errors in over one million lines of code could spell the difference between mission success and mission failure. We saw that late last year, when Boeing’s Starliner capsule (the other vehicle NASA is counting on to send American astronauts into space) failed to make it to the ISS because of a glitch in its internal timer. A human pilot could have overridden the glitch that ended up burning Starliner’s thrusters prematurely. NASA administrator Jim Bridenstine remarked soon after Starliner’s problems arose: “Had we had an astronaut on board, we very well may be at the International Space Station right now.”

    But it was later revealed that many other errors in the software had not been caught before launch, including one that could have led to the destruction of the spacecraft. And that was something human crew members could easily have overridden.

    Boeing is certainly no stranger to building and testing spaceflight technologies, so it was a surprise to see the company fail to catch these problems before the Starliner test flight. “Software defects, particularly in complex spacecraft code, are not unexpected,” NASA said when the second glitch was made public. “However, there were numerous instances where the Boeing software quality processes either should have or could have uncovered the defects.” Boeing declined a request for comment.

    Space, however, is a unique environment to test for. The conditions a spacecraft will encounter aren’t easy to emulate on the ground. While an autonomous vehicle can be taken out of the simulator and eased into lighter real-world conditions to refine the software little by little, you can’t really do the same thing for a launch vehicle. Launch, spaceflight, and a return to Earth are actions that either happen or they don’t—there is no “light” version.

    This, says Schreier, is why AI is such a big deal in spaceflight nowadays—you can develop an autonomous system that is capable of anticipating those conditions, rather than requiring the conditions to be learned during a specific simulation. “You couldn’t possibly simulate on your own all the corner cases of the new hardware you’re designing,” he says.

    Raines adds that in contrast to the slower approach NASA takes for testing, private companies are able to move much more rapidly. For some, like SpaceX, this works out well. For others, like Boeing, it can lead to some surprising hiccups.

    Ultimately, “the worst thing you can do is make something fully manual or fully autonomous,” says Nathan Uitenbroek, another NASA engineer working on Orion’s software development. Humans have to be able to intervene if the software is glitching up or if the computer’s memory is destroyed by an unanticipated event (like a blast of cosmic rays). But they also rely on the software to inform them when other problems arise.

    NASA is used to figuring out this balance, and it has redundancy built into its crewed vehicles. The space shuttle operated on multiple computers using the same software, and if one had a problem, the others could take over. A separate computer ran on entirely different software, so it could take over the entire spacecraft if a systemic glitch was affecting the others. Raines and Uitenbroek say the same redundancy is used on Orion, which also includes a layer of automatic function that bypasses the software entirely for critical functions like parachute release.
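    The shuttle-style arrangement of several computers running identical software, with a dissenting faulty machine outvoted by the others, is essentially majority voting. A minimal sketch of that pattern (an illustration of the concept, not NASA’s actual flight code):

```python
from collections import Counter

def vote(outputs):
    """Majority vote across the outputs of redundant computers.
    Returns the agreed value, or None when no majority exists --
    in this sketch, the cue to fail over to the backup computer
    running independently written software."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) / 2 else None

print(vote(["deploy", "deploy", "deploy", "hold"]))  # "deploy": the faulty computer is outvoted
print(vote(["deploy", "hold", "abort"]))             # None: no majority, fall back
```

The separate backup running different software guards against the failure mode voting cannot catch: a systemic bug that makes all the identical computers agree on the wrong answer.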

    On the Crew Dragon, there are instances where astronauts can manually initiate abort sequences, and where they can override software on the basis of new inputs. But the design of these vehicles means it’s more difficult now for the human to take complete control. The touch-screen console is still tied to the spacecraft’s software, and you can’t just bypass it entirely when you want to take over the spacecraft, even in an emergency.

    #Espace #Logiciel #Intelligence_artificielle #Sécurité

  • If you still thought AIs can’t be racist, here is fresh proof

    Far from the clichés surrounding artificial intelligence, these examples show how limited AIs still are, and how their errors can have serious consequences for people’s lives, above all when those people are not white.

    #intelligence_artificielle #racisme

  • Dr. Ay. Poulain Maubant on Twitter: “On the racist biases of poorly trained AIs”

    Ever since someone noticed that a recent neural network, capable of turning a heavily pixelated face into a realistic one, was systematically producing Caucasian faces, experiments demonstrating this #IA’s bias have been multiplying.

    The whole thread, full of concrete examples, is worth reading.

    #Intelligence_artificielle #Big_data #Deep_learning #Biais_raciste

  • The concerns of the #Défenseur_des_droits about the #automatisation of #discriminations

    In 2018, a US study showed that some facial-recognition systems made numerous errors as soon as they had to identify women of colour. Why? The database on which the artificial intelligence “trained” was overwhelmingly dominated by white male profiles. This example is one of those cited by the Défenseur des droits in a 10-page note, published after a conference co-organised with the #Cnil at the end of May and devoted to the risks of automated discrimination generated by #algorithmes.


    #biais #intelligence_artificielle #parcoursup #ségrégation #rgpd

  • Of course technology perpetuates racism. It was designed that way. | MIT Technology Review

    We often call on technology to help solve problems. But when society defines, frames, and represents people of color as “the problem,” those solutions often do more harm than good. We’ve designed facial recognition technologies that target criminal suspects on the basis of skin color. We’ve trained automated risk profiling systems that disproportionately identify Latinx people as illegal immigrants. We’ve devised credit scoring algorithms that disproportionately identify black people as risks and prevent them from buying homes, getting loans, or finding jobs.

    So the question we have to confront is whether we will continue to design and deploy tools that serve the interests of racism and white supremacy ...

    Of course, it’s not a new question at all.

    As part of a DARPA project aimed at turning the tide of the Vietnam War, Pool’s company had been hard at work preparing a massive propaganda and psychological campaign against the Vietcong. President Johnson was eager to deploy Simulmatics’s behavioral influence technology to quell the nation’s domestic threat, not just its foreign enemies. Under the guise of what they called a “media study,” Simulmatics built a team for what amounted to a large-scale surveillance campaign in the “riot-affected areas” that captured the nation’s attention that summer of 1967.

    Three-member teams went into areas where riots had taken place that summer. They identified and interviewed strategically important black people. They followed up to identify and interview other black residents, in every venue from barbershops to churches. They asked residents what they thought about the news media’s coverage of the “riots.” But they collected data on so much more, too: how people moved in and around the city during the unrest, who they talked to before and during, and how they prepared for the aftermath. They collected data on toll booth usage, gas station sales, and bus routes. They gained entry to these communities under the pretense of trying to understand how news media supposedly inflamed “riots.” But Johnson and the nation’s political leaders were trying to solve a problem. They aimed to use the information that Simulmatics collected to trace information flow during protests to identify influencers and decapitate the protests’ leadership.

    They didn’t accomplish this directly. They did not murder people, put people in jail, or secretly “disappear” them.

    But by the end of the 1960s, this kind of information had helped create what came to be known as “criminal justice information systems.” They proliferated through the decades, laying the foundation for racial profiling, predictive policing, and racially targeted surveillance. They left behind a legacy that includes millions of black and brown women and men incarcerated.

    #Racisme #Intelligence_artificielle #capitalisme_surveillance #surveillance

  • Tracing and merging

    As part of the fight against contagion, everything is pushing towards stronger tracing through the merging of databases. Yet such merging is the new business model of online #réseaux_sociaux platforms, propelled by artificial intelligence and by the design of new institutions.

    #Société #police #big_data #intelligence_artificielle #surveillance

  • Trump’s Executive Order Isn’t About Twitter - The Atlantic

    By Zeynep Tufekci

    In reality, Trump’s salvo on social-media companies has primarily an audience of one: Mark Zuckerberg. And it is already working. After the executive order was issued, Facebook’s CEO quickly gave an interview to Fox News in which he said, “I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online.” He added, “Private companies probably shouldn’t be, especially these platform companies, shouldn’t be in the position of doing that.”

    It’s important to pay attention to what the president is doing, but not because the legal details of this order matter at all. Trump is unlikely to repeal Section 230 or take any real action to curb the power of the major social-media companies. Instead, he wants to keep things just the way they are and make sure that the red-carpet treatment he has received so far, especially at Facebook, continues without impediment. He definitely does not want substantial changes going into the 2020 election. The secondary aim is to rile up his base against yet another alleged enemy: this time Silicon Valley, because there needs to be an endless list of targets in the midst of multiple failures.

    Trump does very well on Facebook, as my colleagues Ian Bogost and Alexis Madrigal have written, because “his campaign has been willing to cede control to Facebook’s ad-buying machinery”—both now, and in 2016. The relationship is so smooth that Trump said Zuckerberg congratulated the president for being “No. 1 on Facebook” at a private dinner with him. Bloomberg has reported that Facebook’s own data-science team agreed, publishing an internal report concluding how much better Trump was in leveraging “Facebook’s ability to optimize for outcomes.” This isn’t an unusual move for Facebook and its clients. Bloomberg has reported that Facebook also offered its “white glove” services to the Philippine strongman Rodrigo Duterte, to help him “maximize the platform’s potential and use best practices.” Duterte dominated political conversation on the site the month before the Philippines’ May 2016 presidential election. And once elected, Duterte banned independent press from attending his inauguration, instead live-streaming it on Facebook—a win-win for the company, which could then collect data from and serve ads to the millions who had little choice but to turn to the site if they wanted to see their president take office. (Duterte has since been accused of extrajudicial killings, jailing political opponents, and targeting independent media.)

    Playing the refs by browbeating them has long been a key move in the right-wing playbook against traditional media. The method is simple: It involves badgering them with accusations of unfairness and bias so that they bend over backwards to accommodate a “both sides” narrative even when the sides were behaving very differently, or when one side was not grounded in fact. Climate-change deniers funded by fossil-fuel companies effectively used this strategy for decades, relying on journalists’ training and instinct to equate objectivity with representing both sides of a story. This way of operating persisted even when one of the sides was mostly bankrolled by the fossil-fuel industry while the other was a near-unanimous consensus of independent experts and academics.

    For Facebook, that gatekeeper is a single person, Mark Zuckerberg. Facebook’s young CEO is an emperor of information who decides rules of amplification and access to speech for billions of people, simply due to the way ownership of Facebook shares is structured: Zuckerberg personally controls 60 percent of the voting power. And just like the way people try to get on or advertise on the president’s seemingly favorite TV show, Fox & Friends, merely to reach him, Trump is clearly aiming to send a message to his one-person target.

    As a consequence, Facebook became cautious of taking actions that would make it look like it was holding back right-wing information machinery. That was the environment in which the country headed into the 2016 election—five months during which all stripes of misinformation went easily viral on Facebook, including stories that falsely claimed that the pope had endorsed Donald Trump, or that Hillary Clinton had sold weapons to the Islamic State. These stories were viewed millions of times on the platform, many of them outperforming traditional news sources. The pressure to keep Facebook friendly to the Trump campaign continued unabated after the election. When Facebook appeared to be considering changes to its microtargeting rules in 2019—for example, not allowing political campaigns to use the same level of microtargeting tools that product advertisers can, a potential strike at “a major Trump ad strategy”—the Trump reelection campaign swiftly attacked the platform, and the rules were left unchanged.

    Silicon Valley engineers and employees may well be overwhelmingly liberal, but Facebook is run by the algorithms they program, which optimize for the way the site makes money, rather than sifting through posts one by one. This is probably why the trending-topics controversy seemed like such a big hit: It took the one tiny section where humans had some minor input and portrayed the whole platform as working the same way. The employees may be liberal, but the consequences of how social-media companies operate are anything but. In 2016, for example, Facebook, Twitter, and Google all “embedded” staffers with both campaigns, without charge, helping them use the sites better and get more out of the many millions of dollars they spent on the platforms. However, this was especially helpful to the Trump campaign, an upstart with a bare-bones staff. Unsurprisingly, the “bulk of Silicon Valley’s hands-on campaign support went to Trump rather than to Clinton.”

    Trump and his campaign understood the power of Facebook better than the Clinton campaign, and formed a mutually beneficial relationship. Trump spent $44 million on the site, compared with the Clinton campaign’s $28 million, but ad money is only part of the story. A key role of Facebook is promoting organic content: posts, not ads, written by people who may range from partisans to campaign operatives to opportunists who just want the clicks. Some of the authors of these viral pages are motivated by promoting their ideology. Others are just grifters, using Facebook to maximize their spread so that they can collect ad money from their own webpage—which probably uses Google’s industry-dominating ad infrastructure. It’s a complete circle of back-scratching that is rarely commented on or known outside of a small number of experts and industry practitioners.

    The Trump campaign also made better use of Facebook’s own artificial-intelligence tools, like “lookalike audiences”—a crucial functionality that lets advertisers find many new people that Facebook predicts will act similarly to a small “custom” audience uploaded to the site. In other words, if you upload a list of a few thousand people who are open to your message, whether it is interest in a harmless hobby or incendiary claims against a political opponent, Facebook’s vast surveillance machinery, giant databases, and top-of-the line artificial-intelligence tools can help you find many, many more similar targets—which you can reach as long as you’re willing to pay Facebook. These are the kinds of advanced functions that Facebook makes easy to use, and staffers embedded with the Trump campaign would be able to explain and help with.
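    Facebook’s lookalike machinery is proprietary, but conceptually it is a similarity search: score everyone in a large population against the uploaded seed audience and keep the closest matches. A hypothetical sketch with made-up names and feature vectors (none of this is Facebook’s actual API):

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def lookalikes(seed, population, k):
    """Rank candidate users by average similarity to the seed audience."""
    def score(user):
        _, features = user
        return sum(cosine(features, s) for s in seed) / len(seed)
    return [uid for uid, _ in sorted(population, key=score, reverse=True)[:k]]

seed = [(1.0, 0.0), (0.9, 0.1)]    # the uploaded "custom audience", as features
population = [
    ("alice", (1.0, 0.1)),         # behaves much like the seed users
    ("bob",   (0.0, 1.0)),         # very different behavioural profile
    ("carol", (0.8, 0.0)),         # also seed-like
]
print(lookalikes(seed, population, k=2))  # ['alice', 'carol']
```

The real system works over vastly richer behavioural data and learned embeddings, but the commercial logic is the same: a small seed list plus a surveillance-scale population yields many more reachable targets.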

    #Zeynep_Tufekci #Facebook #Publicité_politique #Trump #Intelligence_artificielle

  • WhiteHall Analytica: AI, surveillance companies and public health – by Nafeez Ahmed

    Source: Byline Times – Nafeez Ahmed. Part one of a major investigation by the investigative journalist Nafeez Ahmed into a significant convergence of men, interests and funds around the question of medical data. The Covid-19 health crisis is enriching a network of surveillance companies linked to senior officials of the […]

    #Santé #Données_Personnelles #Intelligence_artificielle #Surveillance_de_masse

  • Facebook’s AI is still largely baffled by covid misinformation | MIT Technology Review

    So AI apparently isn’t up to the job of content moderation. It takes humans to understand humanity. What a miraculous discovery. We really are in the 21st century, I suppose.

    The news: In its latest Community Standards Enforcement Report, released today, Facebook detailed the updates it has made to its AI systems for detecting hate speech and disinformation. The tech giant says 88.8% of all the hate speech it removed this quarter was detected by AI, up from 80.2% in the previous quarter. The AI can remove content automatically if the system has high confidence that it is hate speech, but most is still checked by a human being first.
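    The high-confidence/auto-remove split described above amounts to a triage rule. The sketch below is illustrative only: the thresholds and the word-count "classifier" are invented stand-ins, not Facebook's actual system, which uses large trained language models:

    ```python
    AUTO_REMOVE_THRESHOLD = 0.95  # assumed value; Facebook does not publish its thresholds

    def triage(posts, classifier, threshold=AUTO_REMOVE_THRESHOLD):
        """Route each post: auto-remove high-confidence hate-speech scores,
        queue mid-confidence posts for human review, leave the rest up."""
        decisions = {}
        for post_id, text in posts.items():
            score = classifier(text)
            if score >= threshold:
                decisions[post_id] = "auto-remove"
            elif score >= 0.5:
                decisions[post_id] = "human-review"
            else:
                decisions[post_id] = "keep"
        return decisions

    # Stand-in classifier: counts placeholder flagged terms.
    def toy_classifier(text):
        flagged = {"slur_a", "slur_b"}
        hits = sum(w in flagged for w in text.lower().split())
        return min(1.0, 0.55 * hits)

    posts = {1: "hello world", 2: "slur_a you", 3: "slur_a slur_b nonsense"}
    print(triage(posts, toy_classifier))
    ```

    The "most is still checked by a human being first" claim corresponds to the middle branch: only the top of the confidence distribution bypasses review.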

    Behind the scenes: The improvement is largely driven by two updates to Facebook’s AI systems. First, the company is now using massive natural-language models that can better decipher the nuance and meaning of a post. These models build on advances in AI research within the last two years that allow neural networks to be trained on language without any human supervision, getting rid of the bottleneck caused by manual data curation.

    The second update is that Facebook’s systems can now analyze content that consists of images and text combined, such as hateful memes. AI is still limited in its ability to interpret such mixed-media content, but Facebook has also released a new data set of hateful memes and launched a competition to help crowdsource better algorithms for detecting them.
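    One reason mixed-media content is hard is that each modality can look benign in isolation, while the combination is hateful. A toy late-fusion scorer, with invented scores and an assumed interaction bonus, illustrates that idea (real systems fuse learned embeddings, not scalar scores):

    ```python
    def fuse_scores(text_score, image_score, interaction_bonus=0.25):
        """Late-fusion sketch: a meme can be mild in text and image separately
        but hateful in combination, so joint moderate evidence is boosted."""
        base = max(text_score, image_score)
        # If both modalities carry some signal, the combination is more
        # suspicious than either alone -- the core hateful-memes difficulty.
        if text_score > 0.3 and image_score > 0.3:
            base = min(1.0, base + interaction_bonus)
        return base

    print(fuse_scores(0.9, 0.1))   # text alone carries the signal
    print(fuse_scores(0.5, 0.5))   # joint evidence boosted above either input
    ```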

    Covid lies: Despite these updates, however, AI hasn’t played as big a role in handling the surge of coronavirus misinformation, such as conspiracy theories about the virus’s origin and fake news of cures. Facebook has instead relied primarily on human reviewers at over 60 partner fact-checking organizations. Only once a person has flagged something, such as an image with a misleading headline, do AI systems take over to search for identical or similar items and automatically add warning labels or take them down. The team hasn’t yet been able to train a machine-learning model to find new instances of disinformation itself. “Building a novel classifier for something that understands content it’s never seen before takes time and a lot of data,” Mike Schroepfer, Facebook’s CTO, said on a press call.
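    The flag-then-match workflow (a human flags one item, automated systems then sweep for identical or near-identical copies) is commonly built on perceptual hashing. This minimal difference-hash sketch over tiny grayscale grids is illustrative, not Facebook's implementation; the hash size and distance threshold are arbitrary:

    ```python
    def dhash(pixels):
        """Difference hash of a grayscale image given as rows of ints:
        one bit per horizontally adjacent pixel pair."""
        bits = []
        for row in pixels:
            bits.extend(int(a < b) for a, b in zip(row, row[1:]))
        return bits

    def hamming(h1, h2):
        return sum(b1 != b2 for b1, b2 in zip(h1, h2))

    def find_near_duplicates(flagged, corpus, max_distance=1):
        """Once a human flags one item, sweep the corpus for near-copies."""
        target = dhash(flagged)
        return [name for name, img in corpus.items()
                if hamming(dhash(img), target) <= max_distance]

    flagged = [[10, 20, 30], [30, 20, 10]]
    corpus = {
        "copy": [[10, 20, 30], [30, 20, 10]],           # identical
        "recompressed": [[11, 21, 30], [30, 19, 10]],   # same brightness gradients
        "unrelated": [[90, 10, 80], [5, 99, 1]],
    }
    print(find_near_duplicates(flagged, corpus))
    ```

    This captures why matching known items is easy while finding *new* misinformation is hard: the hash only recognises what has already been flagged.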

    Why it matters: The challenge reveals the limitations of AI-based content moderation. Such systems can detect content similar to what they’ve seen before, but they founder when new kinds of misinformation appear. In recent years, Facebook has invested heavily in developing AI systems that can adapt more quickly, but the problem is not just the company’s: it remains one of the biggest research challenges in the field.

    #Intelligence_artificielle #Facebook #Modération

  • Fooling Facial Detection with Fashion

    Usage of facial recognition is on the rise. With the recent debates over the ethics of facial recognition, potential adversarial attacks against facial detection have been on my mind. Facial recognition is being used everywhere, from airports to social media. It seems to be near impossible to opt out of having your face scanned.

    An ideal attack on facial detection would be an article of clothing that looks inconspicuous to the uninformed. With inspiration from the Hyperface project, I decided to research and implement a wearable adversarial example. In this article I’ll detail the process of creating an adversarial image to fool a selected type of facial detection, and how I implemented a practical example on a face mask.
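    The general recipe behind such attacks is to perturb an input iteratively until the detector's confidence falls below its decision threshold. The sketch below uses a deliberately trivial stand-in "detector" (mean brightness); real attacks on Haar cascades or CNN detectors run the same loop guided by gradients or score feedback:

    ```python
    def toy_face_score(patch):
        """Stand-in 'detector': normalised mean brightness of a patch.
        Real detectors (Haar cascades, CNNs) are far more complex."""
        flat = [p for row in patch for p in row]
        return sum(flat) / len(flat) / 255.0

    def adversarial_perturb(patch, detector, threshold=0.5, step=-40):
        """Greedily perturb the most influential pixel until the detector's
        score drops below its decision threshold."""
        patch = [row[:] for row in patch]  # work on a copy
        while detector(patch) >= threshold:
            # For this toy detector the brightest pixel has the most influence.
            i, j = max(((r, c) for r in range(len(patch))
                        for c in range(len(patch[0]))),
                       key=lambda rc: patch[rc[0]][rc[1]])
            if patch[i][j] == 0:
                break  # nothing left to perturb
            patch[i][j] = max(0, patch[i][j] + step)
        return patch

    face_like = [[200, 210], [190, 205]]   # invented 2x2 'image'
    fooled = adversarial_perturb(face_like, toy_face_score)
    print(toy_face_score(face_like), toy_face_score(fooled))
    ```

    Printing an adversarial pattern on fabric, as in the article, additionally requires the perturbation to survive printing, lighting and viewing-angle changes, which is what makes wearable attacks harder than digital ones.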

    #surveillance #vidéo-surveillance #reconnaissance_faciale #Hyperface_project #biométrie #CCTV #algorithme #Surveillance #intelligence_artificielle

  • Monitoring being pitched to fight Covid-19 was tested on refugees

    The pandemic has given a boost to controversial data-driven initiatives to track population movements

    In Italy, social media monitoring companies have been scouring Instagram to see who’s breaking the nationwide lockdown. In Israel, the government has made plans to “sift through geolocation data” collected by the Shin Bet intelligence agency and text people who have been in contact with an infected person. And in the UK, the government has asked mobile operators to share phone users’ aggregate location data to “help to predict broadly how the virus might move”.

    These efforts are just the most visible tip of a rapidly evolving industry combining the exploitation of data from the internet and mobile phones and the increasing number of sensors embedded on Earth and in space. Data scientists are intrigued by the new possibilities for behavioural prediction that such data offers. But they are also coming to terms with the complexity of actually using these data sets, and the ethical and practical problems that lurk within them.

    In the wake of the refugee crisis of 2015, tech companies and research consortiums pushed to develop projects using new data sources to predict movements of migrants into Europe. These ranged from broad efforts to extract intelligence from public social media profiles by hand, to more complex automated manipulation of big data sets through image recognition and machine learning. Two recent efforts have just been shut down, however, and others are yet to produce operational results.

    While IT companies and some areas of the humanitarian sector have applauded new possibilities, critics cite human rights concerns, or point to limitations in what such technological solutions can actually achieve.

    In September last year Frontex, the European border security agency, published a tender for “social media analysis services concerning irregular migration trends and forecasts”. The agency was offering the winning bidder up to €400,000 for “improved risk analysis regarding future irregular migratory movements” and support of Frontex’s anti-immigration operations.

    Frontex “wants to embrace” opportunities arising from the rapid growth of social media platforms, a contracting document outlined. The border agency believes that social media interactions drastically change the way people plan their routes, and thus examining would-be migrants’ online behaviour could help it get ahead of the curve, since these interactions typically occur “well before persons reach the external borders of the EU”.

    Frontex asked bidders to develop lists of key words that could be mined from platforms like Twitter, Facebook, Instagram and YouTube. The winning company would produce a monthly report containing “predictive intelligence ... of irregular flows”.

    Early this year, however, Frontex cancelled the opportunity. It followed swiftly on from another shutdown; Frontex’s sister agency, the European Asylum Support Office (EASO), had fallen foul of the European data protection watchdog, the EDPS, for searching social media content from would-be migrants.

    The EASO had been using the data to flag “shifts in asylum and migration routes, smuggling offers and the discourse among social media community users on key issues – flights, human trafficking and asylum systems/processes”. The search covered a broad range of languages, including Arabic, Pashto, Dari, Urdu, Tigrinya, Amharic, Edo, Pidgin English, Russian, Kurmanji Kurdish, Hausa and French.

    Although the EASO’s mission, as its name suggests, is centred around support for the asylum system, its reports were widely circulated, including to organisations that attempt to limit illegal immigration – Europol, Interpol, member states and Frontex itself.

    In shutting down the EASO’s social media monitoring project, the watchdog cited numerous concerns about process, the impact on fundamental rights and the lack of a legal basis for the work.

    “This processing operation concerns a vast number of social media users,” the EDPS pointed out. Because EASO’s reports are read by border security forces, there was a significant risk that data shared by asylum seekers to help others travel safely to Europe could instead be unfairly used against them without their knowledge.

    Social media monitoring “poses high risks to individuals’ rights and freedoms,” the regulator concluded in an assessment it delivered last November. “It involves the use of personal data in a way that goes beyond their initial purpose, their initial context of publication and in ways that individuals could not reasonably anticipate. This may have a chilling effect on people’s ability and willingness to express themselves and form relationships freely.”

    EASO told the Bureau that the ban had “negative consequences” on “the ability of EU member states to adapt the preparedness, and increase the effectiveness, of their asylum systems” and also noted a “potential harmful impact on the safety of migrants and asylum seekers”.

    Frontex said that its social media analysis tender was cancelled after new European border regulations came into force, but added that it was considering modifying the tender in response to these rules.

    The two shutdowns represented a stumbling block for efforts to track population movements via new technologies and sources of data. But the public health crisis precipitated by the Covid-19 virus has brought such efforts abruptly to wider attention. In doing so it has cast a spotlight on a complex knot of issues. What information is personal, and legally protected? How does that protection work? What do concepts like anonymisation, privacy and consent mean in an age of big data?

    The shape of things to come

    International humanitarian organisations have long been interested in whether they can use nontraditional data sources to help plan disaster responses. As they often operate in inaccessible regions with little available or accurate official data about population sizes and movements, they can benefit from using new big data sources to estimate how many people are moving where. In particular, as well as using social media, recent efforts have sought to combine insights from mobile phones – a vital possession for a refugee or disaster survivor – with images generated by “Earth observation” satellites.

    “Mobiles, satellites and social media are the holy trinity of movement prediction,” said Linnet Taylor, professor at the Tilburg Institute for Law, Technology and Society in the Netherlands, who has been studying the privacy implications of such new data sources. “It’s the shape of things to come.”

    As the devastating impact of the Syrian civil war worsened in 2015, Europe saw itself in crisis. Refugee movements dominated the headlines and while some countries, notably Germany, opened up to more arrivals than usual, others shut down. European agencies and tech companies started to team up with a new offering: a migration hotspot predictor.

    Controversially, they were importing a concept drawn from distant catastrophe zones into decision-making on what should happen within the borders of the EU.

    “Here’s the heart of the matter,” said Nathaniel Raymond, a lecturer at the Yale Jackson Institute for Global Affairs who focuses on the security implications of information communication technologies for vulnerable populations. “In ungoverned frontier cases [European data protection law] doesn’t apply. Use of these technologies might be ethically safer there, and in any case it’s the only thing that is available. When you enter governed space, data volume and ease of manipulation go up. Putting this technology to work in the EU is a total inversion.”

    Justin Ginnetti, head of data and analysis at the Internal Displacement Monitoring Centre in Switzerland, made a similar point. His organisation monitors movements to help humanitarian groups provide food, shelter and aid to those forced from their homes, but he casts a skeptical eye on governments using the same technology in the context of migration.

    “Many governments – within the EU and elsewhere – are very interested in these technologies, for reasons that are not the same as ours,” he told the Bureau. He called such technologies “a nuclear fly swatter,” adding: “The key question is: What problem are you really trying to solve with it? For many governments, it’s not preparing to ‘better respond to inflow of people’ – it’s raising red flags, to identify those en route and prevent them from arriving.”

    Eye in the sky

    A key player in marketing this concept was the European Space Agency (ESA) – an organisation based in Paris, with a major spaceport in French Guiana. The ESA’s pitch was to combine its space assets with other people’s data. “Could you be leveraging space technology and data for the benefit of life on Earth?” a recent presentation from the organisation on “disruptive smart technologies” asked. “We’ll work together to make your idea commercially viable.”

    By 2016, technologists at the ESA had spotted an opportunity. “Europe is being confronted with the most significant influxes of migrants and refugees in its history,” a presentation for their Advanced Research in Telecommunications Systems Programme stated. “One burning issue is the lack of timely information on migration trends, flows and rates. Big data applications have been recognised as a potentially powerful tool.” It decided to assess how it could harness such data.

    The ESA reached out to various European agencies, including EASO and Frontex, to offer a stake in what it called “big data applications to boost preparedness and response to migration”. The space agency would fund initial feasibility stages, but wanted any operational work to be jointly funded.

    One such feasibility study was carried out by GMV, a privately owned tech group covering banking, defence, health, telecommunications and satellites. GMV announced in a press release in August 2017 that the study would “assess the added value of big data solutions in the migration sector, namely the reduction of safety risks for migrants, the enhancement of border controls, as well as prevention and response to security issues related with unexpected migration movements”. It would do this by integrating “multiple space assets” with other sources including mobile phones and social media.

    When contacted by the Bureau, a spokeswoman from GMV said that, contrary to the press release, “nothing in the feasibility study related to the enhancement of border controls”.

    In the same year, the technology multinational CGI teamed up with the Dutch Statistics Office to explore similar questions. They started by looking at data around asylum flows from Syria and at how satellite images and social media could indicate changes in migration patterns in Niger, a key route into Europe. Following this experiment, they approached EASO in October 2017. CGI’s presentation of the work noted that at the time EASO was looking for a social media analysis tool that could monitor Facebook groups, predict arrivals of migrants at EU borders, and determine the number of “hotspots” and migrant shelters. CGI pitched a combined project, co-funded by the ESA, to start in 2019 and expand to serve more organisations in 2020.

    The idea was called Migration Radar 2.0. The ESA wrote that “analysing social media data allows for better understanding of the behaviour and sentiments of crowds at a particular geographic location and a specific moment in time, which can be indicators of possible migration movements in the immediate future”. Combined with continuous monitoring from space, the result would be an “early warning system” that offered potential future movements and routes, “as well as information about the composition of people in terms of origin, age, gender”.

    Internal notes released by EASO to the Bureau show the sheer range of companies trying to get a slice of the action. The agency had considered offers of services not only from the ESA, GMV, the Dutch Statistics Office and CGI, but also from BIP, a consulting firm, the aerospace group Thales Alenia, the geoinformation specialist EGEOS and Vodafone.

    Some of the pitches were better received than others. An EASO analyst who took notes on the various proposals remarked that “most oversell a bit”. They went on: “Some claimed they could trace GSM [ie mobile networks] but then clarified they could do it for Venezuelans only, and maybe one or two countries in Africa.” Financial implications were not always clearly provided. On the other hand, the official noted, the ESA and its consortium would pay 80% of costs and “we can get collaboration on something we plan to do anyway”.

    The features on offer included automatic alerts, a social media timeline, sentiment analysis, “animated bubbles with asylum applications from countries of origin over time”, the detection and monitoring of smuggling sites, hotspot maps, change detection and border monitoring.

    The document notes a group of services available from Vodafone, for example, in the context of a proposed project to monitor asylum centres in Italy. The proposal was to identify “hotspot activities”, using phone data to group individuals either by nationality or “according to where they spend the night”, and also to test if their movements into the country from abroad could be back-tracked. A tentative estimate for the cost of a pilot project, spread over four municipalities, came to €250,000 – of which an unspecified amount was for “regulatory (privacy) issues”.
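    The "where they spend the night" grouping described in the proposal is conceptually simple: infer each phone's home cell as the tower it is most often seen at during night hours, then cluster phones by that cell. A sketch with invented (hour-of-day, cell-tower) sightings:

    ```python
    from collections import Counter, defaultdict

    def night_location(events, night_hours=range(0, 6)):
        """Infer a phone's 'home' cell as the modal cell tower seen at night."""
        night_cells = [cell for hour, cell in events if hour in night_hours]
        return Counter(night_cells).most_common(1)[0][0] if night_cells else None

    def group_by_night_location(phones):
        """Group phone IDs by inferred night location -- the kind of clustering
        the proposal described."""
        groups = defaultdict(list)
        for phone_id, events in phones.items():
            groups[night_location(events)].append(phone_id)
        return dict(groups)

    # Hypothetical sightings: (hour-of-day, cell-tower id) per phone.
    phones = {
        "A": [(2, "cell_1"), (3, "cell_1"), (14, "cell_9")],
        "B": [(1, "cell_1"), (4, "cell_1"), (12, "cell_7")],
        "C": [(2, "cell_5"), (5, "cell_5"), (15, "cell_9")],
    }
    print(group_by_night_location(phones))
    ```

    That so little machinery is needed is precisely why the "regulatory (privacy) issues" line item matters: the technique trivially re-identifies where people sleep.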

    Stumbling blocks

    Elsewhere, efforts to harness social media data for similar purposes were proving problematic. A September 2017 UN study tried to establish whether analysing social media posts, specifically on Twitter, “could provide insights into ... altered routes, or the conversations PoC [“persons of concern”] are having with service providers, including smugglers”. The hypothesis was that this could “better inform the orientation of resource allocations, and advocacy efforts” – but the study was unable to conclude either way, after failing to identify enough relevant data on Twitter.

    The ESA pressed ahead, with four feasibility studies concluding in 2018 and 2019. The Migration Radar project produced a dashboard that showcased the use of satellite imagery for automatically detecting changes in temporary settlement, as well as tools to analyse sentiment on social media. The prototype received positive reviews, its backers wrote, encouraging them to keep developing the product.
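    Automatic change detection of the kind the dashboard showcased reduces, in its simplest form, to differencing two co-registered images and flagging cells whose brightness shifted beyond a threshold. A toy grid version (the grid, values and threshold are invented; operational systems work on multispectral imagery with far more sophisticated models):

    ```python
    def changed_cells(before, after, threshold=30):
        """Flag grid cells whose brightness shifted more than a threshold
        between two co-registered images -- e.g. a tent camp appearing."""
        return [(i, j)
                for i, (row_b, row_a) in enumerate(zip(before, after))
                for j, (b, a) in enumerate(zip(row_b, row_a))
                if abs(a - b) > threshold]

    before = [[50, 52], [49, 51]]
    after  = [[50, 120], [48, 51]]   # one cell brightened sharply
    print(changed_cells(before, after))  # [(0, 1)]
    ```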

    CGI was effusive about the predictive power of its technology, which could automatically detect “groups of people, traces of trucks at unexpected places, tent camps, waste heaps and boats” while offering insight into “the sentiments of migrants at certain moments” and “information that is shared about routes and motives for taking certain routes”. Armed with this data, the company argued that it could create a service which could predict the possible outcomes of migration movements before they happened.

    The ESA’s other “big data applications” study had identified a demand among EU agencies and other potential customers for predictive analyses to ensure “preparedness” and alert systems for migration events. A package of services was proposed, using data drawn from social media and satellites.

    Both projects were slated to evolve into a second, operational phase. But this seems to have never become reality. CGI told the Bureau that “since the completion of the [Migration Radar] project, we have not carried out any extra activities in this domain”.

    The ESA told the Bureau that its studies had “confirmed the usefulness” of combining space technology and big data for monitoring migration movements. The agency added that its corporate partners were working on follow-on projects despite “internal delays”.

    EASO itself told the Bureau that it “took a decision not to get involved” in the various proposals it had received.

    But even as these efforts slowed, others have been pursuing similar goals. The European Commission’s Knowledge Centre on Migration and Demography has proposed a “Big Data for Migration Alliance” to address data access, security and ethics concerns. A new partnership between the ESA and GMV – “Bigmig” – aims to support “migration management and prevention” through a combination of satellite observation and machine-learning techniques (the company emphasised to the Bureau that its focus was humanitarian). And a consortium of universities and private sector partners – GMV among them – has just launched a €3 million EU-funded project, named Hummingbird, to improve predictions of migration patterns, including through analysing phone call records, satellite imagery and social media.

    At a conference in Berlin in October 2019, dozens of specialists from academia, government and the humanitarian sector debated the use of these new technologies for “forecasting human mobility in contexts of crises”. Their conclusions raised numerous red flags. They found a “striking absence” of agreed-upon core principles. It was hard to balance the potential good with ethical concerns, because the most useful data tended to be more specific, leading to greater risks of misuse and even, in the worst-case scenario, weaponisation of the data. Partnerships with corporations introduced transparency complications. Communication of predictive findings to decision makers, and particularly the “miscommunication of the scope and limitations associated with such findings”, was identified as a particular problem.

    The full consequences of relying on artificial intelligence and “employing large scale, automated, and combined analysis of datasets of different sources” to predict movements in a crisis could not be foreseen, the workshop report concluded. “Humanitarian and political actors who base their decisions on such analytics must therefore carefully reflect on the potential risks.”

    A fresh crisis

    Until recently, discussion of such risks remained mostly confined to scientific papers and NGO workshops. The Covid-19 pandemic has brought it crashing into the mainstream.

    Some see critical advantages to using call data records to trace movements and map the spread of the virus. “Using our mobile technology, we have the potential to build models that help to predict broadly how the virus might move,” an O2 spokesperson said in March. But others believe that it is too late for this to be useful. The UK’s chief scientific officer, Patrick Vallance, told a press conference in March that using this type of data “would have been a good idea in January”.

    Like the 2015 refugee crisis, the global emergency offers an opportunity for industry to get ahead of the curve with innovative uses of big data. At a summit in Downing Street on 11 March, Dominic Cummings asked tech firms “what [they] could bring to the table” to help the fight against Covid-19.

    Human rights advocates worry about the longer term effects of such efforts, however. “Right now, we’re seeing states around the world roll out powerful new surveillance measures and strike up hasty partnerships with tech companies,” Anna Bacciarelli, a technology researcher at Amnesty International, told the Bureau. “While states must act to protect people in this pandemic, it is vital that we ensure that invasive surveillance measures do not become normalised and permanent, beyond their emergency status.”

    More creative methods of surveillance and prediction are not necessarily answering the right question, others warn.

    “The single largest determinant of Covid-19 mortality is healthcare system capacity,” said Sean McDonald, a senior fellow at the Centre for International Governance Innovation, who studied the use of phone data in the west African Ebola outbreak of 2014–15. “But governments are focusing on the pandemic as a problem of people management rather than a problem of building response capacity. More broadly, there is nowhere near enough proof that the science or math underlying the technologies being deployed meaningfully contribute to controlling the virus at all.”

    Legally, this type of data processing raises complicated questions. While European data protection law – the GDPR – generally prohibits processing of “special categories of personal data”, including ethnicity, beliefs, sexual orientation, biometrics and health, it allows such processing in a number of instances (among them public health emergencies). In the case of refugee movement prediction, there are signs that the law is cracking at the seams.

    Under GDPR, researchers are supposed to make “impact assessments” of how their data processing can affect fundamental rights. If they find potential for concern they should consult their national information commissioner. There is no simple way to know whether such assessments have been produced, however, or whether they were thoroughly carried out.

    Researchers engaged in crunching mobile phone data point to anonymisation and aggregation as effective tools for ensuring privacy is maintained. But the solution is not straightforward, either technically or legally.
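    Aggregation with small-group suppression, one of the safeguards researchers point to, can be sketched as a k-anonymity-style filter: publish only group counts, and drop any group too small to hide an individual. The records, the grouping key and the k=5 cut-off below are arbitrary illustrations:

    ```python
    from collections import Counter

    def aggregate_with_suppression(records, key, k=5):
        """Release only group counts, suppressing groups smaller than k --
        a minimal k-anonymity-style guard against re-identification."""
        counts = Counter(r[key] for r in records)
        return {group: n for group, n in counts.items() if n >= k}

    records = ([{"area": "north"}] * 12 +
               [{"area": "south"}] * 7 +
               [{"area": "village_x"}] * 2)   # too few people to publish safely
    print(aggregate_with_suppression(records, "area"))
    ```

    As the next quotation makes clear, suppression in the published output does not by itself settle the legal question: processing the underlying individual-level records still needs a lawful basis.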

    “If telcos are using individual call records or location data to provide intel on the whereabouts, movements or activities of migrants and refugees, they still need a legal basis to use that data for that purpose in the first place – even if the final intelligence report itself does not contain any personal data,” said Ben Hayes, director of AWO, a data rights law firm and consultancy. “The more likely it is that the people concerned may be identified or affected, the more serious this matter becomes.”

    More broadly, experts worry that, faced with the potential of big data technology to illuminate movements of groups of people, the law’s provisions on privacy begin to seem outdated.

    “We’re paying more attention now to privacy under its traditional definition,” Nathaniel Raymond said. “But privacy is not the same as group legibility.” Simply put, while issues around the sensitivity of personal data can be obvious, the combinations of seemingly unrelated data that offer insights about what small groups of people are doing can be hard to foresee, and hard to mitigate. Raymond argues that the concept of privacy as enshrined in the newly minted data protection law is anachronistic. As he puts it, “GDPR is already dead, stuffed and mounted. We’re increasing vulnerability under the colour of law.”

    #cobaye #surveillance #réfugiés #covid-19 #coronavirus #test #smartphone #téléphones_portables #Frontex #frontières #contrôles_frontaliers #Shin_Bet #internet #big_data #droits_humains #réseaux_sociaux #intelligence_prédictive #European_Asylum_Support_Office (#EASO) #EDPS #protection_des_données #humanitaire #images_satellites #technologie #European_Space_Agency (#ESA) #GMV #CGI #Niger #Facebook #Migration_Radar_2.0 #early_warning_system #BIP #Thales_Alenia #EGEOS #complexe_militaro-industriel #Vodafone #GSM #Italie #twitter #détection #routes_migratoires #systèmes_d'alerte #satellites #Knowledge_Centre_on_Migration_and_Demography #Big_Data_for_Migration_Alliance #Bigmig #machine-learning #Hummingbird #weaponisation_of_the_data #IA #intelligence_artificielle #données_personnelles

    ping @etraces @isskein @karine4 @reka

    flagged here by @sinehebdo:

  • Tous surveillés - 7 milliards de suspects (“All under surveillance – 7 billion suspects”) | ARTE


    This is an interesting documentary because it includes an interview with Lin Junyue, the inventor of the Chinese social credit system (from timecode 00:50:20). This remarkable man, his face impenetrable, explains in a chilling tone that under his system the gilets jaunes movement and every other popular movement would never have happened in France, and that he hopes the French will one day understand the need to regulate society his way.

    As for the rest, it would have been perfect had it explained that the fate of the Uyghurs is the result of a terrorist Islamism launched and sustained by the #USA to destabilise its Chinese rival. The Chinese government defends itself with modern means that let it avoid open armed conflict. The consequences are obviously horrific for the Uyghur families crushed in the gears of the new imperialist cold war. On this subject the documentary offers nothing beyond what the mainstream evening news would have us believe. Instead of examining the real forces at work in this war, we get the usual anti-Chinese discourse, with a few aberrant Nazi comparisons illustrated by the usual images of Tibetans and other opponents aligned with their American friends.

    At other moments the film states clearly that modern surveillance technologies are the fruit of war, and in particular of the war waged by the state of Israel. Here again it omits that this state grew out of the actions of anti-Arab and anti-British Zionist armed groups, perpetrators of the King David Hotel bombing and skilful diplomats who played on the American desire to take over the United Kingdom’s position in the region after 1945.

    In short, the film is worth watching, but with the usual caveats about its blend of truth and plausible-sounding false histories.

    From the cameras of Nice to China’s repression of the Uyghurs, this investigation surveys the worldwide security obsession and reaches a chilling conclusion: digital totalitarianism is just around the corner.

    Available from 14/04/2020 to 19/06/2020; next broadcast Friday 15 May at 09:25.

    Arte documentary on surveillance practices: perfect repression – Tagesspiegel

    The crime rate has in fact fallen sharply under the surveillance regime. Problems are solved not by imprisonment but by society’s disapproving reaction, says the social scientist and government adviser Lin Junyue, who is presented in the French documentary “Überwacht: Sieben Milliarden im Visier” as the inventor of social credit in the People’s Republic of China.

    Lin Junyue would happily sell his idea to capitalist countries abroad; in Europe, Poland has signalled interest. To convince the French audience, he says: “With the social credit system, the yellow-vest movement would never have happened.”

    #intelligence_artificielle #reconnaissance_faciale #surveillance #crimes_de_guerre #Chine #France #reportage

  • Contribute to the #consultation of the #LeJourdAprès collective

    → 11 themes to discuss

    Thème 1 - "Le plus important, c’est la #santé !" : quel #système_de_santé demain ?

    Thème 2 - Métro, boulot, robot” : quel monde du #travail voulons-nous ?

    Thème 3 - “A consommer avec modération” : vers une société de la #sobriété ?

    Thème 4 - “Des liens plutôt que des biens” : comment retisser des #solidarités ?

    Thème 5 - “Éducation et #jeunesse” : comment construire une #société_apprenante ?

    Thème 6 - “L’homme face à la machine” : peut-on humaniser le #numérique ?

    Thème 7 - “Une #démocratie plus ouverte” : comment partager le #pouvoir ?

    Thème 8 - “L’avenir de nos #territoires” : quel nouveau contrat pour les renforcer et préserver leur diversité ?

    Thème 9 - L’Europe dans le monde” : comment recréer une #solidarité_européenne et internationale ?

    Thème 10 - “Notre richesse est invisible” : comment mieux évaluer le bien-commun ?

    Thème 11 - "Le nerf de la guerre" : quel financement & quel nouveau #partage_des_richesses ?

    #le_monde_d'après #futur #consommation #solidarité #éducation #solidarité_internationale #bien_commun #richesse #pauvreté

    • On education, here is a comment received via the “Facs et labos en lutte” mailing list, 06.04.2020:

      I went to look at their site (judiciously named “#le_jour_d'après”, exactly one week after the #tribune calling for an ecological, feminist and social future, signed by 18 organizations): a good way to take over the name and blur the lines (deliberately or not, I won't say).

      When you look at the topics it seems interesting; it covers a lot of ground (without questioning #extractivisme or #colonialisme either, for example, even though the digital world depends on them).
      But when you dig into each theme, you can already spot a serious bias in these MPs' vision of the day after:

      The theme on care:
      “it is also obvious that our healthcare system has shown worrying limits [...] lack of investment in research (for example in #intelligence_artificielle”? Someone will have to explain the chain coronavirus -> medicine -> research -> #IA to me... a strange vision of research in any case... Very #LPPR-compatible...

      The theme on education:
      “The crisis has shown us that new ways of learning are possible and should be encouraged: online pedagogical continuity, unprecedented mobilization of #EdTech, industrialization of #Moocs and of online continuing education, courses and tips via #réseaux_sociaux”
      Great news for the whole education start-up scene, and a fine vision of #apprentissage!

      Even more telling, the platform does not stop at a consultation but also offers #ateliers. There are 3 so far, and the least one can say is that they leave you wondering...
      “the day after will be digital or will not be at all.”
      For the workshop on “lessons to be drawn from the crisis”, the invited guest is #Laurent_Berger, general secretary of the CFDT (of points-based pension fame, let's not forget).
      A fine #démocratie_participative where the same people are always the ones invited...

      In my view one can only remain skeptical and cautious, knowing where the MPs behind the tribune come from (#Cédric_Villani, a signatory, is also the author of one of the LPPR reports)... Is this the arrival of a #grand_débat_bis? Yet another much-touted, smoke-blowing participatory-democracy exercise completely biased from the start?
      In any case, judging by the organization behind it, it looks like quite a bulldozer, and that is not the most reassuring part.

    • To be read alongside Attac's (still far too tame) proposals:

      4 emergency measures
      – An immediate halt to all activities not essential to fighting the epidemic.
      – The requisitioning of private medical facilities and of companies, in order to urgently produce masks, ventilators and all the equipment needed to save lives.
      – The immediate suspension of dividend payments, share buybacks and CEO bonuses.
      – A decision not to use the ECB's 750 billion euros to feed the financial markets, but solely to finance the social and ecological needs of the population.

      Starting now, and for the long term
      The point is not to then restart an economy that is profoundly unsustainable, ecologically and socially! We demand that long-term public policies be launched without delay so that we never live through this again:
      – A development plan for all public services, in France and worldwide.
      – A far more just and redistributive tax system, a tax on large fortunes, a strengthened financial transaction tax and a genuine fight against tax evasion.
      – A plan for the solidarity-based reorientation and relocalization of agriculture, industry and services, to make them more socially just, able to meet the population's essential needs and to respond to the ecological crisis.


    • This open-parliament thing, isn't it just MPs buying themselves a bit of moral cover?

      When you look at the topics it seems interesting; it covers a lot of ground (without questioning #extractivisme or #colonialisme either, for example, even though the digital world depends on them).

      Nope: “le jour d'après”, reselling us knowledge-sharing and digital everything by the bucketload!

    • I see, I see... And by the way, about the hashtag I started right here (namely “le jour d'après”), I feel a bit silly. Any ideas for hijacking THEIR “day after”?

      [edit]:
      * idea no. 1: “The night after”?
      * idea no. 2: “The Grand Soir after”?
      * idea no. 3: “the mess after”?

    • 58 parliamentarians call on the French to build the world after

      Parliamentarians from across the political spectrum are launching an appeal inviting the French to imagine a “great plan for transforming our society” once the epidemic crisis is over. A consultation opens on Saturday, running for one month, to gather proposals.

      Building the post-crisis world together: that is the ambition of 58 parliamentarians of different political leanings, most of them MPs, who are launching an appeal to that effect to citizens and to the country's driving forces (see below). To write “our common future”, they are organizing a large consultation, open to all, until Sunday 3 May.

      Everyone is invited to contribute on the online platform lejourdapres.parlement-ouvert.fr or to weigh in on a number of proposals put forward by the appeal's signatories. Led by Matthieu Orphelin (Libertés et Territoires), Aurélien Taché (LaREM) and Paula Forteza (ex-LaREM), they believe “there will be a before and an after coronavirus”, requiring much more than a “simple recovery plan”. They call for collectively drawing up a “great plan for transforming our society and our economy” and argue that “we will have to relearn sobriety, solidarity and innovation”. The MPs behind the initiative come from several groups in the Assemblée nationale (La République en Marche, Libertés et Territoires, Mouvement démocrate, Socialistes et apparentés, UDI Agir et Indépendants, non-attached members).

      This crisis “has violently revealed the flaws and limits of our development model, sustained for decades. It reminds us of what is essential: our food sovereignty, our need for European health security, local production for local jobs, the need to meet environmental challenges, to relearn to live in harmony with nature, to reinvent social ties and living together, to develop international solidarity rather than encourage a retreat inward,” the parliamentarians write in their appeal.
      Proposals in every direction

      To feed the reflection on tomorrow's society, participatory workshops, viewable online, with prominent guests such as Laurence Tubiana, Laurent Berger and Cynthia Fleury, will also be organized.

      Eleven themes are up for discussion: health, work, solidarity, the common good, digital technology, the territories, the sharing of wealth, and so on. On all these subjects the parliamentarians are already putting forward proposals, some of them heard before in debates at the Assemblée nationale. Among them: a net monthly raise of 200 euros for home helpers, nursing assistants, nurses and other hospital staff; a reduced VAT rate on consumer goods from the circular economy; the relocalization of industrial activity to France and Europe; an extra 5 billion per year in local-government investment in the ecological transition; a tax on kerosene for domestic flights; the creation of a solidarity reserve of volunteers from associations; higher pay and better careers for teachers from the start of the September 2020 school year; and the creation of a universal income from the age of 18.

      Other proposals: higher inheritance and transfer tax scales, a more progressive income tax, a revised flat-tax scale, a tax on liquid assets to complement the real-estate wealth tax, redirecting the research tax credit toward companies that relocalize, and the establishment of a European green new deal, of an investment-led recovery plan funded by a European tax on financial transactions, and of a carbon tax at Europe's borders.

      “A synthesis of the consultation will be made public before mid-May,” the text of the appeal states. The parliamentarians behind the initiative hope it will lead, in the end, to a political action plan translated into legislative measures.


  • At war?! No, completely overwhelmed...
    A very good article, highly critical of Europe's failures and its obsession with cost-cutting. Also a call to start using, today, the drugs that work and reduce the effect of the coronavirus. By Jean-Dominique Michel, medical anthropologist
    - Anthropo-logiques -

    The latest data from Italy confirm it: this virus is dangerous only for people suffering from chronic pathologies, those “diseases of civilization” that would be 80% avoidable if we had a health policy worthy of the name, a problem I have addressed repeatedly on this blog.

    The truth is that almost nothing has actually been done over the past decades to protect the population from the main risk factors (junk food, pollution, stress and sedentary living), despite monstrous damage to public health. Today it is this already-weakened population that is being struck. 99% of the victims in Italy (among the first 2,500 deaths) suffered from one to three chronic diseases, with rates of 75% for high blood pressure, 35% for diabetes, 30% for cardiovascular disease, and so on.
    It must be said plainly: it is not the virus that kills (it is benign for people in good health); it is the chronic pathologies we have shamefully allowed to develop by favoring toxic industries at the expense of the common good and of the population's health (for a fuller treatment of this point, see the following article).

    Failure of the response

    The other major cause of this crisis is the obsolescence of our health response. The Asian countries reacted with the knowledge, resources and technology of the 21st century, with the successes we can now observe. In Europe, for lack of preparation and resources, but also of the ability to organize ourselves, we fell back on nothing less than the methods of the 19th. Instead of reacting with the only suitable method (test, isolate infected people, treat), we were very quickly forced to give up testing (and therefore to remain ignorant of the real situation) and to choose to confine everyone instead, destroying economic and social life in the process... while leaving critical cases to fall ill at home until they came to saturate hospital services as emergencies.

    This runs counter to every recommendation and best practice in public health in the face of an epidemic! It is, in truth, a very poor stopgap, adopted for want of the means that would have allowed real action.

    Why did we end up here? Because, despite the time we had, we failed to put the right responses in place. The lack of tests and screening measures in particular is critical, whereas Korea, Hong Kong and China made them their absolute priority. Producing them poses no technical problem, and our industrial capacity is more than sufficient. It is a problem of organization and of moving to action.

    The countries mentioned also made use of artificial intelligence, notably to identify possible transmission chains for each positive case (with smartphones, for example, one can reconstruct the movements, and therefore the contacts, that infected people had with others in the 48 hours before symptoms appeared).

    To make matters worse, we have sharply reduced the intensive-care capacity of our hospitals over the past decade, which leaves us today short of beds and of resuscitation equipment. The hospital has become obese, absorbing medical activities that could mostly be handled by lighter, less costly structures, even as intensive-care services were being slashed (cf. the chart at the top of the article).
    And now?

    Our passivity, in particular in making available drugs that appear effective against the virus and are already included in the treatment guidelines of several countries, looks like a genuine scandal.

    In particular, #hydroxychloroquine (combined with azithromycin, an antibiotic given against opportunistic bacterial infections but which also has antiviral action) was reported to clear the viral load in 5 days in various clinical trials.

    This drug has been in use for more than 60 years; its pharmacokinetics are perfectly understood. The Chinese, Koreans, Indians, Belgians and Saudis have approved it for treating SARS-CoV-2.

    Of course, clinical trials do not provide the rigorous scientific proof (evidence) supplied by a randomized double-blind trial. But when clinical trials on 121 people (in China), 24 people (Marseille) and 30 people (Stanford, with a control group) all achieve elimination of the viral load in 5 days, with a substance whose characteristics and conditions of use are perfectly known, it is simply staggering that it has not been urgently incorporated into our treatment strategy. The Americans (see reference below) suggest that hydroxychloroquine may also have a prophylactic effect which, if confirmed, would allow it to be prescribed to prevent infection in the first place.

    For the moment we hear the old guard simpering that not the slightest exception can be made to the usual procedures. The objections raised (for example by the French pharmacovigilance centers) concern the risks of overdose or of problematic long-term effects, which is hard to understand given that, for Covid, this is a 6-day treatment at moderate doses, with a molecule about which we have immense experience, one we have known, used and mastered for 60 years, and whose possible interactions with other substances are well documented!

    #covid-19 #santé #épidémie #pandémie

  • The Politics of Regulation in the Age of AI - Henri Verdier

    National sovereignty and independence are also at stake when it comes to AI, which has long been a major focus for tech leaders across industries. Big corporations in every sector, from retail to agriculture, are trying to integrate machine learning into their products. At the same time, there is an acute shortage of AI talent, as noted earlier. This combination is fueling a heated race to scoop up top AI startups, many of which are still in the early stages of research and funding. Developing our own AI applications, technologies and infrastructure, as well as building and promoting worldwide a European model of regulation based on our European values, is crucial to guaranteeing our digital sovereignty. To do so, we must analyze data, which fuels AI. We need to evaluate how data is created and how it can be used to better serve our economy and our citizens. This implies sovereign cloud solutions and easier transfers of data, which can be achieved through the creation of common data spaces. At the European scale, this could take the shape of a common market for data. In addition, Europe needs to grasp the potential that the exploitation of “non-personal data”, or industrial data, represents.

    #Intelligence_artificielle #Géopolitique #Cloud_souverain

  • To Participants in the Plenary Assembly of the Pontifical Academy for Life (28 February 2020) | Francis



    Clementine Hall
    Friday, 28 February 2020


    Distinguished Authorities,
    Ladies and Gentlemen,
    Dear Brothers and Sisters,

    I offer you a cordial greeting on the occasion of the General Assembly of the Pontifical Academy for Life. I thank Archbishop Paglia for his kind words. I am grateful too for the presence of the President of the European Parliament, the FAO Director-General and the other authorities and leaders in the field of information technology. I also greet those who join us from the Conciliazione Auditorium. And I am heartened by the presence of so many young people: I see this as a sign of hope.

    The issues you have addressed in these days concern one of the most important changes affecting today’s world. Indeed, we could say that the digital galaxy, and specifically artificial intelligence, is at the very heart of the epochal change we are experiencing. Digital innovation touches every aspect of our lives, both personal and social. It affects our way of understanding the world and ourselves. It is increasingly present in human activity and even in human decisions, and is thus altering the way we think and act. Decisions, even the most important decisions, as for example in the medical, economic or social fields, are now the result of human will and a series of algorithmic inputs. A personal act is now the point of convergence between an input that is truly human and an automatic calculus, with the result that it becomes increasingly complicated to understand its object, foresee its effects and define the contribution of each factor.

    To be sure, humanity has already experienced profound upheavals in its history: for example, the introduction of the steam engine, or electricity, or the invention of printing which revolutionized the way we store and transmit information. At present, the convergence between different scientific and technological fields of knowledge is expanding and allows for interventions on phenomena of infinitesimal magnitude and planetary scope, to the point of blurring boundaries that hitherto were considered clearly distinguishable: for example, between inorganic and organic matter, between the real and the virtual, between stable identities and events in constant interconnection.

    On the personal level, the digital age is changing our perception of space, of time and of the body. It is instilling a sense of unlimited possibilities, even as standardization is becoming more and more the main criterion of aggregation. It has become increasingly difficult to recognize and appreciate differences. On the socio-economic level, users are often reduced to “consumers”, prey to private interests concentrated in the hands of a few. From digital traces scattered on the internet, algorithms now extract data that enable mental and relational habits to be controlled, for commercial or political ends, frequently without our knowledge. This asymmetry, by which a select few know everything about us while we know nothing about them, dulls critical thought and the conscious exercise of freedom. Inequalities expand enormously; knowledge and wealth accumulate in a few hands with grave risks for democratic societies. Yet these dangers must not detract from the immense potential that new technologies offer. We find ourselves before a gift from God, a resource that can bear good fruits.

    The issues with which your Academy has been concerned since its inception present themselves today in a new way. The biological sciences are increasingly employing devices provided by artificial intelligence. This development has led to profound changes in our way of understanding and managing living beings and the distinctive features of human life, which we are committed to safeguarding and promoting, not only in its constitutive biological dimension, but also in its irreducible biographical aspect. The correlation and integration between life that is “lived” and life that is “experienced” cannot be dismissed in favour of a simple ideological calculation of functional performance and sustainable costs. The ethical problems that emerge from the ways that these new devices can regulate the birth and destiny of individuals call for a renewed commitment to preserve the human quality of our shared history.

    For this reason, I am grateful to the Pontifical Academy for Life for its efforts to develop a serious reflection that has fostered dialogue between the different scientific disciplines indispensable for addressing these complex phenomena.

    I am pleased that this year’s meeting includes individuals with various important roles of responsibility internationally in the areas of science, industry and political life. I am gratified by this and I thank you. As believers, we have no ready-made ideas about how to respond to the unforeseen questions that history sets before us today. Our task is rather one of walking alongside others, listening attentively and seeking to link experience and reflection. As believers, we ought to allow ourselves to be challenged, so that the word of God and our faith tradition can help us interpret the phenomena of our world and identify paths of humanization, and thus of loving evangelization, that we can travel together. In this way we will be able to dialogue fruitfully with all those committed to human development, while keeping at the centre of knowledge and social praxis the human person in all his or her dimensions, including the spiritual. We are faced with a task involving the human family as a whole.

    In light of this, mere training in the correct use of new technologies will not prove sufficient. As instruments or tools, these are not “neutral”, for, as we have seen, they shape the world and engage consciences on the level of values. We need a broader educational effort. Solid reasons need to be developed to promote perseverance in the pursuit of the common good, even when no immediate advantage is apparent. There is a political dimension to the production and use of artificial intelligence, which has to do with more than the expanding of its individual and purely functional benefits. In other words, it is not enough simply to trust in the moral sense of researchers and developers of devices and algorithms. There is a need to create intermediate social bodies that can incorporate and express the ethical sensibilities of users and educators.

    There are many disciplines involved in the process of developing technological equipment (one thinks of research, planning, production, distribution, individual and collective use…), and each entails a specific area of responsibility. We are beginning to glimpse a new discipline that we might call “the ethical development of algorithms” or more simply “algor-ethics” (cf. Address to Participants in the Congress on Child Dignity in the Digital World, 14 November 2019). This would have as its aim ensuring a competent and shared review of the processes by which we integrate relationships between human beings and today’s technology. In our common pursuit of these goals, a critical contribution can be made by the principles of the Church’s social teaching: the dignity of the person, justice, subsidiarity and solidarity. These are expressions of our commitment to be at the service of every individual in his or her integrity and of all people, without discrimination or exclusion. The complexity of the technological world demands of us an increasingly clear ethical framework, so as to make this commitment truly effective.

    The ethical development of algorithms – algor-ethics – can be a bridge enabling those principles to enter concretely into digital technologies through an effective cross-disciplinary dialogue. Moreover, in the encounter between different visions of the world, human rights represent an important point of convergence in the search for common ground. At present, there would seem to be a need for renewed reflection on rights and duties in this area. The scope and acceleration of the transformations of the digital era have in fact raised unforeseen problems and situations that challenge our individual and collective ethos. To be sure, the Call that you have signed today is an important step in this direction, with its three fundamental coordinates along which to journey: ethics, education and law.

    Dear friends, I express my support for the generosity and energy with which you have committed yourselves to launching this courageous and challenging process of reassessment. I invite you to continue with boldness and discernment, as you seek ways to increase the involvement of all those who have the good of the human family at heart. Upon all of you, I invoke God’s blessings, so that your journey can continue with serenity and peace, in a spirit of cooperation. May the Blessed Virgin assist you. I accompany you with my blessing. And I ask you please to remember me in your prayers. Thank you.

    #Surveillance #Reconnaissance_faciale #Intelligence_Artificielle #Pape_François

  • Vatican joins IBM, Microsoft to call for facial recognition regulation - Reuters

    VATICAN CITY (Reuters) - The Vatican joined forces with tech giants Microsoft and IBM on Friday to promote the ethical development of artificial intelligence (AI) and call for regulation of intrusive technologies such as facial recognition.

    The three said AI should respect privacy, work reliably and without bias, consider human rights and operate transparently.

    Pope Francis, who has raised concerns about the uncontrolled spread of AI technologies, gave his backing in a speech read on his behalf at a conference attended by Microsoft president Brad Smith (MSFT.O) and IBM (IBM.N) Executive Vice President John Kelly. The pope is ill and could not deliver the address himself.

    Calling for the ethical development of algorithms, known as “algor-ethics”, Francis warned about the dangers of AI being used to extract data for commercial or political ends, often without the knowledge of individuals.

    “This asymmetry, by which a select few know everything about us while we know nothing about them, dulls critical thought and the conscious exercise of freedom,” he said in his message.

    “Inequalities expand enormously; knowledge and wealth accumulate in a few hands with grave risks for democratic societies,” he said.

    The joint document made a specific reference to the potential abuse of facial recognition technology.

    “New forms of regulation must be encouraged to promote transparency and compliance with ethical principles, especially for advanced technologies that have a higher risk of impacting human rights, such as facial recognition,” the document said.

    #Surveillance #Reconnaissance_faciale #Intelligence_Artificielle #Pape_François

  • What AI still can’t do - MIT Technology Review

    In less than a decade, computers have become extremely good at diagnosing diseases, translating languages, and transcribing speech. They can outplay humans at complicated strategy games, create photorealistic images, and suggest useful replies to your emails.

    Yet despite these impressive achievements, artificial intelligence has glaring weaknesses.

    Machine-learning systems can be duped or confounded by situations they haven’t seen before. A self-driving car gets flummoxed by a scenario that a human driver could handle easily. An AI system laboriously trained to carry out one task (identifying cats, say) has to be taught all over again to do something else (identifying dogs). In the process, it’s liable to lose some of the expertise it had in the original task. Computer scientists call this problem “catastrophic forgetting.”
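    Catastrophic forgetting is easy to reproduce in miniature. The sketch below is a hypothetical toy, not any production system: a one-weight linear model is trained on task A, then trained further on a conflicting task B, and its error on task A is measured before and after.

```python
import numpy as np

def gd_fit(w, X, y, lr=0.1, steps=200):
    # plain gradient descent on mean-squared error for a linear model y ~ X @ w
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

X = np.linspace(-1, 1, 20).reshape(-1, 1)
task_a = 2.0 * X[:, 0]    # task A: learn y = 2x
task_b = -2.0 * X[:, 0]   # task B: learn y = -2x (a conflicting mapping)

def mse(w, y):
    return float(np.mean((X @ w - y) ** 2))

w = gd_fit(np.zeros(1), X, task_a)   # learn task A to near-zero error
loss_a_before = mse(w, task_a)
w = gd_fit(w, X, task_b)             # then learn task B with the same weights...
loss_a_after = mse(w, task_a)        # ...and task A is forgotten
```

    The same weights cannot represent both mappings, so sequential training overwrites the first task: `loss_a_before` ends up near zero while `loss_a_after` is large.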

    These shortcomings have something in common: they exist because AI systems don’t understand causation. They see that some events are associated with other events, but they don’t ascertain which things directly make other things happen. It’s as if you knew that the presence of clouds made rain likelier, but you didn’t know clouds caused rain.
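    The clouds-and-rain distinction can be made concrete with a simulated confounder. In this hypothetical toy model (all variable names invented for illustration), a hidden common cause drives both observed variables, so they correlate strongly in observational data, yet setting one of them by intervention leaves the other untouched:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# hidden common cause driving both observed variables
z = rng.normal(size=n)
clouds = z + 0.1 * rng.normal(size=n)
rain = z + 0.1 * rng.normal(size=n)

# observational data: clouds and rain are strongly correlated...
r_obs = np.corrcoef(clouds, rain)[0, 1]

# ...but an intervention do(clouds) that sets clouds independently of z
# breaks the association, because clouds do not cause rain in this model
clouds_do = rng.normal(size=n)
r_do = np.corrcoef(clouds_do, rain)[0, 1]
```

    A purely correlational learner sees `r_obs` close to 1 and cannot tell this situation apart from one where clouds genuinely cause rain; only the interventional quantity `r_do` (near zero here) reveals the difference.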

    But there’s a growing consensus that progress in AI will stall if computers don’t get better at wrestling with causation. If machines could grasp that certain things lead to other things, they wouldn’t have to learn everything anew all the time—they could take what they had learned in one domain and apply it to another. And if machines could use common sense we’d be able to put more trust in them to take actions on their own, knowing that they aren’t likely to make dumb errors.

    Pearl’s work has also led to the development of causal Bayesian networks—software that sifts through large amounts of data to detect which variables appear to have the most influence on other variables. For example, GNS Healthcare, a company in Cambridge, Massachusetts, uses these techniques to advise researchers about experiments that look promising.

    In one project, GNS worked with researchers who study multiple myeloma, a kind of blood cancer. The researchers wanted to know why some patients with the disease live longer than others after getting stem-cell transplants, a common form of treatment. The software churned through data with 30,000 variables and pointed to a few that seemed especially likely to be causal. Biostatisticians and experts in the disease zeroed in on one in particular: the level of a certain protein in patients’ bodies. Researchers could then run a targeted clinical trial to see whether patients with the protein did indeed benefit more from the treatment. “It’s way faster than poking here and there in the lab,” says GNS cofounder Iya Khalil.
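    A minimal sketch of the kind of variable screening described above, on hypothetical simulated data (the variable names and model are invented for illustration; real causal-discovery software is far more elaborate): when one variable drives the outcome directly and another merely tracks it, both correlate with the outcome, but the proxy's partial correlation given the direct cause collapses toward zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# toy data: survival is driven directly by a protein level;
# a second biomarker merely tracks the protein (no direct effect)
protein = rng.normal(size=n)
biomarker = protein + 0.5 * rng.normal(size=n)
survival = protein + 0.5 * rng.normal(size=n)

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

def partial_corr(x, y, z):
    # correlation between x and y after controlling for z
    rxy, rxz, rzy = corr(x, y), corr(x, z), corr(z, y)
    return (rxy - rxz * rzy) / np.sqrt((1 - rxz**2) * (1 - rzy**2))

r_protein = corr(protein, survival)                     # strongly predictive
r_biomarker = corr(biomarker, survival)                 # also looks predictive
r_partial = partial_corr(biomarker, survival, protein)  # near zero: a mere proxy
```

    This is the logic that lets such software point experts toward the few variables worth testing in a targeted trial: marginal correlation alone flags both candidates, while conditioning separates the direct influence from its shadow.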

    Nonetheless, the improvements that Pearl and other scholars have achieved in causal theory haven’t yet made many inroads in deep learning, which identifies correlations without too much worry about causation. Bareinboim is working to take the next step: making computers more useful tools for human causal explorations.

    Getting people to think more carefully about causation isn’t necessarily much easier than teaching it to machines, he says. Researchers in a wide range of disciplines, from molecular biology to public policy, are sometimes content to unearth correlations that are not actually rooted in causal relationships. For instance, some studies suggest drinking alcohol will kill you early, while others indicate that moderate consumption is fine and even beneficial, and still other research has found that heavy drinkers outlive nondrinkers. This phenomenon, known as the “reproducibility crisis,” crops up not only in medicine and nutrition but also in psychology and economics. “You can see the fragility of all these inferences,” says Bareinboim. “We’re flipping results every couple of years.”

Even so, we remain caught up in the fascination with the technology.

    Bareinboim described this vision while we were sitting in the lobby of MIT’s Sloan School of Management, after a talk he gave last fall. “We have a building here at MIT with, I don’t know, 200 people,” he said. How do those social scientists, or any scientists anywhere, decide which experiments to pursue and which data points to gather? By following their intuition: “They are trying to see where things will lead, based on their current understanding.”

    That’s an inherently limited approach, he said, because human scientists designing an experiment can consider only a handful of variables in their minds at once. A computer, on the other hand, can see the interplay of hundreds or thousands of variables. Encoded with “the basic principles” of Pearl’s causal calculus and able to calculate what might happen with new sets of variables, an automated scientist could suggest exactly which experiments the human researchers should spend their time on.

    #Intelligence_artificielle #Causalité #Connaissance #Pragmatique #Machine_learning

  • Hackers can trick a Tesla into accelerating by 50 miles per hour - MIT Technology Review

    Hackers have manipulated multiple Tesla cars into speeding up by 50 miles per hour. The researchers fooled the car’s MobilEye EyeQ3 camera system by subtly altering a speed limit sign on the side of a road in a way that a person driving by would almost never notice.

    This demonstration from the cybersecurity firm McAfee is the latest indication that adversarial machine learning can potentially wreck autonomous driving systems, presenting a security challenge to those hoping to commercialize the technology.

    MobilEye EyeQ3 camera systems read speed limit signs and feed that information into autonomous driving features like Tesla’s automatic cruise control, said Steve Povolny and Shivangee Trivedi from McAfee’s Advanced Threat Research team.

    The researchers stuck a tiny and nearly imperceptible sticker on a speed limit sign. The camera read the sign as 85 instead of 35 and, in testing, both the 2016 Tesla Model X and that year’s Model S sped up by 50 miles per hour.

    The modified speed limit sign reads as 85 on the Tesla’s heads-up display. A Mobileye spokesperson downplayed the research by suggesting this sign would fool a human into reading 85 as well.

    The Tesla, reading the modified 35 as 85, is tricked into accelerating.

    This is the latest in an increasing mountain of research showing how machine learning systems can be attacked and fooled in life-threatening situations.
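McAfee's attack was physical and its specifics aren't published as code, but the underlying idea — a small, targeted perturbation flips a model's output — is the classic fast-gradient-sign method from the adversarial-ML literature. Below is a minimal sketch on a toy linear "sign reader"; the model, weights, and numbers are all invented for illustration, not McAfee's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a sign classifier: logistic model over 100 "pixels".
# w plays the role of a trained network's weights.
w = rng.normal(size=100)

def p_is_85(x):
    return 1 / (1 + np.exp(-(x @ w)))  # model's P(sign reads "85")

# Start from an image the model reads as "35" (project a random
# image onto logit = -2, i.e. P("85") ~ 0.12).
x = rng.normal(size=100)
x -= (x @ w + 2.0) / (w @ w) * w

# Fast-gradient-sign step: nudge each pixel by at most 0.05 in the
# direction that raises the "85" score -- the digital analogue of a
# barely visible sticker.
eps = 0.05
x_adv = x + eps * np.sign(w)

print(p_is_85(x), p_is_85(x_adv))  # low score flips to high
```

The per-pixel change is tiny relative to the image, yet the prediction flips — which is exactly why a sticker a human barely notices can be enough.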

    “Why we’re studying this in advance is because you have intelligent systems that at some point in the future are going to be doing tasks that are now handled by humans,” Povolny said. “If we are not very prescient about what the attacks are and very careful about how the systems are designed, you then have a rolling fleet of interconnected computers which are one of the most impactful and enticing attack surfaces out there.”

    As autonomous systems proliferate, the issue extends to machine-learning algorithms far beyond vehicles: a March 2019 study showed that medical machine-learning systems could be fooled into giving bad diagnoses.

    Mobileye doesn’t consider tricking the camera to be an attack: despite the role the camera plays in Tesla’s cruise control, the company says, it wasn’t designed for autonomous driving.

    “Autonomous vehicle technology will not rely on sensing alone, but will also be supported by various other technologies and data, such as crowdsourced mapping, to ensure the reliability of the information received from the camera sensors and offer more robust redundancies and safety,” the Mobileye spokesperson said in a statement.
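Mobileye's point about redundancy reduces to a simple rule: trust the camera only when an independent source agrees. The function below is a toy sketch of that cross-check; the map lookup and tolerance are hypothetical, purely to illustrate the idea.

```python
def fused_speed_limit(camera_mph, map_mph, tolerance_mph=10):
    """Accept the camera's reading only if an independent source
    (e.g. crowdsourced map data) roughly agrees; otherwise fall
    back to the map value."""
    if abs(camera_mph - map_mph) <= tolerance_mph:
        return camera_mph
    return map_mph

print(fused_speed_limit(85, 35))  # spoofed sign rejected -> 35
print(fused_speed_limit(35, 35))  # consistent readings -> 35
```

A real fusion stack would weigh many sensors and confidences, but even this crude check defeats the single-sign sticker attack.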

    While looking for keywords, I told myself that “#cyberattaque” wasn’t the right term, since the attack isn’t carried out through digital means but by affixing a sticker to a physical sign. Nor is it a destructive attack: it simply “fools” the guidance system, which doesn’t “understand” the situation. MobilEye’s response is interesting: an autonomous vehicle cannot rely on its “perception” alone, but must cross-check that information against other sources.

    #Machine_learning #Véhicules_autonomes #Tesla #Panneau_routiers #Intelligence_artificielle

  • AI bias creep is a problem that’s hard to fix | Biometric Update

    On the heels of a National Institute of Standards and Technology (NIST) study on demographic differentials of biometric facial recognition accuracy, Karen Hao, an artificial intelligence authority and reporter for MIT Technology Review, recently explained that “bias can creep in at many stages of the [AI] deep-learning process” because “the standard practices in computer science aren’t designed to detect it.”

    “Fixing discrimination in algorithmic systems is not something that can be solved easily,” explained Andrew Selbst, a postdoctoral researcher at the Data & Society Research Institute and lead author of the recent paper, Fairness and Abstraction in Sociotechnical Systems.

    “A key goal of the fair-ML community is to develop machine-learning based systems that, once introduced into a social context, can achieve social and legal outcomes such as fairness, justice, and due process,” the paper’s authors, which include Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi, noted, adding that “(b)edrock concepts in computer science – such as abstraction and modular design – are used to define notions of fairness and discrimination, to produce fairness-aware learning algorithms, and to intervene at different stages of a decision-making pipeline to produce ‘fair’ outcomes.”

    Consequently, a broad coalition of more than 100 civil rights, digital justice, and community-based organizations recently issued a joint statement highlighting civil rights concerns with the adoption of algorithmic decision-making tools.

    Explaining why “AI bias is hard to fix,” Hao cited as an example, “unknown unknowns. The introduction of bias isn’t always obvious during a model’s construction because you may not realize the downstream impacts of your data and choices until much later. Once you do, it’s hard to retroactively identify where that bias came from and then figure out how to get rid of it.”

    Hao also blames “lack of social context,” meaning “the way in which computer scientists are taught to frame problems often isn’t compatible with the best way to think about social problems.”

    Then there are the definitions of fairness where it’s not at all “clear what the absence of bias should look like,” Hao argued, noting, “this isn’t true just in computer science – this question has a long history of debate in philosophy, social science, and law. What’s different about computer science is that the concept of fairness has to be defined in mathematical terms, like balancing the false positive and false negative rates of a prediction system. But as researchers have discovered, there are many different mathematical definitions of fairness that are also mutually exclusive.”
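The mutual exclusivity described above is easy to see on toy numbers: when two groups have different base rates, a classifier can equalize positive-prediction rates (demographic parity) or error rates (equalized odds), but generally not both. The data below is invented purely for illustration.

```python
import numpy as np

# Two groups of six people; group 0 has base rate 3/6, group 1 has 1/6.
y    = np.array([1, 1, 1, 0, 0, 0,   1, 0, 0, 0, 0, 0])  # true labels
yhat = np.array([1, 1, 1, 0, 0, 0,   1, 1, 1, 0, 0, 0])  # predictions
g    = np.array([0] * 6 + [1] * 6)                        # group id

# Demographic parity: equal positive-prediction rates across groups.
pos_rate = [yhat[g == k].mean() for k in (0, 1)]
# Equalized odds (one half of it): equal false-positive rates.
fpr = [yhat[(g == k) & (y == 0)].mean() for k in (0, 1)]

print(pos_rate)  # equal -> demographic parity holds
print(fpr)       # unequal -> equalized odds fails
```

With different base rates, forcing the positive rates to match necessarily pushes the groups' error rates apart — the arithmetic behind the impossibility results Hao alludes to.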

    “A very important aspect of ethical behavior is to avoid (intended, perceived, or accidental) bias,” which they said “occurs when the data distribution is not representative enough of the natural phenomenon one wants to model and reason about. The possibly biased behavior of a service is hard to detect and handle if the AI service is merely being used and not developed from scratch since the training data set is not available.”

    #Machine_learning #Intelligence_artificielle #Société #Sciences_sociales

  • “The term AI is so sexy that it makes calculations pass for intelligence” - Le Monde

    Believing that artificial intelligence has anything to do with human intelligence is an illusion, explains computer scientist Vincent Bérenger in an op-ed in “Le Monde.” It was a leading figure of artificial intelligence (AI), Yann Le Cun, who pointed out that AI’s feats say far more about the intellectual limits of humans than about the intelligence of its creations. We are poor calculators, we don’t know how to sift through large quantities of information, (...)

    #algorithme #technologisme

  • Washington Must Bet Big on AI or Lose Its Global Clout | WIRED

    The report, from the Center for New American Security (CNAS), is the latest to highlight the importance of AI to the future of the US. It argues that the technology will define economic, military, and geopolitical power in coming decades.

    Advanced technologies, including AI, 5G wireless services, and quantum computing, are already at the center of an emerging technological cold war between the US and China. The Trump administration has declared AI a national priority, and it has enacted policies, such as technology export controls, designed to limit China’s progress in AI and related areas.

    The CNAS report calls for a broader national AI strategy and a level of commitment reminiscent of the Apollo program. “If the United States wants to continue to be the world leader, not just in technology but in political power and being able to promote democracy and human rights, that calls for this type of effort,” says Martijn Rasser, a senior fellow at CNAS and the lead author of the report.

    Rasser and his coauthors believe AI will be as pervasive and transformative as software itself has been. This means it will be of critical importance to economic success as well as military might and global influence. Rasser argues that $25 billion over five years is achievable, and notes that it would constitute less than 19 percent of total federal R&D in the 2020 budget.

    “We’re back in an era of great power competition, and technology is at the center,” Rasser says. “And the nation that leads, not just artificial intelligence but technology across the board, will truly dominate the 21st century.”

    “Both the Russians and the Chinese have concluded that the way to leapfrog the US is with AI,” says Bob Work, a distinguished senior fellow at CNAS who served as deputy secretary of defense under Presidents Obama and Trump. Work says the US needs to convince the public that it doesn’t intend to develop lethal autonomous weapons, only technology that would counter the work Russia and China are doing.

    In addition to calling for new funding, the CNAS report argues that a different attitude toward international talent is needed. It recommends that the US attract and retain more foreign scientists by raising the number of H-1B visas and removing the cap for people with advanced degrees. “You want these people to live, work, and stay in the United States,” Rasser says. The report suggests early vetting of applications at foreign embassies to identify potential security risks.

    #Intelligence_artificielle #Guerre_technologique #Géopolitique