technology:artificial intelligence

  • Chinese search firm Baidu joins global AI ethics body
    https://www.theguardian.com/technology/2018/oct/17/baidu-chinese-search-firm-joins-global-ai-ethics-body-google-apple-face

    Company is first Chinese member of Partnership on AI, following Google, Apple, Facebook and others. The AI ethics body formed by five of the largest US corporations has expanded to include its first Chinese member, the search firm Baidu. The Partnership on Artificial Intelligence to Benefit People and Society – known as the Partnership on AI (PAI) – was formed in 2016 by Google, Facebook, Amazon, IBM and Microsoft to act as an umbrella organisation for the five companies to conduct (...)

    #Google #Microsoft #IBM #Amazon #Baidu #algorithme #éthique

    https://i.guim.co.uk/img/media/36a0e34fef9da3e451c09c32ac69df46c80554c4/77_765_2217_1330/master/2217.jpg


  • Uganda’s refugee policies: the history, the politics, the way forward

    Uganda’s refugee policy urgently needs an honest discussion, if sustainable solutions for both refugees and host communities are to be found, a new policy paper by International Refugee Rights Initiative (IRRI) reveals.

    The paper, entitled Uganda’s refugee policies: the history, the politics, the way forward puts the “Ugandan model” in its historical and political context, shines a spotlight on its implementation gaps, and proposes recommendations for the way forward.

    Since 2013, Uganda has opened its borders to hundreds of thousands of refugees from South Sudan, bringing the total number of refugees to more than one million. It has been praised for its positive steps on freedom of movement and access to work for refugees, going against the global grain. But generations of policy, this paper shows, have only entrenched a sole focus on refugee settlements and on repatriation as the only viable durable solution. Support for urban refugees and local integration has been largely overlooked.

    The Ugandan refugee crisis unfolded at the same time as the UN adopted the New York Declaration for Refugees and Migrants, and states committed to implement a Comprehensive Refugee Response Framework (CRRF). Uganda immediately seized this opportunity and adopted its own strategy to implement these principles. As the world looks to Uganda for best practices in refugee policy, and rightly so, it is vital to understand the gaps between rhetoric and reality, and the pitfalls of Uganda’s policy. This paper identifies the following challenges:

    There is a danger that the promotion of progressive refugee policies becomes more rhetoric than reality, creating a smoke-screen that squeezes out meaningful discussion about robust alternatives. Policy-making has come at the expense of real qualitative change on the ground.
    Refugees in urban areas continue to be largely excluded from any support, including aid provision, due to an ongoing focus on refugee settlements.
    Local integration and access to citizenship have been virtually abandoned, leaving voluntary repatriation as the only solution on the table. Given the protracted crises in South Sudan and Democratic Republic of Congo, this remains unrealistic.
    Host communities remain unheard, with policy conversations largely taking place in Kampala and Geneva. Many Ugandans and refugees have neither the economic resources nor sufficient political leverage to influence the policies that are meant to benefit them.

    The policy paper proposes a number of recommendations to improve the Ugandan refugee model:

    First, international donors need to deliver on their promise of significant financial support.
    Second, repatriation cannot remain the only serious option on the table. There has to be renewed discussion on local integration with Ugandan communities and a dramatic increase in resettlement to wealthier states across the globe.
    Third, local communities hosting refugees must be consulted and their voices incorporated in a more meaningful and systematic way, if tensions within and between communities are to be avoided.
    Fourth, in order to genuinely enhance refugee self-reliance, the myth of the “local settlement” needs to be debunked and recognized for what it is: the ongoing isolation of refugees and the utilization of humanitarian assistance to keep them isolated and dependent on aid.


    http://refugee-rights.org/uganda-refugee-policies-the-history-the-politics-the-way-forward
    #modèle_ougandais #Ouganda #asile #migrations #réfugiés

    To download the #rapport:
    http://refugee-rights.org/wp-content/uploads/2018/10/IRRI-Uganda-policy-paper-October-2018-Paper.pdf

    • A New Deal for Refugees

      Global policies that aim to resettle and integrate displaced populations into local societies are providing a way forward.

      For many years now, groups that work with refugees have fought to put an end to the refugee camp. It’s finally starting to happen.

      Camps are a reasonable solution to temporary dislocation. But refugee crises can go on for decades. Millions of refugees have lived in their country of shelter for more than 30 years. Two-thirds of humanitarian assistance — intended for emergencies — is spent on crises that are more than eight years old.

      Camps are stagnant places. Refugees have access to water and medical care and are fed and educated, but are largely idle. “You keep people for 20 years in camps — don’t expect the next generation to be problem-free,” said Xavier Devictor, who advises the World Bank on refugee issues. “Keeping people in those conditions is not a good idea.” It’s also hard to imagine a better breeding ground for terrorists.

      “As long as the system is ‘we feed you,’ it’s always going to be too expensive for the international community to pay for,” Mr. Devictor said. It’s gotten more and more difficult for the United Nations High Commissioner for Refugees to raise that money; in many crises, the refugee agency can barely keep people from starving. It’s even harder now as nations turn against foreigners — even as the number of people fleeing war and violence has reached a record high.

      At the end of last year, nearly 70 million people were either internally displaced in their own countries or had crossed a border and become refugees. That is the largest number of displaced people in history — yes, more than at the end of World War II. The vast majority flee to neighboring countries — which can be just as badly off.

      Last year, the United States accepted about 30,000 refugees.

      Uganda, which is a global model for how it treats refugees, has one-seventh of America’s population and a tiny fraction of the wealth. Yet it took in 1,800 refugees per day between mid-2016 and mid-2017 from South Sudan alone. And that’s one of four neighbors whose people take refuge in Uganda.

      Bangladesh, already the world’s most crowded major nation, has accepted more than a million Rohingya fleeing ethnic cleansing in Myanmar. “If we can feed 160 million people, then (feeding) another 500,000-700,000 … we can do it. We can share our food,” Sheikh Hasina, Bangladesh’s prime minister, said last year.

      Lebanon is host to approximately 1.5 million Syrian refugees, in addition to a half-million Palestinians, some of whom have been there for generations. One in three residents of Lebanon is a refugee.

      The refugee burden falls heavily on a few poor countries, some of them at risk of destabilization, which can in turn produce more refugees. The rest of the world has been unwilling to share that burden.

      But something happened that could lead to real change: Beginning in 2015, hundreds of thousands of Syrian refugees crossed the Mediterranean in small boats and life rafts into Europe.

      Suddenly, wealthy European countries got interested in fixing a broken system: making it more financially viable, more dignified for refugees, and more palatable for host governments and communities.

      In September 2016, the United Nations General Assembly unanimously passed a resolution stating that all countries shared the responsibility of protecting refugees and supporting host countries. It also laid out a plan to move refugees out of camps into normal lives in their host nations.

      Donor countries agreed they would take more refugees and provide more long-term development aid to host countries: schools, hospitals, roads and job-creation measures that can help both refugees and the communities they settle in. “It looked at refugee crises as development opportunities, rather than a humanitarian risk to be managed,” said Marcus Skinner, a policy adviser at the International Rescue Committee.

      The General Assembly will vote on the specifics next month (whatever they come up with won’t be binding). The Trump administration pulled out of the United Nations’ Global Compact on Migration, but so far it has not opposed the refugee agreement.

      There’s a reason refugee camps exist: Host governments like them. Liberating refugees is a hard sell. In camps, refugees are the United Nations’ problem. Out of camps, refugees are the local governments’ problem. And they don’t want to do anything to make refugees comfortable or welcome.

      Bangladesh’s emergency response for the Rohingya has been staggeringly generous. But “emergency” is the key word. The government has resisted granting Rohingya schooling, work permits or free movement. It is telling Rohingya, in effect, “Don’t get any ideas about sticking around.”

      This attitude won’t deter the Rohingya from coming, and it won’t send them home more quickly. People flee across the closest border — often on foot — that allows them to keep their families alive. And they’ll stay until home becomes safe again. “It’s the simple practicality of finding the easiest way to refuge,” said Victor Odero, regional advocacy coordinator for East Africa and the Horn of Africa at the International Rescue Committee. “Any question of policies is a secondary matter.”

      So far, efforts to integrate refugees have had mixed success. The first experiment was a deal for Jordan, which was hosting 650,000 Syrian refugees, virtually none of whom were allowed to work. Jordan agreed to give them work permits. In exchange, it got grants, loans and trade concessions normally available only to the poorest countries.

      However, though the refugees have work permits, Jordan has put only a moderate number of them into jobs.

      Any agreement should include the views of refugees from the start — the Jordan Compact failed to do this. Aid should be conditioned upon the right things. The deal should have measured refugee jobs, instead of work permits. Analysts also said the benefits should have been targeted more precisely, to reach the areas with most refugees.

      To spread this kind of agreement to other nations, the World Bank established a $2 billion fund in July 2017. The money is available to very poor countries that host many refugees, such as Uganda and Bangladesh. In return, they must take steps to integrate refugees into society. The money will come as grants and zero interest loans with a 10-year grace period. Middle-income countries like Lebanon and Colombia would also be eligible for loans at favorable rates under a different fund.

      Over the last 50 years, only one developing country has granted refugees full rights. In Uganda, refugees can live normally. Instead of camps there are settlements, where refugees stay voluntarily because they get a plot of land. Refugees can work, live anywhere, send their children to school and use the local health services. The only thing they can’t do is become Ugandan citizens.

      Given the global hostility to refugees, it is remarkable that Ugandans still approve of these policies. “There have been flashes of social tension or violence between refugees and their hosts, mostly because of a scarcity of resources,” Mr. Odero said. “But they have not become widespread or protracted.”

      This is the model the United Nations wants the world to adopt. But it is imperiled even in Uganda — because it requires money that isn’t there.

      The new residents are mainly staying near the South Sudan border in Uganda’s north — one of the least developed parts of the country. Hospitals, schools, wells and roads were crumbling or nonexistent before, and now they must serve a million more people.

      Joël Boutroue, the head of the United Nations refugee agency in Uganda, said current humanitarian funding covered a quarter of what the crisis required. “At the moment, not even half of refugees go to primary school,” he said. “There are around 100 children per classroom.”

      Refugees are going without food, medical care and water. The plots of land they get have grown smaller and smaller.

      Uganda is doing everything right — except for a corruption scandal. It could really take advantage of the new plan to develop the refugee zone. That would not only help refugees, it would help their host communities. And it would alleviate growing opposition to rights for refugees. “The Ugandan government is under pressure from politicians who see the government giving favored treatment to refugees,” Mr. Boutroue said. “If we want to change the perception of refugees from recipients of aid to economic assets, we have to showcase that refugees bring development.”

      The World Bank has so far approved two projects — one for water and sanitation and one for city services such as roads and trash collection. But they haven’t gotten started yet.

      Mr. Devictor said that tackling long-term development issues was much slower than providing emergency aid. “The reality is that it will be confusing and confused for a little while,” he said. Water, for example, is trucked in to Uganda’s refugee settlements, as part of humanitarian aid. “That’s a huge cost,” he said. “But if we think this crisis is going to last for six more months, it makes sense. If it’s going to last longer, we should think about upgrading the water system.”

      Most refugee crises are not surprises, Mr. Devictor said. “If you look at a map, you can predict five or six crises that are going to produce refugees over the next few years.” It’s often the same places, over and over. That means developmental help could come in advance, minimizing the burden on the host. “Do we have to wait until people cross the border to realize we’re going to have an emergency?” he said.

      Well, we might. If politicians won’t respond to a crisis, it’s hard to imagine them deciding to plan ahead to avert one. Political commitment, or lack of it, always rules. The world’s new approach to refugees was born out of Europe’s panic about the Syrians on their doorstep. But no European politician is panicking about South Sudanese or Rohingya refugees — or most crises. They’re too far away. The danger is that the new approach will fall victim to the same political neglect that has crippled the old one.

      https://www.nytimes.com/2018/08/21/opinion/refugee-camps-integration.html

      #Ouganda #modèle_ougandais #réinstallation #intégration

      with this comment from #Jeff_Crisp on Twitter:

      “Camps are stagnant places. Refugees have access to water and medical care and are fed and educated, but are largely idle.”
      Has this prizewinning author actually been to a refugee camp?

      https://twitter.com/JFCrisp/status/1031892657117831168

    • Appreciating Uganda’s ‘open door’ policy for refugees

      While the rest of the world is nervous and chooses to take an emotional position on matters of forced migration and refugees, sometimes closing its doors in the face of people running from persecution, Uganda’s refugee policy and practice continues to be liberal, with an open door to all asylum seekers, writes Arthur Matsiko

      http://thisisafrica.me/appreciating-ugandas-open-door-policy-refugees

    • Uganda. The self-interested generosity of the country most open to refugees in the world

      Uganda is the country that hosts the most refugees. One million South Sudanese fleeing the war have settled there. But the authorities’ noble intentions also conceal less avowable calculations: the massive influx of international aid encourages inaction and #corruption.

      https://www.courrierinternational.com/article/ouganda-la-generosite-interessee-du-pays-le-plus-ouvert-du-mo

    • Refugees in Uganda to benefit from Dubai-funded schools but issues remain at crowded settlement

      Dubai Cares is building three classrooms in a primary school at Ayilo II but the refugee settlement lacks a steady water supply, food and secondary schools, Roberta Pennington writes from Adjumani


      https://www.thenational.ae/uae/refugees-in-uganda-to-benefit-from-dubai-funded-schools-but-issues-remai

    • FLIGHT FROM SOUTH SUDAN: LUIS, UGANDA AND THE PIECE OF LAND GIVEN TO REFUGEES

      Luis digs, preparing holes to raise a house while he waits to find his family again. The land is a certainty, handed to him by the Ugandan government; being able to live on it with his loved ones is not, yet. He last saw them in South Sudan. When he returned home, his wife and eight children were gone. He is sure they set out for Uganda, and since that day his pursuit has continued. He is certain he will find them in the land that has now taken him in. Luis’s story is one of many collected in the refugee camps of northern Uganda during one of the latest Amref missions, which also included Giusi Nicolini, former mayor of Lampedusa and UNESCO Peace Prize laureate.

      A “Uganda model”? The world calls Uganda a “champion of hospitality”, a welcome it has been extending for months to South Sudanese refugees fleeing one of the most dramatically crisis-stricken countries in the world. Four million people in South Sudan have had to leave their homes, some moving to other countries and some to other South Sudanese regions. Lately, people fleeing the Democratic Republic of Congo have also been arriving in Uganda.

      https://www.amref.it/2018_02_23_Fuga_dal_Sud_Sudan_Luis_lUganda_e_quel_pezzo_di_terra_donata_ai_pro


  • From the birth of computing to Amazon: why tech’s gender problem is nothing new
    https://www.theguardian.com/technology/2018/oct/11/tech-gender-problem-amazon-facebook-bias-women

    Decades after women were pushed out of programming, Amazon’s AI recruiting technology carried on the industry’s legacy of bias. A recent report revealed that Amazon’s AI recruiting technology developed a bias against women because it was trained predominantly on men’s resumes. Although Amazon shut the project down, this kind of mechanized sexism is common and growing – and the problem isn’t limited to AI mishaps. Facebook allows the targeting of job ads by gender, resulting in discrimination in (...)

    #Alphabet #Google #Amazon #Facebook #algorithme #BigData #discrimination #GAFAM


  • Amazon ditched AI recruiting tool that favored men for technical jobs
    https://www.theguardian.com/technology/2018/oct/10/amazon-hiring-ai-gender-bias-recruiting-engine

    Specialists had been building computer programs since 2014 to review résumés in an effort to automate the search process. Amazon’s machine-learning specialists uncovered a big problem: their new recruiting engine did not like women. The team had been building computer programs since 2014 to review job applicants’ résumés, with the aim of mechanizing the search for top talent, five people familiar with the effort told Reuters. Automation has been key to Amazon’s e-commerce dominance, be it (...)

    #Amazon #algorithme #discrimination #travail


  • Amazon scraps secret AI recruiting tool that showed bias against women
    https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUS

    Amazon.com Inc’s (AMZN.O) machine-learning specialists uncovered a big problem: their new recruiting engine did not like women. The team had been building computer programs since 2014 to review job applicants’ resumes with the aim of mechanizing the search for top talent, five people familiar with the effort told Reuters. Automation has been key to Amazon’s e-commerce dominance, be it inside warehouses or driving pricing decisions. The company’s experimental hiring tool used artificial (...)

    #Amazon #algorithme #discrimination

    https://s3.reutersmedia.net/resources/r


  • Amazon scrapped a secret AI recruitment tool that showed bias against women | VentureBeat
    https://venturebeat.com/2018/10/10/amazon-scrapped-a-secret-ai-recruitment-tool-that-showed-bias-against-w

    “Everyone wanted this holy grail,” one of the people said. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”

    But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.

    That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.

    In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter. They did not specify the names of the schools.

    Amazon edited the programs to make them neutral to these particular terms. But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory, the people said.
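    The dynamic described above, a model inheriting bias from skewed historical outcomes, can be illustrated with a toy scorer. Everything below (the mini resume corpus, the smoothed log-odds weighting) is a hypothetical sketch, not Amazon’s actual system:

    ```python
    from collections import Counter
    from math import log

    # Hypothetical toy corpus standing in for ten years of resumes.
    # Labels reflect historical hiring outcomes, which skewed male.
    hired = [
        "executed backend migration captured market data",
        "executed performance tuning led chess club",
        "captured requirements shipped compiler patch",
    ]
    rejected = [
        "women's chess club captain shipped compiler patch",
        "women's coding society led backend migration",
    ]

    def token_counts(docs):
        counts = Counter()
        for doc in docs:
            counts.update(doc.split())
        return counts

    pos, neg = token_counts(hired), token_counts(rejected)
    vocab = set(pos) | set(neg)
    n_pos, n_neg = sum(pos.values()), sum(neg.values())

    def score(token):
        # Add-one smoothed log-odds: > 0 leans "hire", < 0 leans "reject".
        p_hired = (pos[token] + 1) / (n_pos + len(vocab))
        p_rejected = (neg[token] + 1) / (n_neg + len(vocab))
        return log(p_hired / p_rejected)

    scores = {t: score(t) for t in vocab}
    # Purely from the skewed history, the scorer penalizes "women's"
    # and rewards the "male-coded" verb "executed".
    print(round(scores["women's"], 3), round(scores["executed"], 3))
    ```

    Deleting the token “women’s” from the vocabulary would not fix this: tokens that co-occur with it in the corpus still carry the same signal, which is exactly the no-guarantee problem the people quoted by Reuters describe.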

    The Seattle company ultimately disbanded the team by the start of last year because executives lost hope for the project, according to the people, who spoke on condition of anonymity. Amazon’s recruiters looked at the recommendations generated by the tool when searching for new hires, but never relied solely on those rankings, they said.

    Their goal was to develop AI that could rapidly crawl the web and spot candidates worth recruiting, the people familiar with the matter said.

    The group created 500 computer models focused on specific job functions and locations. They taught each to recognize some 50,000 terms that showed up on past candidates’ resumes. The algorithms learned to assign little significance to skills that were common across IT applicants, such as the ability to write various computer codes, the people said.

    Instead, the technology favored candidates who described themselves using verbs more commonly found on male engineers’ resumes, such as “executed” and “captured,” one person said.
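    The down-weighting of ubiquitous skills described above is, in spirit, the inverse-document-frequency idea from text retrieval. Reuters does not say which weighting Amazon actually used, so the sketch below is only an assumed illustration:

    ```python
    from math import log

    # Hypothetical resumes: "python" appears in every one, "captured" in one.
    resumes = [
        "python sql executed deployment",
        "python sql captured requirements",
        "python debugging teamwork",
    ]

    def idf(term, docs):
        # Inverse document frequency: a term present in every document
        # scores 0, so skills common to all applicants carry no weight,
        # while rarer terms (including biased ones) score high.
        df = sum(term in doc.split() for doc in docs)
        return log(len(docs) / df)

    print(idf("python", resumes))    # ubiquitous skill: no signal
    print(idf("captured", resumes))  # rare term: high weight
    ```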

    Gender bias was not the only issue. Problems with the data that underpinned the models’ judgments meant that unqualified candidates were often recommended for all manner of jobs, the people said. With the technology returning results almost at random, Amazon shut down the project, they said.

    Some activists say they are concerned about transparency in AI. The American Civil Liberties Union is currently challenging a law that allows criminal prosecution of researchers and journalists who test hiring websites’ algorithms for discrimination.

    “We are increasingly focusing on algorithmic fairness as an issue,” said Rachel Goodman, a staff attorney with the Racial Justice Program at the ACLU.

    Still, Goodman and other critics of AI acknowledged it could be exceedingly difficult to sue an employer over automated hiring: Job candidates might never know it was being used.

    #Intelligence_artificielle #Amazon #Ressources_humaines #Recrutement #Gender_bias #Discrimination



  • Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System

    A new report from the Citizen Lab and the International Human Rights Program at the University of Toronto’s Faculty of Law investigates the use of artificial intelligence and automated decision-making in Canada’s immigration and refugee systems. The report finds that use of automated decision-making technologies to augment or replace human judgment threatens to violate domestic and international human rights law, with alarming implications for the fundamental human rights of those subjected to these technologies.

    The ramifications of using automated decision-making in the sphere of immigration and refugee law and policy are far-reaching. Marginalized and under-resourced communities such as residents without citizenship status often have access to less robust human rights protections and less legal expertise with which to defend those rights. The report notes that adopting these autonomous decision-making systems without first ensuring responsible best practices and building in human rights principles at the outset may only exacerbate pre-existing disparities and can lead to rights violations including unjust deportation.

    Since at least 2014, Canada has been introducing automated decision-making experiments in its immigration mechanisms, most notably to automate certain activities currently conducted by immigration officials and to support the evaluation of some immigrant and visitor applications. Recent announcements signal an expansion of the uses of these technologies in a variety of immigration decisions that are normally made by a human immigration official. These can include decisions on a spectrum of complexity, including whether an application is complete, whether a marriage is “genuine”, or whether someone should be designated as a “risk.”

    The report provides a critical interdisciplinary analysis of public statements, records, policies, and drafts by relevant departments within the Government of Canada, including Immigration, Refugees and Citizenship Canada, and the Treasury Board of Canada Secretariat. It additionally provides a comparative analysis of similar initiatives in comparable jurisdictions such as Australia and the United Kingdom. In February, the IHRP and the Citizen Lab submitted 27 separate Access to Information Requests and continue to await responses from Canada’s government.

    The report concludes with a series of specific recommendations for the federal government, the complete and detailed list of which is available at the end of this publication. In summary, they include recommendations that the federal government:

    1. Publish a complete and detailed report, to be maintained on an ongoing basis, of all automated decision systems currently in use within Canada’s immigration and refugee system, including detailed and specific information about each system.

    2. Freeze all efforts to procure, develop, or adopt any new automated decision system technology until existing systems fully comply with a government-wide Standard or Directive governing the responsible use of these technologies.

    3. Adopt a binding, government-wide Standard or Directive for the use of automated decision systems, which should apply to all new automated decision systems as well as those currently in use by the federal government.

    4. Establish an independent, arms-length body with the power to engage in all aspects of oversight and review of all use of automated decision systems by the federal government.

    5. Create a rational, transparent, and public methodology for determining the types of administrative processes and systems which are appropriate for the experimental use of automated decision system technologies, and which are not.

    6. Commit to making complete source code for all federal government automated decision systems—regardless of whether they are developed internally or by the private sector—public and open source by default, subject only to limited exceptions for reasons of privacy and national security.

    7. Launch a federal Task Force that brings key government stakeholders alongside academia and civil society to better understand the current and prospective impacts of automated decision system technologies on human rights and the public interest more broadly.


    https://citizenlab.ca/2018/09/bots-at-the-gate-human-rights-analysis-automated-decision-making-in-canad
    #frontières #surveillance #migrations #catégorisation #tri #Droits_Humains #rapport #Canada #réfugiés #protection_des_données #smart_borders #frontières_intelligentes #algorithme #automatisme
    flagged by @etraces on seenthis



  • MOL Tests AI Watch Keeping System on Ferry in Seto Inland Sea – gCaptain
    https://gcaptain.com/mol-tests-ai-watch-keeping-system-on-ferry-in-seto-inland-sea


    Image credit: MOL/Rolls-Royce

    Japanese shipping group MOL has tested an artificial intelligence (A.I.) system aimed at improving safe watch keeping on board a ferry operating in one of Japan’s busiest waterways.

    The test of the so-called Intelligence Awareness System was conducted in collaboration with Rolls-Royce Marine on the Sunflower Gold, a car and passenger ferry operated by Ferry Sunflower Co., part of MOL, which serves Japan’s Seto Inland Sea route.

    MOL said the aim of the project is to conduct research related to the advancement of watch keeping from the bridge.
    […]
    During the test, the project team verified the IAS system’s performance in detecting debris and other obstacles, as well as its ‘data fusion’ capabilities, by conducting the demonstration in the Seto Inland Sea, one of the world’s most congested waterways, with general merchant ships, pleasure boats, fishing boats and many other vessels active in the area.

    MOL said the test also led to an idea for an advanced user interface, which can provide information with greater precision. “MOL plans to continuously accumulate data on the sea and use it to generate practical improvements in watch standing performance that make the system suitable for navigation in the Seto Inland Sea, while upgrading its performance in adverse weather,” the company said.

    As for the crew of the ferry, they seemed to be receptive to the idea of using artificial intelligence, an autonomous shipping technology, as part of vessel operations. “We can expect more reliable watch keeping from the bridge,” MOL quoted one crew member as saying.


  • Google admits it lets hundreds of other companies access your Gmail inbox

    https://www.telegraph.co.uk/technology/2018/09/20/google-admits-hundreds-companies-read-gmail-inbox

    Google is allowing hundreds of companies to scan people’s Gmail accounts, read their emails and even share their data with other firms, the company has confirmed.

    In a letter to US senators, Susan Molinari, Google’s vice president for public policy in the Americas, admitted that the company lets app developers access the inboxes of millions of users – even though Google itself stopped looking in 2017.

    In some cases human employees have manually read thousands of emails in order to help train AI systems which perform the same task.


  • Google erases ’Don’t be evil’ from code of conduct after 18 years | ZDNet
    https://www.zdnet.com/article/google-erases-dont-be-evil-from-code-of-conduct-after-18-years

    At some point in the past month, Google removed its famous ’Don’t be evil’ motto from the introduction to its code of conduct.

    As spotted by Gizmodo, the phrase was dropped from the preface of Google’s code of conduct in late April or early May.

    Until then, ’Don’t be evil’ were the first words of the opening and closing sentences of Google’s code of conduct, and had been part of it since 2000.

    The phrase has occasionally guided debate within the company. The 4,000 staff protesting Google’s work for the Pentagon’s AI Project Maven referred to the motto to highlight how the contract conflicted with the company’s values.

    Google’s parent company, Alphabet, also adopted and still retains a variant of the motto in the form of ’Do the right thing’.

    A copy of Google’s Code of Conduct page from April 21 on the Wayback Machine shows the old version.

    "’Don’t be evil.’ Googlers generally apply those words to how we serve our users. But ’Don’t be evil’ is much more than that. Yes, it’s about providing our users unbiased access to information, focusing on their needs and giving them the best products and services that we can. But it’s also about doing the right thing more generally — following the law, acting honorably, and treating co-workers with courtesy and respect.

    "The Google Code of Conduct is one of the ways we put ’Don’t be evil’ into practice. It’s built around the recognition that everything we do in connection with our work at Google will be, and should be, measured against the highest possible standards of ethical business conduct.

    “We set the bar that high for practical as well as aspirational reasons: Our commitment to the highest standards helps us hire great people, build great products, and attract loyal users. Trust and mutual respect among employees and users are the foundation of our success, and they are something we need to earn every day.”

    The whole first paragraph has been removed from the current Code of Conduct page, which now begins with:

    "The Google Code of Conduct is one of the ways we put Google’s values into practice. It’s built around the recognition that everything we do in connection with our work at Google will be, and should be, measured against the highest possible standards of ethical business conduct.

    “We set the bar that high for practical as well as aspirational reasons: Our commitment to the highest standards helps us hire great people, build great products, and attract loyal users. Respect for our users, for the opportunity, and for each other are foundational to our success, and are something we need to support every day.”

    While the phrase no longer leads Google’s code of conduct, one remnant remains at the end.

    “And remember... don’t be evil, and if you see something that you think isn’t right — speak up.”

    #Google #Histoire_numérique #Motto #Evil


  • Amazon’s Alexa knows what you forgot and can guess what you’re thinking
    https://www.theguardian.com/technology/2018/sep/20/alexa-amazon-hunches-artificial-intelligence

    AI voice assistant will soon give users with connected smart home devices reminders to lock doors and turn off lights Amazon says its AI voice assistant Alexa can now guess what you might be thinking of – or what you’ve forgotten. At an event in Seattle on Thursday, the technology company unveiled a new feature called Alexa Hunches that aims to replicate human curiosity and insight using artificial intelligence. “We’ve reached a point with deep neural networks and machine learning that we (...)

    #Amazon #algorithme #Alexa #domination #voix #solutionnisme #AlexaHunches



  • #Internet Crime Complaint Center (IC3) | #Education #Technologies: Data Collection and Unsecured Systems Could Pose Risks to Students
    https://www.ic3.gov/media/2018/180913.aspx

    The FBI is encouraging public awareness of cyber threat concerns related to K-12 students. The US school systems’ rapid growth of education technologies (EdTech) and widespread collection of student data could have privacy and safety implications if compromised or exploited.

    The tech #elite is making a power-grab for public education – code acts in education
    https://codeactsineducation.wordpress.com/2018/09/14/new-tech-power-elite-education

    The FBI and the ‘ed-techlash’

    The tech elite now making a power-grab for public education probably has little to fear from FBI warnings about education technology. The FBI is primarily concerned with potentially malicious uses of sensitive student information by cybercriminals. There’s nothing criminal about creating Montessori-inspired preschool networks, using ClassDojo as a vehicle to build a liberal society, reimagining high school as personalized learning, or reshaping universities as AI-enhanced factories for producing labour market outcomes – unless you consider all of this a kind of theft of public education for private #commercial advantage and #influence.

    The FBI intervention does, however, at least generate greater visibility for concerns about student data use. The tech power-elite of Zuckerberg, Musk, Thiel, Bezos, Powell Jobs, and the rest, is trying to reframe public education as part of the tech sector, and subject it to ever-greater precision in measurement, prediction and intervention. These entrepreneurs are already experiencing a ‘#techlash‘ as people realize how much they have affected politics, culture and social life. Maybe the FBI warning is the first indication of a growing ‘#ed-techlash’, as the public becomes increasingly aware of how the tech power-elite is seeking to remake public education to serve its own private interests.

    #conflit_d'intérêt


  • What worries me about AI – François Chollet – Medium
    https://medium.com/@francois.chollet/what-worries-me-about-ai-ed9df072b704

    This data, in theory, allows the entities that collect it to build extremely accurate psychological profiles of both individuals and groups. Your opinions and behavior can be cross-correlated with those of thousands of similar people, achieving an uncanny understanding of what makes you tick — probably more predictive than what you yourself could achieve through mere introspection (for instance, Facebook “likes” enable algorithms to better assess your personality than your own friends could). This data makes it possible to predict a few days in advance when you will start a new relationship (and with whom), and when you will end your current one. Or who is at risk of suicide. Or which side you will ultimately vote for in an election, even while you’re still feeling undecided. And it’s not just individual-level profiling power — large groups can be even more predictable, as aggregating data points erases randomness and individual outliers.

    Digital information consumption as a psychological control vector

    Passive data collection is not where it ends. Increasingly, social network services are in control of what information we consume. What we see in our newsfeeds has become algorithmically “curated”. Opaque social media algorithms get to decide, to an ever-increasing extent, which political articles we read, which movie trailers we see, who we keep in touch with, and whose feedback we receive on the opinions we express.

    In short, social network companies can simultaneously measure everything about us, and control the information we consume. And that’s an accelerating trend. When you have access to both perception and action, you’re looking at an AI problem. You can start establishing an optimization loop for human behavior, in which you observe the current state of your targets and keep tuning what information you feed them, until you start observing the opinions and behaviors you wanted to see. A large subset of the field of AI — in particular “reinforcement learning” — is about developing algorithms to solve such optimization problems as efficiently as possible, to close the loop and achieve full control of the target at hand — in this case, us. By moving our lives to the digital realm, we become vulnerable to that which rules it — AI algorithms.
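
    The optimization loop described above can be sketched as a toy simulation (purely illustrative: the one-number “opinion” state and all function names are hypothetical, not any real system):

```python
# Toy sketch of the perception/action loop described above. The controller
# observes the target's current state (perception) and keeps tuning what
# content it feeds them (action) until the state matches its goal.

def target_response(opinion, content_slant, susceptibility=0.1):
    """The target's opinion drifts slightly toward the slant of content shown."""
    return opinion + susceptibility * (content_slant - opinion)

def control_loop(initial_opinion, goal, steps=100):
    opinion = initial_opinion
    for _ in range(steps):
        content_slant = goal                                # action: feed goal-slanted content
        opinion = target_response(opinion, content_slant)   # perception: observe the update
    return opinion

final = control_loop(initial_opinion=-1.0, goal=1.0)
print(round(final, 3))  # → 1.0: the loop has driven the opinion to the goal
```

    Real reinforcement-learning systems are vastly more sophisticated, but the structure — observe state, choose action, repeat until the target behaves as desired — is the same.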

    From an information security perspective, you would call these vulnerabilities: known exploits that can be used to take over a system. In the case of the human mind, these vulnerabilities never get patched; they are just the way we work. They’re in our DNA. The human mind is a static, vulnerable system that will come increasingly under attack from ever-smarter AI algorithms that will simultaneously have a complete view of everything we do and believe, and complete control of the information we consume.

    The issue is not AI itself. The issue is control.

    Instead of letting newsfeed algorithms manipulate the user to achieve opaque goals, such as swaying their political opinions, or maximally wasting their time, we should put the user in charge of the goals that the algorithms optimize for. We are talking, after all, about your news, your worldview, your friends, your life — the impact that technology has on you should naturally be placed under your own control. Information management algorithms should not be a mysterious force inflicted on us to serve ends that run opposite to our own interests; instead, they should be a tool in our hand. A tool that we can use for our own purposes, say, for education and personal growth instead of entertainment.

    Here’s an idea — any algorithmic newsfeed with significant adoption should:

    Transparently convey what objectives the feed algorithm is currently optimizing for, and how these objectives are affecting your information diet.
    Give you intuitive tools to set these goals yourself. For instance, it should be possible for you to configure your newsfeed to maximize learning and personal growth — in specific directions.
    Feature an always-visible measure of how much time you are spending on the feed.
    Feature tools to stay in control of how much time you’re spending on the feed — such as a daily time target, past which the algorithm will seek to get you off the feed.
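
    A minimal sketch of what such user-facing controls might look like as a settings object (all field names are hypothetical, not any real platform’s API):

```python
from dataclasses import dataclass, field

@dataclass
class FeedSettings:
    # 1. Transparently declared optimization objectives.
    objectives: list = field(default_factory=lambda: ["engagement"])
    # 2. User-set goals, e.g. topics to maximize learning in.
    learning_topics: list = field(default_factory=list)
    # 3. Always-visible measure of time spent (minutes today).
    minutes_spent_today: float = 0.0
    # 4. Daily time target, past which the feed winds down.
    daily_minutes_target: float = 60.0

    def should_wind_down(self) -> bool:
        """Past the daily target, the algorithm should nudge the user off the feed."""
        return self.minutes_spent_today >= self.daily_minutes_target

settings = FeedSettings(objectives=["learning"],
                        learning_topics=["statistics"],
                        minutes_spent_today=75.0)
print(settings.should_wind_down())  # True: 75 minutes is past the 60-minute target
```

    The point of the sketch is only that every one of the four principles reduces to state the user can inspect and set, rather than hidden server-side objectives.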

    Augmenting ourselves with AI while retaining control

    We should build AI to serve humans, not to manipulate them for profit or political gain.

    You may be thinking, since a search engine is still an AI layer between us and the information we consume, could it bias its results to attempt to manipulate us? Yes, that risk is latent in every information-management algorithm. But in stark contrast with social networks, market incentives in this case are actually aligned with users’ needs, pushing search engines to be as relevant and objective as possible. If they fail to be maximally useful, there’s essentially no friction for users to move to a competing product. And importantly, a search engine would have a considerably smaller psychological attack surface than a social newsfeed. The threat we’ve profiled in this post requires most of the following to be present in a product:

    Both perception and action: not only should the product be in control of the information it shows you (news and social updates), it should also be able to “perceive” your current mental states via “likes”, chat messages, and status updates. Without both perception and action, no reinforcement learning loop can be established. A read-only feed would only be dangerous as a potential avenue for classical propaganda.
    Centrality to our lives: the product should be a major source of information for at least a subset of its users, and typical users should be spending several hours per day on it. A feed that is auxiliary and specialized (such as Amazon’s product recommendations) would not be a serious threat.
    A social component, enabling a far broader and more effective array of psychological control vectors (in particular social reinforcement). An impersonal newsfeed has only a fraction of the leverage over our minds.
    Business incentives set towards manipulating users and making users spend more time on the product.

    Most AI-driven information-management products don’t meet these requirements. Social networks, on the other hand, are a frightening combination of risk factors.

    #Intelligence_artificielle #Manipulation #Médias_sociaux

    • This is made all the easier by the fact that the human mind is highly vulnerable to simple patterns of social manipulation. Consider, for instance, the following vectors of attack:

      Identity reinforcement: this is an old trick that has been leveraged since the very first ads in history, and still works just as well as it did the first time. It consists of associating a given view with markers that you identify with (or wish you did), thus making you automatically side with the target view. In the context of AI-optimized social media consumption, a control algorithm could make sure that you only see content (whether news stories or posts from your friends) where the views it wants you to hold co-occur with your own identity markers, and inversely for views the algorithm wants you to move away from.
      Negative social reinforcement: if you make a post expressing a view that the control algorithm doesn’t want you to hold, the system can choose to only show your post to people who hold the opposite view (maybe acquaintances, maybe strangers, maybe bots), and who will harshly criticize it. Repeated many times, such social backlash is likely to make you move away from your initial views.
      Positive social reinforcement: if you make a post expressing a view that the control algorithm wants to spread, it can choose to only show it to people who will “like” it (it could even be bots). This will reinforce your belief and put you under the impression that you are part of a supportive majority.
      Sampling bias: the algorithm may also be more likely to show you posts from your friends (or the media at large) that support the views it wants you to hold. Placed in such an information bubble, you will be under the impression that these views have much broader support than they do in reality.
      Argument personalization: the algorithm may observe that exposure to certain pieces of content, among people with a psychological profile close to yours, has resulted in the sort of view shift it seeks. It may then serve you with content that is expected to be maximally effective for someone with your particular views and life experience. In the long run, the algorithm may even be able to generate such maximally-effective content from scratch, specifically for you.
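
      The “sampling bias” vector above can be illustrated with a toy ranking function that surfaces only posts agreeing with a target view, so a balanced pool appears as near-unanimous support (hypothetical code, not any platform’s actual ranking logic):

```python
import random

random.seed(0)
# A pool of posts with slants uniformly spread from -1 (against) to +1 (for).
posts = [{"id": i, "slant": random.uniform(-1, 1)} for i in range(1000)]
target_view = 1.0  # the view the algorithm wants the user to adopt

def biased_sample(posts, target, k=20):
    """Rank posts by agreement with the target view and surface only the top k."""
    return sorted(posts, key=lambda p: abs(p["slant"] - target))[:k]

feed = biased_sample(posts, target_view)
pool_support = sum(p["slant"] > 0 for p in posts) / len(posts)
feed_support = sum(p["slant"] > 0 for p in feed) / len(feed)
print(feed_support)  # 1.0: the feed shows only supporting posts,
                     # even though the pool is roughly evenly split
```

      The user never sees the ranking step, only its output — which is what makes the resulting impression of consensus so hard to detect.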



  • Joseph Stiglitz on artificial intelligence : ’We’re going towards a more divided society’
    https://www.theguardian.com/technology/2018/sep/08/joseph-stiglitz-on-artificial-intelligence-were-going-towards-a-more-di

    The technology could vastly improve lives, the economist says – but only if the tech titans that control it are properly regulated. ‘What we have now is totally inadequate’ It must be hard for Joseph Stiglitz to remain an optimist in the face of the grim future he fears may be coming. The Nobel laureate and former chief economist at the World Bank has thought carefully about how artificial intelligence will affect our lives. On the back of the technology, we could build ourselves a richer (...)

    #algorithme #solutionnisme #discrimination


  • Military robots are getting smaller and more capable (https://www.e...
    https://diasp.eu/p/7664483

    Military robots are getting smaller and more capable

    Soon, they will travel in swarms

    ON NOVEMBER 12th a video called “Slaughterbots” was uploaded to YouTube. It is the brainchild of Stuart Russell, a professor of artificial intelligence at the University of California, Berkeley, and was paid for by the Future of Life Institute (FLI), a group of concerned scientists and technologists that includes Elon Musk, Stephen Hawking and Martin Rees, Britain’s Astronomer Royal. It is set in a near-future in which (...)



  • Toyota invests $500 million in Uber
    https://money.cnn.com/2018/08/27/technology/toyota-uber/index.html

    Toyota is sinking half a billion in play money into supposed innovation. No wonder: only autonomous vehicles promise to keep the passenger-car market alive over the longer term. That is consistent from the perspective of one of the world’s biggest producers of problems. Toyota is betting that the problems created by the mass spread of motor vehicles can be solved by better motor vehicles. Any realistic solution would mean the abolition of the car manufacturers. Instead, play money is being pumped into the war chest of Uber, a wrecker of societies. Solutions for humane environments and forms of society are not advanced this way. Lemmings, all of them.

    Toyota just placed a big bet on autonomous vehicles.
    The automaker announced on Monday that it is investing $500 million in Uber and working more closely with the company to accelerate the development and deployment of self-driving vehicles. Uber plans to retrofit Toyota Sienna minivans with its autonomous technology and begin real-world testing in 2021.

    The deal gives Toyota a key partner in a field that is growing rapidly, and comes on the same day that four of the automaker’s suppliers announced a partnership to develop some of the software underpinning autonomous vehicles.

    “This agreement and investment marks an important milestone in our transformation to a mobility company,” Shigeki Tomoyama, the president of Toyota Connected Company, said in a statement.

    Automakers and tech companies continue scrambling to position themselves for a future in which car ownership gives way to mobility as a service. That’s led to a growing number of partnerships as companies like Toyota realize they don’t know much about ridesharing and companies like Uber discover that building cars is hard.

    Other tech and auto companies have forged similar arrangements. Waymo, for example, buys vehicles from Chrysler and Jaguar Land Rover.

    “We’re seeing marriages of companies of complementary abilities,” said Brian Collie of Boston Consulting Group. “Partnerships are quite necessary and create value toward bringing mobility as a service to the market faster.”

    Uber CEO Dara Khosrowshahi shakes hands with Shigeki Tomoyama, president of Toyota Connected Company.
    Uber leads the world in ridesharing, which gives it an edge in finding an audience for autonomous vehicles. Uber could create a ready market for Toyota self-driving cars through its app, which is used by millions of people.

    Monday’s announcement builds on an existing partnership. During the International Consumer Electronics Show in January, the two companies announced e-Palette, an autonomous vehicle concept that could be used for everything from pizza delivery to ridesharing.

    Toyota’s latest infusion of cash provides Uber with an unreserved endorsement of a self-driving car program rocked by a lawsuit from Google and the death of a pedestrian in Arizona in March. Uber shuttered its research and development efforts in Arizona in May, and only recently returned to the streets of Pittsburgh, Pennsylvania. It still has not started testing its cars again in autonomous mode.

    Related: How free self-driving car rides could change everything

    This isn’t Toyota’s first move into the space. In 2015, it said it would invest $1 billion in the Toyota Research Institute artificial intelligence lab. Institute CEO Gill Pratt said in a statement Monday that the Uber partnership would accelerate efforts to deliver autonomous technology.

    Toyota’s financial investment will also prove useful given the high costs of running a self-driving car program. Engineers who specialize in the technology are rare and command salaries of several hundred thousand dollars a year. Maintaining a large fleet of test vehicles brings additional costs.

    In May, SoftBank invested $2.25 billion in Cruise, the self-driving startup of General Motors. That just goes to show that even the biggest companies need partners.

    #Uber #Wirtschaft



  • The Fake-News Fallacy | The New Yorker
    https://www.newyorker.com/magazine/2017/09/04/the-fake-news-fallacy

    Not so very long ago, it was thought that the tension between commercial pressure and the public interest would be one of the many things made obsolete by the Internet. In the mid-aughts, during the height of the Web 2.0 boom, the pundit Henry Jenkins declared that the Internet was creating a “participatory culture” where the top-down hegemony of greedy media corporations would be replaced by a horizontal network of amateur “prosumers” engaged in a wonderfully democratic exchange of information in cyberspace—an epistemic agora that would allow the whole globe to come together on a level playing field. Google, Facebook, Twitter, and the rest attained their paradoxical gatekeeper status by positioning themselves as neutral platforms that unlocked the Internet’s democratic potential by empowering users. It was on a private platform, Twitter, where pro-democracy protesters organized, and on another private platform, Google, where the knowledge of a million public libraries could be accessed for free. These companies would develop into what the tech guru Jeff Jarvis termed “radically public companies,” which operate more like public utilities than like businesses.

    But there has been a growing sense among mostly liberal-minded observers that the platforms’ championing of openness is at odds with the public interest. The image of Arab Spring activists using Twitter to challenge repressive dictators has been replaced, in the public imagination, by that of ISIS propagandists luring vulnerable Western teen-agers to Syria via YouTube videos and Facebook chats. The openness that was said to bring about a democratic revolution instead seems to have torn a hole in the social fabric. Today, online misinformation, hate speech, and propaganda are seen as the front line of a reactionary populist upsurge threatening liberal democracy. Once held back by democratic institutions, the bad stuff is now sluicing through a digital breach with the help of irresponsible tech companies. Stanching the torrent of fake news has become a trial by which the digital giants can prove their commitment to democracy. The effort has reignited a debate over the role of mass communication that goes back to the early days of radio.

    The debate around radio at the time of “The War of the Worlds” was informed by a similar fall from utopian hopes to dystopian fears. Although radio can seem like an unremarkable medium—audio wallpaper pasted over the most boring parts of your day—the historian David Goodman’s book “Radio’s Civic Ambition: American Broadcasting and Democracy in the 1930s” makes it clear that the birth of the technology brought about a communications revolution comparable to that of the Internet. For the first time, radio allowed a mass audience to experience the same thing simultaneously from the comfort of their homes. Early radio pioneers imagined that this unprecedented blurring of public and private space might become a sort of ethereal forum that would uplift the nation, from the urban slum dweller to the remote Montana rancher. John Dewey called radio “the most powerful instrument of social education the world has ever seen.” Populist reformers demanded that radio be treated as a common carrier and give airtime to anyone who paid a fee. Were this to have come about, it would have been very much like the early online-bulletin-board systems where strangers could come together and leave a message for any passing online wanderer. Instead, in the regulatory struggles of the twenties and thirties, the commercial networks won out.

    Corporate networks were supported by advertising, and what many progressives had envisaged as the ideal democratic forum began to seem more like Times Square, cluttered with ads for soap and coffee. Rather than elevating public opinion, advertisers pioneered techniques of manipulating it. Who else might be able to exploit such techniques? Many saw a link between the domestic on-air advertising boom and the rise of Fascist dictators like Hitler abroad.

    Today, when we speak about people’s relationship to the Internet, we tend to adopt the nonjudgmental language of computer science. Fake news was described as a “virus” spreading among users who have been “exposed” to online misinformation. The proposed solutions to the fake-news problem typically resemble antivirus programs: their aim is to identify and quarantine all the dangerous nonfacts throughout the Web before they can infect their prospective hosts. One venture capitalist, writing on the tech blog Venture Beat, imagined deploying artificial intelligence as a “media cop,” protecting users from malicious content. “Imagine a world where every article could be assessed based on its level of sound discourse,” he wrote. The vision here was of the news consumers of the future turning the discourse setting on their browser up to eleven and soaking in pure fact. It’s possible, though, that this approach comes with its own form of myopia. Neil Postman, writing a couple of decades ago, warned of a growing tendency to view people as computers, and a corresponding devaluation of the “singular human capacity to see things whole in all their psychic, emotional and moral dimensions.” A person does not process information the way a computer does, flipping a switch of “true” or “false.” One rarely cited Pew statistic shows that only four per cent of American Internet users trust social media “a lot,” which suggests a greater resilience against online misinformation than overheated editorials might lead us to expect. Most people seem to understand that their social-media streams represent a heady mixture of gossip, political activism, news, and entertainment. You might see this as a problem, but turning to Big Data-driven algorithms to fix it will only further entrench our reliance on code to tell us what is important about the world—which is what led to the problem in the first place. Plus, it doesn’t sound very fun.

    In recent times, Donald Trump supporters are the ones who have most effectively applied Grierson’s insight to the digital age. Young Trump enthusiasts turned Internet trolling into a potent political tool, deploying the “folk stuff” of the Web—memes, slang, the nihilistic humor of a certain subculture of Web-native gamer—to give a subversive, cyberpunk sheen to a movement that might otherwise look like a stale reactionary blend of white nationalism and anti-feminism. As crusaders against fake news push technology companies to “defend the truth,” they face a backlash from a conservative movement, retooled for the digital age, which sees claims for objectivity as a smoke screen for bias.

    For conservatives, the rise of online gatekeepers may be a blessing in disguise. Throwing the charge of “liberal media bias” against powerful institutions has always provided an energizing force for the conservative movement, as the historian Nicole Hemmer shows in her new book, “Messengers of the Right.” Instead of focussing on ideas, Hemmer focusses on the galvanizing struggle over the means of distributing those ideas. The first modern conservatives were members of the America First movement, who found their isolationist views marginalized in the lead-up to the Second World War and vowed to fight back by forming the first conservative media outlets. A “vague claim of exclusion” sharpened into a “powerful and effective ideological arrow in the conservative quiver,” Hemmer argues, through battles that conservative radio broadcasters had with the F.C.C. in the nineteen-fifties and sixties. Their main obstacle was the F.C.C.’s Fairness Doctrine, which sought to protect public discourse by requiring controversial opinions to be balanced by opposing viewpoints. Since attacks on the mid-century liberal consensus were inherently controversial, conservatives found themselves constantly in regulators’ sights. In 1961, a watershed moment occurred with the leak of a memo from labor leaders to the Kennedy Administration which suggested using the Fairness Doctrine to suppress right-wing viewpoints. To many conservatives, the memo proved the existence of the vast conspiracy they had long suspected. A fund-raising letter for a prominent conservative radio show railed against the doctrine, calling it “the most dastardly collateral attack on freedom of speech in the history of the country.” Thus was born the character of the persecuted truthteller standing up to a tyrannical government—a trope on which a billion-dollar conservative-media juggernaut has been built.

    The online tumult of the 2016 election fed into a growing suspicion of Silicon Valley’s dominance over the public sphere. Across the political spectrum, people have become less trusting of the Big Tech companies that govern most online political expression. Calls for civic responsibility on the part of Silicon Valley companies have replaced the hope that technological innovation alone might bring about a democratic revolution. Despite the focus on algorithms, A.I., filter bubbles, and Big Data, these questions are political as much as technical.

    #Démocratie #Science_information #Fake_news #Regulation


  • Facebook Aims To Make MRI Scans 10x Faster With NYU
    https://www.forbes.com/sites/samshead/2018/08/20/facebook-aims-to-make-mri-scans-10x-faster-with-nyu/#2b6219047a04

    If even New York University needs Facebook to do its research... but it’s all clean, of course – maybe even open source.

    Zitnick added that partnering with NYU could help the social media giant get the technology into practice if it proves to be successful. “If we do show success, we have an avenue to get this out into clinical practice, test it out, put it in front of real radiologists, and make sure that what we’re doing is actually going to be impactful,” he said.

    But when asked if Facebook plans to release and build medical products in the future, Zitnick didn’t give much away. Instead, he said that “FAIR’s mission is to push the science of AI forward,” before going on to say that FAIR is looking for problems where AI can have a positive impact on the world.

    Facebook and NYU have a long-standing relationship, with several people working for both organizations including Yann LeCun, who was the director of FAIR before he became Facebook’s chief AI scientist. “This all got started with a connection by someone working both for NYU and in collaboration with FAIR. They suggested it’d be good for us to start talking, which we did,” said Sodickson.

    Facebook and NYU plan to open source their work so that other researchers can build on their developments. As the project unfolds, Facebook said it will publish AI models, baselines, and evaluation metrics associated with the research, while NYU will open source the image dataset.

    Facebook isn’t the only tech company exploring how AI can be used to assist radiologists. For example, DeepMind, an AI lab owned by Google, has developed deep learning software that can detect over 50 eye diseases from scans.

    DeepMind has a number of other healthcare projects, but Facebook (which was reportedly interested in buying DeepMind at one stage) claims this project is the first of its kind: it aims to change how medical images are created in the first place, rather than mining existing images to see what can be extracted from them.

    #Facebook #Résonance_magnétique #Neuromarketing #Intelligence_artificielle #Université #Partenariats


  • Facebook and NYU School of Medicine launch research collaboration to improve MRI – Facebook Code
    https://code.fb.com/ai-research/facebook-and-nyu-school-of-medicine-launch-research-collaboration-to-improv

    The flowery language of public-relations experts is a beautiful thing...

    Using AI, it may be possible to capture less data and therefore scan faster, while preserving or even enhancing the rich information content of magnetic resonance images. The key is to train artificial neural networks to recognize the underlying structure of the images in order to fill in views omitted from the accelerated scan. This approach is similar to how humans process sensory information. When we experience the world, our brains often receive an incomplete picture — as in the case of obscured or dimly lit objects — that we need to turn into actionable information. Early work performed at NYU School of Medicine shows that artificial neural networks can accomplish a similar task, generating high-quality images from far less data than was previously thought to be necessary.

    In practice, reconstructing images from partial information poses an exceedingly hard problem. Neural networks must be able to effectively bridge the gaps in scanning data without sacrificing accuracy. A few missing or incorrectly modeled pixels could mean the difference between an all-clear scan and one in which radiologists find a torn ligament or a possible tumor. Conversely, capturing previously inaccessible information in an image can quite literally save lives.

    Advancing the AI and medical communities
    Unlike other AI-related projects, which use medical images as a starting point and then attempt to derive anatomical or diagnostic information from them (in emulation of human observers), this collaboration focuses on applying the strengths of machine learning to reconstruct the most high-value images in entirely new ways. With the goal of radically changing the way medical images are acquired in the first place, our aim is not simply enhanced data mining with AI, but rather the generation of fundamentally new capabilities for medical visualization to benefit human health.

    In the interest of advancing the state of the art in medical imaging as quickly as possible, we plan to open-source this work to allow the wider research community to build on our developments. As the project progresses, Facebook will share the AI models, baselines, and evaluation metrics associated with this research, and NYU School of Medicine will open-source the image data set. This will help ensure the work’s reproducibility and accelerate adoption of resulting methods in clinical practice.

    What’s next
    Though this project will initially focus on MRI technology, its long-term impact could extend to many other medical imaging applications. For example, the improvements afforded by AI have the potential to revolutionize CT scans as well. Advanced image reconstruction might enable ultra-low-dose CT scans suitable for vulnerable populations, such as pediatric patients. Such improvements would not only help transform the experience and effectiveness of medical imaging, but they’d also help equalize access to an indispensable element of medical care.

    We believe the fastMRI project will demonstrate how domain-specific experts from different fields and industries can work together to produce the kind of open research that will make a far-reaching and lasting positive impact in the world.

    #Résonance_magnétique #Intelligence_artificielle #Facebook #Neuromarketing


  • How Facebook — yes, Facebook — might make MRIs faster
    https://money.cnn.com/2018/08/20/technology/facebook-mri-ai-nyu/index.html

    Perfect for neuromarketing...

    Doctors use MRI (shorthand for magnetic resonance imaging) to get a closer look at organs, tissues and bones without exposing patients to harmful radiation. The image quality also makes these scans especially helpful in spotting soft tissue damage. The problem is that an exam can take as long as an hour, and anyone with even a hint of claustrophobia can struggle to remain perfectly still in the tube-like machine for that long. Tying up a scanner for an hour also drives up costs by limiting the number of exams a hospital can perform each day.

    Computer scientists at Facebook (FB) think they can use machine learning to make things a lot faster. To that end, NYU is providing an anonymized dataset of 10,000 MRI exams, a trove that will include as many as three million images of knees, brains and livers.


    Researchers will use the data to train an algorithm, using a method called deep learning, to recognize the arrangement of bones, muscles, ligaments, and other structures that make up the human body. Building this knowledge into the software that powers an MRI machine would let the AI generate a portion of the image from a faster, partial scan, saving time.

    #Résonance_magnétique #Neuromarketing #Facebook