• Technologies and Startups that Hack the Brain

    Technologies and Startups that Hack the Brain: what they do and how machine learning fits in. [Cover of the Brazilian edition of ‘Neuromancer’]

    ‘…physics and #neuroscience are in some ways the most fundamental subjects: one is concerned with the external world out there, and the other with the internal world in our minds.’ (Demis Hassabis, a co-founder of DeepMind)

    The development of technologies that study and affect the ‘internal world in our minds’ is fuelled by investment activity, among other things. In the summer of 2016, CB Insights, an investment database, published a review of 17 startups that boost the brain. Just two years later, in June 2018, Neuronetics, the best-funded #startup on the list, went public. Other companies from that list have since raised substantial investment rounds. For (...)

    #data-science #mindfulness #ai

  • Google ’betrays patient trust’ with DeepMind Health move

    Moving healthcare subsidiary into main company breaks pledge that ‘data will not be connected to Google accounts’ Google has been accused of breaking promises to patients, after the company announced it would be moving a healthcare-focused subsidiary, DeepMind Health, into the main arm of the organisation. The restructure, critics argue, breaks a pledge DeepMind made when it started working with the NHS that “data will never be connected to Google accounts or services”. The change has also (...)

    #Alphabet #Google #DeepMind #algorithme #terms #santé #NHS


  • The Shallowness of Google Translate - The Atlantic

    An excellent piece by Douglas Hofstadter (ah, D.H., Gödel, Escher and Bach... !!!)

    As a language lover and an impassioned translator, as a cognitive scientist and a lifelong admirer of the human mind’s subtlety, I have followed the attempts to mechanize translation for decades. When I first got interested in the subject, in the mid-1970s, I ran across a letter written in 1947 by the mathematician Warren Weaver, an early machine-translation advocate, to Norbert Wiener, a key figure in cybernetics, in which Weaver made this curious claim, today quite famous:

    When I look at an article in Russian, I say, “This is really written in English, but it has been coded in some strange symbols. I will now proceed to decode.”

    Some years later he offered a different viewpoint: “No reasonable person thinks that a machine translation can ever achieve elegance and style. Pushkin need not shudder.” Whew! Having devoted one unforgettably intense year of my life to translating Alexander Pushkin’s sparkling novel in verse Eugene Onegin into my native tongue (that is, having radically reworked that great Russian work into an English-language novel in verse), I find this remark of Weaver’s far more congenial than his earlier remark, which reveals a strangely simplistic view of language. Nonetheless, his 1947 view of translation-as-decoding became a credo that has long driven the field of machine translation.

    Before showing my findings, though, I should point out that an ambiguity in the adjective “deep” is being exploited here. When one hears that Google bought a company called DeepMind whose products have “deep neural networks” enhanced by “deep learning,” one cannot help taking the word “deep” to mean “profound,” and thus “powerful,” “insightful,” “wise.” And yet, the meaning of “deep” in this context comes simply from the fact that these neural networks have more layers (12, say) than do older networks, which might have only two or three. But does that sort of depth imply that whatever such a network does must be profound? Hardly. This is verbal spinmeistery.
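    Hofstadter's point that “deep” names nothing more than layer count can be made concrete. A minimal numpy sketch (illustrative only; the sizes and weights are arbitrary and bear no relation to any actual Google system):

    ```python
    import numpy as np

    def make_net(layer_sizes, seed=0):
        """Random weight matrices for a feed-forward net; 'deep' just means many of them."""
        rng = np.random.default_rng(seed)
        return [rng.standard_normal((m, n)) * 0.1
                for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

    def forward(x, weights):
        for W in weights:           # each layer: a linear map plus a nonlinearity
            x = np.tanh(x @ W)
        return x

    shallow = make_net([4, 8, 2])            # 2 weight layers, "old-style"
    deep = make_net([4] + [8] * 11 + [2])    # 12 layers, hence "deep"
    print(len(shallow), len(deep))           # 2 12
    print(forward(np.ones(4), deep).shape)   # (2,)
    ```

    The “deep” network is the same construction repeated more times; nothing about the extra repetitions is profound in itself.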

    I began my explorations very humbly, using the following short remark, which, in a human mind, evokes a clear scenario:

    In their house, everything comes in pairs. There’s his car and her car, his towels and her towels, and his library and hers.

    The translation challenge seems straightforward, but in French (and other Romance languages), the words for “his” and “her” don’t agree in gender with the possessor, but with the item possessed. So here’s what Google Translate gave me:

    Dans leur maison, tout vient en paires. Il y a sa voiture et sa voiture, ses serviettes et ses serviettes, sa bibliothèque et les siennes.
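    The grammatical trap is mechanical enough to spell out. A toy lookup (illustrative vocabulary only, not a translation system) showing that the French third-person possessive is chosen by the possessed noun's gender and number, discarding the possessor entirely:

    ```python
    # French possessives agree with the possessed noun, not the possessor,
    # so "his car" and "her car" both come out as "sa voiture".
    NOUN_GENDER = {"voiture": "f", "serviette": "f", "bibliothèque": "f", "livre": "m"}

    def possessive(noun, plural=False):
        if plural:
            return f"ses {noun}s"
        det = "sa" if NOUN_GENDER[noun] == "f" else "son"
        return f"{det} {noun}"

    print(possessive("voiture"))          # sa voiture (whether "his" or "her")
    print(possessive("serviette", True))  # ses serviettes
    ```

    The rule itself is trivial; the hard part, as the passage goes on to argue, is knowing when the English text is contrasting two possessors at all.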

    We humans know all sorts of things about couples, houses, personal possessions, pride, rivalry, jealousy, privacy, and many other intangibles that lead to such quirks as a married couple having towels embroidered “his” and “hers.” Google Translate isn’t familiar with such situations. Google Translate isn’t familiar with situations, period. It’s familiar solely with strings composed of words composed of letters. It’s all about ultrarapid processing of pieces of text, not about thinking or imagining or remembering or understanding. It doesn’t even know that words stand for things. Let me hasten to say that a computer program certainly could, in principle, know what language is for, and could have ideas and memories and experiences, and could put them to use, but that’s not what Google Translate was designed to do. Such an ambition wasn’t even on its designers’ radar screens.

    It’s hard for a human, with a lifetime of experience and understanding and of using words in a meaningful way, to realize how devoid of content all the words thrown onto the screen by Google Translate are. It’s almost irresistible for people to presume that a piece of software that deals so fluently with words must surely know what they mean. This classic illusion associated with artificial-intelligence programs is called the “Eliza effect,” since one of the first programs to pull the wool over people’s eyes with its seeming understanding of English, back in the 1960s, was a vacuous phrase manipulator called Eliza, which pretended to be a psychotherapist, and as such, it gave many people who interacted with it the eerie sensation that it deeply understood their innermost feelings.

    To me, the word “translation” exudes a mysterious and evocative aura. It denotes a profoundly human art form that graciously carries clear ideas in Language A into clear ideas in Language B, and the bridging act not only should maintain clarity, but also should give a sense for the flavor, quirks, and idiosyncrasies of the writing style of the original author. Whenever I translate, I first read the original text carefully and internalize the ideas as clearly as I can, letting them slosh back and forth in my mind. It’s not that the words of the original are sloshing back and forth; it’s the ideas that are triggering all sorts of related ideas, creating a rich halo of related scenarios in my mind. Needless to say, most of this halo is unconscious. Only when the halo has been evoked sufficiently in my mind do I start to try to express it—to “press it out”—in the second language. I try to say in Language B what strikes me as a natural B-ish way to talk about the kinds of situations that constitute the halo of meaning in question.

    This process, mediated via meaning, may sound sluggish, and indeed, in comparison with Google Translate’s two or three seconds per page, it certainly is—but it is what any serious human translator does. This is the kind of thing I imagine when I hear an evocative phrase like “deep mind.”

    A friend asked me whether Google Translate’s level of skill isn’t merely a function of the program’s database. He figured that if you multiplied the database by a factor of, say, a million or a billion, eventually it would be able to translate anything thrown at it, and essentially perfectly. I don’t think so. Having ever more “big data” won’t bring you any closer to understanding, since understanding involves having ideas, and lack of ideas is the root of all the problems for machine translation today. So I would venture that bigger databases—even vastly bigger ones—won’t turn the trick.

    Another natural question is whether Google Translate’s use of neural networks—a gesture toward imitating brains—is bringing us closer to genuine understanding of language by machines. This sounds plausible at first, but there’s still no attempt being made to go beyond the surface level of words and phrases. All sorts of statistical facts about the huge databases are embodied in the neural nets, but these statistics merely relate words to other words, not to ideas. There’s no attempt to create internal structures that could be thought of as ideas, images, memories, or experiences. Such mental etherea are still far too elusive to deal with computationally, and so, as a substitute, fast and sophisticated statistical word-clustering algorithms are used. But the results of such techniques are no match for actually having ideas involved as one reads, understands, creates, modifies, and judges a piece of writing.
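    The “statistics relating words to other words” can be illustrated in the crudest possible form: a bigram count over a corpus (a toy sketch, vastly simpler than the neural models in question, but sharing their word-level horizon):

    ```python
    from collections import Counter

    # Toy corpus echoing the example sentence
    corpus = "his car and her car his towels and her towels his library and hers".split()
    bigrams = Counter(zip(corpus, corpus[1:]))

    # The statistics relate words only to neighbouring words: the model "knows"
    # that "her" can be followed by "car", but nothing about couples or ownership.
    print(bigrams[("her", "car")])     # 1
    print(bigrams[("his", "towels")])  # 1
    ```

    However sophisticated the clustering on top of such counts becomes, the objects being related remain strings, not ideas.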

    Let me return to that sad image of human translators, soon outdone and outmoded, gradually turning into nothing but quality controllers and text tweakers. That’s a recipe for mediocrity at best. A serious artist doesn’t start with a kitschy piece of error-ridden bilgewater and then patch it up here and there to produce a work of high art. That’s not the nature of art. And translation is an art.

    In my writings over the years, I’ve always maintained that the human brain is a machine—a very complicated kind of machine—and I’ve vigorously opposed those who say that machines are intrinsically incapable of dealing with meaning. There is even a school of philosophers who claim computers could never “have semantics” because they’re made of “the wrong stuff” (silicon). To me, that’s facile nonsense. I won’t touch that debate here, but I wouldn’t want to leave readers with the impression that I believe intelligence and understanding to be forever inaccessible to computers. If in this essay I seem to come across sounding that way, it’s because the technology I’ve been discussing makes no attempt to reproduce human intelligence. Quite the contrary: It attempts to make an end run around human intelligence, and the output passages exhibited above clearly reveal its giant lacunas.

    From my point of view, there is no fundamental reason that machines could not, in principle, someday think, be creative, funny, nostalgic, excited, frightened, ecstatic, resigned, hopeful, and, as a corollary, able to translate admirably between languages. There’s no fundamental reason that machines might not someday succeed smashingly in translating jokes, puns, screenplays, novels, poems, and, of course, essays like this one. But all that will come about only when machines are as filled with ideas, emotions, and experiences as human beings are. And that’s not around the corner. Indeed, I believe it is still extremely far away. At least that is what this lifelong admirer of the human mind’s profundity fervently hopes.

    When, one day, a translation engine crafts an artistic novel in verse in English, using precise rhyming iambic tetrameter rich in wit, pathos, and sonic verve, then I’ll know it’s time for me to tip my hat and bow out.

    #Traduction #Google_translate #Deep_learning

  • In London, medical data shared between hospitals and Google raises concerns

    The partnership between DeepMind, Google's artificial-intelligence arm, and London hospitals to build a medical-monitoring app from data collected on around 1.6 million patients does not comply with the law. So ruled the UK's data-protection authority (the British counterpart of the CNIL) in its decision of 3 July 2017. Across the Channel, the long-standing suspicions hanging over London's hospital sector and over DeepMind, the Alphabet (Google) subsidiary dedicated to artificial (...)

    #Alphabet #Google #DeepMind #santé #BigData #ICO #NHS


  • Royal Free breached UK data law in 1.6m patient deal with Google’s DeepMind

    Information Commissioner’s Office rules record transfer from London hospital to AI company failed to comply with Data Protection Act London’s Royal Free hospital failed to comply with the Data Protection Act when it handed over personal data of 1.6 million patients to DeepMind, a Google subsidiary, according to the Information Commissioner’s Office. The data transfer was part of the two organisations’ partnership to create the healthcare app Streams, an alert, diagnosis and detection system (...)

    #Alphabet #Google #DeepMind #santé #BigData #NHS #ICO


  • Google DeepMind 1.6m patient record deal ’inappropriate’

    National data guardian says patient data transfer from Royal Free to Google subsidiary has ‘inappropriate legal basis’ as information not used for direct care The transfer of 1.6m patient records to Google’s artificial intelligence company DeepMind Health has been criticised for its “inappropriate legal basis” by the UK’s national data guardian. In a letter leaked to Sky News, the national data guardian, Dame Fiona Caldicott, warned DeepMind’s partner hospital, the Royal Free, that the patient (...)

    #Google #DeepMind #santé #algorithme #NHS #profiling


  • Google’s DeepMind made ‘inexcusable’ errors handling UK health data, says report

    A new academic report examining a deal between Google’s AI subsidiary DeepMind and the UK’s National Health Service (NHS) has said that the US tech giant made “inexcusable” errors in terms of transparency and oversight when handling sensitive medical information. The data sharing agreement — which was signed in 2015 and has since been superseded by a new contract — allows DeepMind access to medical records from 1.6 million patients attending London hospitals run by the NHS Royal Free Trust. (...)

    #Google #DeepMind #santé #NHS #données


  • Google covets the British electricity grid

    Given the latest feats of Google's AIs, they would be well advised to keep refusing....

    The American technology trust Alphabet, Google's parent company, has held discussions with Britain's national electricity grid operator (National Grid) about using DeepMind's algorithms to improve the grid's efficiency, The Times reports.

    These negotiations testify to the significant impact the big Silicon Valley companies have on everyday life.

    DeepMind was founded seven years ago in London by Demis Hassabis, Mustafa Suleyman and Shane Legg. The company was acquired by Google for £400 million and is already active, notably, in managing British patients' medical records.

    Energy security?

    DeepMind, a company specialising in (...)

    #En_vedette #Actualités_High-Tech #High_Tech

  • Google is now involved with healthcare data – is that a good thing?

    Google has some of the most powerful computers and smartest algorithms in the world, has hired some of the best brains in computing, and through its purchase of British firm Deepmind has acquired AI expertise that recently saw an AI beat a human grandmaster at the game of Go. Why then would we not want to apply this to potentially solving medical problems – something Google’s grandiose, even hyperbolic statements suggest the company wishes to?

    The New Scientist recently revealed a data sharing agreement between the Royal Free London NHS trust and Google Deepmind. The trust released incorrect statements (since corrected) claiming Deepmind would not receive any patient-identifiable data (it will), leading to irrelevant confusion about what data encryption and anonymisation can and cannot achieve.

    As people have very strong feelings about third-party access to medical records, all of this has caused a bit of a scandal. But is this an overreaction, following previous health data debacles? Or does this represent a new and worrying development in the sharing of medical records?

  • Revealed: Google AI has access to huge haul of NHS patient data

    A data-sharing agreement obtained by New Scientist shows that Google DeepMind’s collaboration with the NHS goes far beyond what it has publicly announced It’s no secret that Google has broad ambitions in healthcare. But a document obtained by New Scientist reveals that the tech giant’s collaboration with the UK’s National Health Service goes far beyond what has been publicly announced. The document – a data-sharing agreement between Google-owned artificial intelligence company DeepMind and the (...)

    #Google #santé #NHS #surveillance_des_malades #data #DeepMind


  • AI Algorithm Masters Space Invaders in All-Night Gaming Session

    the software in the video went from terrible to superhuman in about eight hours of play
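    The article doesn't show the mechanism, but DeepMind's published Atari work combined Q-learning with a deep network reading raw screen pixels. A minimal tabular sketch of the underlying update rule (a toy two-state problem, not the actual Space Invaders agent):

    ```python
    import random

    # Tabular Q-learning: the core rule behind DeepMind's Atari results,
    # which replace this lookup table with a deep neural network.
    alpha, gamma, eps = 0.1, 0.9, 0.1          # learning rate, discount, exploration
    Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}

    def step(state, action):
        # hypothetical dynamics: only action 1 taken in state 1 is rewarded
        reward = 1.0 if (state, action) == (1, 1) else 0.0
        return 1 - state, reward               # the environment flips state

    random.seed(0)
    state = 0
    for _ in range(5000):
        if random.random() < eps:              # explore occasionally
            action = random.choice((0, 1))
        else:                                  # otherwise act greedily
            action = max((0, 1), key=lambda a: Q[(state, a)])
        nxt, r = step(state, action)
        target = r + gamma * max(Q[(nxt, 0)], Q[(nxt, 1)])
        Q[(state, action)] += alpha * (target - Q[(state, action)])
        state = nxt

    print(Q[(1, 1)] > Q[(1, 0)])               # True: it learned which action pays
    ```

    Nothing is hand-programmed about which action is good; the agent discovers it from reward alone, which is what let the same algorithm span many different Atari games.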

    the most obvious reason [#Google] acquired #DeepMind is the technology’s potential to improve search of text or images. And even that is likely too narrow in the longer run.

    Google’s current fleet of self-driving cars, for example, are pretty amazing. But they don’t learn. They aren’t flexible in a world of endless variety, relying instead on programmers to account for as many situations as they can. A Sisyphean task.

    That isn’t to say we can’t get a high degree of automation without deep learning. But a fully self-driving car will likely require more flexibility on the fly than is possible now.

    Or consider Google’s acquisition of eight robotics firms last December. Robots will remain glorified Roombas until they can learn and interact with their surroundings. Perhaps Google will pair deep learning with future robots in the factory or home.

    #intelligence_artificielle #jeux_vidéo #autopilote #apprentissage

  • When we told you Google was building itself a Terminator

    After the world of robots, Google is taking on artificial intelligence. The web giant is reportedly about to acquire, for the trifling sum of $400 million, the young London company DeepMind... [Read more]