industryterm:neural networks

  • 10 Top Open Source AI Technologies For Startups
    https://hackernoon.com/10-top-open-source-ai-technologies-for-startups-7c5f10b82fb1?source=rss-

    Artificial intelligence is one of the hottest trends in technology research. In fact, many startups have already made progress in areas like natural language processing, neural networks, machine learning and image processing, and big companies like Google, Microsoft, IBM, Amazon and Facebook are investing heavily in their own R&D. Hence, it is no surprise that AI applications are increasingly useful for small as well as large businesses in 2019. In this blog, I have listed the top 10 open source AI technologies for small businesses and startups. 1) Apache SystemML: a machine learning system created at IBM that has reached top-level project status at the Apache Software Foundation, and is flexible and scalable. The important (...)

    #machine-learning #artificial-intelligence #open-source #startup #open-source-ai

  • Data is the New Oil
    https://hackernoon.com/data-is-the-new-oil-1227197762b2?source=rss----3a8144eabfe3---4

    “Data is the new oil. It’s valuable, but if unrefined it cannot really be used. It has to be changed into gas, plastic, chemicals, etc to create a valuable entity that drives profitable activity; so must data be broken down, analyzed for it to have value.” — Clive Humby. Deep Learning is a revolutionary field, but for it to work as intended, it requires data. The area related to these big datasets is known as Big Data, which stands for the abundance of digital data. Data is as important for Deep Learning algorithms as the architecture of the network itself, i.e., the software. Acquiring and cleaning the data is one of the most valuable aspects of the work. Without data, the neural networks cannot learn. Most of the time, researchers can use the data given to them directly, but there are many (...)

    #machine-learning #feifei-li #data-science #imagenet

  • Building a Neural Network Only Using NumPy
    https://hackernoon.com/building-a-neural-network-only-using-numpy-7ba75da60ec0?source=rss----3a

    Using Andrew Ng’s Project Structure to Build a Neural Net in Python. Introduction: After having completed the deeplearning.ai Deep Learning specialization taught by Andrew Ng, I have decided to work through some of the assignments of the specialization and try to figure out the code myself rather than only filling in certain parts of it. In doing so, I want to deepen my understanding of neural networks and help others gain intuition by documenting my progress in articles. The complete notebook is available here. In this article, I’m going to build a neural network in #python using only NumPy, based on the project structure proposed in the deeplearning.ai Deep Learning specialization: 1. Define the structure of the neural network. 2. Initialize the parameters of the neural network defined in step one. 3. Loop (...)
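
    A minimal sketch of those three steps for a one-hidden-layer network, assuming the usual tanh/sigmoid layout taught in the specialization; the toy data, layer sizes and learning rate below are illustrative stand-ins, not the article's notebook code.

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(2, 200))                    # 2 features, 200 examples
      Y = (X[0:1] * X[1:2] > 0).astype(float)          # toy labels, shape (1, 200)

      # 1. Define the structure of the network (layer sizes).
      n_x, n_h, n_y = 2, 4, 1

      # 2. Initialize the parameters.
      W1, b1 = rng.normal(size=(n_h, n_x)) * 0.01, np.zeros((n_h, 1))
      W2, b2 = rng.normal(size=(n_y, n_h)) * 0.01, np.zeros((n_y, 1))
      sigmoid = lambda z: 1 / (1 + np.exp(-z))

      # 3. Loop: forward pass, cost, backward pass, gradient-descent update.
      for i in range(2000):
          Z1 = W1 @ X + b1; A1 = np.tanh(Z1)
          Z2 = W2 @ A1 + b2; A2 = sigmoid(Z2)
          cost = -np.mean(Y * np.log(A2) + (1 - Y) * np.log(1 - A2))

          dZ2 = A2 - Y
          dW2 = dZ2 @ A1.T / X.shape[1]; db2 = dZ2.mean(axis=1, keepdims=True)
          dZ1 = (W2.T @ dZ2) * (1 - A1 ** 2)
          dW1 = dZ1 @ X.T / X.shape[1]; db1 = dZ1.mean(axis=1, keepdims=True)

          W1 -= 1.0 * dW1; b1 -= 1.0 * db1
          W2 -= 1.0 * dW2; b2 -= 1.0 * db2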

    #deep-learning #machine-learning #artificial-intelligence #data-science

  • How we used #ai to hybridize humans with cartoon animals and made a business out of it.
    https://hackernoon.com/how-we-used-ai-to-hybridize-humans-with-cartoon-animals-and-made-a-busin

    Have you ever imagined yourself as a cartoon character? Well, now this is more than real. We are a team of 20 engineers and art designers who have developed a machine learning technology that morphs human faces with animated characters. The process starts by constructing a user’s 3D face model from just a single selfie shot. Importantly, our technology even works with older, regular smartphone cameras. With this single photo, our neural network builds a 3D mesh of the user’s head [image: the neural network regresses a 3D model from a 2D photo]. Next, 3 other neural networks swing into action. The first draws eyebrows, the second detects and matches eye color, and the third detects and draws glasses if the user is wearing them. When these elements are ready, we morph the user with (...)

    #machine-learning #artificial-intelligence #ar #startup

  • GIPSA-lab invites Pablo JENSEN, CNRS research director at the Laboratoire de Physique de l’ENS de Lyon, for a special seminar on 10 January 2019 at 10:30 am.

    The unexpected link between neural nets and liberalism

    Sixty years ago, Frank Rosenblatt, a psychologist working for the army, invented the perceptron, the first neural network capable of learning. Unexpectedly, Rosenblatt cites, as a major source of inspiration, an economist: Friedrich Hayek. Hayek is well known for his 1974 Nobel prize… and for his ultra-liberal stances, justifying the Pinochet coup in a Chilean newspaper: “Personally, I prefer a liberal dictator to a democratic government that lacks liberalism.” This talk presents ongoing work on the link between Hayek’s ideology and neural networks.

    After a PhD in experimental condensed-matter physics, Pablo JENSEN worked for 15 years on the modeling of nanostructure growth. This led to major publications in top journals, including Nature, Phys Rev Lett and a widely cited review in Rev Mod Phys. After these achievements, he decided to follow an unconventional path and switch to the modeling of social systems. It takes time to become familiar with social science topics and literature, but this is essential for establishing serious interdisciplinary connections. During that period, he also had national responsibilities at CNRS, improving the communication of physics. This investment has now started to pay off, as shown by recent publications in major interdisciplinary or social science (geography, economics, sociology) journals, including PNAS, J Pub Eco and British J Sociology. His present work takes advantage of the avalanche of social data available on the Web to improve our understanding of society. To achieve this, he collaborates with hard scientists to develop appropriate analysis tools and with social scientists to find relevant questions and interpretations.
    His latest book: Pourquoi la société ne se laisse pas mettre en équations, Pablo Jensen, Seuil, coll. “Science ouverte”, March 2018
    Personal web page: http://perso.ens-lyon.fr/pablo.jensen

    Seminar venue: Laboratoire GIPSA-lab, 11 rue des Mathématiques, Campus de Saint Martin d’Hères, Mont-Blanc room (Ampère D building, 1st floor)

    #grenoble #neural_net #liberalism

  • Interview. How Neural Networks And Machine Learning Are Making #games More Interesting
    https://hackernoon.com/interview-how-neural-networks-and-machine-learning-are-making-games-more

    [Interview] How Neural Networks And Machine Learning Are Making Games More Interesting. Image credit: Unsplash. Machine learning and neural networks are hot topics in many tech areas, and game development is one of them. There, these new technologies are used to make games more interesting. How this is achieved, which companies now lead in adopting and researching the new tech, when we as users will see notable results of this research, and much more is discussed today with Vladimir Ivanov, a leading expert on ML in gaming. The first question is: what do you mean when you say that games are “not interesting” and that the new tech could fix this? Well, the thing is pretty simple: if we are not talking about human vs. human game modes, you need to compete with bots. Often this is not that (...)

    #gamedev #machine-learning #game-development #artificial-intelligence

  • Preprocess Keras Model for TensorSpace
    https://hackernoon.com/preprocess-keras-model-for-tensorspace-ed5e4db9a2a1?source=rss----3a8144

    How to preprocess a Keras model to be TensorSpace compatible for neural network 3D visualization. TensorSpace & Keras: “TensorSpace is a neural network 3D visualization framework. — TensorSpace.org”; “Keras is a high-level neural network API. — keras.io”. Introduction: You may have learned that TensorSpace can be used to visualize neural networks in 3D, and you might have read my previous introduction to TensorSpace. Maybe you found the model preprocessing a little complicated. So today I want to cover TensorSpace model preprocessing in more detail: specifically, how to preprocess a deep learning model built with Keras so that it is TensorSpace compatible. Fig. 1 — Use TensorSpace to visualize a LeNet built with Keras. What should we have? To make a model built with Keras TensorSpace compatible, (...)
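
    The gist of that preprocessing, as I understand it, is to expose each intermediate layer's output so the visualizer can render the activations layer by layer. A minimal sketch using tf.keras; the LeNet-style layer names, sizes and file name are my own illustrative assumptions, and the final browser-side conversion step is not shown here.

      import tensorflow as tf
      from tensorflow.keras import layers, models

      # A small LeNet-style model (sizes and names are illustrative).
      model = models.Sequential([
          layers.Conv2D(6, 5, activation="relu", input_shape=(28, 28, 1), name="conv1"),
          layers.MaxPooling2D(2, name="pool1"),
          layers.Conv2D(16, 5, activation="relu", name="conv2"),
          layers.MaxPooling2D(2, name="pool2"),
          layers.Flatten(name="flatten"),
          layers.Dense(120, activation="relu", name="dense1"),
          layers.Dense(10, activation="softmax", name="predictions"),
      ])

      # Wrap it in a multi-output model that returns every layer's activations,
      # then save it; a converter can turn the saved model into a web-friendly format.
      multi_output = models.Model(
          inputs=model.input,
          outputs=[layer.output for layer in model.layers],
      )
      multi_output.save("lenet_multi_output.h5")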

    #python #data-visualization #machine-learning #technology #javascript

  • How to optimize C and C++ code in 2018—Iurii Krasnoshchok
    http://isocpp.org/feeder/?FeederAction=clicked&feed=All+Posts&seed=http%3A%2F%2Fisocpp.org%2Fblog%2F2

    Are you aware?

    How to optimize C and C++ code in 2018 by Iurii Krasnoshchok

    From the article:

    We are still limited by our current hardware. There are numerous areas where it is just not good enough: neural networks and virtual reality, to name a few. There are plenty of devices where battery life is crucial, and we must count every single CPU tick. Even when we’re talking about clouds and microservices and lambdas, there are enormous data centers that consume vast amounts of electricity. Even a boring test routine may quietly start to take 5 hours to run. And this is tricky. Program performance doesn’t matter, until it does. A modern way to squeeze performance out of silicon is to make hardware more and more (...)

    #News,Articles&_Books,

  • In the Age of A.I., Is Seeing Still Believing? | The New Yorker
    https://www.newyorker.com/magazine/2018/11/12/in-the-age-of-ai-is-seeing-still-believing

    In a media environment saturated with fake news, such technology has disturbing implications. Last fall, an anonymous Redditor with the username Deepfakes released a software tool kit that allows anyone to make synthetic videos in which a neural network substitutes one person’s face for another’s, while keeping their expressions consistent. Along with the kit, the user posted pornographic videos, now known as “deepfakes,” that appear to feature various Hollywood actresses. (The software is complex but comprehensible: “Let’s say for example we’re perving on some innocent girl named Jessica,” one tutorial reads. “The folders you create would be: ‘jessica; jessica_faces; porn; porn_faces; model; output.’ ”) Around the same time, “Synthesizing Obama,” a paper published by a research group at the University of Washington, showed that a neural network could create believable videos in which the former President appeared to be saying words that were really spoken by someone else. In a video voiced by Jordan Peele, Obama seems to say that “President Trump is a total and complete dipshit,” and warns that “how we move forward in the age of information” will determine “whether we become some kind of fucked-up dystopia.”

    “People have been doing synthesis for a long time, with different tools,” he said. He rattled off various milestones in the history of image manipulation: the transposition, in a famous photograph from the eighteen-sixties, of Abraham Lincoln’s head onto the body of the slavery advocate John C. Calhoun; the mass alteration of photographs in Stalin’s Russia, designed to purge his enemies from the history books; the convenient realignment of the pyramids on the cover of National Geographic, in 1982; the composite photograph of John Kerry and Jane Fonda standing together at an anti-Vietnam demonstration, which incensed many voters after the Times credulously reprinted it, in 2004, above a story about Kerry’s antiwar activities.

    “In the past, anybody could buy Photoshop. But to really use it well you had to be highly skilled,” Farid said. “Now the technology is democratizing.” It used to be safe to assume that ordinary people were incapable of complex image manipulations. Farid recalled a case—a bitter divorce—in which a wife had presented the court with a video of her husband at a café table, his hand reaching out to caress another woman’s. The husband insisted it was fake. “I noticed that there was a reflection of his hand in the surface of the table,” Farid said, “and getting the geometry exactly right would’ve been really hard.” Now convincing synthetic images and videos were becoming easier to make.

    The acceleration of home computing has converged with another trend: the mass uploading of photographs and videos to the Web. Later, when I sat down with Efros in his office, he explained that, even in the early two-thousands, computer graphics had been “data-starved”: although 3-D modellers were capable of creating photorealistic scenes, their cities, interiors, and mountainscapes felt empty and lifeless. True realism, Efros said, requires “data, data, data” about “the gunk, the dirt, the complexity of the world,” which is best gathered by accident, through the recording of ordinary life.

    Today, researchers have access to systems like ImageNet, a site run by computer scientists at Stanford and Princeton which brings together fourteen million photographs of ordinary places and objects, most of them casual snapshots posted to Flickr, eBay, and other Web sites. Initially, these images were sorted into categories (carrousels, subwoofers, paper clips, parking meters, chests of drawers) by tens of thousands of workers hired through Amazon Mechanical Turk. Then, in 2012, researchers at the University of Toronto succeeded in building neural networks capable of categorizing ImageNet’s images automatically; their dramatic success helped set off today’s neural-networking boom. In recent years, YouTube has become an unofficial ImageNet for video. Efros’s lab has overcome the site’s “platform bias”—its preference for cats and pop stars—by developing a neural network that mines, from “life style” videos such as “My Spring Morning Routine” and “My Rustic, Cozy Living Room,” clips of people opening packages, peering into fridges, drying off with towels, brushing their teeth. This vast archive of the uninteresting has made a new level of synthetic realism possible.

    In 2016, the Defense Advanced Research Projects Agency (DARPA) launched a program in Media Forensics, or MediFor, focussed on the threat that synthetic media poses to national security. Matt Turek, the program’s manager, ticked off possible manipulations when we spoke: “Objects that are cut and pasted into images. The removal of objects from a scene. Faces that might be swapped. Audio that is inconsistent with the video. Images that appear to be taken at a certain time and place but weren’t.” He went on, “What I think we’ll see, in a couple of years, is the synthesis of events that didn’t happen. Multiple images and videos taken from different perspectives will be constructed in such a way that they look like they come from different cameras. It could be something nation-state driven, trying to sway political or military action. It could come from a small, low-resource group. Potentially, it could come from an individual.”

    As with today’s text-based fake news, the problem is double-edged. Having been deceived by a fake video, one begins to wonder whether many real videos are fake. Eventually, skepticism becomes a strategy in itself. In 2016, when the “Access Hollywood” tape surfaced, Donald Trump acknowledged its accuracy while dismissing his statements as “locker-room talk.” Now Trump suggests to associates that “we don’t think that was my voice.”

    “The larger danger is plausible deniability,” Farid told me. It’s here that the comparison with counterfeiting breaks down. No cashier opens up the register hoping to find counterfeit bills. In politics, however, it’s often in our interest not to believe what we are seeing.

    As alarming as synthetic media may be, it may be more alarming that we arrived at our current crises of misinformation—Russian election hacking; genocidal propaganda in Myanmar; instant-message-driven mob violence in India—without it. Social media was enough to do the job, by turning ordinary people into media manipulators who will say (or share) anything to win an argument. The main effect of synthetic media may be to close off an escape route from the social-media bubble. In 2014, video of the deaths of Michael Brown and Eric Garner helped start the Black Lives Matter movement; footage of the football player Ray Rice assaulting his fiancée catalyzed a reckoning with domestic violence in the National Football League. It seemed as though video evidence, by turning us all into eyewitnesses, might provide a path out of polarization and toward reality. With the advent of synthetic media, all that changes. Body cameras may still capture what really happened, but the aesthetic of the body camera—its claim to authenticity—is also a vector for misinformation. “Eyewitness video” becomes an oxymoron. The path toward reality begins to wash away.

    #Fake_news #Image #Synthèse

  • DeepLearning 101 : #coursera Vs #udemy Vs #udacity
    https://hackernoon.com/deeplearning-101-coursera-vs-udemy-vs-udacity-b4eb3de06dbe?source=rss---

    The era of self-learning. Deep Learning has taken the world by storm and the juggernaut has kept rolling since early 2017. As far as the core methodology goes, neural networks have been around for decades, and convolutional neural networks and recurrent neural networks have been around for some 15 years. What has changed suddenly, you ask? GPUs and breakthroughs in automated systems like self-driving cars. As Andrew Ng himself says, “I think the other reasons the term deep learning has taken off is just branding. These things are just neural networks with more hidden layers, but the phrase deep learning is just a great brand, it’s just so deep.” Developers all around the globe are heavily motivated seeing the innovation DL is driving in each and every sector. Every company is either pitching (...)

    #machine-learning #deep-learning

  • Understanding YOLO
    https://hackernoon.com/understanding-yolo-f5a74bbc7967?source=rss----3a8144eabfe3---4

    This article explains the YOLO object detection architecture from the point of view of someone who wants to implement it from scratch. It will not describe the advantages/disadvantages of the network or the reasons for each design choice; instead, it focuses on how it works. You should have a basic understanding of neural networks, especially CNNs, before you read this. All the descriptions in this post relate to the original YOLO paper: You Only Look Once: Unified, Real-Time Object Detection by Joseph Redmon, Santosh Divvala, Ross Girshick and Ali Farhadi (2015). Many improvements have been proposed since then and were combined in the newer YOLOv2 version, which I might write about another time. It is easier to understand this original version first, and then check what (...)
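
    To make the “how it works” part concrete, here is a small sketch of how the original YOLO prediction tensor is laid out and decoded, assuming the paper's defaults (S=7 grid cells, B=2 boxes per cell, C=20 classes); the random array stands in for a real network output.

      import numpy as np

      S, B, C = 7, 2, 20
      pred = np.random.rand(S, S, B * 5 + C)          # stand-in for the network output

      boxes = pred[..., :B * 5].reshape(S, S, B, 5)   # x, y, w, h, confidence per box
      class_probs = pred[..., B * 5:]                 # P(class | object) per grid cell

      # Per-box class scores = box confidence * conditional class probabilities.
      scores = boxes[..., 4:5] * class_probs[:, :, None, :]
      print(scores.shape)                             # (7, 7, 2, 20)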

    #object-detection #deep-learning #machine-learning #computer-vision #neural-networks

  • 5 advantages of the top-down approach in the creation of AI
    https://hackernoon.com/five-advantages-of-the-top-down-approach-in-the-creation-of-ai-3e4166a74

    Nowadays, progress in the development of neural networks has shifted the focus in the creation of AI towards the “top-down” approach. At the same time, we may notice a certain slowdown in the pace of advancement with this approach. Our team consists mainly of psychologists and psychoanalysts. For many years we have been modeling mental processes in various IT products, and this predetermined our choice of strategy in the creation of AI: we have chosen the “top-down” approach. The similar views of such AI authorities as Marvin Minsky and Seymour Papert strengthen our confidence. In this article I will constantly compare these two approaches. We assume that among the solutions which must form the basis of a strong AI there have to be some that are founded (...)

    #artificial-intelligence #neural-networks #machine-learning #marvin-minsky #psychology

  • The Shallowness of Google Translate - The Atlantic
    https://www.theatlantic.com/technology/archive/2018/01/the-shallowness-of-google-translate/551570

    An excellent piece by Douglas Hofstadter (ah, D.H., Gödel, Escher, Bach... !!!)

    As a language lover and an impassioned translator, as a cognitive scientist and a lifelong admirer of the human mind’s subtlety, I have followed the attempts to mechanize translation for decades. When I first got interested in the subject, in the mid-1970s, I ran across a letter written in 1947 by the mathematician Warren Weaver, an early machine-translation advocate, to Norbert Wiener, a key figure in cybernetics, in which Weaver made this curious claim, today quite famous:

    When I look at an article in Russian, I say, “This is really written in English, but it has been coded in some strange symbols. I will now proceed to decode.”

    Some years later he offered a different viewpoint: “No reasonable person thinks that a machine translation can ever achieve elegance and style. Pushkin need not shudder.” Whew! Having devoted one unforgettably intense year of my life to translating Alexander Pushkin’s sparkling novel in verse Eugene Onegin into my native tongue (that is, having radically reworked that great Russian work into an English-language novel in verse), I find this remark of Weaver’s far more congenial than his earlier remark, which reveals a strangely simplistic view of language. Nonetheless, his 1947 view of translation-as-decoding became a credo that has long driven the field of machine translation.

    Before showing my findings, though, I should point out that an ambiguity in the adjective “deep” is being exploited here. When one hears that Google bought a company called DeepMind whose products have “deep neural networks” enhanced by “deep learning,” one cannot help taking the word “deep” to mean “profound,” and thus “powerful,” “insightful,” “wise.” And yet, the meaning of “deep” in this context comes simply from the fact that these neural networks have more layers (12, say) than do older networks, which might have only two or three. But does that sort of depth imply that whatever such a network does must be profound? Hardly. This is verbal spinmeistery.

    I began my explorations very humbly, using the following short remark, which, in a human mind, evokes a clear scenario:

    In their house, everything comes in pairs. There’s his car and her car, his towels and her towels, and his library and hers.

    The translation challenge seems straightforward, but in French (and other Romance languages), the words for “his” and “her” don’t agree in gender with the possessor, but with the item possessed. So here’s what Google Translate gave me:

    Dans leur maison, tout vient en paires. Il y a sa voiture et sa voiture, ses serviettes et ses serviettes, sa bibliothèque et les siennes.

    We humans know all sorts of things about couples, houses, personal possessions, pride, rivalry, jealousy, privacy, and many other intangibles that lead to such quirks as a married couple having towels embroidered “his” and “hers.” Google Translate isn’t familiar with such situations. Google Translate isn’t familiar with situations, period. It’s familiar solely with strings composed of words composed of letters. It’s all about ultrarapid processing of pieces of text, not about thinking or imagining or remembering or understanding. It doesn’t even know that words stand for things. Let me hasten to say that a computer program certainly could, in principle, know what language is for, and could have ideas and memories and experiences, and could put them to use, but that’s not what Google Translate was designed to do. Such an ambition wasn’t even on its designers’ radar screens.

    It’s hard for a human, with a lifetime of experience and understanding and of using words in a meaningful way, to realize how devoid of content all the words thrown onto the screen by Google Translate are. It’s almost irresistible for people to presume that a piece of software that deals so fluently with words must surely know what they mean. This classic illusion associated with artificial-intelligence programs is called the “Eliza effect,” since one of the first programs to pull the wool over people’s eyes with its seeming understanding of English, back in the 1960s, was a vacuous phrase manipulator called Eliza, which pretended to be a psychotherapist, and as such, it gave many people who interacted with it the eerie sensation that it deeply understood their innermost feelings.

    To me, the word “translation” exudes a mysterious and evocative aura. It denotes a profoundly human art form that graciously carries clear ideas in Language A into clear ideas in Language B, and the bridging act not only should maintain clarity, but also should give a sense for the flavor, quirks, and idiosyncrasies of the writing style of the original author. Whenever I translate, I first read the original text carefully and internalize the ideas as clearly as I can, letting them slosh back and forth in my mind. It’s not that the words of the original are sloshing back and forth; it’s the ideas that are triggering all sorts of related ideas, creating a rich halo of related scenarios in my mind. Needless to say, most of this halo is unconscious. Only when the halo has been evoked sufficiently in my mind do I start to try to express it—to “press it out”—in the second language. I try to say in Language B what strikes me as a natural B-ish way to talk about the kinds of situations that constitute the halo of meaning in question.

    This process, mediated via meaning, may sound sluggish, and indeed, in comparison with Google Translate’s two or three seconds per page, it certainly is—but it is what any serious human translator does. This is the kind of thing I imagine when I hear an evocative phrase like “deep mind.”

    A friend asked me whether Google Translate’s level of skill isn’t merely a function of the program’s database. He figured that if you multiplied the database by a factor of, say, a million or a billion, eventually it would be able to translate anything thrown at it, and essentially perfectly. I don’t think so. Having ever more “big data” won’t bring you any closer to understanding, since understanding involves having ideas, and lack of ideas is the root of all the problems for machine translation today. So I would venture that bigger databases—even vastly bigger ones—won’t turn the trick.

    Another natural question is whether Google Translate’s use of neural networks—a gesture toward imitating brains—is bringing us closer to genuine understanding of language by machines. This sounds plausible at first, but there’s still no attempt being made to go beyond the surface level of words and phrases. All sorts of statistical facts about the huge databases are embodied in the neural nets, but these statistics merely relate words to other words, not to ideas. There’s no attempt to create internal structures that could be thought of as ideas, images, memories, or experiences. Such mental etherea are still far too elusive to deal with computationally, and so, as a substitute, fast and sophisticated statistical word-clustering algorithms are used. But the results of such techniques are no match for actually having ideas involved as one reads, understands, creates, modifies, and judges a piece of writing.

    Let me return to that sad image of human translators, soon outdone and outmoded, gradually turning into nothing but quality controllers and text tweakers. That’s a recipe for mediocrity at best. A serious artist doesn’t start with a kitschy piece of error-ridden bilgewater and then patch it up here and there to produce a work of high art. That’s not the nature of art. And translation is an art.

    In my writings over the years, I’ve always maintained that the human brain is a machine—a very complicated kind of machine—and I’ve vigorously opposed those who say that machines are intrinsically incapable of dealing with meaning. There is even a school of philosophers who claim computers could never “have semantics” because they’re made of “the wrong stuff” (silicon). To me, that’s facile nonsense. I won’t touch that debate here, but I wouldn’t want to leave readers with the impression that I believe intelligence and understanding to be forever inaccessible to computers. If in this essay I seem to come across sounding that way, it’s because the technology I’ve been discussing makes no attempt to reproduce human intelligence. Quite the contrary: It attempts to make an end run around human intelligence, and the output passages exhibited above clearly reveal its giant lacunas.

    From my point of view, there is no fundamental reason that machines could not, in principle, someday think, be creative, funny, nostalgic, excited, frightened, ecstatic, resigned, hopeful, and, as a corollary, able to translate admirably between languages. There’s no fundamental reason that machines might not someday succeed smashingly in translating jokes, puns, screenplays, novels, poems, and, of course, essays like this one. But all that will come about only when machines are as filled with ideas, emotions, and experiences as human beings are. And that’s not around the corner. Indeed, I believe it is still extremely far away. At least that is what this lifelong admirer of the human mind’s profundity fervently hopes.

    When, one day, a translation engine crafts an artistic novel in verse in English, using precise rhyming iambic tetrameter rich in wit, pathos, and sonic verve, then I’ll know it’s time for me to tip my hat and bow out.

    #Traduction #Google_translate #Deep_learning

  • How an A.I. ‘Cat-and-Mouse Game’ Generates Believable Fake Photos - The New York Times
    https://www.nytimes.com/interactive/2018/01/02/technology/ai-generated-photos.html

    At a lab in Finland, a small team of Nvidia researchers recently built a system that can analyze thousands of (real) celebrity snapshots, recognize common patterns, and create new images that look much the same — but are still a little different. The system can also generate realistic images of horses, buses, bicycles, plants and many other common objects.

    The project is part of a vast and varied effort to build technology that can automatically generate convincing images — or alter existing images in equally convincing ways. The hope is that this technology can significantly accelerate and improve the creation of computer interfaces, games, movies and other media, eventually allowing software to create realistic imagery in moments rather than the hours — if not days — it can now take human developers.

    In recent years, thanks to a breed of algorithm that can learn tasks by analyzing vast amounts of data, companies like Google and Facebook have built systems that can recognize faces and common objects with an accuracy that rivals the human eye. Now, these and other companies, alongside many of the world’s top academic A.I. labs, are using similar methods to both recognize and create.

    As it built a system that generates new celebrity faces, the Nvidia team went a step further in an effort to make them far more believable. It set up two neural networks — one that generated the images and another that tried to determine whether those images were real or fake. These are called generative adversarial networks, or GANs. In essence, one system does its best to fool the other — and the other does its best not to be fooled.
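
    A minimal sketch of that two-network, cat-and-mouse setup, written in PyTorch on toy 1-D data; every size, learning rate and distribution below is an illustrative assumption, not Nvidia's actual face-generation system.

      import torch
      from torch import nn

      # Generator: noise -> fake sample.  Discriminator: sample -> probability it is real.
      G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
      D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
      opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
      opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
      bce = nn.BCELoss()

      for step in range(1000):
          real = torch.randn(64, 2) * 0.5 + 3.0       # stand-in for real data
          fake = G(torch.randn(64, 8))

          # The discriminator does its best not to be fooled: real -> 1, fake -> 0.
          d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
          opt_d.zero_grad(); d_loss.backward(); opt_d.step()

          # The generator does its best to fool it: push D to label fakes as real.
          g_loss = bce(D(fake), torch.ones(64, 1))
          opt_g.zero_grad(); g_loss.backward(); opt_g.step()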

    “The computer learns to generate these images by playing a cat-and-mouse game against itself,” said Mr. Lehtinen.

    A second team of Nvidia researchers recently built a system that can automatically alter a street photo taken on a summer’s day so that it looks like a snowy winter scene. Researchers at the University of California, Berkeley, have designed another that learns to convert horses into zebras and Monets into Van Goghs. DeepMind, a London-based A.I. lab owned by Google, is exploring technology that can generate its own videos. And Adobe is fashioning similar machine learning techniques with an eye toward pushing them into products like Photoshop, its popular image design tool.

    Trained designers and engineers have long used technology like Photoshop and other programs to build realistic images from scratch. This is what movie effects houses do. But it is becoming easier for machines to learn how to generate these images on their own, said Durk Kingma, a researcher at OpenAI, the artificial intelligence lab founded by Tesla chief executive Elon Musk and others, who specializes in this kind of machine learning.

    “We now have a model that can generate faces that are more diverse and in some ways more realistic than what we could program by hand,” he said, referring to Nvidia’s work in Finland.

    But new concerns come with the power to create this kind of imagery.

    With so much attention on fake media these days, we could soon face an even wider range of fabricated images than we do today.

    “The concern is that these techniques will rise to the point where it becomes very difficult to discern truth from falsity,” said Tim Hwang, who previously oversaw A.I. policy at Google and is now director of the Ethics and Governance of Artificial Intelligence Fund, an effort to fund ethical A.I. research. “You might believe that accelerates problems we already have.”

    But many of us still put a certain amount of trust in photos and videos that we don’t necessarily put in text or word of mouth. Mr. Hwang believes the technology will evolve into a kind of A.I. arms race pitting those trying to deceive against those trying to identify the deception.

    Mr. Lehtinen downplays the effect his research will have on the spread of misinformation online. But he does say that, as time goes on, we may have to rethink the very nature of imagery. “We are approaching some fundamental questions,” he said.

    #Image #Fake_news #Post_truth #Intelligence_artificielle #AI_war #Désinformation

  • [1710.10777] Understanding Hidden Memories of Recurrent Neural Networks

    https://arxiv.org/abs/1710.10777

    Recurrent neural networks (RNNs) have been successfully applied to various natural language processing (NLP) tasks and achieved better results than conventional methods. However, the lack of understanding of the mechanisms behind their effectiveness limits further improvements on their architectures. In this paper, we present a visual analytics method for understanding and comparing RNN models for NLP tasks. We propose a technique to explain the function of individual hidden state units based on their expected response to input texts. We then co-cluster hidden state units and words based on the expected response and visualize co-clustering results as memory chips and word clouds to provide more structured knowledge on RNNs’ hidden states. We also propose a glyph-based sequence visualization based on aggregate information to analyze the behavior of an RNN’s hidden state at the sentence-level. The usability and effectiveness of our method are demonstrated through case studies and reviews from domain experts.
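
    A toy sketch of the “expected response” idea, as I read the abstract: run a simple RNN over sentences and average, for each word, the hidden-state vectors produced when that word is the input. The cell, embeddings and sentences below are throwaway stand-ins, not the paper's models or data.

      import numpy as np
      from collections import defaultdict

      rng = np.random.default_rng(0)
      vocab = ["good", "bad", "movie", "plot"]
      emb = {w: rng.normal(size=8) for w in vocab}               # toy word embeddings
      W_h = rng.normal(size=(16, 16)) * 0.1                      # hidden-to-hidden weights
      W_x = rng.normal(size=(16, 8)) * 0.1                       # input-to-hidden weights

      def rnn_states(sentence):
          h = np.zeros(16)
          for w in sentence:
              h = np.tanh(W_h @ h + W_x @ emb[w])                # vanilla RNN cell
              yield w, h

      sums = defaultdict(lambda: np.zeros(16))
      counts = defaultdict(int)
      for sent in [["good", "movie"], ["bad", "plot"], ["good", "plot"]]:
          for w, h in rnn_states(sent):
              sums[w] += h
              counts[w] += 1

      # One 16-dim "expected response" vector per word; these are what get co-clustered.
      expected_response = {w: sums[w] / counts[w] for w in sums}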

    #langues #langage #mots #terminologie #grammaire

  • Visualizing neural networks as large directed graphs [OC] : dataisbeautiful
    https://www.reddit.com/r/dataisbeautiful/comments/78vo65/visualizing_neural_networks_as_large_directed

    It has been a while since I posted here about the large directed graph visualizations I have been doing while working at www.graphcore.ai. I am continually moving these forward as I learn how to push the size of the graph and still get good results. The image here is the first time I have been able to generate a full layout of the training graph of ResNet-50, a neural network that came out of Microsoft Research. It has ~3 million nodes and ~10 million edges, and uses Gephi for the graph layout.

    https://www.graphcore.ai/posts/graph-computing-for-machine-intelligence-with-poplar
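
    For anyone curious about the workflow implied here, a tiny hedged sketch: build a directed graph of a network's operations with networkx and export it as GEXF, which Gephi can open and lay out. The node names and skip connection below are illustrative, not the poster's actual ResNet-50 pipeline.

      import networkx as nx

      g = nx.DiGraph()
      layers = ["input", "conv1", "bn1", "relu1", "conv2", "bn2", "relu2", "fc", "softmax"]
      for src, dst in zip(layers, layers[1:]):
          g.add_edge(src, dst)                 # sequential data-flow edges
      g.add_edge("input", "conv2")             # a ResNet-style skip connection

      nx.write_gexf(g, "model_graph.gexf")     # open this file in Gephi and run a layout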

    #beau (I just admired this #visualisation but didn’t read it #machine_learning :))

  • Algoliterary Encounter
    http://constantvzw.org/site/Algoliterary-Encounter.html

    As part of Saison Numérique, the Maison du Livre opens its space to #Algolit for three days in a row. The group presents lectures, workshops and a small #Exhibition about the narrative perspective of neural networks. Neural networks are self-learning algorithms based on statistics. They often function as opaque ‘black box’ algorithms, while they shape applications used daily on a worldwide scale, like search engines on the web, machine translation, advertising profiling, (...)

    Algolit / #Workshop, #Lecture, Exhibition, #Hybrid_languages, #Literature, #Algorithm

    • The series, titled “Portraits of Imaginary People”, explores the latent space of human faces by training a #neural_network to imagine and then depict portraits of people who don’t exist. To do so, many thousands of photographs of faces taken from Flickr are fed to a type of #machine-learning program called a Generative Adversarial Network (GAN). GANs work by using two neural networks that play an adversarial game: one (the “Generator”) tries to generate increasingly convincing output, while a second (the “Discriminator”) tries to learn to distinguish real photos from the artificially generated ones. At first, both networks are poor at their respective tasks. But as the Discriminator network starts to learn to predict fake from real, it keeps the Generator on its toes, pushing it to generate harder and more convincing examples. In order to keep up, the Generator gets better and better, and the Discriminator correspondingly has to improve its response. With time, the images generated become increasingly realistic, as both adversaries try to outwit each other. The images you see here are thus a result of the rules and internal correlations the neural networks learned from the training images.

      http://www.miketyka.com/?p=00098000

  • The ’creepy Facebook AI’ story that captivated the media - BBC News
    http://www.bbc.com/news/technology-40790258

    Where did the story come from?

    Way back in June, Facebook published a blog post about interesting research on chatbot programs - which have short, text-based conversations with humans or other bots. The story was covered by New Scientist and others at the time.

    Facebook had been experimenting with bots that negotiated with each other over the ownership of virtual items.

    It was an effort to understand how linguistics played a role in the way such discussions played out for negotiating parties, and crucially the bots were programmed to experiment with language in order to see how that affected their dominance in the discussion.

    A few days later, some coverage picked up on the fact that in a few cases the exchanges had become - at first glance - nonsensical:

    Bob: “I can can I I everything else”
    Alice: “Balls have zero to me to me to me to me to me to me to me to me to”

    Although some reports insinuate that the bots had at this point invented a new language in order to elude their human masters, a better explanation is that the neural networks were simply trying to modify human language for the purposes of more successful interactions - whether their approach worked or not was another matter.

    As technology news site Gizmodo said: “In their attempts to learn from each other, the bots thus began chatting back and forth in a derived shorthand - but while it might look creepy, that’s all it was.”

    AIs that rework English as we know it in order to better compute a task are not new.

    Google reported that its translation software had done this during development. “The network must be encoding something about the semantics of the sentence,” Google said in a blog.

    And earlier this year, Wired reported on a researcher at OpenAI who is working on a system in which AIs invent their own language, improving their ability to process information quickly and therefore tackle difficult problems more effectively.

    The story seems to have had a second wind in recent days, perhaps because of a verbal scrap over the potential dangers of AI between Facebook chief executive Mark Zuckerberg and technology entrepreneur Elon Musk.

    Robo-fear

    But the way the story has been reported says more about cultural fears and representations of machines than it does about the facts of this particular case.

    Plus, let’s face it, robots just make for great villains on the big screen.

    In the real world, though, AI is a huge area of research at the moment and the systems currently being designed and tested are increasingly complicated.

  • Fake news: you ain’t seen nothing yet
    https://www.economist.com/news/science-and-technology/21724370-generating-convincing-audio-and-video-fake-events-fake-news-you-

    Mr Klingemann’s experiment foreshadows a new battlefield between falsehood and veracity. Faith in written information is under attack in some quarters by the spread of what is loosely known as “fake news”. But images and sound recordings retain for many an inherent trustworthiness. GANs are part of a technological wave that threatens this credibility.

    Audio is easier to fake. Normally, computers generate speech by linking lots of short recorded speech fragments to create a sentence. That is how the voice of Siri, Apple’s digital assistant, is generated. But digital voices like this are limited by the range of fragments they have memorised. They only sound truly realistic when speaking a specific batch of phrases.

    Generative audio works differently, using neural networks to learn the statistical properties of the audio source in question, then reproducing those properties directly in any context, modelling how speech changes not just second-by-second, but millisecond-by-millisecond. Putting words into the mouth of Mr Trump, say, or of any other public figure, is a matter of feeding recordings of his speeches into the algorithmic hopper and then telling the trained software what you want that person to say.

    When pressed for an estimate, he suggests that the generation of YouTube fakes that are very plausible may be possible within three years. Others think it might take longer. But all agree that it is a question of when, not if. “We think that AI is going to change the kinds of evidence that we can trust,” says Mr Goodfellow.

    Yet even as technology drives new forms of artifice, it also offers new ways to combat it. One form of verification is to demand that recordings come with their metadata, which show when, where and how they were captured. Knowing such things makes it possible to eliminate a photograph as a fake on the basis, for example, of a mismatch with known local conditions at the time. A rather recherché example comes from work done in 2014 by NVIDIA, a chip-making company whose devices power a lot of AI. It used its chips to analyse photos from the Apollo 11 Moon landing. By simulating the way light rays bounce around, NVIDIA showed that the odd-looking lighting of Buzz Aldrin’s space suit—taken by some nitwits as evidence of fakery—really is reflected lunar sunlight and not the lights of a Hollywood film rig.

  • A Japanese Researcher Tweets Vintage Photos Colorized Using Neural Networks · Global Voices
    https://globalvoices.org/2017/01/05/a-japanese-researcher-tweets-vintage-photos-colorized-using-neural-net

    https://twitter.com/hwtnv

    Hidenori Watanave, an associate professor at Tokyo Metropolitan University, has been exploring a tool created by researchers at Japan’s Waseda University that colorizes images using neural networks and posting some of his results to Twitter.

    Waseda University’s online project, called Neural Network-based Automatic Image Colorization, was developed by researchers Satoshi Iizuka, Edgar Simo-Serra and Hiroshi Ishikawa. Neural networks are computer systems that work in a way that’s similar to the human brain. Anyone can use their web-based tool to add color to black-and-white images.

    #photographie #colorisation