technology:neural network

  • Training a single AI model can emit as much carbon as five cars in their lifetimes - MIT Technology Review

    In a new paper, researchers at the University of Massachusetts, Amherst, performed a life cycle assessment for training several common large AI models. They found that the process can emit more than 626,000 pounds of carbon dioxide equivalent—nearly five times the lifetime emissions of the average American car (and that includes manufacture of the car itself).
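    The "nearly five cars" claim is simple arithmetic on the two figures involved (a quick sanity check; the ~126,000 lb lifetime-car baseline, including fuel and manufacture, is the one used in the underlying paper's comparison):

```python
# Sanity-check the "nearly five cars" comparison from the article.
# ~626,000 lb CO2e for the costliest training run (from the article);
# ~126,000 lb CO2e lifetime emissions of the average American car,
# including manufacture (the paper's baseline).
training_emissions_lb = 626_000
car_lifetime_lb = 126_000

ratio = training_emissions_lb / car_lifetime_lb
print(f"{ratio:.2f} car lifetimes")  # → 4.97 car lifetimes
```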

    It’s a jarring quantification of something AI researchers have suspected for a long time. “While probably many of us have thought of this in an abstract, vague level, the figures really show the magnitude of the problem,” says Carlos Gómez-Rodríguez, a computer scientist at the University of A Coruña in Spain, who was not involved in the research. “Neither I nor other researchers I’ve discussed them with thought the environmental impact was that substantial.”

    They found that the computational and environmental costs of training grew proportionally to model size and then exploded when additional tuning steps were used to increase the model’s final accuracy. In particular, they found that a tuning process known as neural architecture search, which tries to optimize a model by incrementally tweaking a neural network’s design through exhaustive trial and error, had extraordinarily high associated costs for little performance benefit. Without it, the most costly model, BERT, had a carbon footprint of roughly 1,400 pounds of carbon dioxide equivalent, close to a round-trip trans-American flight.

    What’s more, the researchers note that the figures should only be considered as baselines. “Training a single model is the minimum amount of work you can do,” says Emma Strubell, a PhD candidate at the University of Massachusetts, Amherst, and the lead author of the paper. In practice, it’s much more likely that AI researchers would develop a new model from scratch or adapt an existing model to a new data set, either of which can require many more rounds of training and tuning.

    The significance of those figures is colossal—especially when considering the current trends in AI research. “In general, much of the latest research in AI neglects efficiency, as very large neural networks have been found to be useful for a variety of tasks, and companies and institutions that have abundant access to computational resources can leverage this to obtain a competitive advantage,” Gómez-Rodríguez says. “This kind of analysis needed to be done to raise awareness about the resources being spent [...] and will spark a debate.”

    “What probably many of us did not comprehend is the scale of it until we saw these comparisons,” echoed Siva Reddy, a postdoc at Stanford University who was not involved in the research.

    The privatization of AI research

    The results underscore another growing problem in AI, too: the sheer intensity of resources now required to produce paper-worthy results has made it increasingly challenging for people working in academia to continue contributing to research.

    #Intelligence_artificielle #Consommation_énergie #Empreinte_carbone

  • #Nextcloud 16 becomes smarter with #Machine_Learning for security and productivity – Nextcloud

    The #Suspicious #Login Detection app tracks successful logins on the instance for a set period of time (default is 60 days) and then uses the generated data to train a neural network. As soon as the first model is trained, the app starts classifying logins. Should it detect a password login classified as suspicious by the trained model, it will add an entry to the suspicious_login table, including the timestamp, request id and URL. The user will get a notification and the system administrator will be able to find this information in the logs.
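    The flow described above can be sketched in a few lines (a deliberately simplified, hypothetical stand-in: the real app trains a neural network on login features, while this sketch just flags unseen IPs; the `suspicious_login` record fields mirror those mentioned in the text):

```python
import time

# Hypothetical stand-in for the trained classifier: the real app trains a
# neural network on successful logins collected over the tracking window
# (default 60 days); here we simply flag IPs never seen in that window.
known_ips = {"192.0.2.10", "192.0.2.11"}

def classify_login(ip):
    """Return True if the login looks suspicious."""
    return ip not in known_ips

suspicious_login = []  # mirrors the suspicious_login table from the text

def handle_login(ip, request_id, url):
    if classify_login(ip):
        # Store timestamp, request id and URL, as the app does.
        suspicious_login.append(
            {"timestamp": time.time(), "request_id": request_id, "url": url}
        )
        return "suspicious"
    return "ok"

print(handle_login("203.0.113.7", "req-42", "/login"))  # → suspicious
```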

    More details on the blog of the person who developed this contraption:

    Who uses it

    There may be some bits worth borrowing for #SPIP in there...

  • 10 Open Source #ai Project Ideas For Startups

    The open source AI projects particularly pay attention to deep learning, machine learning, neural networks and other applications that are extending the use of AI. Those involved in deep research have always had the goal of building machines capable of thinking like human beings. For the last few years, computer scientists have made unbelievable progress in Artificial Intelligence (AI), to the extent that interest in AI project ideas keeps increasing among technology enthusiasts. As per Gartner’s prediction, Artificial Intelligence technologies are going to be virtually prevalent in nearly all new software products and services. The contribution of open source software development to the rise of Artificial Intelligence is immeasurable. And, innumerable top machine learning, deep learning, (...)

    #startup #business #open-source #machine-learning

  • Malicious Attacks to Neural Networks

    Adversarial Examples for Humans — An Introduction

    This article is based on a twenty-minute talk I gave for the TrendMicro Philippines Decode Event 2018. It’s about how malicious people can attack deep neural networks. A trained neural network is a model; I’ll be using the terms network (short for neural network) and model interchangeably throughout this article.

    Deep learning in a nutshell

    The basic building block of any neural network is an artificial neuron. Essentially, a neuron takes a bunch of inputs and outputs a value. A neuron gets the weighted sum of the inputs (plus a number called a bias) and feeds it to a non-linear activation function. Then, the function outputs a value that can be used as one of the inputs to other neurons. You can connect neurons in various different (usually (...)
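    The neuron described above fits in a few lines (a minimal sketch, using the logistic sigmoid as the non-linear activation; the weights and inputs are made up for illustration):

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs plus a bias, fed to a non-linear activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # logistic sigmoid

# The output can in turn be used as one of the inputs to other neurons.
out = neuron([0.5, -1.0, 2.0], [0.4, 0.3, 0.1], bias=0.1)
print(out)  # → 0.549..., a value in (0, 1)
```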

    #artificial-intelligence #neural-networks #deep-learning #machine-learning

  • YouTube Executives Ignored Warnings, Let Toxic Videos Run Rampant - Bloomberg

    Wojcicki’s media behemoth, bent on overtaking television, is estimated to rake in sales of more than $16 billion a year. But on that day, Wojcicki compared her video site to a different kind of institution. “We’re really more like a library,” she said, staking out a familiar position as a defender of free speech. “There have always been controversies, if you look back at libraries.”

    Since Wojcicki took the stage, prominent conspiracy theories on the platform—including one on child vaccinations; another tying Hillary Clinton to a Satanic cult—have drawn the ire of lawmakers eager to regulate technology companies. And YouTube is, a year later, even more associated with the darker parts of the web.

    The conundrum isn’t just that videos questioning the moon landing or the efficacy of vaccines are on YouTube. The massive “library,” generated by users with little editorial oversight, is bound to have untrue nonsense. Instead, YouTube’s problem is that it allows the nonsense to flourish. And, in some cases, through its powerful artificial intelligence system, it even provides the fuel that lets it spread.

    But precisely NO! It cannot be a “library”, because a library only keeps documents that have been published, hence already passed through a first instance of validation (or at least of editorial responsibility... someone will answer for it in court if need be).

    YouTube is... YouTube, something peculiar to the internet, which fulfills a major function... and is also a danger to thought because of the “attention economy”.

    The company spent years chasing one business goal above others: “Engagement,” a measure of the views, time spent and interactions with online videos. Conversations with over twenty people who work at, or recently left, YouTube reveal a corporate leadership unable or unwilling to act on these internal alarms for fear of throttling engagement.

    In response to criticism about prioritizing growth over safety, Facebook Inc. has proposed a dramatic shift in its core product. YouTube still has struggled to explain any new corporate vision to the public and investors – and sometimes, to its own staff. Five senior personnel who left YouTube and Google in the last two years privately cited the platform’s inability to tame extreme, disturbing videos as the reason for their departure. Within Google, YouTube’s inability to fix its problems has remained a major gripe. Google shares slipped in late morning trading in New York on Tuesday, leaving them up 15 percent so far this year. Facebook stock has jumped more than 30 percent in 2019, after getting hammered last year.

    YouTube’s inertia was illuminated again after a deadly measles outbreak drew public attention to vaccination conspiracies on social media several weeks ago. New data from Moonshot CVE, a London-based firm that studies extremism, found that fewer than twenty YouTube channels that have spread these lies reached over 170 million viewers, many of whom were then recommended other videos laden with conspiracy theories.

    So YouTube, then run by Google veteran Salar Kamangar, set a company-wide objective to reach one billion hours of viewing a day, and rewrote its recommendation engine to maximize for that goal. When Wojcicki took over, in 2014, YouTube was a third of the way to the goal, she recalled in investor John Doerr’s 2018 book Measure What Matters.

    “They thought it would break the internet! But it seemed to me that such a clear and measurable objective would energize people, and I cheered them on,” Wojcicki told Doerr. “The billion hours of daily watch time gave our tech people a North Star.” By October, 2016, YouTube hit its goal.

    YouTube doesn’t give an exact recipe for virality. But in the race to one billion hours, a formula emerged: Outrage equals attention. It’s one that people on the political fringes have easily exploited, said Brittan Heller, a fellow at Harvard University’s Carr Center. “They don’t know how the algorithm works,” she said. “But they do know that the more outrageous the content is, the more views.”

    People inside YouTube knew about this dynamic. Over the years, there were many tortured debates about what to do with troublesome videos—those that don’t violate its content policies and so remain on the site. Some software engineers have nicknamed the problem “bad virality.”

    Yonatan Zunger, a privacy engineer at Google, recalled a suggestion he made to YouTube staff before he left the company in 2016. He proposed a third tier: Videos that were allowed to stay on YouTube, but, because they were “close to the line” of the takedown policy, would be removed from recommendations. “Bad actors quickly get very good at understanding where the bright lines are and skating as close to those lines as possible,” Zunger said.

    His proposal, which went to the head of YouTube policy, was turned down. “I can say with a lot of confidence that they were deeply wrong,” he said.

    Rather than revamp its recommendation engine, YouTube doubled down. The neural network described in the 2016 research went into effect in YouTube recommendations starting in 2015. By the measures available, it has achieved its goal of keeping people on YouTube.

    “It’s an addiction engine,” said Francis Irving, a computer scientist who has written critically about YouTube’s AI system.

    Wojcicki and her lieutenants drew up a plan. YouTube called it Project Bean or, at times, “Boil The Ocean,” to indicate the enormity of the task. (Sometimes they called it BTO3 – a third dramatic overhaul for YouTube, after initiatives to boost mobile viewing and subscriptions.) The plan was to rewrite YouTube’s entire business model, according to three former senior staffers who worked on it.

    It centered on a way to pay creators that isn’t based on the ads their videos hosted. Instead, YouTube would pay on engagement—how many viewers watched a video and how long they watched. A special algorithm would pool incoming cash, then divvy it out to creators, even if no ads ran on their videos. The idea was to reward video stars shorted by the system, such as those making sex education and music videos, which marquee advertisers found too risqué to endorse.

    Coders at YouTube labored for at least a year to make the project workable. But company managers failed to appreciate how the project could backfire: paying based on engagement risked making its “bad virality” problem worse, since it could have rewarded videos that achieved popularity through outrage. One person involved said that the algorithms for doling out payments were tightly guarded. If it had gone into effect, this person said, it’s likely that someone like Alex Jones—the Infowars creator and conspiracy theorist with a huge following on the site, before YouTube booted him last August—would have suddenly become one of the highest-paid YouTube stars.

    In February of 2018, the video calling the Parkland shooting victims “crisis actors” went viral on YouTube’s trending page. Policy staff suggested soon after limiting recommendations on the page to vetted news sources. YouTube management rejected the proposal, according to a person with knowledge of the event. The person didn’t know the reasoning behind the rejection, but noted that YouTube was then intent on accelerating its viewing time for videos related to news.

    #YouTube #Economie_attention #Engagement #Viralité

  • How to Understand Machine Learning with simple Code Examples

    Understanding machine learning using simple code examples.

    “Machine Learning, Artificial Intelligence, Deep Learning, Data Science, Neural Networks”: you must’ve surely read somewhere about how these things are gonna take away future jobs, overthrow us as the dominant species on earth, and how we’d have to find Arnold Schwarzenegger and John Connor to save humanity. With the current hype, it is no surprise you might have. But what is Machine Learning, and what is Artificial Intelligence?

    Machine learning is the scientific study of algorithms and statistical models that computer systems use to effectively perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. And what is a Neural Network? Artificial neural (...)

    #deep-learning #artificial-intelligence #neural-networks #machine-learning #javascript

  • (tutorial 3)What is seq2seq for text summarization and why

    This tutorial is the third in a series that will help you build an abstractive text summarizer using TensorFlow. Today we discuss the main building block for the text summarization task: beginning with the RNN, why we use it and not just a normal neural network, until we finally reach the seq2seq model.

    About the series

    This is a series of tutorials that will help you build an abstractive text summarizer using TensorFlow, using multiple approaches. You don’t need to download the data, nor do you need to run the code locally on your device, as the data is found on Google Drive (you can simply copy it to your Google Drive; learn more here), and the code for this series is written in Jupyter notebooks to run on Google Colab and can be found here. We have covered so far (code for this series can (...)
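    What makes an RNN suit sequence tasks, unlike a plain feed-forward network, is that it carries a hidden state across time steps. A single step can be sketched as follows (a toy NumPy illustration with made-up dimensions, not the tutorial's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
hidden, embed = 4, 3
Wx = rng.normal(size=(hidden, embed))   # input-to-hidden weights
Wh = rng.normal(size=(hidden, hidden))  # hidden-to-hidden weights (the "memory")
b = np.zeros(hidden)

def rnn_step(x, h_prev):
    """One RNN step: the new state depends on the input AND the previous state."""
    return np.tanh(Wx @ x + Wh @ h_prev + b)

h = np.zeros(hidden)
for x in rng.normal(size=(5, embed)):  # a sequence of 5 token embeddings
    h = rnn_step(x, h)
print(h.shape)  # → (4,)
```

A seq2seq model chains two such networks: an encoder folds the input sequence into a state like `h`, and a decoder unrolls that state into the output sequence.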

    #machine-learning #tech #nlp #technology #artificial-intelligence

  • Introduction to #keras: Build a Neural Network to Classify Digits!

    Keras is a neural networks API that runs on top of TensorFlow, Theano, or CNTK. Essentially, Keras provides high-level building blocks for developing deep learning models and uses backend engines like TensorFlow to operate. As a “hello world” tutorial to Keras, we will be building a handwritten digit classifier using a convolutional neural network (CNN)!

    Before getting started you should…

    Have some Python knowledge
    Understand the basics of neural networks
    Have the following packages installed: Python 2.7+, TensorFlow, Keras, NumPy, Matplotlib

    NumPy is the core Python library for scientific computing and Matplotlib is another library for creating data visualizations. If you are unfamiliar with Python and NumPy, I would highly recommend reading through this guide from the popular Stanford CS231 class on (...)
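    A digit classifier of the kind the tutorial builds might look roughly like this (a sketch of a typical small Keras CNN, not the tutorial's exact architecture; it assumes TensorFlow's bundled Keras rather than the older standalone package):

```python
from tensorflow import keras
from tensorflow.keras import layers

# A small convolutional network for 28x28 grayscale digit images (MNIST-style).
model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),  # one probability per digit 0-9
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Training is then a single `model.fit(x_train, y_train, epochs=...)` call once the MNIST data is loaded and reshaped.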

    #build-a-neural-network #neural-networks #classify-digits #machine-learning

  • Dueling Neural Networks

    “What I cannot create, I do not understand.” — Richard Feynman

    [Image: GANs generated by a computer]

    The above images look real, but more than that, they look familiar. They resemble a famous actress that you may have seen on television or in the movies. They are not real, however. A new type of neural network created them. Generative Adversarial Networks (GANs), sometimes called generative networks, generated these fake images. The NVIDIA research team used this new technique by feeding thousands of photos of celebrities to a neural network. The neural network produced thousands of pictures, like the ones above, that resembled the famous faces. They look real, but machines created them. #gans allow researchers to build images that look like the real ones that share many of the features the neural (...)

    #birthday-paradox #deep-learning #generative-adversarial #machine-learning

  • #perceptron — Deep Learning Basics

    Perceptron — Deep Learning Basics

    An upgrade to the McCulloch-Pitts Neuron. The perceptron is a fundamental unit of the neural network which takes weighted inputs, processes them, and is capable of performing binary classification. In this post, we will discuss the working of the Perceptron model. This is a follow-up blog post to my previous post on the McCulloch-Pitts Neuron.

    In 1958 Frank Rosenblatt proposed the perceptron, a more generalized computational model than the McCulloch-Pitts Neuron. The important feature of Rosenblatt’s proposed perceptron was the introduction of weights for the inputs. Later, in the 1960s, Rosenblatt’s model was refined and perfected by Minsky and Papert; Rosenblatt’s model is called the classical perceptron, and the model analyzed by Minsky and Papert is simply called the perceptron. Disclaimer: (...)

    #neurons #artificial-intelligence #deep-learning #deep-learning-basics

  • Can #blockchain with Artificial Intelligence Fight Deep Fake?

    Truth has been the subject of discussion in its own right, objectively and independently of the ways we think about it or describe it, for many ages. Philosophical theories about truth may have many relative grounds, but in mathematics there exists absolute truth. Can truth shapeshift? In an emotion-based market, truth is subjective to the intellectual spectrum of people’s beliefs and opinions. The deepfake video of Barack Obama’s speech created by BuzzFeed using powerful face-swapping neural network technology is one such example. What is a deepfake? “Deepfake, a portmanteau of ‘deep learning’ and ‘fake’,[1] is an artificial intelligence-based human image synthesis technique. It is used to combine and superimpose existing images and (...)

    #machine-learning #artificial-intelligence #deep-learning #venture-capital

  • Building a Neural Network Only Using NumPy

    Using Andrew Ng’s Project Structure to Build a Neural Net in Python

    Introduction

    After having completed the Deep Learning specialization taught by Andrew Ng, I have decided to work through some of the assignments of the specialization and try to figure out the code myself without only filling in certain parts of it. Doing so, I want to deepen my understanding of neural networks and help others gain intuition by documenting my progress in articles. The complete notebook is available here.

    In this article, I’m going to build a neural network in #python only using NumPy, based on the project structure proposed in the Deep Learning specialization:

    1. Define the structure of the neural network
    2. Initialize the parameters of the neural network defined in step one
    3. Loop (...)
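    Those steps can be outlined in NumPy for a one-hidden-layer network (an illustrative sketch following the same structure, with made-up sizes and data, not the article's notebook code; the backward pass is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(42)

# Step 1: define the structure (input, hidden, output sizes).
n_x, n_h, n_y = 2, 4, 1

# Step 2: initialize the parameters.
W1 = rng.normal(scale=0.01, size=(n_h, n_x)); b1 = np.zeros((n_h, 1))
W2 = rng.normal(scale=0.01, size=(n_y, n_h)); b2 = np.zeros((n_y, 1))

sigmoid = lambda z: 1 / (1 + np.exp(-z))
X = rng.normal(size=(n_x, 5))                      # 5 toy training examples
Y = (X.sum(axis=0, keepdims=True) > 0).astype(float)

# Step 3: loop — forward pass and cost (gradient updates omitted here).
for _ in range(10):
    A1 = np.tanh(W1 @ X + b1)       # hidden-layer activations
    A2 = sigmoid(W2 @ A1 + b2)      # output probabilities
    cost = -np.mean(Y * np.log(A2) + (1 - Y) * np.log(1 - A2))
print(A2.shape)  # → (1, 5)
```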

    #deep-learning #machine-learning #artificial-intelligence #data-science

  • The Perceptron

    In machine learning, the perceptron is an #Algorithm for supervised learning of binary classifiers. A binary classifier is a model which can decide whether an input belongs to some specific class. Neural networks work the same way as the perceptron: the perceptron is a single-layer neural network, and a multi-layer perceptron is called a neural network. During this session we will perform the Perceptron physically as a game. Afterwards, we will look at the code. For those wishing to experiment (...)
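    For those who want to look at code straight away, the perceptron's decision rule and learning update fit in a few lines (a generic sketch learning the logical AND function, not the workshop's own material):

```python
# Perceptron: supervised learning of a binary classifier.
def predict(w, b, x):
    """Fire (1) if the weighted sum plus bias crosses the threshold."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train(data, epochs=10, lr=0.1):
    """Nudge weights toward each misclassified example (perceptron rule)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            error = target - predict(w, b, x)
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Logical AND is linearly separable, so the perceptron can learn it.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # → [0, 0, 0, 1]
```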


    / #Workshop, #Netnative_literature, Algorithm

  • How we used #ai to hybridize humans with cartoon animals and made a business out of it.

    Have you ever imagined yourself as a cartoon character? Well, now this is more than real. We are a team of 20 engineers and art designers who have developed a machine learning technology that morphs human faces with animated characters. The process starts by constructing a user’s 3D face model from just a single selfie shot. Importantly, our technology even works with older, regular smartphone cameras. With this single photo, our neural network builds a 3D mesh of the user’s head [image: the neural network regresses a 3D model from a 2D photo]. Next, three other neural networks swing into action. The first draws eyebrows, the second detects and matches eye color, and the third detects and draws glasses if the user is wearing them. When these elements are ready, we morph the user with (...)

    #machine-learning #artificial-intelligence #ar #startup

  • GIPSA-lab invites Pablo JENSEN, CNRS research director at the Laboratoire de Physique de l’ENS de LYON, for an exceptional seminar on 10 January 2019 at 10:30 am.

    The unexpected link between neural nets and liberalism

    Sixty years ago, Frank Rosenblatt, a psychologist working for the army, invented the perceptron, the first neural network capable of learning. Unexpectedly, Rosenblatt cites, as a major source of inspiration, an economist: Friedrich Hayek. Hayek is well known for his 1974 Nobel prize… and for his ultra-liberal stances, justifying the Pinochet coup in a Chilean newspaper: “Personally, I prefer a liberal dictator to a democratic government that lacks liberalism”. This talk presents ongoing work on the link between Hayek’s ideology and neural networks.

    After a PhD on experimental condensed-matter physics, Pablo JENSEN worked for 15 years on the modeling of nanostructure growth. This led to major publications in top journals, including Nature, Phys Rev Lett and a widely cited review in Rev Mod Phys. After these achievements, he decided to follow an unconventional path and switch to the modeling of social systems. It takes time to become familiar with social science topics and literature, but it is mandatory in order to establish serious interdisciplinary connections. During that period, he also had national responsibilities at CNRS, to improve the communication of physics. This investment has now started to pay off, as shown by recent publications in major interdisciplinary or social science (geography, economics, sociology) journals, including PNAS, J Pub Eco and British J Sociology. His present work takes advantage of the avalanche of social data available on the Web to improve our understanding of society. To achieve this, he collaborates with hard scientists to develop appropriate analysis tools and with social scientists to find relevant questions and interpretations.
    His latest book: Pourquoi la société ne se laisse pas mettre en équations, Pablo Jensen, Seuil, coll. “Science ouverte”, March 2018
    Personal Web page:

    Seminar venue: Laboratoire GIPSA-lab, 11 rue des Mathématiques, Campus de Saint Martin d’Hères, salle Mont-Blanc (bâtiment Ampère D, 1st floor)

    #grenoble #neural_net #liberalism

  • ConvNet from scratch : just lovely Numpy, Forward Pass |Part 1|

    High-level frameworks and APIs make it a lot easier for us to implement such a complex architecture, but implementing it from scratch gives us the ground-truth intuition of how ConvNets actually work.

    Outline of the Article

    We’ll be implementing the building blocks of a convolutional neural network! Each function we’ll implement will have detailed instructions that will walk you through the steps needed:

    Zero-Padding
    Convolution forward
    Pooling forward

    We’ll use DLS Jupyter notebooks to execute our modules. Check out DLS here. It comes with the libraries and frameworks required for Deep Learning pre-installed, so it’s good to go for DL. A video walkthrough of Deep Cognition. Generate stories using RNNs |pure Mathematics with code|:

    Zero Padding

    Zero padding adds zeros around the borders (...)
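    The first of those building blocks, zero-padding, is a one-liner in NumPy (a sketch following the same outline; `np.pad` with its default constant mode does the heavy lifting):

```python
import numpy as np

def zero_pad(X, pad):
    """Pad a batch of images (m, height, width, channels) with `pad` zeros
    around the height and width dimensions only."""
    return np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)))

X = np.ones((2, 3, 3, 1))   # batch of two 3x3 single-channel images
Xp = zero_pad(X, 2)
print(Xp.shape)             # → (2, 7, 7, 1)
print(Xp[0, 0, 0, 0])       # → 0.0 (borders are zeros)
```

Padding keeps the spatial dimensions from shrinking at every convolution and lets the filter see the pixels at the image border.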

    #deep-learning #machine-learning #technology #python #artificial-intelligence

  • Preprocess Keras Model for TensorSpace

    How to preprocess a Keras model to be TensorSpace compatible for neural network 3D visualization

    TensorSpace & Keras

    “TensorSpace is a neural network 3D visualization framework.”
    “Keras is a high-level neural network API.”

    Introduction

    You may have learned that TensorSpace can be used to 3D-visualize neural networks. You might have read my previous introduction to TensorSpace, and maybe you found the model preprocessing a little complicated. Hence today, I want to talk about the model preprocessing for TensorSpace in more detail; to be more specific, how to preprocess a deep learning model built with Keras to be TensorSpace compatible.

    Fig. 1 — Use TensorSpace to visualize a LeNet built by Keras

    What should we have? To make a model built by Keras TensorSpace compatible, (...)

    #python #data-visualization #machine-learning #technology #javascript

  • How Artists Can Set Up Their Own Neural Network — Part 3 — Image Generation

    How Artists Can Set Up Their Own Neural Network — Part 3 — Image Generation

    Alright, so we’ve installed Linux and the neural network; now it’s time to actually run it! First though, I want to apologize for the delay in getting these last two parts of the #tutorial series out. As I explained in my Skonk Works post, I’ve been learning so fast that it’s actually been kind of hard to find time to digest and write any of it down. For instance, this tutorial series began with teaching you how to install Ubuntu 16.04, but support for Ubuntu 16.04 has just ended and you really should install Ubuntu 18.04, which is what I did after wiping my desktop and turning it into a full-time personal cloud server! This is good because now I have a completely dedicated Linux machine to run neural network batch jobs on (...)

    #artist #artist-neural-network #neural-networks #setup-neural-network

  • Fake fingerprints can imitate real ones in biometric systems – research

    DeepMasterPrints created by a machine learning technique have an error rate of only one in five. Researchers have used a neural network to generate artificial fingerprints that work as a “master key” for biometric identification systems, proving that fake fingerprints can be created. According to a paper presented at a security conference in Los Angeles, the artificially generated fingerprints, dubbed “DeepMasterPrints” by the researchers from New York University, were able to imitate more than one (...)

    #fraude #biométrie #empreintes

  • In the Age of A.I., Is Seeing Still Believing ? | The New Yorker

    In a media environment saturated with fake news, such technology has disturbing implications. Last fall, an anonymous Redditor with the username Deepfakes released a software tool kit that allows anyone to make synthetic videos in which a neural network substitutes one person’s face for another’s, while keeping their expressions consistent. Along with the kit, the user posted pornographic videos, now known as “deepfakes,” that appear to feature various Hollywood actresses. (The software is complex but comprehensible: “Let’s say for example we’re perving on some innocent girl named Jessica,” one tutorial reads. “The folders you create would be: ‘jessica; jessica_faces; porn; porn_faces; model; output.’ ”) Around the same time, “Synthesizing Obama,” a paper published by a research group at the University of Washington, showed that a neural network could create believable videos in which the former President appeared to be saying words that were really spoken by someone else. In a video voiced by Jordan Peele, Obama seems to say that “President Trump is a total and complete dipshit,” and warns that “how we move forward in the age of information” will determine “whether we become some kind of fucked-up dystopia.”

    “People have been doing synthesis for a long time, with different tools,” he said. He rattled off various milestones in the history of image manipulation: the transposition, in a famous photograph from the eighteen-sixties, of Abraham Lincoln’s head onto the body of the slavery advocate John C. Calhoun; the mass alteration of photographs in Stalin’s Russia, designed to purge his enemies from the history books; the convenient realignment of the pyramids on the cover of National Geographic, in 1982; the composite photograph of John Kerry and Jane Fonda standing together at an anti-Vietnam demonstration, which incensed many voters after the Times credulously reprinted it, in 2004, above a story about Kerry’s antiwar activities.

    “In the past, anybody could buy Photoshop. But to really use it well you had to be highly skilled,” Farid said. “Now the technology is democratizing.” It used to be safe to assume that ordinary people were incapable of complex image manipulations. Farid recalled a case—a bitter divorce—in which a wife had presented the court with a video of her husband at a café table, his hand reaching out to caress another woman’s. The husband insisted it was fake. “I noticed that there was a reflection of his hand in the surface of the table,” Farid said, “and getting the geometry exactly right would’ve been really hard.” Now convincing synthetic images and videos were becoming easier to make.

    The acceleration of home computing has converged with another trend: the mass uploading of photographs and videos to the Web. Later, when I sat down with Efros in his office, he explained that, even in the early two-thousands, computer graphics had been “data-starved”: although 3-D modellers were capable of creating photorealistic scenes, their cities, interiors, and mountainscapes felt empty and lifeless. True realism, Efros said, requires “data, data, data” about “the gunk, the dirt, the complexity of the world,” which is best gathered by accident, through the recording of ordinary life.

    Today, researchers have access to systems like ImageNet, a site run by computer scientists at Stanford and Princeton which brings together fourteen million photographs of ordinary places and objects, most of them casual snapshots posted to Flickr, eBay, and other Web sites. Initially, these images were sorted into categories (carrousels, subwoofers, paper clips, parking meters, chests of drawers) by tens of thousands of workers hired through Amazon Mechanical Turk. Then, in 2012, researchers at the University of Toronto succeeded in building neural networks capable of categorizing ImageNet’s images automatically; their dramatic success helped set off today’s neural-networking boom. In recent years, YouTube has become an unofficial ImageNet for video. Efros’s lab has overcome the site’s “platform bias”—its preference for cats and pop stars—by developing a neural network that mines, from “life style” videos such as “My Spring Morning Routine” and “My Rustic, Cozy Living Room,” clips of people opening packages, peering into fridges, drying off with towels, brushing their teeth. This vast archive of the uninteresting has made a new level of synthetic realism possible.

    In 2016, the Defense Advanced Research Projects Agency (DARPA) launched a program in Media Forensics, or MediFor, focussed on the threat that synthetic media poses to national security. Matt Turek, the program’s manager, ticked off possible manipulations when we spoke: “Objects that are cut and pasted into images. The removal of objects from a scene. Faces that might be swapped. Audio that is inconsistent with the video. Images that appear to be taken at a certain time and place but weren’t.” He went on, “What I think we’ll see, in a couple of years, is the synthesis of events that didn’t happen. Multiple images and videos taken from different perspectives will be constructed in such a way that they look like they come from different cameras. It could be something nation-state driven, trying to sway political or military action. It could come from a small, low-resource group. Potentially, it could come from an individual.”

    As with today’s text-based fake news, the problem is double-edged. Having been deceived by a fake video, one begins to wonder whether many real videos are fake. Eventually, skepticism becomes a strategy in itself. In 2016, when the “Access Hollywood” tape surfaced, Donald Trump acknowledged its accuracy while dismissing his statements as “locker-room talk.” Now Trump suggests to associates that “we don’t think that was my voice.”

    “The larger danger is plausible deniability,” Farid told me. It’s here that the comparison with counterfeiting breaks down. No cashier opens up the register hoping to find counterfeit bills. In politics, however, it’s often in our interest not to believe what we are seeing.

    As alarming as synthetic media may be, it may be more alarming that we arrived at our current crises of misinformation—Russian election hacking; genocidal propaganda in Myanmar; instant-message-driven mob violence in India—without it. Social media was enough to do the job, by turning ordinary people into media manipulators who will say (or share) anything to win an argument. The main effect of synthetic media may be to close off an escape route from the social-media bubble. In 2014, video of the deaths of Michael Brown and Eric Garner helped start the Black Lives Matter movement; footage of the football player Ray Rice assaulting his fiancée catalyzed a reckoning with domestic violence in the National Football League. It seemed as though video evidence, by turning us all into eyewitnesses, might provide a path out of polarization and toward reality. With the advent of synthetic media, all that changes. Body cameras may still capture what really happened, but the aesthetic of the body camera—its claim to authenticity—is also a vector for misinformation. “Eyewitness video” becomes an oxymoron. The path toward reality begins to wash away.

    #Fake_news #Image #Synthèse

  • Deep learning of aftershock patterns following large earthquakes | Nature

    We use a deep-learning approach to identify a static-stress-based criterion that forecasts aftershock locations without prior assumptions about fault orientation. We show that a neural network trained on more than 131,000 mainshock–aftershock pairs can predict the locations of aftershocks in an independent test dataset of more than 30,000 mainshock–aftershock pairs more accurately (area under curve of 0.849) than can classic Coulomb failure stress change (area under curve of 0.583). We find that the learned aftershock pattern is physically interpretable.
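    The abstract compares the two forecasting criteria by area under the ROC curve (AUC). As an illustration of what that metric measures (this is not the paper's code, and the data below are made up), here is a minimal NumPy sketch computing AUC as the probability that a randomly chosen positive example outranks a randomly chosen negative one:

```python
import numpy as np

def roc_auc(labels, scores):
    """Area under the ROC curve, computed via the Mann-Whitney U
    formulation: the probability that a randomly chosen positive
    example scores higher than a randomly chosen negative one."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    # Compare every positive score against every negative score;
    # ties count as half a correct ranking.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# A perfectly separating score gives AUC = 1.0; 0.5 is chance level.
labels = [0, 0, 1, 1, 1, 0]
perfect = [0.1, 0.2, 0.8, 0.9, 0.7, 0.3]
print(roc_auc(labels, perfect))  # → 1.0
```

    On this scale, the paper's 0.849 for the learned criterion versus 0.583 for Coulomb failure stress change is a substantial gap: the latter ranks aftershock sites only slightly better than chance.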

  • Detecting ‘deepfake’ videos in the blink of an eye

    What’s a ‘deepfake,’ anyway?

    Making a deepfake video is a lot like translating between languages. Services like Google Translate use machine learning – computer analysis of tens of thousands of texts in multiple languages – to detect word-use patterns that they use to create the translation.

    Deepfake algorithms work the same way: They use a type of machine learning system called a deep neural network to examine the facial movements of one person. Then they synthesize images of another person’s face making analogous movements. Doing so effectively creates a video of the target person appearing to do or say the things the source person did.
    How deepfake videos are made.

    Before they can work properly, deep neural networks need a lot of source information, such as photos of the people who will be the source or the target of the impersonation. The more images used to train a deepfake algorithm, the more realistic the digital impersonation will be.
    Detecting blinking

    There are still flaws in this new type of algorithm. One of them has to do with how the simulated faces blink – or don’t. Healthy adult humans blink somewhere between every 2 and 10 seconds, and a single blink takes between one-tenth and four-tenths of a second. That’s what would be normal to see in a video of a person talking. But it’s not what happens in many deepfake videos.
    A real person blinks while talking.
    A simulated face doesn’t blink the way a real person does.

    When a deepfake algorithm is trained on face images of a person, it’s dependent on the photos that are available on the internet that can be used as training data. Even for people who are photographed often, few images are available online showing their eyes closed. Not only are photos like that rare – because people’s eyes are open most of the time – but photographers don’t usually publish images where the main subjects’ eyes are shut.

    Without training images of people blinking, deepfake algorithms are less likely to create faces that blink normally. When we calculated the overall rate of blinking and compared it with the natural range, we found that people in deepfake videos blink much less frequently than real people do. Our research uses machine learning to examine eye opening and closing in videos.
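    The rate comparison described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the six-blinks-per-minute cutoff is an assumed threshold derived from the 2-to-10-second natural interval quoted earlier:

```python
def blink_rate_flag(blink_times, clip_seconds, min_blinks_per_min=6.0):
    """Flag a clip whose blink rate falls well below the natural range.
    Healthy adults blink roughly every 2-10 seconds, i.e. about 6-30
    blinks per minute; the cutoff here is an illustrative assumption,
    not the detector's actual parameter."""
    rate = len(blink_times) / (clip_seconds / 60.0)  # blinks per minute
    return rate, rate < min_blinks_per_min

# A 60-second clip with a single detected blink is suspicious.
rate, suspicious = blink_rate_flag([12.5], 60.0)
print(rate, suspicious)  # → 1.0 True
```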

    This insight suggested a way to detect deepfake videos, so we developed a method that detects when the person in a video blinks. To be more specific, it scans each frame of the video in question, detects the faces in it and then automatically locates the eyes. It then uses another deep neural network to determine whether each detected eye is open or closed, based on the eye’s appearance, geometric features and movement.
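    The last stage of such a pipeline, turning per-frame open/closed decisions into blink events, might look like the following sketch. The probability threshold and the 0.1-to-0.4-second duration window (taken from the figures quoted earlier) are assumptions for illustration, not the authors' actual parameters:

```python
def blinks_from_probs(open_probs, fps, closed_thresh=0.5,
                      min_dur=0.1, max_dur=0.4):
    """Turn a per-frame eye-open probability sequence (as a CNN eye
    classifier might produce) into blink events. A blink is taken to
    be a run of 'closed' frames lasting roughly 0.1-0.4 s; returns a
    list of (start_time_s, duration_s) tuples."""
    closed = [p < closed_thresh for p in open_probs]
    blinks, start = [], None
    for i, c in enumerate(closed + [False]):  # sentinel closes the last run
        if c and start is None:
            start = i
        elif not c and start is not None:
            dur = (i - start) / fps
            if min_dur <= dur <= max_dur:
                blinks.append((start / fps, dur))
            start = None
    return blinks

# 30 fps: frames 10-15 closed, i.e. one 0.2 s blink starting near 0.33 s.
probs = [0.9] * 10 + [0.1] * 6 + [0.9] * 14
print(blinks_from_probs(probs, fps=30))
```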

    We know that our work is taking advantage of a flaw in the sort of data available to train deepfake algorithms. To avoid falling prey to a similar flaw, we have trained our system on a large library of images of both open and closed eyes. This method seems to work well: we have achieved a detection rate of over 95 percent.

    This isn’t the final word on detecting deepfakes, of course. The technology is improving rapidly, and the competition between generating and detecting fake videos is analogous to a chess game. In particular, blinking can be added to deepfake videos by including face images with closed eyes or using video sequences for training. People who want to confuse the public will get better at making false videos – and we and others in the technology community will need to continue to find ways to detect them.

    #Fake_news #Fake_videos #Intelligence_artificielle #Deep_learning

  • Forecasting Market Movements Using #tensorflow

    Multi-Layer Perceptron for Classification. Is it possible to create a neural network for predicting daily market movements from a set of standard trading indicators? In this post we’ll be looking at a simple model using Tensorflow to create a framework for testing and development, along with some preliminary results and suggested improvements.

    The ML Task and Input Features. To keep the basic design simple, it’s set up for a binary classification task, predicting whether the next day’s close is going to be higher or lower than the current one, corresponding to a prediction to either go long or short for the next time period. In reality, this could be applied to a bot which calculates and executes a set of positions at the start of a trading day to capture the day’s (...)
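    The binary-labelling step the post describes can be sketched in plain NumPy. The prices below are made up, and the post's actual Tensorflow model is omitted; this only shows how the long/short target is derived from a close-price series:

```python
import numpy as np

# Illustrative daily closing prices (made-up numbers).
closes = np.array([100.0, 101.5, 101.0, 102.3, 101.8, 103.0])

# Binary target for the post's task: 1 if the NEXT day's close is
# higher than today's (go long), else 0 (go short). The final day
# has no "next day", so it drops out of the training set.
labels = (closes[1:] > closes[:-1]).astype(int)
print(labels)  # → [1 0 1 0 1]
```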

    #forecasting-tensorflow #machine-learning #market-movement

  • Don’t Trust a Pickle

    If you are using #python, especially for machine learning, you should be somewhat familiar with the standard library module named pickle. It is used for Python object serialization and comes in very handy in a wide range of applications. Some objects that you might want to serialize: a trained scikit-learn model; a Pandas DataFrame that you got after a lengthy join of several tables; basically any Python object that consists of heterogeneous data that you might want to quickly load in a new environment in the future (for homogeneous data, like neural-network weights or a training-data tensor, it’s better to use a more suitable format like HDF5).

    In this article I would like to tell you why you should be very cautious when unpickling an object that you obtained from an untrusted (...)
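    The danger the article alludes to is easy to demonstrate: at load time, pickle will call whatever callable an object's __reduce__ method returns. In this harmless illustration, print stands in for what could just as easily be os.system:

```python
import pickle

class Malicious:
    # pickle calls __reduce__ to learn how to reconstruct the object,
    # and pickle.loads will call ANY returned callable with the given
    # arguments -- here a harmless print, but an attacker's payload
    # could return os.system or subprocess.call instead.
    def __reduce__(self):
        return (print, ("code ran during unpickling!",))

payload = pickle.dumps(Malicious())
pickle.loads(payload)  # prints the message; no Malicious object is ever rebuilt
```

    This is why the pickle documentation itself warns that you should never unpickle data received from an untrusted or unauthenticated source.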

    #python-pickl #dont-trust-a-pickle #pickles #programming

  • #rnn or Recurrent Neural Network for Noobs

    What is a Recurrent Neural Network or RNN, how does it work, and where can it be used? This article tries to answer those questions. It also shows a demo implementation of an RNN used for a specific purpose, which you should be able to generalise for your needs.

    Recurrent Neural Network Architecture

    Know-how: Python and CNN knowledge are required. CNNs are needed to compare why and where an RNN performs better than a CNN; there is no need to understand the math. If you want a refresher, go back to my earlier article on what a CNN is.

    We will begin with the use of the word “Recurrent”. Why is it called recurrent? In English, the word recurrent means “occurring often or repeatedly”. In the case of this type of neural network, it’s called recurrent since it does the same operation over and over on sets of (...)
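    That "same operation over and over" can be made concrete with a minimal vanilla-RNN forward pass in NumPy. The weights and inputs are random and the sizes arbitrary; this is a sketch of the recurrence, not the article's demo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny vanilla RNN: the SAME weights are applied at every time step
# (hence "recurrent"), with the hidden state carrying memory forward.
n_in, n_hidden = 3, 4
W_x = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input weights
W_h = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # recurrent weights
b = np.zeros(n_hidden)

def rnn_forward(xs):
    h = np.zeros(n_hidden)        # initial hidden state
    states = []
    for x in xs:                  # one identical update per sequence element
        h = np.tanh(W_x @ x + W_h @ h + b)
        states.append(h)
    return states

seq = [rng.normal(size=n_in) for _ in range(5)]
states = rnn_forward(seq)
print(len(states), states[-1].shape)  # → 5 (4,)
```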

    #deep-learning #machine-learning #recurrent-neural-network #neural-networks