industryterm:neural network

  • Training a single AI model can emit as much carbon as five cars in their lifetimes - MIT Technology Review
    https://www.technologyreview.com/s/613630/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in

    In a new paper, researchers at the University of Massachusetts, Amherst, performed a life cycle assessment for training several common large AI models. They found that the process can emit more than 626,000 pounds of carbon dioxide equivalent—nearly five times the lifetime emissions of the average American car (and that includes manufacture of the car itself).
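The arithmetic behind the headline comparison can be sanity-checked in a couple of lines. The 626,000-pound figure is from the article; the per-car lifetime figure is the paper's own assumption (roughly 126,000 lbs CO2e including manufacture), quoted here only for illustration:

```python
# Figures as reported: ~626,000 lbs CO2e for the most expensive training
# run, vs. the paper's assumed lifetime footprint of an average American
# car (~126,000 lbs incl. manufacture; an assumption, for illustration).
model_emissions_lbs = 626_000
car_lifetime_lbs = 126_000
ratio = model_emissions_lbs / car_lifetime_lbs
print(round(ratio, 1))  # → 5.0  ("nearly five times")
```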

    It’s a jarring quantification of something AI researchers have suspected for a long time. “While probably many of us have thought of this in an abstract, vague level, the figures really show the magnitude of the problem,” says Carlos Gómez-Rodríguez, a computer scientist at the University of A Coruña in Spain, who was not involved in the research. “Neither I nor other researchers I’ve discussed them with thought the environmental impact was that substantial.”

    They found that the computational and environmental costs of training grew proportionally to model size and then exploded when additional tuning steps were used to increase the model’s final accuracy. In particular, they found that a tuning process known as neural architecture search, which tries to optimize a model by incrementally tweaking a neural network’s design through exhaustive trial and error, had extraordinarily high associated costs for little performance benefit. Without it, the most costly model, BERT, had a carbon footprint of roughly 1,400 pounds of carbon dioxide equivalent, close to a round-trip trans-American flight.

    What’s more, the researchers note that the figures should only be considered as baselines. “Training a single model is the minimum amount of work you can do,” says Emma Strubell, a PhD candidate at the University of Massachusetts, Amherst, and the lead author of the paper. In practice, it’s much more likely that AI researchers would develop a new model from scratch or adapt an existing model to a new data set, either of which can require many more rounds of training and tuning.

    The significance of those figures is colossal—especially when considering the current trends in AI research. “In general, much of the latest research in AI neglects efficiency, as very large neural networks have been found to be useful for a variety of tasks, and companies and institutions that have abundant access to computational resources can leverage this to obtain a competitive advantage,” Gómez-Rodríguez says. “This kind of analysis needed to be done to raise awareness about the resources being spent [...] and will spark a debate.”

    “What probably many of us did not comprehend is the scale of it until we saw these comparisons,” echoed Siva Reddy, a postdoc at Stanford University who was not involved in the research.
    The privatization of AI research

    The results underscore another growing problem in AI, too: the sheer intensity of resources now required to produce paper-worthy results has made it increasingly challenging for people working in academia to continue contributing to research.

    #Intelligence_artificielle #Consommation_énergie #Empreinte_carbone

  • #Nextcloud 16 becomes smarter with #Machine_Learning for security and productivity – Nextcloud
    https://nextcloud.com/blog/nextcloud-16-becomes-smarter-with-machine-learning-for-security-and-produ

    The #Suspicious #Login Detection app tracks successful logins on the instance for a set period of time (default is 60 days) and then uses the generated data to train a neural network. As soon as the first model is trained, the app starts classifying logins. Should it detect a password login classified as suspicious by the trained model, it will add an entry to the suspicious_login table, including the timestamp, request id and URL. The user will get a notification and the system administrator will be able to find this information in the logs.
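The idea can be sketched in a few lines of Python. Everything here is an illustrative stand-in: the feature scheme, the /24 grouping, and the set-membership "model" are invented for the sketch, whereas the real app trains a neural network (with php-ml) on the logged user/IP data:

```python
import hashlib

def login_key(uid: str, ip: str) -> str:
    # Illustrative feature choice: group IPv4 addresses by /24 network.
    net = ".".join(ip.split(".")[:3])
    return hashlib.sha256(f"{uid}|{net}".encode()).hexdigest()

class SuspiciousLoginDetector:
    """Toy stand-in: remember (user, network) pairs seen during the
    training window and flag anything unseen. The real app trains a
    neural network on this data instead of using set membership."""
    def __init__(self):
        self.seen = set()

    def train(self, history):
        for uid, ip in history:
            self.seen.add(login_key(uid, ip))

    def classify(self, uid, ip):
        return "suspicious" if login_key(uid, ip) not in self.seen else "ok"

det = SuspiciousLoginDetector()
det.train([("alice", "192.168.1.10"), ("alice", "192.168.1.22")])
print(det.classify("alice", "192.168.1.99"))  # same /24 network → "ok"
print(det.classify("alice", "203.0.113.5"))   # unseen network → "suspicious"
```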

More details on the blog of the person who developed the thing:

    https://blog.wuc.me/2019/04/25/nextcloud-suspicious-login-detection

Which uses https://php-ml.org

There might be things worth borrowing for #SPIP in there...

  • 10 Open Source #ai Project Ideas For Startups
    https://hackernoon.com/10-open-source-ai-project-ideas-for-startups-1afda6fb0aa8?source=rss----

The open source AI projects pay particular attention to deep learning, machine learning, neural networks and other applications that are extending the use of AI. Those involved in deep research have always had the goal of building machines capable of thinking like human beings. For the last few years, computer scientists have made unbelievable progress in Artificial Intelligence (AI), to the extent that interest in AI project ideas keeps increasing among technology enthusiasts. As per Gartner’s prediction, Artificial Intelligence technologies are going to be virtually prevalent in nearly all new software products and services. The contribution of open source software development to the rise of Artificial Intelligence is immeasurable. And, innumerable top machine learning, deep learning, (...)

    #startup #business #open-source #machine-learning

  • 10 Top Open Source AI Technologies For Startups
    https://hackernoon.com/10-top-open-source-ai-technologies-for-startups-7c5f10b82fb1?source=rss-

In the area of technology research, Artificial Intelligence is one of the hottest trends. In fact, many startups have already made progress in areas like natural language, neural networks, AI, machine learning and image processing. Many other big companies like Google, Microsoft, IBM, Amazon and Facebook are heavily investing in their own R&D. Hence, it is no surprise that AI applications are now increasingly useful for small as well as large businesses in 2019. In this blog, I have listed the top 10 open source AI technologies for small businesses and startups. 1) Apache SystemML: the machine learning technology created at IBM, which has reached top-level project status in the Apache Software Foundation and is a flexible and scalable machine learning system. The important (...)

    #machine-learning #artificial-intelligence #open-source #startup #open-source-ai

  • Malicious Attacks to Neural Networks
    https://hackernoon.com/malicious-attacks-to-neural-networks-8b966793dfe1?source=rss----3a8144ea

Adversarial Examples for Humans — An Introduction. This article is based on a twenty-minute talk I gave for TrendMicro Philippines Decode Event 2018. It’s about how malicious people can attack deep neural networks. A trained neural network is a model; I’ll be using the terms network (short for neural network) and model interchangeably throughout this article. Deep learning in a nutshell: the basic building block of any neural network is an artificial neuron. Essentially, a neuron takes a bunch of inputs and outputs a value. A neuron gets the weighted sum of the inputs (plus a number called a bias) and feeds it to a non-linear activation function. Then, the function outputs a value that can be used as one of the inputs to other neurons. You can connect neurons in various different (usually (...)
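The neuron described in the excerpt (weighted sum of inputs plus a bias, fed through a non-linear activation) fits in a few lines; the sigmoid activation and the sample values below are just for illustration:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias,
    passed through a non-linear activation (sigmoid here)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # output usable as input to other neurons

# z = 1.0*0.5 + 2.0*(-0.25) + 0.1 = 0.1, and sigmoid(0.1) ≈ 0.525
print(round(neuron([1.0, 2.0], [0.5, -0.25], 0.1), 3))  # → 0.525
```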

    #artificial-intelligence #neural-networks #deep-learning #machine-learning

  • YouTube Executives Ignored Warnings, Let Toxic Videos Run Rampant - Bloomberg
    https://www.bloomberg.com/news/features/2019-04-02/youtube-executives-ignored-warnings-letting-toxic-videos-run-rampant

    Wojcicki’s media behemoth, bent on overtaking television, is estimated to rake in sales of more than $16 billion a year. But on that day, Wojcicki compared her video site to a different kind of institution. “We’re really more like a library,” she said, staking out a familiar position as a defender of free speech. “There have always been controversies, if you look back at libraries.”

    Since Wojcicki took the stage, prominent conspiracy theories on the platform—including one on child vaccinations; another tying Hillary Clinton to a Satanic cult—have drawn the ire of lawmakers eager to regulate technology companies. And YouTube is, a year later, even more associated with the darker parts of the web.

    The conundrum isn’t just that videos questioning the moon landing or the efficacy of vaccines are on YouTube. The massive “library,” generated by users with little editorial oversight, is bound to have untrue nonsense. Instead, YouTube’s problem is that it allows the nonsense to flourish. And, in some cases, through its powerful artificial intelligence system, it even provides the fuel that lets it spread.

But precisely NOT! It cannot be a “library”, because a library only keeps documents that have been published, and which have therefore already passed a first round of validation (or at least of editorial responsibility... someone will answer for it in court if need be).

YouTube is... YouTube, something peculiar to the internet, which fulfills a major function... and is also a danger to thought, because of the “attention economy”.

    The company spent years chasing one business goal above others: “Engagement,” a measure of the views, time spent and interactions with online videos. Conversations with over twenty people who work at, or recently left, YouTube reveal a corporate leadership unable or unwilling to act on these internal alarms for fear of throttling engagement.

    In response to criticism about prioritizing growth over safety, Facebook Inc. has proposed a dramatic shift in its core product. YouTube still has struggled to explain any new corporate vision to the public and investors – and sometimes, to its own staff. Five senior personnel who left YouTube and Google in the last two years privately cited the platform’s inability to tame extreme, disturbing videos as the reason for their departure. Within Google, YouTube’s inability to fix its problems has remained a major gripe. Google shares slipped in late morning trading in New York on Tuesday, leaving them up 15 percent so far this year. Facebook stock has jumped more than 30 percent in 2019, after getting hammered last year.

YouTube’s inertia was illuminated again after a deadly measles outbreak drew public attention to vaccination conspiracies on social media several weeks ago. New data from Moonshot CVE, a London-based firm that studies extremism, found that fewer than twenty YouTube channels that have spread these lies reached over 170 million viewers, many of whom were then recommended other videos laden with conspiracy theories.

    So YouTube, then run by Google veteran Salar Kamangar, set a company-wide objective to reach one billion hours of viewing a day, and rewrote its recommendation engine to maximize for that goal. When Wojcicki took over, in 2014, YouTube was a third of the way to the goal, she recalled in investor John Doerr’s 2018 book Measure What Matters.

    “They thought it would break the internet! But it seemed to me that such a clear and measurable objective would energize people, and I cheered them on,” Wojcicki told Doerr. “The billion hours of daily watch time gave our tech people a North Star.” By October, 2016, YouTube hit its goal.

    YouTube doesn’t give an exact recipe for virality. But in the race to one billion hours, a formula emerged: Outrage equals attention. It’s one that people on the political fringes have easily exploited, said Brittan Heller, a fellow at Harvard University’s Carr Center. “They don’t know how the algorithm works,” she said. “But they do know that the more outrageous the content is, the more views.”

    People inside YouTube knew about this dynamic. Over the years, there were many tortured debates about what to do with troublesome videos—those that don’t violate its content policies and so remain on the site. Some software engineers have nicknamed the problem “bad virality.”

    Yonatan Zunger, a privacy engineer at Google, recalled a suggestion he made to YouTube staff before he left the company in 2016. He proposed a third tier: Videos that were allowed to stay on YouTube, but, because they were “close to the line” of the takedown policy, would be removed from recommendations. “Bad actors quickly get very good at understanding where the bright lines are and skating as close to those lines as possible,” Zunger said.
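Zunger's proposed third tier can be sketched as a policy function. This is purely hypothetical: the score, threshold, and tier names are invented here to illustrate the structure of the proposal, not YouTube's or Zunger's actual design:

```python
def moderation_tier(violates_policy: bool, borderline_score: float,
                    threshold: float = 0.8) -> str:
    """Hypothetical three-tier policy: remove clear violations; keep
    'close to the line' videos but exclude them from recommendations;
    recommend everything else. Score and threshold are invented."""
    if violates_policy:
        return "remove"
    if borderline_score >= threshold:
        return "keep_without_recommendation"  # Zunger's proposed third tier
    return "recommend"

print(moderation_tier(False, 0.9))  # → keep_without_recommendation
print(moderation_tier(False, 0.2))  # → recommend
```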

    His proposal, which went to the head of YouTube policy, was turned down. “I can say with a lot of confidence that they were deeply wrong,” he said.

    Rather than revamp its recommendation engine, YouTube doubled down. The neural network described in the 2016 research went into effect in YouTube recommendations starting in 2015. By the measures available, it has achieved its goal of keeping people on YouTube.

    “It’s an addiction engine,” said Francis Irving, a computer scientist who has written critically about YouTube’s AI system.

    Wojcicki and her lieutenants drew up a plan. YouTube called it Project Bean or, at times, “Boil The Ocean,” to indicate the enormity of the task. (Sometimes they called it BTO3 – a third dramatic overhaul for YouTube, after initiatives to boost mobile viewing and subscriptions.) The plan was to rewrite YouTube’s entire business model, according to three former senior staffers who worked on it.

    It centered on a way to pay creators that isn’t based on the ads their videos hosted. Instead, YouTube would pay on engagement—how many viewers watched a video and how long they watched. A special algorithm would pool incoming cash, then divvy it out to creators, even if no ads ran on their videos. The idea was to reward video stars shorted by the system, such as those making sex education and music videos, which marquee advertisers found too risqué to endorse.
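The pooling scheme described can be sketched in a few lines. The numbers and the proportional split are assumptions for illustration; the article notes the real payment algorithms were tightly guarded:

```python
def split_pool(pool_dollars, watch_hours):
    """Hypothetical sketch of the Project Bean idea as described: pool
    incoming cash, then divide it among creators in proportion to
    engagement (watch time), whether or not ads ran on their videos."""
    total = sum(watch_hours.values())
    return {creator: round(pool_dollars * hours / total, 2)
            for creator, hours in watch_hours.items()}

print(split_pool(1000.0, {"edu": 300, "music": 500, "outrage": 1200}))
# → {'edu': 150.0, 'music': 250.0, 'outrage': 600.0}
```

With these made-up numbers the outrage-heavy channel captures most of the pool, which is exactly the backfire the former staffers describe below.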

Coders at YouTube labored for at least a year to make the project workable. But company managers failed to appreciate how the project could backfire: paying based on engagement risked making its “bad virality” problem worse, since it could have rewarded videos whose popularity was achieved through outrage. One person involved said that the algorithms for doling out payments were tightly guarded. If it had gone into effect, this person said, it’s likely that someone like Alex Jones—the Infowars creator and conspiracy theorist with a huge following on the site, before YouTube booted him last August—would have suddenly become one of the highest paid YouTube stars.

    In February of 2018, the video calling the Parkland shooting victims “crisis actors” went viral on YouTube’s trending page. Policy staff suggested soon after limiting recommendations on the page to vetted news sources. YouTube management rejected the proposal, according to a person with knowledge of the event. The person didn’t know the reasoning behind the rejection, but noted that YouTube was then intent on accelerating its viewing time for videos related to news.

    #YouTube #Economie_attention #Engagement #Viralité

  • Data is the New Oil
    https://hackernoon.com/data-is-the-new-oil-1227197762b2?source=rss----3a8144eabfe3---4

“Data is the new oil. It’s valuable, but if unrefined it cannot really be used. It has to be changed into gas, plastic, chemicals, etc to create a valuable entity that drives profitable activity; so must data be broken down, analyzed for it to have value.” — Clive Humby. Deep Learning is a revolutionary field, but for it to work as intended, it requires data. The area related to these big datasets is known as Big Data, which stands for the abundance of digital data. Data is as important for Deep Learning algorithms as the architecture of the network itself, i.e., the software. Acquiring and cleaning the data is one of the most valuable aspects of the work. Without data, the neural networks cannot learn. Most of the time, researchers can use the data given to them directly, but there are many (...)

    #machine-learning #feifei-li #data-science #imagenet

  • Dueling Neural Networks
    https://hackernoon.com/dueling-neural-networks-a063af14f62e?source=rss----3a8144eabfe3---4

“What I cannot create, I do not understand.” — Richard Feynman. (Image caption: GANs generated by a computer.) The above images look real, but more than that, they look familiar. They resemble a famous actress that you may have seen on television or in the movies. They are not real, however. A new type of neural network created them. Generative Adversarial Networks (GANs), sometimes called generative networks, generated these fake images. The NVIDIA research team used this new technique by feeding thousands of photos of celebrities to a neural network. The neural network produced thousands of pictures, like the ones above, that resembled the famous faces. They look real, but machines created them. #gans allow researchers to build images that look like the real ones and share many of the features the neural (...)
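The adversarial setup behind GANs boils down to two opposing loss functions: a discriminator tries to tell real images from generated ones, while the generator tries to fool it. A minimal sketch of the standard (non-saturating) losses, not NVIDIA's actual training code:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Discriminator wants d_real → 1 (real is real) and d_fake → 0."""
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    """Non-saturating generator loss: push d_fake → 1 (fool the critic)."""
    return -np.mean(np.log(d_fake))

# When the discriminator is fooled half the time (d_fake = 0.5),
# the generator loss is ln 2 ≈ 0.6931:
print(round(float(generator_loss(np.array([0.5]))), 4))  # → 0.6931
```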

    #birthday-paradox #deep-learning #generative-adversarial #machine-learning

  • #perceptron — Deep Learning Basics
    https://hackernoon.com/perceptron-deep-learning-basics-3a938c5f84b6?source=rss----3a8144eabfe3-

Perceptron — Deep Learning Basics. An upgrade to the McCulloch-Pitts Neuron. The perceptron is a fundamental unit of a neural network: it takes weighted inputs, processes them, and is capable of performing binary classification. In this post, we will discuss the working of the Perceptron Model. This is a follow-up blog post to my previous post on the McCulloch-Pitts Neuron. In 1958 Frank Rosenblatt proposed the perceptron, a more generalized computational model than the McCulloch-Pitts Neuron. The important feature of Rosenblatt’s proposed perceptron was the introduction of weights for the inputs. Later, in the 1960s, Rosenblatt’s model was refined and perfected by Minsky and Papert. Rosenblatt’s model is called the classical perceptron, and the model analyzed by Minsky and Papert is called the perceptron. Disclaimer: (...)
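Rosenblatt's contribution, input weights adjusted from errors, can be shown with a tiny perceptron learning the AND function. A toy sketch (learning rate and epoch count are arbitrary):

```python
# Rosenblatt's perceptron learning rule on the AND function (toy example).
def train_perceptron(samples, lr=0.1, epochs=20):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1  # adjustable input weights: the key
            w[1] += lr * err * x2  # feature Rosenblatt added over
            b += lr * err          # the McCulloch-Pitts neuron
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)  # → [0, 0, 0, 1]
```

AND is linearly separable, so the perceptron convergence theorem guarantees this rule finds a separating line; for XOR it never would, which is the limitation Minsky and Papert analyzed.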

    #neurons #artificial-intelligence #deep-learning #deep-learning-basics

  • Can #blockchain with Artificial Intelligence Fight Deep Fake?
    https://hackernoon.com/can-blockchain-with-artificial-intelligence-fight-deep-fake-9b899b4d45e7

Truth has been the subject of discussion in its own right, objectively and independently of the ways we think about it or describe it, for many ages. Philosophical theories about truth may have many relative grounds, but in mathematics there exists absolute truth. Can truth shapeshift? In an emotion-based market, truth is subjective to the intellectual spectrum of people’s beliefs and opinions. The deepfake video of Barack Obama’s speech created by BuzzFeed using powerful face-swapping neural network technology is one such example. https://medium.com/media/d1eb9049368b3a8bf7a4dd9b5a92a8c2/href So what is a deepfake? “Deepfake, a portmanteau of “deep learning” and “fake”,[1] is an artificial intelligence-based human image synthesis technique. It is used to combine and superimpose existing images and (...)

    #machine-learning #artificial-intelligence #deep-learning #venture-capital

  • Building a Neural Network Only Using NumPy
    https://hackernoon.com/building-a-neural-network-only-using-numpy-7ba75da60ec0?source=rss----3a

Using Andrew Ng’s Project Structure to Build a Neural Net in Python. Introduction: after having completed the deeplearning.ai Deep Learning specialization taught by Andrew Ng, I have decided to work through some of the assignments of the specialization and try to figure out the code myself, rather than only filling in certain parts of it. In doing so, I want to deepen my understanding of neural networks and help others gain intuition by documenting my progress in articles. The complete notebook is available here. In this article, I’m going to build a neural network in #python only using NumPy, based on the project structure proposed in the deeplearning.ai Deep Learning specialization: 1. Define the structure of the neural network. 2. Initialize the parameters of the neural network defined in step one. 3. Loop (...)
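The three steps listed (define the structure, initialize the parameters, then loop over forward pass, backward pass, and update) can be sketched with a small NumPy network. This is my own toy example on XOR, not the article's code:

```python
import numpy as np

# Toy task: learn XOR with one hidden layer, following the three steps.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

# Step 1: structure — 2 inputs → 4 hidden units → 1 output.
# Step 2: initialize the parameters.
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

# Step 3: loop — forward pass, gradients, gradient-descent update.
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)                 # hidden activations
    P = sigmoid(H @ W2 + b2)                 # predicted probabilities
    dP = P - Y                               # grad of cross-entropy loss
    dW2, db2 = H.T @ dP, dP.sum(0)
    dH = (dP @ W2.T) * (1 - H ** 2)          # tanh derivative
    dW1, db1 = X.T @ dH, dH.sum(0)
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= 0.1 * g

print((P > 0.5).astype(int).ravel())         # thresholded predictions
```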

    #deep-learning #machine-learning #artificial-intelligence #data-science

  • How we used #ai to hybridize humans with cartoon animals and made a business out of it.
    https://hackernoon.com/how-we-used-ai-to-hybridize-humans-with-cartoon-animals-and-made-a-busin

Have you ever imagined yourself as a cartoon character? Well, now this is more than real. We are a team of 20 engineers and art designers who have developed a machine learning technology that morphs human faces with animated characters. The process starts by constructing a user’s 3D face model from just a single selfie shot. Importantly, our technology even works with older, regular smartphone cameras. With this single photo, our neural network builds a 3D mesh of the user’s head (the neural network regresses a 3D model from a 2D photo). Next, 3 other neural networks swing into action. The first draws eyebrows, the second detects and matches eye color, and the third detects and draws glasses if the user is wearing them. When these elements are ready, we morph the user with (...)

    #machine-learning #artificial-intelligence #ar #startup

GIPSA-lab invites Pablo JENSEN, CNRS research director at the Laboratoire de Physique of the ENS de Lyon, for a special seminar on January 10, 2019 at 10:30 am.

    The unexpected link between neural nets and liberalism

Sixty years ago, Frank Rosenblatt, a psychologist working for the army, invented the perceptron, the first neural network capable of learning. Unexpectedly, Rosenblatt cites, as a major source of inspiration, an economist: Friedrich Hayek. Hayek is well known for his 1974 Nobel prize… and for his ultra-liberal stances, justifying the Pinochet coup in a Chilean newspaper: «Personally, I prefer a liberal dictator to a democratic government that lacks liberalism». This talk presents ongoing work on the link between Hayek’s ideology and neural networks.

After a PhD in experimental condensed-matter physics, Pablo JENSEN worked for 15 years on the modeling of nanostructure growth. This led to major publications in top journals, including Nature, Phys Rev Lett and a widely cited review in Rev Mod Phys. After these achievements, he decided to follow an unconventional path and switch to the modeling of social systems. It takes time to become familiar with social science topics and literature, but this is mandatory to establish serious interdisciplinary connections. During that period, he also had national responsibilities at CNRS, to improve the communication of physics. This investment has now started to pay off, as shown by recent publications in major interdisciplinary or social science (geography, economics, sociology) journals, including PNAS, J Pub Eco and British J Sociology. His present work takes advantage of the avalanche of social data available on the Web to improve our understanding of society. To achieve this, he collaborates with hard scientists to develop appropriate analysis tools and with social scientists to find relevant questions and interpretations.
    His latest book: Pourquoi la société ne se laisse pas mettre en équations, Pablo Jensen, Seuil, coll. “Science ouverte”, March 2018
    Personal Web page : http://perso.ens-lyon.fr/pablo.jensen

Seminar venue: Laboratoire GIPSA-lab, 11 rue des Mathématiques, Campus de Saint Martin d’Hères, Mont-Blanc room (Ampère D building, 1st floor)

    #grenoble #neural_net #liberalism

  • Interview. How Neural Networks And Machine Learning Are Making #games More Interesting
    https://hackernoon.com/interview-how-neural-networks-and-machine-learning-are-making-games-more

[Interview] How Neural Networks And Machine Learning Are Making Games More Interesting. (Image credit: Unsplash.) Machine learning and neural networks are hot topics in many tech areas, and game dev is one of them. There, such new technologies are used to make games more interesting. How this is achieved, which companies are now leaders in new tech adoption and research, when we as users will see any notable results of this research, and lots more is to be discussed today. We will talk to Vladimir Ivanov, a leading expert on ML in gaming. The first question is: what do you mean when you say that games are “not interesting” and that the new tech could fix this? Well, the thing is pretty simple: if we are not talking about human vs. human game modes, you need to compete with bots. Often this is not that (...)

    #gamedev #machine-learning #game-development #artificial-intelligence

  • Preprocess Keras Model for TensorSpace
    https://hackernoon.com/preprocess-keras-model-for-tensorspace-ed5e4db9a2a1?source=rss----3a8144

How to preprocess a Keras model to be TensorSpace compatible for neural network 3D visualization. TensorSpace & Keras: “TensorSpace is a neural network 3D visualization framework. — TensorSpace.org” “Keras is a high-level neural network API. — keras.io” Introduction: you may have learned that TensorSpace can be used to visualize neural networks in 3D, and you might have read my previous introduction to TensorSpace. Maybe you found the model preprocessing a little complicated. Hence, today I want to talk about the model preprocessing for TensorSpace in more detail; to be more specific, how to preprocess a deep learning model built with Keras to be TensorSpace compatible. Fig. 1 — Use TensorSpace to visualize a LeNet built by Keras. What should we have? To make a model built by Keras TensorSpace compatible, (...)

    #python #data-visualization #machine-learning #technology #javascript

  • How Artists Can Set Up Their Own Neural Network — Part 3 — Image Generation
    https://hackernoon.com/how-artists-can-set-up-their-own-neural-network-part-3-image-generation-

How Artists Can Set Up Their Own Neural Network — Part 3 — Image Generation. Alright, so we’ve installed Linux and the neural network; now it’s time to actually run it! First, though, I want to apologize for the delay in getting these last two parts of the #tutorial series out. As I explained in my Skonk Works post, I’ve been learning so fast that it’s actually been kind of hard to find time to digest and write any of it down. For instance, this tutorial series began by teaching you how to install Ubuntu 16.04, but support for Ubuntu 16.04 has just ended and you really should install Ubuntu 18.04, which is what I did after wiping my desktop and turning it into a full-time personal cloud server! This is good because now I have a completely dedicated Linux machine to run neural network batch jobs on (...)

    #artist #artist-neural-network #neural-networks #setup-neural-network

  • How to optimize C and C++ code in 2018—Iurii Krasnoshchok
    http://isocpp.org/feeder/?FeederAction=clicked&feed=All+Posts&seed=http%3A%2F%2Fisocpp.org%2Fblog%2F2

    Are you aware?

    How to optimize C and C++ code in 2018 by Iurii Krasnoshchok

    From the article:

We are still limited by our current hardware. There are numerous areas where it is just not good enough: neural networks and virtual reality, to name a few. There are plenty of devices where battery life is crucial, and we must count every single CPU tick. Even when we’re talking about clouds and microservices and lambdas, there are enormous data centers that consume vast amounts of electricity. Even a boring test routine may quietly start to take 5 hours to run. And this is tricky. Program performance doesn’t matter, only until it does. A modern way to squeeze performance out of silicon is to make hardware more and more (...)

    #News,Articles&_Books,

  • Fake fingerprints can imitate real ones in biometric systems – research
    https://www.theguardian.com/technology/2018/nov/15/fake-fingerprints-can-imitate-real-fingerprints-in-biometric-systems-re

DeepMasterPrints, created by a machine learning technique, have an error rate of only one in five. Researchers have used a neural network to generate artificial fingerprints that work as a “master key” for biometric identification systems, proving fake fingerprints can be created. According to a paper presented at a security conference in Los Angeles, the artificially generated fingerprints, dubbed “DeepMasterPrints” by the researchers from New York University, were able to imitate more than one (...)

    #fraude #biométrie #empreintes

    https://i.guim.co.uk/img/media/132ddbcc93e3444767f5a1d170ca1b8273f9d665/0_0_1079_647/master/1079.png

  • In the Age of A.I., Is Seeing Still Believing ? | The New Yorker
    https://www.newyorker.com/magazine/2018/11/12/in-the-age-of-ai-is-seeing-still-believing

    In a media environment saturated with fake news, such technology has disturbing implications. Last fall, an anonymous Redditor with the username Deepfakes released a software tool kit that allows anyone to make synthetic videos in which a neural network substitutes one person’s face for another’s, while keeping their expressions consistent. Along with the kit, the user posted pornographic videos, now known as “deepfakes,” that appear to feature various Hollywood actresses. (The software is complex but comprehensible: “Let’s say for example we’re perving on some innocent girl named Jessica,” one tutorial reads. “The folders you create would be: ‘jessica; jessica_faces; porn; porn_faces; model; output.’ ”) Around the same time, “Synthesizing Obama,” a paper published by a research group at the University of Washington, showed that a neural network could create believable videos in which the former President appeared to be saying words that were really spoken by someone else. In a video voiced by Jordan Peele, Obama seems to say that “President Trump is a total and complete dipshit,” and warns that “how we move forward in the age of information” will determine “whether we become some kind of fucked-up dystopia.”

    “People have been doing synthesis for a long time, with different tools,” he said. He rattled off various milestones in the history of image manipulation: the transposition, in a famous photograph from the eighteen-sixties, of Abraham Lincoln’s head onto the body of the slavery advocate John C. Calhoun; the mass alteration of photographs in Stalin’s Russia, designed to purge his enemies from the history books; the convenient realignment of the pyramids on the cover of National Geographic, in 1982; the composite photograph of John Kerry and Jane Fonda standing together at an anti-Vietnam demonstration, which incensed many voters after the Times credulously reprinted it, in 2004, above a story about Kerry’s antiwar activities.

    “In the past, anybody could buy Photoshop. But to really use it well you had to be highly skilled,” Farid said. “Now the technology is democratizing.” It used to be safe to assume that ordinary people were incapable of complex image manipulations. Farid recalled a case—a bitter divorce—in which a wife had presented the court with a video of her husband at a café table, his hand reaching out to caress another woman’s. The husband insisted it was fake. “I noticed that there was a reflection of his hand in the surface of the table,” Farid said, “and getting the geometry exactly right would’ve been really hard.” Now convincing synthetic images and videos were becoming easier to make.

    The acceleration of home computing has converged with another trend: the mass uploading of photographs and videos to the Web. Later, when I sat down with Efros in his office, he explained that, even in the early two-thousands, computer graphics had been “data-starved”: although 3-D modellers were capable of creating photorealistic scenes, their cities, interiors, and mountainscapes felt empty and lifeless. True realism, Efros said, requires “data, data, data” about “the gunk, the dirt, the complexity of the world,” which is best gathered by accident, through the recording of ordinary life.

    Today, researchers have access to systems like ImageNet, a site run by computer scientists at Stanford and Princeton which brings together fourteen million photographs of ordinary places and objects, most of them casual snapshots posted to Flickr, eBay, and other Web sites. Initially, these images were sorted into categories (carrousels, subwoofers, paper clips, parking meters, chests of drawers) by tens of thousands of workers hired through Amazon Mechanical Turk. Then, in 2012, researchers at the University of Toronto succeeded in building neural networks capable of categorizing ImageNet’s images automatically; their dramatic success helped set off today’s neural-networking boom. In recent years, YouTube has become an unofficial ImageNet for video. Efros’s lab has overcome the site’s “platform bias”—its preference for cats and pop stars—by developing a neural network that mines, from “life style” videos such as “My Spring Morning Routine” and “My Rustic, Cozy Living Room,” clips of people opening packages, peering into fridges, drying off with towels, brushing their teeth. This vast archive of the uninteresting has made a new level of synthetic realism possible.

    In 2016, the Defense Advanced Research Projects Agency (DARPA) launched a program in Media Forensics, or MediFor, focussed on the threat that synthetic media poses to national security. Matt Turek, the program’s manager, ticked off possible manipulations when we spoke: “Objects that are cut and pasted into images. The removal of objects from a scene. Faces that might be swapped. Audio that is inconsistent with the video. Images that appear to be taken at a certain time and place but weren’t.” He went on, “What I think we’ll see, in a couple of years, is the synthesis of events that didn’t happen. Multiple images and videos taken from different perspectives will be constructed in such a way that they look like they come from different cameras. It could be something nation-state driven, trying to sway political or military action. It could come from a small, low-resource group. Potentially, it could come from an individual.”

    As with today’s text-based fake news, the problem is double-edged. Having been deceived by a fake video, one begins to wonder whether many real videos are fake. Eventually, skepticism becomes a strategy in itself. In 2016, when the “Access Hollywood” tape surfaced, Donald Trump acknowledged its accuracy while dismissing his statements as “locker-room talk.” Now Trump suggests to associates that “we don’t think that was my voice.”

    “The larger danger is plausible deniability,” Farid told me. It’s here that the comparison with counterfeiting breaks down. No cashier opens up the register hoping to find counterfeit bills. In politics, however, it’s often in our interest not to believe what we are seeing.

    As alarming as synthetic media may be, it may be more alarming that we arrived at our current crises of misinformation—Russian election hacking; genocidal propaganda in Myanmar; instant-message-driven mob violence in India—without it. Social media was enough to do the job, by turning ordinary people into media manipulators who will say (or share) anything to win an argument. The main effect of synthetic media may be to close off an escape route from the social-media bubble. In 2014, video of the deaths of Michael Brown and Eric Garner helped start the Black Lives Matter movement; footage of the football player Ray Rice assaulting his fiancée catalyzed a reckoning with domestic violence in the National Football League. It seemed as though video evidence, by turning us all into eyewitnesses, might provide a path out of polarization and toward reality. With the advent of synthetic media, all that changes. Body cameras may still capture what really happened, but the aesthetic of the body camera—its claim to authenticity—is also a vector for misinformation. “Eyewitness video” becomes an oxymoron. The path toward reality begins to wash away.

    #Fake_news #Image #Synthèse

  • Deep learning of aftershock patterns following large earthquakes | Nature
    https://www.nature.com/articles/s41586-018-0438-y

    We use a deep-learning approach to identify a static-stress-based criterion that forecasts aftershock locations without prior assumptions about fault orientation. We show that a neural network trained on more than 131,000 mainshock–aftershock pairs can predict the locations of aftershocks in an independent test dataset of more than 30,000 mainshock–aftershock pairs more accurately (area under curve of 0.849) than can classic Coulomb failure stress change (area under curve of 0.583). We find that the learned aftershock pattern is physically interpretable.
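    The paper's headline comparison is a pair of ROC AUC scores. As a minimal sketch of what that metric measures, here is a rank-based AUC (the probability that a randomly chosen positive outscores a randomly chosen negative) applied to synthetic stand-in data; the paper's real inputs are stress-change features on grid cells around a mainshock, labeled by whether an aftershock occurred there, which are not reproduced here.

    ```python
    def roc_auc(scores, labels):
        """Rank-based AUC: probability a random positive outranks a random negative."""
        pos = [s for s, y in zip(scores, labels) if y == 1]
        neg = [s for s, y in zip(scores, labels) if y == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    # Toy illustration: a "learned" criterion that separates the classes cleanly
    # versus a weaker criterion, mirroring the 0.849-vs-0.583 comparison in spirit.
    labels        = [1, 1, 1, 0, 0, 0]
    learned_score = [0.9, 0.8, 0.6, 0.4, 0.3, 0.1]
    weaker_score  = [0.9, 0.2, 0.6, 0.7, 0.3, 0.5]

    print(roc_auc(learned_score, labels))  # 1.0 (perfect ranking)
    print(roc_auc(weaker_score, labels))   # 5/9, barely better than chance
    ```

    An AUC of 0.5 is chance; the gap between 0.849 and 0.583 is the paper's evidence that the network learned a more predictive stress criterion.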

  • Forecasting Market Movements Using #tensorflow
    https://hackernoon.com/forecasting-market-movements-using-tensorflow-fb73e614cd06?source=rss---

    Multi-Layer Perceptron for Classification: Is it possible to create a neural network for predicting daily market movements from a set of standard trading indicators? In this post we'll be looking at a simple model using TensorFlow to create a framework for testing and development, along with some preliminary results and suggested improvements. The ML Task and Input Features: To keep the basic design simple, it's set up as a binary classification task, predicting whether the next day's close will be higher or lower than the current one, corresponding to a prediction to go either long or short for the next time period. In reality, this could be applied to a bot which calculates and executes a set of positions at the start of a trading day to capture the day's (...)
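    A minimal sketch of the labeling scheme described above: each day gets label 1 if the following close is higher (go long), 0 otherwise (go short). The post's actual indicator set is not reproduced; a trailing moving average stands in as one hypothetical input feature.

    ```python
    def next_day_labels(closes):
        """Binary targets: 1 if the next close is higher than the current one."""
        return [1 if nxt > cur else 0 for cur, nxt in zip(closes, closes[1:])]

    def moving_average(closes, window=3):
        """A stand-in indicator: trailing mean over `window` closes."""
        return [sum(closes[i - window + 1 : i + 1]) / window
                for i in range(window - 1, len(closes))]

    closes = [100.0, 101.5, 101.0, 102.2, 103.0, 102.4]
    print(next_day_labels(closes))   # [1, 0, 1, 1, 0]
    print(moving_average(closes))
    ```

    Note that the last day has no label (there is no following close), so the feature rows and targets must be aligned carefully before training any classifier on them.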

    #forecasting-tensorflow #machine-learning #market-movement

  • How to Initialize weights in a neural net so it performs well?
    https://hackernoon.com/how-to-initialize-weights-in-a-neural-net-so-it-performs-well-3e9302d449

    How to Initialize weights in a neural net so it performs well? A super-fast explanation of Xavier's random weight initialization (http://www.mdpi.com/1099-4300/19/3/101). We know that in a neural network, weights are usually initialized randomly, and that kind of initialization takes a significant number of iterations to converge to the lowest loss and reach the ideal weight matrix. The problem is that this kind of initialization is prone to vanishing or exploding gradients. One way to reduce this problem is to choose the random weight initialization carefully. Xavier's random weight initialization, a.k.a. Xavier's algorithm, factors the size of the network (number of input and output neurons) into the equation and addresses these problems. Xavier Glorot and Yoshua Bengio are the (...)
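    A minimal sketch of the idea: Xavier (Glorot) initialization draws weights with variance scaled by the layer's fan-in and fan-out, so activation and gradient magnitudes stay roughly constant across layers. This shows the normal variant, Var(W) = 2 / (n_in + n_out); the uniform variant instead samples from [-limit, limit] with limit = sqrt(6 / (n_in + n_out)).

    ```python
    import math
    import random

    def xavier_normal(n_in, n_out, seed=0):
        """Return an n_out x n_in weight matrix with Glorot-normal entries."""
        rng = random.Random(seed)
        std = math.sqrt(2.0 / (n_in + n_out))
        return [[rng.gauss(0.0, std) for _ in range(n_in)] for _ in range(n_out)]

    # Empirically check the variance of the drawn weights.
    W = xavier_normal(n_in=256, n_out=128)
    flat = [w for row in W for w in row]
    var = sum(w * w for w in flat) / len(flat)
    print(var)  # close to 2 / (256 + 128) ≈ 0.0052
    ```

    Compare this with naive initialization at a fixed scale, where the per-layer variance of the forward signal grows or shrinks with the layer width, which is exactly the vanishing/exploding-gradient problem the excerpt describes.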

    #andrew-ng #deep-learning #deep-neural-networks #machine-learning #neural-networks