Articles spotted by Hervé Le Crosnier

Here I take notes on my readings. The quotations come from the cited articles.

  • How an A.I. ‘Cat-and-Mouse Game’ Generates Believable Fake Photos - The New York Times
    https://www.nytimes.com/interactive/2018/01/02/technology/ai-generated-photos.html

    At a lab in Finland, a small team of Nvidia researchers recently built a system that can analyze thousands of (real) celebrity snapshots, recognize common patterns, and create new images that look much the same — but are still a little different. The system can also generate realistic images of horses, buses, bicycles, plants and many other common objects.

    The project is part of a vast and varied effort to build technology that can automatically generate convincing images — or alter existing images in equally convincing ways. The hope is that this technology can significantly accelerate and improve the creation of computer interfaces, games, movies and other media, eventually allowing software to create realistic imagery in moments rather than the hours — if not days — it can now take human developers.

    In recent years, thanks to a breed of algorithm that can learn tasks by analyzing vast amounts of data, companies like Google and Facebook have built systems that can recognize faces and common objects with an accuracy that rivals the human eye. Now, these and other companies, alongside many of the world’s top academic A.I. labs, are using similar methods to both recognize and create.

    As it built a system that generates new celebrity faces, the Nvidia team went a step further in an effort to make them far more believable. It set up two neural networks — one that generated the images and another that tried to determine whether those images were real or fake. These are called generative adversarial networks, or GANs. In essence, one system does its best to fool the other — and the other does its best not to be fooled.

    “The computer learns to generate these images by playing a cat-and-mouse game against itself,” said Mr. Lehtinen.
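    [Note, not from the article: a minimal sketch of the adversarial "cat-and-mouse" setup described above, written in PyTorch. The tiny fully connected networks, the Gaussian stand-in for "real" data and the hyperparameters are illustrative placeholders of my own, not Nvidia's actual models.]

      import torch
      import torch.nn as nn

      latent_dim, data_dim = 16, 2   # toy sizes, purely illustrative

      # Generator: maps random noise to a fake "sample"; Discriminator: scores real vs. fake.
      G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
      D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

      opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
      opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
      bce = nn.BCEWithLogitsLoss()

      def real_batch(n=128):
          # Stand-in for the real training photos: points from a fixed Gaussian.
          return torch.randn(n, data_dim) * 0.5 + torch.tensor([2.0, -1.0])

      for step in range(1000):
          # 1) The discriminator tries not to be fooled: label real as 1, generated as 0.
          real = real_batch()
          fake = G(torch.randn(real.size(0), latent_dim)).detach()
          loss_d = (bce(D(real), torch.ones(real.size(0), 1))
                    + bce(D(fake), torch.zeros(fake.size(0), 1)))
          opt_d.zero_grad(); loss_d.backward(); opt_d.step()

          # 2) The generator tries to fool the discriminator: it wants its output labelled 1.
          fake = G(torch.randn(128, latent_dim))
          loss_g = bce(D(fake), torch.ones(128, 1))
          opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    The two optimisation steps alternate, so each network keeps adapting to the other, which is the "playing against itself" the researchers describe.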

    A second team of Nvidia researchers recently built a system that can automatically alter a street photo taken on a summer’s day so that it looks like a snowy winter scene. Researchers at the University of California, Berkeley, have designed another that learns to convert horses into zebras and Monets into Van Goghs. DeepMind, a London-based A.I. lab owned by Google, is exploring technology that can generate its own videos. And Adobe is fashioning similar machine learning techniques with an eye toward pushing them into products like Photoshop, its popular image design tool.

    Trained designers and engineers have long used technology like Photoshop and other programs to build realistic images from scratch. This is what movie effects houses do. But it is becoming easier for machines to learn how to generate these images on their own, said Durk Kingma, a researcher at OpenAI, the artificial intelligence lab founded by Tesla chief executive Elon Musk and others, who specializes in this kind of machine learning.

    “We now have a model that can generate faces that are more diverse and in some ways more realistic than what we could program by hand,” he said, referring to Nvidia’s work in Finland.

    But new concerns come with the power to create this kind of imagery.

    With so much attention on fake media these days, we could soon face an even wider range of fabricated images than we do today.

    “The concern is that these techniques will rise to the point where it becomes very difficult to discern truth from falsity,” said Tim Hwang, who previously oversaw A.I. policy at Google and is now director of the Ethics and Governance of Artificial Intelligence Fund, an effort to fund ethical A.I. research. “You might believe that accelerates problems we already have.”

    But many of us still put a certain amount of trust in photos and videos that we don’t necessarily put in text or word of mouth. Mr. Hwang believes the technology will evolve into a kind of A.I. arms race pitting those trying to deceive against those trying to identify the deception.

    Mr. Lehtinen downplays the effect his research will have on the spread of misinformation online. But he does say that, as time goes on, we may have to rethink the very nature of imagery. “We are approaching some fundamental questions,” he said.

    #Image #Fake_news #Post_truth #Intelligence_artificielle #AI_war #Désinformation