The Biggest Misconceptions about Artificial Intelligence

  • The Biggest Misconceptions about Artificial Intelligence
    http://knowledge.wharton.upenn.edu/article/whats-behind-the-hype-about-artificial-intelligence-separat

    Knowledge@Wharton: Interest in artificial intelligence has picked up dramatically in recent times. What is driving this hype? What are some of the biggest prevailing misconceptions about AI and how would you separate the hype from reality?

    Apoorv Saxena: There are multiple factors driving the strong interest in AI recently. The first is significant gains on long-standing problems in AI, mostly problems of image and speech understanding. For example, computers are now able to transcribe human speech better than humans. Speech understanding has been worked on for 20 to 30 years, and only recently have we seen significant gains in that area. The same is true of image understanding, and of specific parts of human language understanding such as translation.

    Such progress has been made possible by applying an old technique called deep learning and running it on highly distributed and scalable computing infrastructure. This, combined with the availability of large amounts of data to train these algorithms and easy-to-use tools for building AI models, is the major factor driving interest in AI.

    It is natural for people to project the recent successes in specific domains into the future. Some are even projecting them into domains where deep learning has not been very effective, and that creates a lot of misconceptions and hype. AI is still quite bad at learning new concepts and at extending that learning to new contexts.

    For example, AI systems still require a tremendous amount of data to train. Humans do not need to look at 40,000 images of cats to identify a cat. A human child can look at two cats, figure out what a cat is and what a dog is, and distinguish between them. So today’s AI systems are nowhere close to replicating how the human mind learns. That will be a challenge for the foreseeable future.

    While everything here is clean, the last sentence is striking: “That will be a challenge for the foreseeable future.” The point is not to give up on computers understanding and creating concepts, but to give themselves the time to do it tomorrow. In World Without Mind, Franklin Foer discusses at length this desire of Google’s leaders to build a computer that would be an improved human brain. But what about emotions, feelings, and the physical relationship to the world?

    As I mentioned, in narrow domains such as speech recognition AI is now more sophisticated than the best humans, while in more general domains that require reasoning, context understanding and goal seeking, AI can’t even compete with a five-year-old child. I think AI systems have still not figured out how to do unsupervised learning well, how to train on a very limited amount of data, or how to train without a lot of human intervention. That is going to be the main thing that remains difficult. None of the recent research has shown much progress here.

    Knowledge@Wharton: In addition to machine learning, you also referred a couple of times to deep learning. For many of our readers who are not experts in AI, could you explain how deep learning differs from machine learning? What are some of the biggest breakthroughs in deep learning?

    Saxena: Machine learning is much broader than deep learning. Machine learning is essentially a computer learning patterns from data and using the learned patterns to make predictions on new data. Deep learning is a specific machine learning technique.
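
    As a minimal sketch of that “learn patterns, then predict on new data” loop, assuming scikit-learn is available; the features and labels below are invented purely for illustration:

    ```python
    # A toy "learn patterns, then predict on new data" loop.
    # Assumes scikit-learn is installed; the features and labels are
    # invented purely for illustration.
    from sklearn.linear_model import LogisticRegression

    # Hypothetical training data: [size, brightness] of an image -> cat (1) or not (0)
    X_train = [[0.9, 0.2], [0.8, 0.3], [0.2, 0.9], [0.1, 0.8]]
    y_train = [1, 1, 0, 0]

    model = LogisticRegression()
    model.fit(X_train, y_train)        # learn patterns from the data

    X_new = [[0.85, 0.25]]
    print(model.predict(X_new))        # apply the learned patterns to new data
    ```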

    Deep learning is loosely modeled on how human brains are thought to learn. It uses neural networks, layered networks of neurons, to learn patterns from data and make predictions. Just as humans use different levels of conceptualization to understand a complex problem, each layer of neurons abstracts out a specific feature or concept in a hierarchical way to understand complex patterns. And the beauty of deep learning is that, unlike other machine learning techniques whose prediction performance plateaus when you feed in more training data, deep learning performance continues to improve with more data. Deep learning has also been applied to very different sets of problems and shown good performance, which is typically not possible with other techniques. All of this makes deep learning special, especially for problems where you can easily throw in more data and computing power.
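
    To make the layered structure concrete, here is a minimal sketch in PyTorch; the layer sizes, and the reading of each layer as one “level of abstraction,” are illustrative assumptions, not a prescribed architecture:

    ```python
    # A minimal layered ("deep") network in PyTorch. The sizes and the
    # reading of each layer as one level of abstraction are illustrative
    # assumptions, not a prescribed architecture.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(784, 128),   # first layer: low-level features (edges, textures)
        nn.ReLU(),
        nn.Linear(128, 64),    # deeper layer: higher-level concepts (shapes, parts)
        nn.ReLU(),
        nn.Linear(64, 10),     # output layer: one score per class
    )

    x = torch.randn(1, 784)    # a fake flattened 28x28 image
    logits = model(x)
    print(logits.shape)        # torch.Size([1, 10])
    ```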

    Knowledge@Wharton: The other area of AI that gets a lot of attention is natural language processing, often involving intelligent assistants, like Siri from Apple, Alexa from Amazon, or Cortana from Microsoft. How are chatbots evolving, and what is the future of the chatbot?

    Saxena: This is a huge area of investment for all of the big players, as you mentioned, and it is generating a lot of interest for two reasons. First, talking to a machine and having it understand you is the most natural way for people to interact with machines. Second, it has led to a fundamental shift in how computers and humans interact. Almost everybody believes this will be the next big thing.

    Still, early versions of this technology have been very disappointing. The reason is that natural language understanding is extremely hard. You can’t just use one technique or deep learning model, as you can for image or speech understanding, and solve everything. Natural language understanding is inherently different: understanding language or conversation requires huge amounts of human and background knowledge. Because there is so much context associated with language, unless you teach your agent all of that knowledge, it falls short in understanding even basic things.
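
    A deliberately naive keyword-matching bot, sketched below with entirely invented rules and phrases, illustrates how an agent without background knowledge falls short:

    ```python
    # A deliberately naive keyword-matching "chatbot". Everything here is
    # invented for illustration; the point is that without background
    # knowledge, simple pattern matching falls short.
    RULES = {
        "weather": "It is sunny today.",
        "hello": "Hi there!",
    }

    def reply(utterance: str) -> str:
        for keyword, answer in RULES.items():
            if keyword in utterance.lower():
                return answer
        return "Sorry, I don't understand."

    print(reply("Hello!"))                    # Hi there!
    print(reply("Do I need an umbrella?"))    # Sorry, I don't understand.
    # Answering the second question requires knowing that umbrellas relate
    # to rain and rain to weather -- exactly the background knowledge
    # Saxena says today's agents lack.
    ```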

    Competition in the age of vectoralism:

    Knowledge@Wharton: That sounds incredible. Now, a number of big companies are active in AI — especially Google, Microsoft, Amazon, Apple in the U.S., or in China you have Baidu, Alibaba and Tencent. What opportunities exist in AI for startups and smaller companies? How can they add value? How do you see them fitting into the broader AI ecosystem?

    Saxena: I see value for both big and small companies. A lot of the big players’ investments in this space go into building platforms on which others can build AI applications. Almost every player in the AI space, including Google, has created such a platform, similar to what they did with Android and mobile platforms. Once the platform is built, others can build applications on top of it, and that is clearly where the focus is. There is a big opportunity for startups to build applications using some of the open-source tools created by these big players.

    The second area where startups will continue to play is in what we call vertical domains. A big part of the advances in AI will come from combining good algorithms with proprietary data. Even though the Googles of the world and other big players have some of the best engineering talent and algorithms, they don’t have the data. So, for example, a company that has proprietary health care data can build a health care AI startup and compete with the big players. The same is true of industries such as finance or retail.

    #Intelligence_artificielle #vectorialisme #deep_learning #Google