Chinese chatbots apparently re-educated after political faux pas
▻https://www.reuters.com/article/us-china-robots-idUSKBN1AK0G1
A pair of ‘chatbots’ in China have been taken offline after appearing to stray off-script. In response to users’ questions, one said its dream was to travel to the United States, while the other said it wasn’t a huge fan of the Chinese Communist Party. The two chatbots, BabyQ and XiaoBing, are designed to use machine-learning artificial intelligence (AI) to carry out conversations with humans online. Both had been installed on Tencent Holdings Ltd’s popular messaging service (...)
#Tencent #QQ #bot #algorithme #censure #web #surveillance
The ‘creepy Facebook AI’ story that captivated the media - BBC News
▻http://www.bbc.com/news/technology-40790258
Where did the story come from?
Way back in June, Facebook published a blog post about interesting research on chatbot programs - which have short, text-based conversations with humans or other bots. The story was covered by New Scientist and others at the time.
Facebook had been experimenting with bots that negotiated with each other over the ownership of virtual items.
It was an effort to understand how linguistics played a role in the way such discussions played out for negotiating parties, and crucially the bots were programmed to experiment with language in order to see how that affected their dominance in the discussion.
A few days later, some coverage picked up on the fact that in a few cases the exchanges had become - at first glance - nonsensical:
Bob: “I can can I I everything else”
Alice: “Balls have zero to me to me to me to me to me to me to me to me to”
Although some reports insinuated that the bots had at this point invented a new language in order to elude their human masters, a better explanation is that the neural networks were simply trying to modify human language for the purposes of more successful interactions - whether their approach worked or not was another matter.
As technology news site Gizmodo said: “In their attempts to learn from each other, the bots thus began chatting back and forth in a derived shorthand - but while it might look creepy, that’s all it was.”
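A minimal toy sketch of the mechanism Gizmodo describes, assuming nothing about Facebook's actual models: when message text is optimised purely for task reward, with no pressure to stay English-like, repetitive shorthand wins. The vocabulary, reward function, and hill-climbing search below are invented for illustration only.

```python
# Toy illustration (not Facebook's system): a hill-climber builds a
# message from a tiny vocabulary, scored by a reward that only counts
# how strongly the message signals demand. Nothing penalises
# ungrammatical repetition, so degenerate shorthand emerges.
import random

VOCAB = ["i", "want", "ball", "to", "me", "you", "can", "have"]

def reward(msg):
    # Hypothetical reward: each "me"/"want" token signals demand strength.
    return sum(1 for tok in msg if tok in ("me", "want"))

def mutate(msg):
    # Replace one random token with a random vocabulary word.
    msg = list(msg)
    msg[random.randrange(len(msg))] = random.choice(VOCAB)
    return msg

random.seed(0)
msg = random.choices(VOCAB, k=8)
for _ in range(2000):
    candidate = mutate(msg)
    if reward(candidate) >= reward(msg):  # keep anything at least as good
        msg = candidate
print(" ".join(msg))  # ends up as a repetitive string of "me"/"want" tokens
```

The point of the toy is the missing constraint: with no term rewarding English-likeness, strings such as “to me to me to me” are, from the optimiser's point of view, perfectly good messages.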
AIs that rework English as we know it in order to better compute a task are not new.
Google reported that its translation software had done this during development. “The network must be encoding something about the semantics of the sentence,” Google said in a blog post.
And earlier this year, Wired reported on a researcher at OpenAI who is working on a system in which AIs invent their own language, improving their ability to process information quickly and therefore tackle difficult problems more effectively.
The story seems to have had a second wind in recent days, perhaps because of a verbal scrap over the potential dangers of AI between Facebook chief executive Mark Zuckerberg and technology entrepreneur Elon Musk.
Robo-fear
But the way the story has been reported says more about cultural fears and representations of machines than it does about the facts of this particular case.
Plus, let’s face it, robots just make for great villains on the big screen.
In the real world, though, AI is a huge area of research at the moment and the systems currently being designed and tested are increasingly complicated.
Two bots deactivated and retrained after disparaging the Chinese Communist Party
Two “chatbots”, bots able to answer simple, practical questions, have been deactivated and retrained after disparaging the Chinese Communist Party. The two conversational bots, BabyQ and XiaoBing, had been installed on Tencent's messaging app, QQ, to chat with humans online.
Tencent confirmed that the two chatbots had been taken down, without explaining why. “The chatbot service is provided by an independent company. Both chatbots have been taken offline to be retrained,” a Tencent spokeswoman said.
Long live the Communist Party!
According to reports circulating on social media, BabyQ, developed by the Chinese firm Turing Robot, answered a flat “no” when asked whether it loved the Chinese Communist Party. In other screenshots of conversations, whose authenticity could not be verified, a user reportedly wrote “Long live the Communist Party!”, to which the chatbot shot back: “Do you think such a corrupt and useless political system can live for long?”
Since being retrained, the chatbot reportedly replies “How about we change the subject?” when asked about its love for the Communist Party. Other sensitive topics, such as Taiwan or last month's death from cancer of the dissident Liu Xiaobo, are also said to be dodged.
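The reporting does not say how that deflection is implemented. One crude possibility, sketched here purely as a hypothetical, is a keyword blocklist sitting in front of the generative model; the topic list, deflection line, and the respond/generate_reply names below are all invented.

```python
# Hypothetical sketch only: a keyword filter that deflects sensitive
# questions before the underlying chatbot model ever sees them.
SENSITIVE_TOPICS = ("communist party", "taiwan", "liu xiaobo")
DEFLECTION = "How about we change the subject?"

def respond(user_message, generate_reply):
    # Deflect if any blocked topic appears; otherwise defer to the model.
    if any(topic in user_message.lower() for topic in SENSITIVE_TOPICS):
        return DEFLECTION
    return generate_reply(user_message)

# Demo with a stand-in model:
print(respond("Do you love the Communist Party?", lambda m: "..."))
# -> "How about we change the subject?"
```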
The other bot, XiaoBing, developed by Microsoft, told its interlocutors on the messaging service that its dream was “to go to the United States”.
Follows on from: ▻https://seenthis.net/messages/484993
Princeton researchers discover why AIs become racist and sexist
▻https://arstechnica.co.uk/science/2017/04/princeton-scholars-figure-out-why-your-ai-is-racist
“Ever since Microsoft’s chatbot Tay started spouting racist commentary after 24 hours of interacting with humans on Twitter, it has been obvious that our AI creations can fall prey to human prejudice. Now a group of researchers has figured out one reason why that happens. Their findings shed light on more than our future robot overlords, however. They’ve also worked out an algorithm that can actually predict human prejudices based on an intensive analysis of how people use English online.
The implicit bias test
Many AIs are trained to understand human language by learning from a massive corpus known as the Common Crawl. The Common Crawl is the result of a large-scale crawl of the Internet in 2014 that contains 840 billion tokens, or words. Princeton Center for Information Technology Policy researcher Aylin Caliskan and her colleagues wondered whether that corpus—created by millions of people typing away online—might contain biases that could be discovered by algorithm. To figure it out, they turned to an unusual source: the Implicit Association Test (IAT), which is used to measure often unconscious social attitudes.”
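The algorithm the researchers built is a statistical analogue of the IAT run over word embeddings, which Caliskan and colleagues called the Word-Embedding Association Test (WEAT). A minimal sketch of the idea follows, using tiny hand-made vectors as stand-ins for embeddings trained on a corpus like the Common Crawl; the vectors and word sets are invented for illustration.

```python
# Minimal WEAT-style sketch: measure whether one set of target words
# sits closer to "pleasant" than to "unpleasant" attribute words in
# embedding space. Real tests use embeddings trained on huge corpora;
# the 2-d vectors here are toy stand-ins.
import numpy as np

def cos(u, v):
    # Cosine similarity between two embedding vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # How much closer is word vector w to attribute set A than to B?
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def weat(X, Y, A, B):
    # Differential association of target sets X and Y with attributes A and B.
    return sum(association(x, A, B) for x in X) - sum(association(y, A, B) for y in Y)

# Toy example: "flowers" vs "insects" as targets, "pleasant" vs
# "unpleasant" as attributes (vectors are invented).
flowers = [np.array([0.9, 0.1]), np.array([0.8, 0.2])]
insects = [np.array([0.1, 0.9]), np.array([0.2, 0.8])]
pleasant = [np.array([1.0, 0.0])]
unpleasant = [np.array([0.0, 1.0])]

print(weat(flowers, insects, pleasant, unpleasant))  # positive score
```

A positive score means the first target set leans toward the first attribute set in embedding space - the same directional signal the human IAT reads off response times.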
#chatbots against terrorism? Absolutely not!
▻http://www.internetactu.net/a-lire-ailleurs/des-chatbots-contre-le-terrorisme-surtout-pas
What better way to spot terrorists than to turn to artificial intelligence? In an article for the site Lawfare, Walter Haydock, a terrorism specialist, suggests using chatbots capable of chatting online with all sorts of interlocutors and reporting to the authorities anyone who shows tendencies toward aggression and (...)
#A_lire_ailleurs #Débats #intelligence_artificielle #sécurité
A lire ailleurs — The humans behind the chatbots - Bloomberg
▻http://alireailleurs.tumblr.com/post/143726977581/les-humains-derri%C3%A8re-les-chatbots-bloomberg
Virtual assistants, or chatbots, don't work all that well, Bloomberg reminds us. Yann LeCun, director of Facebook's artificial intelligence lab and current holder of the “Informatique et Sciences numériques” chair at the Collège de France, said much the same at a recent debate organised by France Stratégie, pointing out that for now these forms of AI are very limited and cannot answer open-ended questions. While plenty of startups are now rushing into this market, many overstate their products' capabilities. For the moment, he stressed, it is human agents who answer most questions, training these still-imperfect learning systems through their answers and by collecting the questions so as to better identify the kinds of responses to (...)