ChatGPT crafts a formidable, undetectable malware; a researcher sounds the alarm
▻https://www.tomsguide.fr/chatgpt-fabrique-un-malware-redoutable-et-indetectable-un-chercheur-donne-
Despite the guardrails put in place by OpenAI, ChatGPT's development skills remain a godsend for hackers. A cybersecurity researcher has just demonstrated that the chatbot was capable of producing undetectable data-stealing malware.
La revanche des neurones
The invention of inductive machines and the artificial intelligence controversy
Dominique Cardon, Jean-Philippe Cointet, Antoine Mazières
in the journal Réseaux, 2018/5
The Revenge of Neurons
▻https://neurovenge.antonomase.fr
Abstract
Since 2010, predictive techniques based on machine learning, and more specifically on neural networks (deep learning), have achieved spectacular feats in image recognition and machine translation, under the banner of "Artificial Intelligence". Yet the place of these techniques within that research field has not always been taken for granted. In the turbulent history of AI, learning techniques based on neural networks, known as "connectionist", were long mocked and ostracized by the so-called "symbolic" school. This article retraces the history of artificial intelligence through the lens of the tension between these two approaches, symbolic and connectionist. From the perspective of a social history of science and technology, it sets out to show how researchers, building on the arrival of massive datasets and the multiplication of computing power, undertook to reformulate the project of symbolic AI by reconnecting with the spirit of the adaptive, inductive machines of the era of #cybernétique.
Keywords
#Réseaux_de_neurones, #Intelligence_artificielle, #Connexionnisme, #Système_expert, #Deep_learning
The French-language PDF is on the site above, which also posts two figures and the abstract.
▻https://neurovenge.antonomase.fr/RevancheNeurones_Reseaux.pdf
Advances in AI are used to spot signs of sexuality
Machines that read faces are coming
Research at Stanford University by Michal Kosinski and Yilun Wang has shown that machine vision can infer sexual orientation by analysing people’s faces.
▻https://www.economist.com/news/science-and-technology/21728614-machines-read-faces-are-coming-advances-ai-are-used-spot-signs
Deep neural networks are more accurate than humans at detecting sexual orientation from facial images.
►https://osf.io/zn79k
The study has limitations. Firstly, images from a dating site are likely to be particularly revealing of sexual orientation. The 91% accuracy rate only applies when one of the two men whose images are shown is known to be gay. Outside the lab the accuracy rate would be much lower. To demonstrate this weakness, the researchers selected 1,000 men at random with at least five photographs, but in a ratio of gay to straight that more accurately reflects the real world; approximately seven in every 100. When asked to select the 100 males most likely to be gay, only 47 of those chosen by the system actually were, meaning that the system ranked some straight men as more likely to be gay than men who actually are.
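The gap between the headline 91% pairwise figure and real-world performance can be made concrete with the article's own numbers; a back-of-the-envelope check in JavaScript:

```javascript
// Working through the figures quoted above: pairwise accuracy vs.
// precision at a realistic base rate.
const sampleSize = 1000;   // men drawn at random
const gayMen = 70;         // ~7 per 100, per the article
const selected = 100;      // top-100 ranked "most likely to be gay"
const truePositives = 47;  // of those, actually gay

const precision = truePositives / selected;
const missed = gayMen - truePositives; // gay men ranked below straight men

console.log(precision); // 0.47
console.log(missed);    // 23
```

So even though the classifier wins most head-to-head comparisons, fewer than half of its top selections are correct once the base rate drops to real-world levels.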
#Le_Pistolet_et_la_Pioche S01E05: Digging into artificial intelligence with #Paul_Jorion
▻https://reflets.info/le-pistolet-et-la-pioche-s01e05-piocher-dans-lintelligence-artificielle-av
AI. Artificial intelligence. Guests come from everywhere to say all the good or all the bad they think of it, the possibilities it will bring, the dangers and threats it represents. Why Le Pistolet et la Pioche […]
#deep_learning #IA #intelligence_artificielle #machine_learning #réseaux_de_neurones_artificiels #simulacre #simulation #singularité
▻https://reflets.info/wp-content/uploads/LPLPS01E05.mp3
▻https://reflets.info/wp-content/uploads/LPLPS01E05.ogg
The Believers - The Chronicle of Higher Education
▻http://chronicle.com/article/The-Believers/190147
“Do you have an Android phone?” Hinton replies.
“Yes.”
“The speech recognition is pretty good, isn’t it?”
A long article on #informatique #recherche in #intelligence_artificielle, and specifically on the field of #deep_learning (#machine_learning, #réseaux_de_neurones) that is everywhere at the moment.
It also talks about #silicon_army :)
LSD neural net:
Large Scale Deep Neural Net visualizing top level features
Inspired by Google’s inceptionism art, my colleagues and I have created an interactive visualization of a hallucinating neural network. You can find it on Twitch at ►http://www.twitch.tv/317070.
This post provides some technical details about our work and is primarily intended for readers who are quite familiar with machine learning and neural networks.
To watch the network dreaming live:
Inceptionism: Going Deeper into Neural Networks
►http://googleresearch.blogspot.co.uk/2015/06/inceptionism-going-deeper-into-neural.html
I absolutely have to find a way to try this ^^ (...)
One way to visualize what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation. Say you want to know what sort of image would result in “Banana.” Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana (see related work in [1], [2], [3], [4]). By itself, that doesn’t work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated.
Instead of exactly prescribing which feature we want the network to amplify, we can also let the network make that decision. In this case we simply feed the network an arbitrary image or photo and let the network analyze the picture. We then pick a layer and ask the network to enhance whatever it detected. Each layer of the network deals with features at a different level of abstraction, so the complexity of features we generate depends on which layer we choose to enhance. For example, lower layers tend to produce strokes or simple ornament-like patterns, because those layers are sensitive to basic features such as edges and their orientations.
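The gradient-ascent idea described above can be sketched without any deep learning framework. The snippet below is a minimal, hypothetical stand-in: `score(img)` plays the role of the network's class score (a template match plus the neighboring-pixel smoothness prior mentioned above), and we nudge a random image uphill using numerical gradients; a real setup would backpropagate through a trained convnet instead.

```javascript
// Toy illustration of activation maximization ("what image looks most
// like a banana to the scorer?") via numerical gradient ascent.
const SIZE = 8;
// Hypothetical "banana" template standing in for a class score.
const template = Array.from({length: SIZE}, (_, i) => Math.sin(i));

function score(img) {
  // Template match: how strongly does the image excite the target?
  let s = 0;
  for (let i = 0; i < SIZE; i++) s += img[i] * template[i];
  // Natural-image prior: penalize uncorrelated neighboring pixels.
  for (let i = 1; i < SIZE; i++) s -= 0.5 * (img[i] - img[i - 1]) ** 2;
  return s;
}

// Central-difference numerical gradient of f at x.
function numericalGradient(f, x, eps = 1e-5) {
  return x.map((_, i) => {
    const xp = x.slice(); xp[i] += eps;
    const xm = x.slice(); xm[i] -= eps;
    return (f(xp) - f(xm)) / (2 * eps);
  });
}

// Start with random noise, then gradually tweak it toward the target.
const img = Array.from({length: SIZE}, () => Math.random() - 0.5);
const before = score(img);
for (let step = 0; step < 200; step++) {
  const g = numericalGradient(score, img);
  for (let i = 0; i < SIZE; i++) img[i] += 0.1 * g[i];
}
const after = score(img);
console.log(after > before); // gradient ascent increases the score
```

The same loop, with the scorer replaced by a chosen layer's activations rather than a class score, is what produces the layer-dependent patterns described above.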
#images #réseaux_de_neurones #machine_learning #psychédélique
ConvNetJS – Deep Learning in your browser
ConvNetJS is a Javascript library for training Deep Learning models (mainly Neural Networks) entirely in your browser. Open a tab and you’re training. No software requirements, no compilers, no installations, no GPUs, no sweat.
ConvNetJS is a Javascript implementation of Neural networks, together with nice browser-based demos. It currently supports:
Common Neural Network modules (fully connected layers, non-linearities)
Classification (SVM/Softmax) and Regression (L2) cost functions
A MagicNet class for fully automatic neural network learning (automatic hyperparameter search and cross-validations)
Ability to specify and train Convolutional Networks that process images
An experimental Reinforcement Learning module, based on Deep Q Learning
via @oncletom et @ismael_hery
ping @robin
#intelligence_artificielle #machine_learning #réseaux_de_neurones #javascript
The Unreasonable Effectiveness of Recurrent Neural Networks
Google scientist Jeff Dean on how #neural_networks are improving everything #Google does - Puget Sound Business Journal
▻http://www.bizjournals.com/seattle/blog/techflash/2013/08/google-scientist-jeff-dean-on-how.html?page=all