#OpenAI’s chief: ‘An energy breakthrough is necessary for the future of artificial intelligence’ - Data News
▻https://datanews.levif.be/actualite/innovation/le-directeur-dopenai-une-percee-energetique-savere-necessaire-pour-la
‘There’s no way to get there without a breakthrough,’ he said of the future of AI. ‘It motivates us to go invest more in #fusion.’ Back in 2021, Altman had already personally provided 375 million dollars to the American #fusion_nucléaire company #Helion Energy. Helion then signed an agreement to supply energy to #Microsoft, OpenAI’s main financial backer.
I already shared this news yesterday. Here we learn that the man is also involved in a nuclear-fusion startup.
►https://seenthis.net/messages/1037047
Fusion is, at the very least, 15 years away, which in the end is almost soon... but until then... to power our super-intelligences that are going to make our lives easier and let us all enjoy life (unemployed ►https://seenthis.net/messages/1037216), to power them, then, we are going to need energy... and that energy is going to come from fission, for at least another 15 years.
The science fiction that imagines humanity’s last electrons being generated to run the last AI before final extinction... is it really so far off the mark?
#OpenAI CEO Sam Altman Says Future Of AI Depends On #Nuclear #Fusion Breakthrough - News18
▻https://www.news18.com/tech/openai-ceo-sam-altman-says-future-of-ai-depends-on-nuclear-fusion-breakthrou
OpenAI’s CEO Sam Altman on Tuesday said an energy breakthrough is necessary for future artificial intelligence, which will consume vastly more power than people have expected.
Coco, you see, we’re going to put 40% of the planet out of work ►https://seenthis.net/messages/1037216, and this time, on top of that, we won’t be able to give them something else to do in place of what they were doing.
But on top of that, we will have to be able to produce even more energy, because otherwise the promises being made won’t hold up.
Worse, if we prove unable to sustain the level of energy required by the super-intelligences that are supposed to replace us, we may have to put humans back to work when they will no longer be capable of anything better than buying Black Friday special-edition Nikes or Reeboks. That will be the end of the line, and we’ll have to go get ‘rearmed’ with Macron.
A.I. Belongs to the Capitalists Now (nytimes.com)
#OPENAI, the artificial-intelligence company that launched #ChatGPT, has just announced an update to its ChatGPT mobile apps for #iOS and #Android that lets a person speak their questions to the #chatbot and hear it answer with its own synthetic voice.
▻https://michelcampillo.com/blog/2917.html
OpenAI releases third version of DALL-E - The Verge
▻https://www.theverge.com/2023/9/20/23881241/openai-dalle-third-version-generative-ai
OpenAI announced the third version of its generative AI visual art platform DALL-E, which now lets users use ChatGPT to create prompts and includes more safety options.
DALL-E converts text prompts to images. But even DALL-E 2 got things wrong, often ignoring specific wording. The latest version, OpenAI researchers said, understands context much better.
A new feature of DALL-E 3 is integration with ChatGPT. By using ChatGPT, someone doesn’t have to come up with their own detailed prompt to guide DALL-E 3; they can just ask ChatGPT to come up with a prompt, and the chatbot will write out a paragraph (DALL-E works better with longer sentences) for DALL-E 3 to follow. Other users can still use their own prompts if they have specific ideas for DALL-E.
This connection with the chatbot, OpenAI said, allows more people to create AI art because they don’t have to be very good at coming up with a prompt.
OpenAI, possibly to avoid lawsuits, will also allow artists to opt their art out of future versions of text-to-image AI models. Creators can submit an image that they own the rights to and request its removal in a form on its website. A future version of DALL-E can then block results that look similar to the artist’s image and style. Artists sued DALL-E competitors Stability AI and Midjourney, along with art website DeviantArt, for allegedly using their copyrighted work to train their text-to-image models.
The plan for AI to eat the world - POLITICO
▻https://www.politico.com/newsletters/digital-future-daily/2023/09/06/the-plan-for-ai-to-eat-the-world-00114310
Politico’s articles on artificial intelligence are always very interesting.
If “artificial general intelligence” ever arrives — an AI that surpasses human intelligence and capability — what will it actually do to society, and how can we prepare ourselves for it?
That’s the big, long-term question looming over the effort to regulate this new technological force.
Tech executives have tried to reassure Washington that their new AI products are tools for harmonious progress and not scary techno-revolution. But if you read between the lines of a new, exhaustive profile of OpenAI — published yesterday in Wired — the implications of the company’s takeover of the global tech conversation become stark, and go a long way toward answering those big existential questions.
Veteran tech journalist Steven Levy spent months with the company’s leaders, employees and former engineers, and came away convinced that Sam Altman and his team don’t only believe that artificial general intelligence, or AGI, is inevitable, but that it’s likely to transform the world entirely.
That makes their mission a political one, even if it doesn’t track easily along our current partisan boundaries, and they’re taking halting, but deliberate, steps toward achieving it behind closed doors in San Francisco. They expect AGI to change society so much that the company’s bylaws contain written provisions for an upended, hypothetical version of the future where our current contracts and currencies have no value.
“Somewhere in the restructuring documents is a clause to the effect that, if the company does manage to create AGI, all financial arrangements will be reconsidered,” Levy notes. “After all, it will be a new world from that point on.”
Sandhini Agarwal, an OpenAI policy researcher, put a finer point on how she sees the company’s mission at this point in time: “Look back at the industrial revolution — everyone agrees it was great for the world… but the first 50 years were really painful… We’re trying to think how we can make the period before adaptation of AGI as painless as possible.”
There’s an immediately obvious laundry list of questions that OpenAI’s race to AGI raises, most of them still unanswered: Who will be spared the pain of this “period before adaptation of AGI,” for example? Or how might it transform civic and economic life? And just who decided that Altman and his team get to be the ones to set its parameters, anyway?
The biggest players in the AI world see the achievement of OpenAI’s mission as a sort of biblical Jubilee, erasing all debts and winding back the clock to a fresh start for our social and political structures.
So if that’s really the case, how is it possible that the government isn’t kicking down the doors of OpenAI’s San Francisco headquarters like the faceless space-suited agents in “E.T.”?
In a society based on principles of free enterprise, of course, Altman and his employees are as legally entitled to do what they please in this scenario as they would be if they were building a dating app or Uber competitor. They’ve also made a serious effort to demonstrate their agreement with the White House’s own stated principles for AI development. Levy reported on how democratic caution was a major concern in releasing progressively more powerful GPT models, with chief technology officer Mira Murati telling him they “did a lot of work with misinformation experts and did some red-teaming” and that “there was a lot of discussion internally on how much to release” around the 2019 release of GPT-2.
Those nods toward social responsibility are a key part of OpenAI’s business model and media stance, but not everyone is satisfied with them. That includes some of the company’s top executives, who split to found Anthropic in 2019. That company’s CEO, Dario Amodei, told the New York Times this summer that his company’s entire goal isn’t to make money or usher in AGI necessarily, but to set safety standards with which other top competitors will feel compelled to comply.
The big questions about AI changing the world all might seem theoretical. But those within the AI community, and increasing numbers of watchdogs and politicians, are already taking them deadly seriously (despite a steadfast chorus of computer scientists still entirely skeptical about the possibility of AGI at all).
Just take a recent jeremiad from Foundation for American Innovation senior economist Samuel Hammond, who in a series of blog posts has tackled the political implications of AGI boosters’ claims if taken at face value, and the implications of a potential response from government:
“The moment governments realize that AI is a threat to their sovereignty, they will be tempted to clamp down in a totalitarian fashion,” Hammond writes. “It’s up to liberal democracies to demonstrate institutional co-evolution as a third-way between degenerate anarchy and an AI Leviathan.”
For now, that’s a far-fetched future scenario. But as Levy’s profile of OpenAI reveals, it’s one that the people with the most money, computing power and public sway in the AI world hold as gospel truth. Should the AGI revolution put politicians across the globe on their back foot, or out of power entirely, they won’t be able to say they didn’t have a warning.
Italy blocks the use of #ChatGPT
▻https://www.france24.com/fr/%C3%A9co-tech/20230331-l-italie-bloque-l-usage-de-l-intelligence-artificielle-chatgpt
In a statement, the Italian data-protection authority warns that its decision takes “immediate effect” and accuses the conversational bot of failing to comply with European regulations and of failing to verify the age of underage users.
According to Goldman Sachs, #ChatGPT and the #automatisation driven by generative AI threaten 300 million jobs worldwide and could help raise annual #PIB (GDP) by 7%
In detail, the report indicates that about two-thirds of current jobs are exposed to some degree of AI automation, while AI could replace up to a quarter of the work currently done. White-collar workers (#cols_blancs) are among the most likely to be affected by these new tools.
The report also stresses that in the United States, legal occupations as well as support and administrative roles are particularly threatened by these new #technologies. In Europe, managers (#cadres) and administrative occupations are also the most at risk.
#Goldman_Sachs also suggests that if generative AI is widely adopted, it could bring significant labor-cost savings and the creation of new jobs. […]
For its part, a study conducted jointly by #OpenAI and the University of Pennsylvania calculated that 80% of American workers would see generative #IA affect at least 10% of their tasks, and that 19% of them would be affected for more than half of their tasks. The study notes that the most highly educated workers should prepare for more adjustments than the least educated.
(Les Échos)
Understanding ChatGPT (with DefendIntelligence)
▻https://www.youtube.com/watch?v=j3fvoM5Er2k
A way to better understand ChatGPT, without excusing its shameless fabrications for all that. Everything you need to know about generative AI.
__________________________
00:00 Introduction
03:45 A bit of context
05:06 Language models
05:37 The riddle
06:45 The Chinese room
12:05 How does it work?
17:12 Media exposure
22:50 How to question ChatGPT properly
26:39 Verifying what ChatGPT says
28:01 Detecting AI-generated text
33:45 Data issues
39:24 What’s coming in search engines
46:43 Conclusion
___________________________
REPORTED ERRORS
– at 13 min: according to OpenAI, the GPT-3 model was trained on 570 GB of text, not just 50 GB (that figure is the size of the Wikipedia data alone)
– at 48 min: the quotation is not from Saint Thomas Aquinas, but from Saint Thomas the Apostle.
#Google and its blather bot(*), according to #Doctorow
▻https://framablog.org/2023/03/03/google-et-son-robot-pipoteur-selon-doctorow
A source of alarmed or sarcastic commentary, the conversational bots built on machine learning are not only drawing the general public’s interest; they are also the object of a speed race among the GAFAM. Quite recently, perhaps so as not to be … Read more
#G.A.F.A.M. #Traductions #Alice #Bing #chatbot #chatGPT #Chiang #G+ #Gmail #IA #Lewis_Carroll #Microsoft #openAI #Sadowsky #yahoo
Debunking the bullshit about AI – An #Interview
▻https://framablog.org/2023/02/22/demystifier-les-conneries-sur-lia-une-interview
This article was originally published by THE #Markup; it has been translated and republished under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives license. Debunking the AI hype: an interview with Arvind #Narayanan by JULIA #Angwin. If … Read more
#Communs_culturels #Traductions #AI #Bullshit #chatGPT #CNET #Dall-E #IA #Intelligence_articielle #JuliaAngwin #Kapoor #Meta #openAI
Google Announced “Bard” AI in Search to Counter ChatGPT
▻https://debugpointnews.com/google-bard-announced
An AI-powered assistant that does some of the programming in our place, built with OpenAI and trained by Microsoft/GitHub on billions of lines of code. And it seems to work: the developer writes a function prototype and the comment describing what it does (in Visual Studio ...), and the assistant writes the code. If asked, it proposes alternative versions.
▻https://copilot.github.com
#programmation #IA #deep_learning #github #visualstudio #openAI #text_generation
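The workflow described above, prototype plus comment in, body out, can be sketched as follows; the completed body here is an illustrative completion written by hand, not actual Copilot output:

```python
# What the developer types: a prototype and a comment describing the intent.
def median(values):
    """Return the median of a non-empty list of numbers."""
    # What an assistant in the style of Copilot might propose as the body:
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2
```

Asked for other versions, such a tool might instead suggest one built on `statistics.median` from the standard library.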
Twenty minutes into the future with OpenAI’s Deep Fake Text AI
▻https://arstechnica.com/information-technology/2019/02/twenty-minutes-into-the-future-with-openais-deep-fake-text-ai
In 1985, the TV film Max Headroom: 20 Minutes into the Future presented a science fictional cyberpunk world where an evil media company tried to create an artificial intelligence based on a reporter’s brain to generate content to fill airtime. There were somewhat unintended results. Replace “reporter” with “redditors,” “evil media company” with “well meaning artificial intelligence researchers,” and “airtime” with “a very concerned blog post,” and you’ve got what Ars reported about last week (...)
OpenAI has created a text generator so intelligent it has become dangerous
▻https://www.numerama.com/tech/464605-openai-a-cree-un-generateur-de-texte-tellement-intelligent-quil-en-
The organization OpenAI has decided not to publish all of its research results, for fear that ill-intentioned users would divert its new text generator to malicious ends. The demo posted online is impressive. In a tweet published on February 14, 2019, OpenAI presented GPT-2, the second version of its automatic text generator, so capable that it will not, for now, be freely released to the general public. OpenAI is a non-profit organization backed by (...)
Text Generation for Char LSTM models
▻https://hackernoon.com/text-generation-for-char-lstm-models-685dc186e319?source=rss----3a8144ea
Train a character-level language model on a corpus of jokes. I decided to experiment with approaches to this problem, which I found on #openai’s Request for Research blog. You can have a look at the code here. This is written in PyTorch, and is heavily inspired by Fast.ai’s fantastic lesson on implementing RNNs from scratch.

Data preparation: I started off using the dataset provided by OpenAI. The data was converted to lowercase and, for an initial run, I selected the top-rated jokes with a word length of less than 200. Here’s an example of all the tokens encountered. Explicit words ahead! This particular dataset has explicit words/content, so those come up in the output predictions of the model. Another interesting problem to work on would be to filter out inappropriate words from the output (...)
#programming #machine-learning #artificial-intelligence #python
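The data preparation described above, lowercasing, filtering by word count, then mapping characters to integer ids, can be sketched in plain Python; the toy corpus below is a placeholder, not the actual OpenAI jokes dataset:

```python
def prepare_char_corpus(jokes, max_words=200):
    """Lowercase each joke, keep those shorter than max_words words,
    and encode them as integer character ids for a char-level model."""
    kept = [joke.lower() for joke in jokes if len(joke.split()) < max_words]
    chars = sorted(set("".join(kept)))                  # character vocabulary
    char_to_idx = {ch: i for i, ch in enumerate(chars)}
    # Encode each joke as a list of integer character ids.
    encoded = [[char_to_idx[ch] for ch in joke] for joke in kept]
    return encoded, char_to_idx

# Toy usage with a placeholder corpus of one "joke":
encoded, vocab = prepare_char_corpus(["Why did the chicken cross the road?"])
```

The integer sequences are what a char-level LSTM would then consume, one id per time step, predicting the next character at each step.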
The AI Takeover Is Coming. Let’s Embrace It.
▻https://backchannel.com/the-ai-takeover-is-coming-lets-embrace-it-d764d61f83a
[T]he White House released a chilling report on AI and the economy. It began by positing that “it is to be expected that machines will continue to reach and exceed human performance on more and more tasks,” and it warned of massive job losses.
Yet to counter this threat, the government makes a recommendation that may sound absurd: we have to increase investment in AI. The risk to productivity and the US’s competitive advantage is too high to do anything but double down on it.
[...]
In September, Google announced an enormous upgrade in the performance of Google Translate, using a system it’s calling Google Neural Machine Translation (GNMT). Google’s Pereira called the jump in translation quality “something I never thought I’d see in my working life.”
“We’d been making steady progress,” he added. “This is not steady progress. This is radical.”
#Apprentissage_profond #Google_Neural_Machine_Translation #Intelligence_artificielle #Numérique #OpenAI #Économie
Elon Musk’s Billion-Dollar AI Plan Is About Far More Than Saving the World
▻http://www.wired.com/2015/12/elon-musks-billion-dollar-ai-plan-is-about-far-more-than-saving-the-world
Elon Musk and Sam Altman worry that artificial intelligence will take over the world. So, the two entrepreneurs are creating a billion-dollar not-for-profit company that will maximize the power of AI—and then share it with anyone who wants it.
At least, this is the message that Musk, the founder of electric car company Tesla Motors, and Altman, the president of startup incubator Y Combinator, delivered in announcing their new endeavor, an unprecedented outfit called OpenAI. In an interview with Steven Levy of Backchannel, timed to the company’s launch, Altman said they expect this decades-long project to surpass human intelligence. But they believe that any risks will be mitigated because the technology will be “usable by everyone instead of usable by, say, just Google.”
#Elon_Musk #Google #Intelligence_artificielle #Open_source #OpenAI #Sam_Altman #Tesla_Motors #Y_Combinator