• What will Bard and the new Bing change about the Web?
    https://www.ladn.eu/tech-a-suivre/changement-bing-bard-web

    Our hypotheses: a more intuitive and (even) less reliable internet, a more ambiguous relationship with the machine in which the Web becomes our copilot, and plummeting audience numbers.

    This week, Microsoft and Google announced that they are integrating text-generating artificial intelligence into their search engines. Bing will be augmented with a more advanced version of ChatGPT, and Google Search with Bard, a similar chatbot. Does this game of one-upmanship propel us into "a new paradigm", as Microsoft CEO Satya Nadella suggests? Think of a change on the scale of the arrival of the smartphone.

    Let's be wary of grandiose pronouncements. After all, we are still waiting for the great Web revolution promised two years ago by NFTs, and a few months ago by the metaverse. But one thing is certain: if Google, the gateway to the Web for more than 90% of internet users, transforms itself or is replaced by Bing, it will change the game.
    Hello, this is Bing. I can help you :)

    In short: rather than typing a query into your favorite search engine and then sorting things out yourself by browsing from link to link, you will talk to a chatbot that delivers a ready-made answer in natural language. (Links won't disappear, but the chatbots' answer will be put front and center.)

    #Synthetic_media #Intelligence_artificielle #Bing #Google

    • The nightmare. If there is only one answer to each query, there will no longer be any real choice to make. And if there is nothing left to compare, select, or choose between, what is the point of keeping a critical mind?

      And what if, in the end, AI's victory, unable as yet to surpass us in intelligence, came first and foremost from a generalized dumbing-down?

      A new stage in the making of the digital cretin?

  • Deepfakes have got Congress panicking. This is what it needs to do. - MIT Technology Review
    https://www.technologyreview.com/s/613676/deepfakes-ai-congress-politics-election-facebook-social

    In response, the House of Representatives will hold its first dedicated hearing tomorrow on deepfakes, the class of synthetic media generated by AI. In parallel, Representative Yvette Clarke will introduce a bill on the same subject. A new research report released by a nonprofit this week also highlights a strategy for coping when deepfakes and other doctored media proliferate.

    The deepfake bill
    The draft bill, a product of several months of discussion with computer scientists, disinformation experts, and human rights advocates, will include three provisions. The first would require companies and researchers who create tools that can be used to make deepfakes to automatically add watermarks to forged creations.

    The second would require social-media companies to build better manipulation detection directly into their platforms. Finally, the third provision would create sanctions, like fines or even jail time, to punish offenders for creating malicious deepfakes that harm individuals or threaten national security. In particular, it would attempt to introduce a new mechanism for legal recourse if people’s reputations are damaged by synthetic media.
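
    As a rough illustration of the first provision, the crudest form of watermarking would be to stamp provenance metadata into every generated file at creation time. The sketch below is a minimal, assumed example (Python with the Pillow library, writing a PNG text chunk, with a hypothetical tag name and function); the robust, hard-to-strip watermarks a bill like this envisions would require far more than that.

        # Minimal sketch (assumed example, not from the article): stamp AI-provenance
        # metadata into a generated PNG using Pillow. A real watermark would need to
        # survive re-encoding, cropping and deliberate metadata stripping.
        from PIL import Image
        from PIL.PngImagePlugin import PngInfo

        def save_with_provenance(image: Image.Image, path: str, generator: str) -> None:
            # Record that the file is synthetic and which (hypothetical) model produced it.
            meta = PngInfo()
            meta.add_text("ai-provenance", f"synthetic; generator={generator}")
            image.save(path, pnginfo=meta)

        # Usage: tag a placeholder image as the output of a hypothetical "demo-model".
        save_with_provenance(Image.new("RGB", (256, 256), "gray"), "synthetic.png", "demo-model")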

    “This issue doesn’t just affect politicians,” says Mutale Nkonde, a fellow at the Data & Society Research Institute and an advisor on the bill. “Deepfake videos are much more likely to be deployed against women, minorities, people from the LGBT community, poor people. And those people aren’t going to have the resources to fight back against reputational risks.”

    But the technology has advanced at a rapid pace, and the amount of data required to fake a video has dropped dramatically. Two weeks ago, Samsung demonstrated that it was possible to create an entire video out of a single photo; this week university and industry researchers demoed a new tool that allows users to edit someone’s words by typing what they want the subject to say.

    It’s thus only a matter of time before deepfakes proliferate, says Sam Gregory, the program director of Witness. “Many of the ways that people would consider using deepfakes—to attack journalists, to imply corruption by politicians, to manipulate evidence—are clearly evolutions of existing problems, so we should expect people to try on the latest ways to do those effectively,” he says.

    The report outlines a strategy for how to prepare for such an impending future. Many of the recommendations and much of the supporting evidence also align with the proposals that will appear in the House bill.

    The report found that current investments by researchers and tech companies into deepfake generation far outweigh those into deepfake detection. Adobe, for example, has produced many tools to make media alterations easier, including a recent feature for removing objects in videos; it has not, however, provided a foil to them.

    The result is a mismatch between the real-world nature of media manipulation and the tools available to fight it. “If you’re creating a tool for synthesis or forgery that is seamless to the human eye or the human ear, you should be creating tools that are specifically designed to detect that forgery,” says Gregory. The question is how to get toolmakers to redress that imbalance.
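
    To make that imbalance concrete: the counterpart to the tagging sketch above would be a checker that platforms run on upload. This is again a minimal, assumed example, and it mostly illustrates the gap Gregory points at: metadata like this is trivially stripped and its absence proves nothing, which is why he argues for detectors aimed at the forgery itself rather than at cooperative labels.

        # Minimal sketch (assumed example): look for the provenance tag written in the
        # earlier sketch. Absence of the tag proves nothing about authenticity, which is
        # exactly the detection gap described above.
        from PIL import Image

        def looks_declared_synthetic(path: str) -> bool:
            # PNG text chunks surface in the .info mapping when the file is opened.
            with Image.open(path) as img:
                return "ai-provenance" in img.info

        print(looks_declared_synthetic("synthetic.png"))  # True for the file tagged above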

    #Deepfake #Fake_news #Synthetic_media #Médias_de_synthèse #Projet_loi