technology:ai algorithms

  • Google employees are lining up to trash Google’s AI ethics council - MIT Technology Review
    https://www.technologyreview.com/s/613253/googles-ai-council-faces-blowback-over-a-conservative-member

    An interesting point, worth taking into account: the two people targeted are also the only two women on this expert council. Were they chosen strategically by Google to play the gender card, or were they more obvious targets for the protesters because they are women?

    In any case, the space the Heritage Foundation (hard right, neo-management) is taking up in the United States' collective mindset, particularly in the technology sphere, is worth watching closely.

    Almost a thousand Google staff, academic researchers, and other tech industry figures have signed a letter protesting the makeup of an independent council that Google created to guide the ethics of its AI projects.

    The search giant announced the creation of the council last week at EmTech Digital, MIT Technology Review’s event in San Francisco. Known as the Advanced Technology External Advisory Council (ATEAC), it has eight members including economists, philosophers, policymakers, and technologists with expertise in issues like algorithmic bias. It is meant to hold four meetings a year, starting this month, and write reports designed to provide feedback on projects at the company that use artificial intelligence.

    But two of those members proved controversial. One, Dyan Gibbens, is CEO of Trumbull, a company that develops autonomous systems for the defense industry—a contentious choice given that thousands of Google employees protested the company’s decision to supply the US Air Force with AI for drone imaging. The greatest outrage, though, has come over the inclusion of Kay Coles James, president of the Heritage Foundation, a think tank that opposes regulating carbon emissions, takes a hard line on immigration, and has argued against the protection of LGBTQ rights.

    One member of the council, Alessandro Acquisti, a professor at Carnegie Mellon University who specializes in digital privacy issues, announced on March 30th that he wouldn’t be taking up the role. “While I’m devoted to research grappling with key ethical issues of fairness, rights & inclusion in AI, I don’t believe this is the right forum for me to engage in this important work," he tweeted.

    The creation of ATEAC—and the inclusion of Gibbens and James—may in fact have been designed to appease Google’s right-wing critics. At roughly the same time the council was announced, Sundar Pichai, Google’s CEO, was meeting with President Donald Trump. Trump later tweeted: “He stated strongly that he is totally committed to the U.S. Military, not the Chinese Military. [We] also discussed political fairness and various things that Google can do for our Country. Meeting ended very well!”

    But one Google employee involved with drafting the protest letter, who spoke on condition of anonymity, said that James is more than just a conservative voice on the council. “She is a reactionary who denies trans people exist, who endorses radically anti-immigrant positions, and endorses anti-climate-change, anti-science positions.”

    Some noted that AI algorithms can reinforce biases already seen in society; some algorithms have been shown to misidentify transgender people, for example. In that context, “the fact that [James] was included is pretty shocking,” the employee said. “These technologies are shaping our social institutions, our lives, and access to resources. When AI fails, it doesn’t fail for rich white men working at tech companies. It fails for exactly the populations that the Heritage Foundation’s policies are already aiming to harm.”

    Messages posted to a Google internal communications platform criticized the appointment of James especially. According to one post, earlier reported by the Verge and confirmed by the employee, James “doesn’t deserve a Google-legitimized platform, and certainly doesn’t belong in any conversation about how Google tech should be applied to the world.”

    As of 5:30 pm US Eastern time today the public letter, posted to Medium, had been signed by 855 Google employees and 143 other people, including a number of prominent academics. “Not only are James’ views counter to Google’s stated values,” the letter states, “but they are directly counter to the project of ensuring that the development and application of AI prioritizes justice over profit. Such a project should instead place representatives from vulnerable communities at the center of decision-making.”

    #Google #Intelligence_artificielle #Ethique #Politique_USA

  • Understanding How Artificial Intelligence Can Make #blockchain Safer and Smarter
    https://hackernoon.com/understanding-how-artificial-intelligence-can-make-blockchain-safer-and-

    In the AI field, you can build smart Machine Learning algorithms or impressive Neural Networks, but whether this powerful technology can be trusted, and whether it generates intelligent responses, depends on the data you use to train it. As I wrote in my article Understanding The Gold Rush of Scalable and Validated Data powered by Blockchain and Decentralized AI for Hackernoon: “The best results in the AI field are in closed and well-defined ecosystems, such as video games, where AI algorithms have beaten world champions, even in DOTA 2, considered one of the most complex video games in the industry… In open environments like social media or big data, AI algorithms have performed worse, or their results are sometimes dangerously wrong.” Wait, but why? In scripted environments like video games, you (...)

    #smart-contracts #artificial-intelligence #blockchain-ai

  • What worries me about AI – François Chollet – Medium
    https://medium.com/@francois.chollet/what-worries-me-about-ai-ed9df072b704

    This data, in theory, allows the entities that collect it to build extremely accurate psychological profiles of both individuals and groups. Your opinions and behavior can be cross-correlated with those of thousands of similar people, achieving an uncanny understanding of what makes you tick — probably more predictive than what you yourself could achieve through mere introspection (for instance, Facebook “likes” enable algorithms to assess your personality better than your own friends could). This data makes it possible to predict a few days in advance when you will start a new relationship (and with whom), and when you will end your current one. Or who is at risk of suicide. Or which side you will ultimately vote for in an election, even while you’re still feeling undecided. And it’s not just individual-level profiling power — large groups can be even more predictable, as aggregating data points erases randomness and individual outliers.
    Digital information consumption as a psychological control vector

    Passive data collection is not where it ends. Increasingly, social network services are in control of what information we consume. What we see in our newsfeeds has become algorithmically “curated”. Opaque social media algorithms get to decide, to an ever-increasing extent, which political articles we read, which movie trailers we see, who we keep in touch with, and whose feedback we receive on the opinions we express.

    In short, social network companies can simultaneously measure everything about us, and control the information we consume. And that’s an accelerating trend. When you have access to both perception and action, you’re looking at an AI problem. You can start establishing an optimization loop for human behavior, in which you observe the current state of your targets and keep tuning what information you feed them, until you start observing the opinions and behaviors you wanted to see. A large subset of the field of AI — in particular “reinforcement learning” — is about developing algorithms to solve such optimization problems as efficiently as possible, to close the loop and achieve full control of the target at hand — in this case, us. By moving our lives to the digital realm, we become vulnerable to that which rules it — AI algorithms.
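
    To make that loop concrete, here is a minimal, purely illustrative sketch of the kind of optimization described above. The FeedOptimizer class, its weighting rule, and the observe_reaction callback are all invented for this example; nothing here describes any real platform.

    import random

    # Illustrative sketch only (invented names, no real system): close the loop
    # between "perception" (the reactions the platform can measure) and "action"
    # (what it chooses to show), nudging a crude selection policy toward a
    # target behavior chosen by the operator.
    class FeedOptimizer:
        def __init__(self, candidate_items, target_behavior):
            self.candidate_items = list(candidate_items)  # content the system could show
            self.target_behavior = target_behavior        # behavior the operator wants to see
            # Crude "policy": one weight per item; higher weight means shown more often.
            self.weights = {item: 1.0 for item in self.candidate_items}

        def select_content(self):
            # Action: pick what to show next, biased by the current weights.
            return random.choices(
                self.candidate_items,
                weights=[self.weights[item] for item in self.candidate_items],
            )[0]

        def update(self, item, observed_behavior):
            # Perception feeds back in: reinforce content that moved the target
            # toward the desired behavior, slowly decay content that did not.
            reward = 1.0 if observed_behavior == self.target_behavior else -0.2
            self.weights[item] = max(0.01, self.weights[item] + 0.1 * reward)

        def step(self, observe_reaction):
            # One turn of the loop: show something, watch the reaction, adapt.
            item = self.select_content()
            self.update(item, observe_reaction(item))
            return item

    The only point of the sketch is that once the same system holds both the measurement (observe_reaction) and the content selection (select_content), closing the loop is the standard reinforcement-learning recipe the paragraph above alludes to.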

    From an information security perspective, you would call these vulnerabilities: known exploits that can be used to take over a system. In the case of the human mind, these vulnerabilities never get patched; they are just the way we work. They’re in our DNA. The human mind is a static, vulnerable system that will come increasingly under attack from ever-smarter AI algorithms that will simultaneously have a complete view of everything we do and believe, and complete control of the information we consume.

    The issue is not AI itself. The issue is control.

    Instead of letting newsfeed algorithms manipulate the user to achieve opaque goals, such as swaying their political opinions, or maximally wasting their time, we should put the user in charge of the goals that the algorithms optimize for. We are talking, after all, about your news, your worldview, your friends, your life — the impact that technology has on you should naturally be placed under your own control. Information management algorithms should not be a mysterious force inflicted on us to serve ends that run opposite to our own interests; instead, they should be a tool in our hands. A tool that we can use for our own purposes, say, for education and personal growth instead of entertainment.

    Here’s an idea — any algorithmic newsfeed with significant adoption should:

    Transparently convey what objectives the feed algorithm is currently optimizing for, and how these objectives are affecting your information diet.
    Give you intuitive tools to set these goals yourself. For instance, it should be possible for you to configure your newsfeed to maximize learning and personal growth — in specific directions.
    Feature an always-visible measure of how much time you are spending on the feed.
    Feature tools to stay in control of how much time you’re spending on the feed — such as a daily time target, past which the algorithm will seek to get you off the feed (a rough configuration sketch of these four points follows this list).
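
    As a thought experiment only, the four points above could be captured in a user-owned configuration object. The FeedSettings names below are invented for illustration and correspond to no existing product or API.

    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical, user-owned feed settings mirroring the four points above.
    @dataclass
    class FeedSettings:
        # 1. Objectives the feed declares it is optimizing for, visible to the user.
        declared_objectives: List[str] = field(default_factory=lambda: ["learning", "personal growth"])
        # 2. Goals the user sets, e.g. directions in which to maximize learning.
        learning_topics: List[str] = field(default_factory=lambda: ["history", "machine learning"])
        # 3. Always-visible counter of time spent on the feed today, in minutes.
        minutes_spent_today: int = 0
        # 4. Daily time target past which the feed should nudge the user away.
        daily_time_target_minutes: int = 30

        def should_wind_down(self) -> bool:
            # The feed algorithm would consult this to start easing the user off.
            return self.minutes_spent_today >= self.daily_time_target_minutes

    The substantive difference from today’s feeds is not the mechanics but the ownership: every field is set by, and visible to, the user rather than the platform.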

    Augmenting ourselves with AI while retaining control

    We should build AI to serve humans, not to manipulate them for profit or political gain.

    You may be thinking: since a search engine is still an AI layer between us and the information we consume, could it bias its results to attempt to manipulate us? Yes, that risk is latent in every information-management algorithm. But in stark contrast with social networks, market incentives in this case are actually aligned with users’ needs, pushing search engines to be as relevant and objective as possible. If they fail to be maximally useful, there’s essentially no friction for users to move to a competing product. And importantly, a search engine would have a considerably smaller psychological attack surface than a social newsfeed. The threat we’ve profiled in this post requires most of the following to be present in a product:

    Both perception and action: not only should the product be in control of the information it shows you (news and social updates), it should also be able to “perceive” your current mental states via “likes”, chat messages, and status updates. Without both perception and action, no reinforcement learning loop can be established. A read-only feed would only be dangerous as a potential avenue for classical propaganda.
    Centrality to our lives: the product should be a major source of information for at least a subset of its users, and typical users should be spending several hours per day on it. A feed that is auxiliary and specialized (such as Amazon’s product recommendations) would not be a serious threat.
    A social component, enabling a far broader and more effective array of psychological control vectors (in particular social reinforcement). An impersonal newsfeed has only a fraction of the leverage over our minds.
    Business incentives set towards manipulating users and making users spend more time on the product.

    Most AI-driven information-management products don’t meet these requirements. Social networks, on the other hand, are a frightening combination of risk factors.

    #Intelligence_artificielle #Manipulation #Médias_sociaux

    • This is made all the easier by the fact that the human mind is highly vulnerable to simple patterns of social manipulation. Consider, for instance, the following vectors of attack (a toy sketch of a few of them follows the list):

      Identity reinforcement: this is an old trick that has been leveraged since the very first ads in history, and it still works just as well as it did the first time. It consists of associating a given view with markers that you identify with (or wish you did), thus making you automatically side with the target view. In the context of AI-optimized social media consumption, a control algorithm could make sure that you only see content (whether news stories or posts from your friends) where the views it wants you to hold co-occur with your own identity markers, and inversely for views the algorithm wants you to move away from.
      Negative social reinforcement: if you make a post expressing a view that the control algorithm doesn’t want you to hold, the system can choose to only show your post to people who hold the opposite view (maybe acquaintances, maybe strangers, maybe bots), and who will harshly criticize it. Repeated many times, such social backlash is likely to make you move away from your initial views.
      Positive social reinforcement: if you make a post expressing a view that the control algorithm wants to spread, it can choose to only show it to people who will “like” it (it could even be bots). This will reinforce your belief and put you under the impression that you are part of a supportive majority.
      Sampling bias: the algorithm may also be more likely to show you posts from your friends (or the media at large) that support the views it wants you to hold. Placed in such an information bubble, you will be under the impression that these views have much broader support than they do in reality.
      Argument personalization: the algorithm may observe that exposure to certain pieces of content, among people with a psychological profile close to yours, has resulted in the sort of view shift it seeks. It may then serve you with content that is expected to be maximally effective for someone with your particular views and life experience. In the long run, the algorithm may even be able to generate such maximally-effective content from scratch, specifically for you.
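
      Purely as an illustration of the audience-selection vectors above (negative and positive social reinforcement, sampling bias), here is a toy sketch. The predict_agreement and supports_target_view functions, and every other name in it, are invented; nothing here describes how any actual platform works.

      from typing import Callable, List

      # Toy sketch (invented names, no real platform): depending on whether the
      # operator wants to reinforce or discourage the view expressed in a post,
      # show it to audiences predicted to agree or to push back.
      def choose_audience(
          post_view: str,
          candidate_viewers: List[str],
          predict_agreement: Callable[[str, str], float],  # (viewer, view) -> probability of agreeing
          reinforce: bool,
      ) -> List[str]:
          scored = [(viewer, predict_agreement(viewer, post_view)) for viewer in candidate_viewers]
          if reinforce:
              # Positive social reinforcement: surface the post to likely supporters.
              return [viewer for viewer, p in scored if p > 0.8]
          # Negative social reinforcement: surface it to likely harsh critics.
          return [viewer for viewer, p in scored if p < 0.2]

      # Sampling bias is the mirror image: filter what a user sees so that posts
      # supporting the target view are over-represented in their feed.
      def biased_feed(posts: List[str], supports_target_view: Callable[[str], bool]) -> List[str]:
          return [post for post in posts if supports_target_view(post)]

      As the list stresses, these are simple patterns: once a platform can predict reactions, the vectors reduce to cheap filtering decisions over who sees what.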

  • #greeneum Renewable Energy #blockchain
    https://hackernoon.com/greeneum-renewable-energy-blockchain-c686c55a380b?source=rss----3a8144ea

    Crypto Disrupted Episode 21: An interview with the founders of Greeneum.net. An interview with the CEO and COO of Greeneum, a green renewable energy blockchain. In the episode we discuss green energy, blockchain, and AI algorithms for detecting renewable energy. Also available on iTunes. For more episodes of Crypto Disrupted subscribe on YouTube, or subscribe and listen on iTunes. Greeneum Renewable Energy Blockchain was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this (...)

    #renewable-energy #cryptodisrupted #green-energy

  • Rich Data, Poor Data: What the Data Rich Do - That the Data Poor and the Data Middle Class Do Not! - Shelly Palmer
    http://www.shellypalmer.com/2016/05/rich-data-poor-data-data-rich-data-poor-data-middle-class-not

    Generally speaking, there are two kinds of companies in the world: data rich and data poor. The richest of the data rich are easy to name: Google, Facebook, Amazon, Apple. But you don’t need to be at the top of this list to use data to create value. You need to have the tools in place to turn information (data) into action. That’s what the data rich do that the data poor and the data middle class do not.

    – The Data Rich Treat Data Like Cash
    – Data Is More Powerful in the Presence of Other Data
    – The data rich have data councils that meet regularly to evaluate and modify corporate data governance policies.
    – Data rich companies have data science departments that are tasked with turning information (data) into action. These departments are built on three foundational skills: computer science, math and domain expertise.
    – The data rich make their data actionable by using very sophisticated machine learning and AI algorithms.
    – Data-driven thinking is the key to acting data rich.

    #open_data #propriété_intellectuelle #monopoles #data