movie:a.i

  • A Machine May Not Take Your Job, but One Could Become Your Boss
    The New York Times, June 23, 2019, Kevin Roose
    https://www.nytimes.com/2019/06/23/technology/artificial-intelligence-ai-workplace.html

    The goal of automation has always been efficiency. What if artificial intelligence sees humanity itself as the thing to be optimized?

    Cogito is one of several A.I. programs used in call centers and other workplaces. The goal, according to Joshua Feast, Cogito’s chief executive, is to make workers more effective by giving them real-time feedback.

    Amazon uses complex algorithms to track worker productivity in its fulfillment centers, and can automatically generate the paperwork to fire workers who don’t meet their targets, as The Verge uncovered this year. (Amazon has disputed that it fires workers without human input, saying that managers can intervene in the process.)
    [The Verge’s article: https://www.theverge.com/2019/4/25/18516004/amazon-warehouse-fulfillment-centers-productivity-firing-terminations]

    There were no protests at MetLife’s call center. Instead, the employees I spoke with seemed to view their Cogito software as a mild annoyance at worst. Several said they liked getting pop-up notifications during their calls, although some said they had struggled to figure out how to get the “empathy” notification to stop appearing. (Cogito says the A.I. analyzes subtle differences in tone between the worker and the caller and encourages the worker to try to mirror the customer’s mood.)
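
    The article does not describe Cogito’s internals, but the “mirroring” cue can be pictured as a simple comparison of prosodic features between the two sides of a call. A minimal, purely hypothetical sketch in Python (the feature names, thresholds, and messages are assumptions for illustration, not Cogito’s actual method):

    ```python
    # Hypothetical illustration of a real-time "empathy" cue: compare simple
    # prosodic features (speaking rate, energy) of agent vs. caller and nudge
    # the agent when they diverge too much. Not Cogito's actual algorithm.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ProsodyWindow:
        words_per_minute: float
        mean_energy: float  # e.g. normalized RMS loudness, 0..1

    def empathy_cue(agent: ProsodyWindow, caller: ProsodyWindow,
                    rate_gap: float = 40.0, energy_gap: float = 0.25) -> Optional[str]:
        """Return a pop-up message when the agent's delivery diverges from the caller's."""
        if abs(agent.words_per_minute - caller.words_per_minute) > rate_gap:
            return "Empathy cue: try matching the caller's pace."
        if abs(agent.mean_energy - caller.mean_energy) > energy_gap:
            return "Empathy cue: try matching the caller's tone."
        return None

    # A rushed, loud agent on a call with a slow, quiet caller triggers the cue.
    print(empathy_cue(ProsodyWindow(190, 0.8), ProsodyWindow(120, 0.4)))
    ```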

    MetLife, which uses the software with 1,500 of its call center employees, says using the app has increased its customer satisfaction by 13 percent.

    And the article’s little comment on tech:

    Using A.I. to correct for human biases is a good thing. But as more A.I. enters the workplace, executives will have to resist the temptation to use it to tighten their grip on their workers and subject them to constant surveillance and analysis. If that happens, it won’t be the robots staging an uprising.

    [emphasis is mine]

    You can’t stop progress. It’s 2019 and the deadly old adage still runs rampant (even in an article that means to be critical...).

  • The “Drunk Pelosi” video shows that cheapfakes can be as damaging as deepfakes.
    https://slate.com/technology/2019/06/drunk-pelosi-deepfakes-cheapfakes-artificial-intelligence-disinformation.html

    The A.I.-generated “deepfake” video implicitly but unmistakably calls for Facebook to make a public statement on its content moderation policies. The platform has long been criticized for permitting the spread of disinformation and harassment, but the criticism became particularly acute recently, when the company said that it would not remove the “Drunk Pelosi” video.

    On Thursday, the House Permanent Select Committee on Intelligence will hold an open hearing on A.I. and the potential threat of deepfake technology to Americans. Many technology researchers believe that deepfakes—realistic-looking content developed using machine learning algorithms—will herald a new era of information warfare. But as the “Drunk Pelosi” video shows, slight edits of original videos may be even more difficult to detect and debunk, creating a cascade of benefits for those willing to use these digital dirty tricks.

    The video, posted to a self-described news Facebook page with a fan base of about 35,000, depicted Nancy Pelosi slurring her words and sounding intoxicated. However, when compared with another video from the same event, it was clear even to nonexperts that it had been slowed down to produce the “drunken” effect. Call it a “cheapfake”—it was modified only very slightly. While the altered video garnered some significant views on Facebook, it was only after it was amplified by President Donald Trump and other prominent Republicans on Twitter that it became a newsworthy issue. The heightened drama surrounding this video raises interesting questions not only about platform accountability but also about how to spot disinformation in the wild.

    “Cheapfakes” rely on free software that allows manipulation through easy conventional editing techniques like speeding, slowing, and cutting, as well as nontechnical manipulations like restaging or recontextualizing existing footage that are already causing problems. Cheapfakes already call into question the methods of evidence that scientists, courts, and newsrooms traditionally use to call for accountability.
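
    For a sense of how low the technical bar is, here is a minimal sketch of the slow-down trick driven from Python with ffmpeg (it assumes ffmpeg is installed and on the PATH; the 0.75 speed factor and filenames are illustrative guesses, not the actual parameters used on the Pelosi video):

    ```python
    # Sketch of the "cheapfake" slow-down edit described above: stretch the video
    # timestamps and slow the audio by the same factor (pitch preserved).
    import subprocess

    def slow_down(src: str, dst: str, factor: float = 0.75) -> None:
        filters = (
            f"[0:v]setpts={1 / factor:.4f}*PTS[v];"  # longer timestamps -> slower video
            f"[0:a]atempo={factor}[a]"               # atempo accepts factors from 0.5 to 2.0
        )
        subprocess.run(
            ["ffmpeg", "-y", "-i", src, "-filter_complex", filters,
             "-map", "[v]", "-map", "[a]", dst],
            check=True,
        )

    slow_down("speech_original.mp4", "speech_slowed.mp4")
    ```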

    Many will never know the video was a fake, but the advantages it gave to pundits will echo into the future. It’s a recent example of what legal theorists Bobby Chesney and Danielle Citron call the liar’s dividend. Those wishing to deny the truth can create disinformation to support their lie, while those caught behaving badly can write off the evidence of bad behavior as disinformation. In a new survey from Pew Research Center, 63 percent of respondents said that they believe altered video and images are a significant source of confusion when it comes to interpreting news quality. That loss of trust works in favor of those willing to lie, defame, and harass to gain attention.

    As Daniel Kreiss and others have pointed out, people don’t just share content because they believe it. They do it for a host of reasons, not the least of which is simply because a message speaks to what users see as an implicit truth of the world even as they know it is not factually true. Researchers have found that creating and sharing hateful, false, or faked content is often rewarded on platforms like Facebook.

    The looming threat of the deepfake is worth attention—from politicians, like at the upcoming hearing; from journalists; from researchers; and especially from the public that will ultimately be the audience for these things. But make no mistake: Disinformation doesn’t have to be high tech to cause serious damage.

    #Fake_news #Deep_fake #Cheap_fake #Nancy_Pelosi #Médias_sociaux

  • Siri and Alexa Reinforce Gender Bias, U.N. Finds - The New York Times
    https://www.nytimes.com/2019/05/22/world/siri-alexa-ai-gender-bias.html

    Why do most virtual assistants that are powered by artificial intelligence — like Apple’s Siri and Amazon’s Alexa system — by default have female names, female voices and often a submissive or even flirtatious style?

    The problem, according to a new report released this week by Unesco, stems from a lack of diversity within the industry that is reinforcing problematic gender stereotypes.

    “Obedient and obliging machines that pretend to be women are entering our homes, cars and offices,” Saniye Gulser Corat, Unesco’s director for gender equality, said in a statement. “The world needs to pay much closer attention to how, when and whether A.I. technologies are gendered and, crucially, who is gendering them.”

    One particularly worrying reflection of this is the “deflecting, lackluster or apologetic responses” that these assistants give to insults.

    The report borrows its title — “I’d Blush if I Could” — from a standard response from Siri, the Apple voice assistant, when a user hurled a gendered expletive at it. When a user tells Alexa, “You’re hot,” her typical response has been a cheery, “That’s nice of you to say!”

    Siri’s response was recently altered to a more flattened “I don’t know how to respond to that,” but the report suggests that the technology remains gender biased, arguing that the problem starts with engineering teams that are staffed overwhelmingly by men.

    “Siri’s ‘female’ obsequiousness — and the servility expressed by so many other digital assistants projected as young women — provides a powerful illustration of gender biases coded into technology products,” the report found.

    Amazon’s Alexa, named for the ancient library of Alexandria, is unmistakably female. Microsoft’s Cortana was named after an A.I. character in the Halo video game franchise that projects itself as a sensuous, unclothed woman. Apple’s Siri is a Norse name that means “beautiful woman who leads you to victory.” The Google Assistant system, also known as Google Home, has a gender-neutral name, but the default voice is female.

    Baked into their humanized personalities, though, are generations of problematic perceptions of women. These assistants are putting a stamp on society as they become common in homes across the world, and can influence interactions with real women, the report warns. As the report puts it, “The more that culture teaches people to equate women with assistants, the more real women will be seen as assistants — and penalized for not being assistant-like.”

    #Assistants_vocaux #Genre #Féminisme #IA #Intelligence_artificielle #Voix

  • Warnings of a Dark Side to A.I. in Health Care - The New York Times
    https://www.nytimes.com/2019/03/21/science/health-medicine-artificial-intelligence.html

    Similar forms of artificial intelligence are likely to move beyond hospitals into the computer systems used by health care regulators, billing companies and insurance providers. Just as A.I. will help doctors check your eyes, lungs and other organs, it will help insurance providers determine reimbursement payments and policy fees.

    Ideally, such systems would improve the efficiency of the health care system. But they may carry unintended consequences, a group of researchers at Harvard and M.I.T. warns.

    In a paper published on Thursday in the journal Science, the researchers raise the prospect of “adversarial attacks” — manipulations that can change the behavior of A.I. systems using tiny pieces of digital data. By changing a few pixels on a lung scan, for instance, someone could fool an A.I. system into seeing an illness that is not really there, or not seeing one that is.
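
    The scenario in the Science paper can be made concrete with the fast gradient sign method (FGSM), one standard way of crafting such perturbations. The sketch below assumes a generic PyTorch image classifier called `model`; it illustrates the general idea, not the specific attacks studied by the Harvard and M.I.T. group:

    ```python
    # Minimal FGSM sketch: nudge each pixel slightly in the direction that increases
    # the classifier's loss, producing a scan that looks unchanged to a human reader
    # but can flip the model's prediction. `model` is any PyTorch classifier (assumed).
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model: torch.nn.Module, image: torch.Tensor,
                     label: torch.Tensor, epsilon: float = 0.01) -> torch.Tensor:
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)        # loss w.r.t. the true label
        loss.backward()
        adversarial = image + epsilon * image.grad.sign()  # tiny step along the gradient sign
        return adversarial.clamp(0.0, 1.0).detach()        # keep pixels in a valid range
    ```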

    _Software developers and regulators must consider such scenarios, as they build and evaluate A.I. technologies in the years to come, the authors argue. The concern is less that hackers might cause patients to be misdiagnosed, although that potential exists. More likely is that doctors, hospitals and other organizations could manipulate the A.I. in billing or insurance software in an effort to maximize the money coming their way._

    In turn, changing such diagnoses one way or another could readily benefit the insurers and health care agencies that ultimately profit from them. Once A.I. is deeply rooted in the health care system, the researchers argue, business will gradually adopt behavior that brings in the most money.

    The end result could harm patients, Mr. Finlayson said. Changes that doctors make to medical scans or other patient data in an effort to satisfy the A.I. used by insurance companies could end up on a patient’s permanent record and affect decisions down the road.

    Already doctors, hospitals and other organizations sometimes manipulate the software systems that control the billions of dollars moving across the industry. Doctors, for instance, have subtly changed billing codes — for instance, describing a simple X-ray as a more complicated scan — in an effort to boost payouts.

    Hamsa Bastani, an assistant professor at the Wharton Business School at the University of Pennsylvania, who has studied the manipulation of health care systems, believes it is a significant problem. “Some of the behavior is unintentional, but not all of it,” she said.

    #Intelligence_Artificielle #Médecine #Manipulation #Economie_santé

  • How To Stay Relevant in the Age of A.I.
    https://hackernoon.com/how-to-stay-relevant-in-the-age-of-a-i-7c3c3eba195b?source=rss----3a8144

    Knowledge isn’t power. Almost everything we know is either currently on the internet, or will be soon. If we reach a stage when every person and machine has access to the same information, what will set you apart from the pack? Your power is through connection. One way you’ll stand out is by cultivating an ability to communicate knowledge in a more compelling way than other people or machines can do it. We yearn for human connection, yet few people develop their skills in this area. Presenting and speaking and telling stories is going to have a far greater application in the future than just knowing facts. For example: 1. Your CV will get you the interview, but your #communication skills land the job. The person with the best CV doesn’t always get hired. If they did we could cancel all job (...)

    #public-speaking #artificial-intelligence #technology #business

  • The Rise of Computational #sensemaking
    https://hackernoon.com/the-rise-of-computational-sensemaking-bad0d0ff7bea?source=rss----3a8144e

    Overcoming the barrier of meaning in artificial intelligence by deploying #sensors in the real world. (Photo credit: Simon Jowett.) No self-respecting artificial intelligence researcher would claim A.I. is going to take over the role of humans in this world any time soon. There are still many fundamental A.I. challenges that stop the rise of superhuman computers. These challenges will not disappear, not even in this data — or if you will “A.I. revolution” — that we are facing right now. Through the example of an upcoming new field of science we are exploring at the computational sensemaking lab run by my colleague Martin Atzmüller I want to show what work is actually being done on the edge of fundamental and applied A.I. and why this is relevant for science, industry and society. In this post I will argue (...)

    #computational-sensemaking #ai #artificial-intelligence

  • Small Business Can’t Compete on Social Media. We’ve Built an A.I. Solution to Fix That
    https://hackernoon.com/small-business-cant-compete-on-social-media-we-ve-built-an-a-i-solution-

    Artificial intelligence and social media are two of the most powerful technologies that exist today. A.I. brings the power of machine learning to increasingly complex tasks, and social media connects people across a fragmented world. Each will have a profound impact on both marketing and our society as a whole in the near future. That’s why I founded Sensai, an A.I.-powered social media marketing platform for small businesses and creatives. Our goal is to provide businesses with a powerful social media marketing solution that offers valuable insights to help them better reach target audiences in an ever-changing social media landscape. The path that brought me to Sensai is winding (I’ve helped build companies on three continents), unconventional (I went from being an environmental lawyer (...)

    #social-media #artificial-intelligence #small-business #small-business-marketing #social-media-marketing

  • In the Age of A.I., Is Seeing Still Believing? | The New Yorker
    https://www.newyorker.com/magazine/2018/11/12/in-the-age-of-ai-is-seeing-still-believing

    In a media environment saturated with fake news, such technology has disturbing implications. Last fall, an anonymous Redditor with the username Deepfakes released a software tool kit that allows anyone to make synthetic videos in which a neural network substitutes one person’s face for another’s, while keeping their expressions consistent. Along with the kit, the user posted pornographic videos, now known as “deepfakes,” that appear to feature various Hollywood actresses. (The software is complex but comprehensible: “Let’s say for example we’re perving on some innocent girl named Jessica,” one tutorial reads. “The folders you create would be: ‘jessica; jessica_faces; porn; porn_faces; model; output.’ ”) Around the same time, “Synthesizing Obama,” a paper published by a research group at the University of Washington, showed that a neural network could create believable videos in which the former President appeared to be saying words that were really spoken by someone else. In a video voiced by Jordan Peele, Obama seems to say that “President Trump is a total and complete dipshit,” and warns that “how we move forward in the age of information” will determine “whether we become some kind of fucked-up dystopia.”

    “People have been doing synthesis for a long time, with different tools,” he said. He rattled off various milestones in the history of image manipulation: the transposition, in a famous photograph from the eighteen-sixties, of Abraham Lincoln’s head onto the body of the slavery advocate John C. Calhoun; the mass alteration of photographs in Stalin’s Russia, designed to purge his enemies from the history books; the convenient realignment of the pyramids on the cover of National Geographic, in 1982; the composite photograph of John Kerry and Jane Fonda standing together at an anti-Vietnam demonstration, which incensed many voters after the Times credulously reprinted it, in 2004, above a story about Kerry’s antiwar activities.

    “In the past, anybody could buy Photoshop. But to really use it well you had to be highly skilled,” Farid said. “Now the technology is democratizing.” It used to be safe to assume that ordinary people were incapable of complex image manipulations. Farid recalled a case—a bitter divorce—in which a wife had presented the court with a video of her husband at a café table, his hand reaching out to caress another woman’s. The husband insisted it was fake. “I noticed that there was a reflection of his hand in the surface of the table,” Farid said, “and getting the geometry exactly right would’ve been really hard.” Now convincing synthetic images and videos were becoming easier to make.

    The acceleration of home computing has converged with another trend: the mass uploading of photographs and videos to the Web. Later, when I sat down with Efros in his office, he explained that, even in the early two-thousands, computer graphics had been “data-starved”: although 3-D modellers were capable of creating photorealistic scenes, their cities, interiors, and mountainscapes felt empty and lifeless. True realism, Efros said, requires “data, data, data” about “the gunk, the dirt, the complexity of the world,” which is best gathered by accident, through the recording of ordinary life.

    Today, researchers have access to systems like ImageNet, a site run by computer scientists at Stanford and Princeton which brings together fourteen million photographs of ordinary places and objects, most of them casual snapshots posted to Flickr, eBay, and other Web sites. Initially, these images were sorted into categories (carrousels, subwoofers, paper clips, parking meters, chests of drawers) by tens of thousands of workers hired through Amazon Mechanical Turk. Then, in 2012, researchers at the University of Toronto succeeded in building neural networks capable of categorizing ImageNet’s images automatically; their dramatic success helped set off today’s neural-networking boom. In recent years, YouTube has become an unofficial ImageNet for video. Efros’s lab has overcome the site’s “platform bias”—its preference for cats and pop stars—by developing a neural network that mines, from “life style” videos such as “My Spring Morning Routine” and “My Rustic, Cozy Living Room,” clips of people opening packages, peering into fridges, drying off with towels, brushing their teeth. This vast archive of the uninteresting has made a new level of synthetic realism possible.

    In 2016, the Defense Advanced Research Projects Agency (DARPA) launched a program in Media Forensics, or MediFor, focussed on the threat that synthetic media poses to national security. Matt Turek, the program’s manager, ticked off possible manipulations when we spoke: “Objects that are cut and pasted into images. The removal of objects from a scene. Faces that might be swapped. Audio that is inconsistent with the video. Images that appear to be taken at a certain time and place but weren’t.” He went on, “What I think we’ll see, in a couple of years, is the synthesis of events that didn’t happen. Multiple images and videos taken from different perspectives will be constructed in such a way that they look like they come from different cameras. It could be something nation-state driven, trying to sway political or military action. It could come from a small, low-resource group. Potentially, it could come from an individual.”

    As with today’s text-based fake news, the problem is double-edged. Having been deceived by a fake video, one begins to wonder whether many real videos are fake. Eventually, skepticism becomes a strategy in itself. In 2016, when the “Access Hollywood” tape surfaced, Donald Trump acknowledged its accuracy while dismissing his statements as “locker-room talk.” Now Trump suggests to associates that “we don’t think that was my voice.”

    “The larger danger is plausible deniability,” Farid told me. It’s here that the comparison with counterfeiting breaks down. No cashier opens up the register hoping to find counterfeit bills. In politics, however, it’s often in our interest not to believe what we are seeing.

    As alarming as synthetic media may be, it may be more alarming that we arrived at our current crises of misinformation—Russian election hacking; genocidal propaganda in Myanmar; instant-message-driven mob violence in India—without it. Social media was enough to do the job, by turning ordinary people into media manipulators who will say (or share) anything to win an argument. The main effect of synthetic media may be to close off an escape route from the social-media bubble. In 2014, video of the deaths of Michael Brown and Eric Garner helped start the Black Lives Matter movement; footage of the football player Ray Rice assaulting his fiancée catalyzed a reckoning with domestic violence in the National Football League. It seemed as though video evidence, by turning us all into eyewitnesses, might provide a path out of polarization and toward reality. With the advent of synthetic media, all that changes. Body cameras may still capture what really happened, but the aesthetic of the body camera—its claim to authenticity—is also a vector for misinformation. “Eyewitness video” becomes an oxymoron. The path toward reality begins to wash away.

    #Fake_news #Image #Synthèse

  • Facebook Plans Camera-Equipped TV Device
    https://cheddar.com/videos/facebook-plans-camera-tv-device-project-ripley

    Facebook is developing hardware for the TV, Cheddar has learned. The world’s largest social network is building a camera-equipped device that sits atop a TV and allows video calling along with entertainment services like Facebook’s YouTube competitor, according to people familiar with the matter. The project, internally codenamed “Ripley,” uses the same core technology as Facebook’s recently announced Portal video chat device for the home. Portal begins shipping next month and uses A.I. to (...)

    #Facebook #algorithme #CCTV #Portal #mouvement #vidéo-surveillance #Ripley

    https://cheddar.imgix.net/media/53b60744-e57d-46db-aa7b-20ce1d78a0c3.png

  • The Fake-News Fallacy | The New Yorker
    https://www.newyorker.com/magazine/2017/09/04/the-fake-news-fallacy

    Not so very long ago, it was thought that the tension between commercial pressure and the public interest would be one of the many things made obsolete by the Internet. In the mid-aughts, during the height of the Web 2.0 boom, the pundit Henry Jenkins declared that the Internet was creating a “participatory culture” where the top-down hegemony of greedy media corporations would be replaced by a horizontal network of amateur “prosumers” engaged in a wonderfully democratic exchange of information in cyberspace—an epistemic agora that would allow the whole globe to come together on a level playing field. Google, Facebook, Twitter, and the rest attained their paradoxical gatekeeper status by positioning themselves as neutral platforms that unlocked the Internet’s democratic potential by empowering users. It was on a private platform, Twitter, where pro-democracy protesters organized, and on another private platform, Google, where the knowledge of a million public libraries could be accessed for free. These companies would develop into what the tech guru Jeff Jarvis termed “radically public companies,” which operate more like public utilities than like businesses.

    But there has been a growing sense among mostly liberal-minded observers that the platforms’ championing of openness is at odds with the public interest. The image of Arab Spring activists using Twitter to challenge repressive dictators has been replaced, in the public imagination, by that of ISIS propagandists luring vulnerable Western teen-agers to Syria via YouTube videos and Facebook chats. The openness that was said to bring about a democratic revolution instead seems to have torn a hole in the social fabric. Today, online misinformation, hate speech, and propaganda are seen as the front line of a reactionary populist upsurge threatening liberal democracy. Once held back by democratic institutions, the bad stuff is now sluicing through a digital breach with the help of irresponsible tech companies. Stanching the torrent of fake news has become a trial by which the digital giants can prove their commitment to democracy. The effort has reignited a debate over the role of mass communication that goes back to the early days of radio.

    The debate around radio at the time of “The War of the Worlds” was informed by a similar fall from utopian hopes to dystopian fears. Although radio can seem like an unremarkable medium—audio wallpaper pasted over the most boring parts of your day—the historian David Goodman’s book “Radio’s Civic Ambition: American Broadcasting and Democracy in the 1930s” makes it clear that the birth of the technology brought about a communications revolution comparable to that of the Internet. For the first time, radio allowed a mass audience to experience the same thing simultaneously from the comfort of their homes. Early radio pioneers imagined that this unprecedented blurring of public and private space might become a sort of ethereal forum that would uplift the nation, from the urban slum dweller to the remote Montana rancher. John Dewey called radio “the most powerful instrument of social education the world has ever seen.” Populist reformers demanded that radio be treated as a common carrier and give airtime to anyone who paid a fee. Were this to have come about, it would have been very much like the early online-bulletin-board systems where strangers could come together and leave a message for any passing online wanderer. Instead, in the regulatory struggles of the twenties and thirties, the commercial networks won out.

    Corporate networks were supported by advertising, and what many progressives had envisaged as the ideal democratic forum began to seem more like Times Square, cluttered with ads for soap and coffee. Rather than elevating public opinion, advertisers pioneered techniques of manipulating it. Who else might be able to exploit such techniques? Many saw a link between the domestic on-air advertising boom and the rise of Fascist dictators like Hitler abroad.

    Today, when we speak about people’s relationship to the Internet, we tend to adopt the nonjudgmental language of computer science. Fake news was described as a “virus” spreading among users who have been “exposed” to online misinformation. The proposed solutions to the fake-news problem typically resemble antivirus programs: their aim is to identify and quarantine all the dangerous nonfacts throughout the Web before they can infect their prospective hosts. One venture capitalist, writing on the tech blog Venture Beat, imagined deploying artificial intelligence as a “media cop,” protecting users from malicious content. “Imagine a world where every article could be assessed based on its level of sound discourse,” he wrote. The vision here was of the news consumers of the future turning the discourse setting on their browser up to eleven and soaking in pure fact. It’s possible, though, that this approach comes with its own form of myopia. Neil Postman, writing a couple of decades ago, warned of a growing tendency to view people as computers, and a corresponding devaluation of the “singular human capacity to see things whole in all their psychic, emotional and moral dimensions.” A person does not process information the way a computer does, flipping a switch of “true” or “false.” One rarely cited Pew statistic shows that only four per cent of American Internet users trust social media “a lot,” which suggests a greater resilience against online misinformation than overheated editorials might lead us to expect. Most people seem to understand that their social-media streams represent a heady mixture of gossip, political activism, news, and entertainment. You might see this as a problem, but turning to Big Data-driven algorithms to fix it will only further entrench our reliance on code to tell us what is important about the world—which is what led to the problem in the first place. Plus, it doesn’t sound very fun.

    In recent times, Donald Trump supporters are the ones who have most effectively applied Grierson’s insight to the digital age. Young Trump enthusiasts turned Internet trolling into a potent political tool, deploying the “folk stuff” of the Web—memes, slang, the nihilistic humor of a certain subculture of Web-native gamer—to give a subversive, cyberpunk sheen to a movement that might otherwise look like a stale reactionary blend of white nationalism and anti-feminism. As crusaders against fake news push technology companies to “defend the truth,” they face a backlash from a conservative movement, retooled for the digital age, which sees claims for objectivity as a smoke screen for bias.

    For conservatives, the rise of online gatekeepers may be a blessing in disguise. Throwing the charge of “liberal media bias” against powerful institutions has always provided an energizing force for the conservative movement, as the historian Nicole Hemmer shows in her new book, “Messengers of the Right.” Instead of focussing on ideas, Hemmer focusses on the galvanizing struggle over the means of distributing those ideas. The first modern conservatives were members of the America First movement, who found their isolationist views marginalized in the lead-up to the Second World War and vowed to fight back by forming the first conservative media outlets. A “vague claim of exclusion” sharpened into a “powerful and effective ideological arrow in the conservative quiver,” Hemmer argues, through battles that conservative radio broadcasters had with the F.C.C. in the nineteen-fifties and sixties. Their main obstacle was the F.C.C.’s Fairness Doctrine, which sought to protect public discourse by requiring controversial opinions to be balanced by opposing viewpoints. Since attacks on the mid-century liberal consensus were inherently controversial, conservatives found themselves constantly in regulators’ sights. In 1961, a watershed moment occurred with the leak of a memo from labor leaders to the Kennedy Administration which suggested using the Fairness Doctrine to suppress right-wing viewpoints. To many conservatives, the memo proved the existence of the vast conspiracy they had long suspected. A fund-raising letter for a prominent conservative radio show railed against the doctrine, calling it “the most dastardly collateral attack on freedom of speech in the history of the country.” Thus was born the character of the persecuted truthteller standing up to a tyrannical government—a trope on which a billion-dollar conservative-media juggernaut has been built.

    The online tumult of the 2016 election fed into a growing suspicion of Silicon Valley’s dominance over the public sphere. Across the political spectrum, people have become less trusting of the Big Tech companies that govern most online political expression. Calls for civic responsibility on the part of Silicon Valley companies have replaced the hope that technological innovation alone might bring about a democratic revolution. Despite the focus on algorithms, A.I., filter bubbles, and Big Data, these questions are political as much as technical.

    #Démocratie #Science_information #Fake_news #Regulation

  • Can the Manufacturer of Tasers Provide the Answer to Police Abuse? | The New Yorker
    https://www.newyorker.com/magazine/2018/08/27/can-the-manufacturer-of-tasers-provide-the-answer-to-police-abuse

    Tasers are carried by some six hundred thousand law-enforcement officers around the world—a kind of market saturation that also presents a problem. “One of the challenges with Taser is: where do you go next, what’s Act II?” Smith said. “For us, luckily, Act II is cameras.” He began adding cameras to his company’s weapons in 2006, to defend against allegations of abuse, and in the process inadvertently opened a business line that may soon overshadow the Taser. In recent years, body cameras—the officer’s answer to bystander cell-phone video—have become ubiquitous, and Smith’s company, now worth four billion dollars, is their largest manufacturer, holding contracts with more than half the major police departments in the country.

    The cameras have little intrinsic value, but the information they collect is worth a fortune to whoever can organize and safeguard it. Smith has what he calls an iPod/iTunes opportunity—a chance to pair a hardware business with an endlessly recurring and expanding data-storage subscription plan. In service of an intensifying surveillance state and the objectives of police as they battle the public for control of the story, Smith is building a network of electrical weapons, cameras, drones, and someday, possibly, robots, connected by a software platform called Evidence.com. In the process, he is trying to reposition his company in the public imagination, not as a dubious purveyor of stun guns but as a heroic seeker of truth.

    A year ago, Smith changed Taser’s name to Axon Enterprise, referring to the conductive fibre of a nerve cell. Taser was founded in Scottsdale, Arizona, where Smith lives; to transform into Axon, he opened an office in Seattle, hiring designers and engineers from Uber, Google, and Apple. When I met him at the Seattle office this spring, he wore a company T-shirt that read “Expect Candor” and a pair of leather sneakers in caution yellow, the same color as Axon’s logo: a delta symbol—for change—which also resembles the lens of a surveillance camera.

    Already, Axon’s servers, at Microsoft, store nearly thirty petabytes of video—a quarter-million DVDs’ worth—and add approximately two petabytes each month. When body-camera footage is released—say, in the case of Stephon Clark, an unarmed black man killed by police in Sacramento, or of the mass shooting in Las Vegas, this past fall—Axon’s logo is often visible in the upper-right corner of the screen. The company’s stock is up a hundred and thirty per cent since January.

    The original Taser was the invention of an aerospace engineer named Jack Cover, inspired by the sci-fi story “Tom Swift and His Electric Rifle,” about a boy inventor whose long gun fires a five-thousand-volt charge. Early experiments were comical: Cover wired the family couch to shock his sister and her boyfriend as they were on the brink of making out. Later, he discovered that he could fell buffalo when he hit them with electrified darts. In 1974, Cover got a patent and began to manufacture an electric gun. That weapon was similar to today’s Taser: a Glock-shaped object that sends out two live wires, loaded with fifty thousand volts of electricity and ending in barbed darts that attach to a target. When the hooks connect, they create a charged circuit, which causes muscles to contract painfully, rendering the subject temporarily incapacitated. More inventor than entrepreneur, Cover designed the Taser to propel its darts with an explosive, leading the Bureau of Alcohol, Tobacco and Firearms to classify it a Title II weapon (a category that also includes sawed-off shotguns), which required an arduous registration process and narrowed its appeal.

    A few years after Tasers went on the market, Rick Smith added a data port to track each trigger pull. The idea, he told me, came from the Baltimore Police Department, which was resisting Tasers out of a concern that officers would abuse people with them. In theory, with a data port, cops would use their Tasers more conscientiously, knowing that each deployment would be recorded and subject to review. But in Baltimore it didn’t work out that way. Recent reports in the Sun revealed that nearly sixty per cent of people Tased by police in Maryland between 2012 and 2014—primarily black and living in low-income neighborhoods—were “non-compliant and non-threatening.”

    Act II begins in the nauseous summer of 2014, when Eric Garner died after being put in a choke hold by police in Staten Island and Michael Brown was shot by Darren Wilson, of the Ferguson Police. After a grand jury decided not to indict Wilson—witness statements differed wildly, and no footage of the shooting came to light—Brown’s family released a statement calling on the public to “join with us in our campaign to ensure that every police officer working the streets in this country wears a body camera.”

    In the fall of 2014, Taser débuted the Officer Safety Plan, which now costs a hundred and nine dollars a month and includes Tasers, cameras, and a sensor that wirelessly activates all the cameras in its range whenever a cop draws his sidearm. This feature is described on the Web site as a prudent hedge in chaotic times: “In today’s online culture where videos go viral in an instant, officers must capture the truth of a critical event. But the intensity of the moment can mean that hitting ‘record’ is an afterthought. Both officers and communities facing confusion and unrest have asked for a solution that turns cameras on reliably, leaving no room for dispute.” According to White’s review of current literature, half of the randomized controlled studies show a substantial or statistically significant reduction in use of force following the introduction of body cameras. The research into citizen complaints is more definitive: cameras clearly reduce the number of complaints from the public.
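
    Axon does not publish the protocol behind that sensor, but the behavior described (one event wirelessly switching on every camera in range) amounts to a simple broadcast trigger. A purely hypothetical sketch, with invented class and method names:

    ```python
    # Hypothetical sketch of the "sidearm drawn -> all nearby cameras record"
    # trigger described above. Names are invented for illustration only.
    class BodyCamera:
        def __init__(self, camera_id: str):
            self.camera_id = camera_id
            self.recording = False

        def start_recording(self) -> None:
            self.recording = True
            print(f"{self.camera_id}: recording started")

    class HolsterSensor:
        def __init__(self):
            self.cameras_in_range = []

        def register(self, camera: BodyCamera) -> None:
            self.cameras_in_range.append(camera)

        def sidearm_drawn(self) -> None:
            # Broadcast: every paired camera within radio range starts recording.
            for camera in self.cameras_in_range:
                camera.start_recording()

    sensor = HolsterSensor()
    for cam_id in ("officer_1", "officer_2", "patrol_car"):
        sensor.register(BodyCamera(cam_id))
    sensor.sidearm_drawn()
    ```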

    The practice of “testi-lying”—officers lying under oath—is made much more difficult by the presence of video.

    Even without flagrant dissimulation, body-camera footage is often highly contentious. Michael White said, “The technology is the easy part. The human use of the technology really is making things very complex.” Policies on how and when cameras should be used, and how and when and by whom footage can be accessed, vary widely from region to region. Jay Stanley, who researches technology for the American Civil Liberties Union, said that the value of a body camera to support democracy depends on those details. “When is it activated? When is it turned off? How vigorously are those rules enforced? What happens to the video footage, how long is it retained, is it released to the public?” he said. “These are the questions that shape the nature of the technology and decide whether it just furthers the police state.”

    Increasingly, civil-liberties groups fear that body cameras will do more to amplify police officers’ power than to restrain their behavior. Black Lives Matter activists view body-camera programs with suspicion, arguing that communities of color need better educational and employment opportunities, environmental justice, and adequate housing, rather than souped-up robo-cops. They also argue that video has been ineffectual: many times, the public has watched the police abuse and kill black men without facing conviction. Melina Abdullah, a professor of Pan-African studies at Cal State Los Angeles, who is active in Black Lives Matter, told me, “Video surveillance, including body cameras, are being used to bolster police claims, to hide what police are doing, and engage in what we call the double murder of our people. They kill the body and use the footage to increase accusations around the character of the person they just killed.” In her view, police use video as a weapon: a black man shown in a liquor store in a rough neighborhood becomes a suspect in the public mind. Video generated by civilians, on the other hand, she sees as a potential check on abuses. She stops to record with her cell phone almost every time she witnesses a law-enforcement interaction with a civilian.

    Bringing in talented engineers is crucial to Smith’s vision. The public-safety nervous system that he is building runs on artificial intelligence, software that can process and analyze an ever-expanding trove of video evidence. The L.A.P.D. alone has already made some five million videos, and adds more than eleven thousand every day. At the moment, A.I. is used for redaction, and Axon technicians at a special facility in Scottsdale are using data from police departments to train the software to detect and blur license plates and faces.
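
    Automated redaction of this kind is typically built from two generic pieces: a detector that finds face or plate regions, and a blur applied to each detected box. A minimal sketch using OpenCV’s bundled face detector as a stand-in (nothing here reflects Axon’s actual models or pipeline):

    ```python
    # Sketch of face redaction: detect faces with OpenCV's bundled Haar cascade
    # and blur each detected region in a single frame.
    import cv2

    def redact_faces(input_path: str, output_path: str) -> None:
        frame = cv2.imread(input_path)
        detector = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
        )
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
            # Replace each detected face box with a heavily blurred copy of itself.
            frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)
        cv2.imwrite(output_path, frame)

    redact_faces("bodycam_frame.jpg", "bodycam_frame_redacted.jpg")
    ```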

    Facial recognition, which techno-pessimists see as the advent of the Orwellian state, is not far behind. Recently, Smith assembled an A.I. Ethics Board, to help steer Axon’s decisions. (His lead A.I. researcher, recruited from Uber, told him that he wouldn’t be able to hire the best engineers without an ethics board.) Smith told me, “I don’t want to wake up like the guy Nobel, who spent his life making things that kill people, and then, at the end of his life, it’s, like, ‘O.K., I have to buy my way out of this.’ ”

    #Taser #Intelligence_artificielle #Caméras #Police #Stockage_données

  • The rise of ’pseudo-AI’: how tech firms quietly use humans to do bots’ work
    https://www.theguardian.com/technology/2018/jul/06/artificial-intelligence-ai-humans-bots-tech-companies

    Using what one expert calls a ‘Wizard of Oz technique’, some companies keep their reliance on humans a secret from investors. It’s hard to build a service powered by artificial intelligence. So hard, in fact, that some startups have worked out it’s cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans. “Using a human to do the job lets you skip over a load of technical and business development challenges. It doesn’t scale, obviously, but it (...)

    #Google #Amazon #AmazonMechanicalTurk #Facebook #algorithme #bot #manipulation #terms (...)

    ##travail
    https://i.guim.co.uk/img/media/dd988ca35ee7f58bbc6217148c8a1492785aed4e/0_95_5758_3454/master/5758.jpg

  • Is There a Smarter Path to Artificial Intelligence? Some Experts Hope So, by Steve Lohr, New York Times
    https://www.nytimes.com/2018/06/20/technology/deep-learning-artificial-intelligence.html

    “There is no real intelligence there,” said Michael I. Jordan, a professor at the University of California, Berkeley, and the author of an essay published in April intended to temper the lofty expectations surrounding A.I. “And I think that trusting these brute force algorithms too much is a faith misplaced.”

  • Silicon Valley’s Sixty-Year Love Affair with the Word “Tool” | The New Yorker
    https://www.newyorker.com/tech/elements/silicon-valleys-sixty-year-love-affair-with-the-word-tool

    In the written remarks that Mark Zuckerberg, the C.E.O. of Facebook, submitted in advance of his testimony on Capitol Hill this week, he used the word “tool” eleven times. “As Facebook has grown, people everywhere have gotten a powerful new tool to stay connected to the people they love, make their voices heard, and build communities and businesses,” Zuckerberg wrote. “We have a responsibility to not just build tools, but to make sure those tools are used for good.” Later, he added, “I don’t want anyone to use our tools to undermine democracy.” In his testimony before the Senate Judiciary and Commerce Committees on Tuesday, Zuckerberg referred to “these tools,” “those tools,” “any tool,” “technical tools,” and—thirteen times—“A.I. tools.” On Wednesday, at a separate hearing of the House Energy and Commerce Committee, a congressman from Florida told Zuckerberg, “Work on those tools as soon as possible, please.”

    What’s in a tool? The Oxford English Dictionary will tell you that the English word is more than a thousand years old and that, since the mid-sixteenth century, it has been used as the slur that we’re familiar with today.

    In Silicon Valley, according to Siva Vaidhyanathan, a professor at the University of Virginia whose book about Facebook, “Antisocial Media,” is due out in September, “Tools are technologies that generate other technologies.” When I asked an engineer friend who builds “developer tools” for his definition, he noted that a tool is distinct from a product, since a product is “experienced rather than used.” The iTunes Store, he said, is a product: “there are lots of songs you can download, but it’s just a static list.” A Web browser, by contrast, is a tool, because “the last mile of its use is underspecified.”

    Yesterday was not Zuckerberg’s first time being called in and interrogated about a Web site that he created. In the fall of 2003, when he was a sophomore at Harvard, a disciplinary body called the Ad Board summoned him to answer questions about Facemash, the Facebook precursor that he had just released. Using I.D. photos of female undergraduates scraped from the university’s online directories, Facemash presented users with pairs of women and asked them to rank who was “hotter.” (“Were we let in for our looks? No,” the site proclaimed. “Will we be judged on them? Yes.”) By 10 P.M. on the day Facemash launched, some four hundred and fifty visitors had cast at least twenty-two thousand votes. Several student groups, including Fuerza Latina and the Harvard Association of Black Women, led an outcry. But Zuckerberg insisted to the Ad Board that he had not intended to “insult” anyone. As the student newspaper, the Crimson, reported, “The programming and algorithms that made the site function were Zuckerberg’s primary interest in creating it.” The point of Facemash was to make a tool. The fact that it got sharpened on the faces of fellow-students was incidental.

    The exaltation of tools has a long history in the Bay Area, going back to the late nineteen-sixties, when hippie counterculture intersected with early experiments in personal computing. In particular, the word got its cachet from the “Whole Earth Catalog,” a compendium of product reviews for commune dwellers that appeared several times a year, starting in 1968, and then sporadically after 1972. Its slogan: “Access to tools.” The publisher of the “Catalog,” Stewart Brand—a Stanford-trained biologist turned hippie visionary and entrepreneur—would later call it “the first instance of desktop publishing.” Steve Jobs, in his 2005 commencement address at Stanford, described it as “one of the bibles of my generation.” The “Catalog,” Jobs said, was “Google in paperback form, thirty-five years before Google came along. It was idealistic, and overflowing with neat tools and notions.” Jobs’s biographer, Walter Isaacson, quotes Brand as saying that the Apple co-founder was a kindred spirit; in designing products, Jobs “got the notion of tools for human use.” With the rise of personal computing, the term “tools” migrated from communes to software. The generation of tech leaders who grew up taking P.C.s and the World Wide Web for granted nevertheless inherited an admiration for Brand. In 2016, for instance, Facebook’s head of product, Chris Cox, joined him onstage at the Aspen Ideas Festival to give a talk titled “Connecting the Next Billion.”

    Tool talk encodes an entire attitude to politics—namely, a rejection of politics in favor of tinkering. In the sixties, Brand and the “Whole Earth Catalog” presented tools as an alternative to activism. Unlike his contemporaries in the antiwar, civil-rights, and women’s movements, Brand was not interested in gender, race, class, or imperialism. The transformations that he sought were personal, not political. In defining the purpose of the “Catalog,” he wrote, “a realm of intimate, personal power is developing—power of the individual to conduct his own education, find his own inspiration, shape his own environment, and share his adventure with whoever is interested.” Like Zuckerberg, Brand saw tools as a neutral means to engage any and every user. “Whole Earth eschewed politics and pushed grassroots direct power—tools and skills,” he later wrote. If people got good enough tools to build the communities they wanted, politics would take care of itself.

    This idea became highly influential in the nineties, as the Stanford historian Fred Turner demonstrates in his book “From Counterculture to Cyberculture.” Through Wired magazine, which was founded by Brand’s collaborator Kevin Kelly, the message reached not just Silicon Valley but also Washington. The idea that tools were preferable to politics found a ready audience in a decade of deregulation. The sense that the Web was somehow above or beyond politics justified laws that privatized Internet infrastructure and exempted sites from the kinds of oversight that governed traditional publishers. In other words, Brand’s philosophy helped create the climate in which Facebook, Google, and Twitter could become the vast monopolies that they are today—a climate in which dubious political ads on these platforms, and their casual attitudes toward sharing user data, could pass mostly unnoticed. As Turner put it in a recent interview with Logic magazine (of which I am a co-founder), Brand and Wired persuaded lawmakers that Silicon Valley was the home of the future. “Why regulate the future?” Turner asked. “Who wants to do that?”

    #Facebook #Fred_Turner #Stewart_Brand #Tools

  • Will This “Neural Lace” Brain Implant Help Us Compete with AI? - Facts So Romantic
    http://nautil.us/blog/-will-this-neural-lace-brain-implant-help-us-compete-with-ai

    Smarter artificial intelligence is certainly being developed, but how far along are we on producing a neural lace? (Photograph by Ars Electronica / Flickr.) Solar-powered self-driving cars, reusable space ships, Hyperloop transportation, a mission to colonize Mars: Elon Musk is hell-bent on turning these once-far-fetched fantasies into reality. But none of these technologies has made him as leery as artificial intelligence. At Code Conference 2016, Musk stated publicly that given the current rate of A.I. advancement, humans could ultimately expect to be left behind—cognitively, intellectually—“by a lot.” His solution to this unappealing fate is a novel brain-computer interface similar to the implantable “neural lace” described by the Scottish novelist Iain M. Banks in Look to Windward, part of (...)

  • Pentagon Wants Silicon Valley’s Help on A.I. - The New York Times
    https://www.nytimes.com/2018/03/15/technology/military-artificial-intelligence.html

    The military and intelligence communities have long played a big role in the technology industry and had close ties with many of Silicon Valley’s early tech giants. David Packard, Hewlett-Packard’s co-founder, even served as the deputy secretary of defense under President Richard M. Nixon.

    #silicon_valley #Pentagone #IA

  • How creative jobs will change in 2018: throwing the doors open to A.I.
    https://hackernoon.com/how-creative-jobs-will-change-in-2018-throwing-the-doors-open-to-a-i-869

    The creative industry has always been faced with a rapidly changing landscape, but in 2018 we face some of the largest, and most unknown challenges yet. The way people interact with computers, how we go about doing our jobs, and the fundamental nature of ‘creative work’ is shifting faster than ever before. This is a look at the state of the creative world and where I think it’s headed from here — a compendium for the next big things in the industry, and the forces that are driving these shifts. It’s an incredibly exciting time to be in our industry, and as a writer, I can’t wait to see where we’re headed next, but at the same moment it’s a period of anxiety as we’re faced with the realities of a changing landscape, and the impact of automation. Throwing the doors of the idea creation open: For years, (...)

    #artificial-intelligence #deep-learning #ai #future-of-work #design

  • How Google Arts and Culture’s Face Match A.I. Actually Works | Inverse
    https://www.inverse.com/article/40177-google-arts-and-culture-technology

    That’s part of what made the viral Google Arts and Culture feature, allowing users to compare their faces with a work of art, so fun. It played up our natural vanity, for sure, but it also gave us a chance to test out what A.I. is capable of.

    Some users took the opportunity to punk the A.I. to hilarious effect:

    It sounds like an easy process, but in fact, there is a lot of learning that the machine has to do first. After identifying faces in an image, it may have to reorient or resize it for a better reading — we’ve all been in cases where a selfie taken from too close looks distorted from reality.

    Then, once the A.I. has resized and reoriented the face, it creates a “faceprint,” a set of characteristics that uniquely identify one person’s face. This could include the distance between facial features, such as eyes, or shapes and sizes of noses.

    Faceprints can then be compared with an individual photo or to databases of many images.

    In the case of Google’s museum selfie feature, each selfie that is uploaded is compared with its database of over 70,000 works of art.
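
    The matching step described here reduces to a nearest-neighbor search over feature vectors. A minimal numpy sketch (the 128-dimensional “faceprint” and the cosine-similarity metric are common conventions in face recognition, not details Google has disclosed):

    ```python
    # Sketch of faceprint matching: given an embedding ("faceprint") for the selfie,
    # find the most similar embedding in a database of artwork faceprints.
    import numpy as np

    def best_match(selfie: np.ndarray, gallery: np.ndarray) -> tuple:
        """Return (index, cosine similarity) of the closest gallery faceprint."""
        selfie = selfie / np.linalg.norm(selfie)
        gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
        similarities = gallery @ selfie
        idx = int(np.argmax(similarities))
        return idx, float(similarities[idx])

    rng = np.random.default_rng(0)
    artworks = rng.normal(size=(70_000, 128))  # stand-in for ~70,000 artwork faceprints
    selfie_print = rng.normal(size=128)
    print(best_match(selfie_print, artworks))
    ```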

    According to the Post, users currently have to opt into facial recognition on Google Photos (but not on Facebook).

    But, by playing around with this selfie feature, that’s essentially what we’re doing, so we are actively consenting to making Google’s A.I. smarter.

    #Google #Reconnaissance_faciale #Intelligence_artificielle #Digital_labour

  • How an A.I. ‘Cat-and-Mouse Game’ Generates Believable Fake Photos - The New York Times
    https://www.nytimes.com/interactive/2018/01/02/technology/ai-generated-photos.html

    At a lab in Finland, a small team of Nvidia researchers recently built a system that can analyze thousands of (real) celebrity snapshots, recognize common patterns, and create new images that look much the same — but are still a little different. The system can also generate realistic images of horses, buses, bicycles, plants and many other common objects.

    The project is part of a vast and varied effort to build technology that can automatically generate convincing images — or alter existing images in equally convincing ways. The hope is that this technology can significantly accelerate and improve the creation of computer interfaces, games, movies and other media, eventually allowing software to create realistic imagery in moments rather than the hours — if not days — it can now take human developers.

    In recent years, thanks to a breed of algorithm that can learn tasks by analyzing vast amounts of data, companies like Google and Facebook have built systems that can recognize faces and common objects with an accuracy that rivals the human eye. Now, these and other companies, alongside many of the world’s top academic A.I. labs, are using similar methods to both recognize and create.

    As it built a system that generates new celebrity faces, the Nvidia team went a step further in an effort to make them far more believable. It set up two neural networks — one that generated the images and another that tried to determine whether those images were real or fake. These are called generative adversarial networks, or GANs. In essence, one system does its best to fool the other — and the other does its best not to be fooled.

    “The computer learns to generate these images by playing a cat-and-mouse game against itself,” said Mr. Lehtinen.
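
    That cat-and-mouse game is the adversarial training loop itself: a generator tries to fool a discriminator, and the discriminator tries not to be fooled. A toy PyTorch sketch of one training step (arbitrary network sizes on vector data; it shows the idea, not Nvidia’s progressive-growing architecture):

    ```python
    # Toy GAN training step: the generator maps noise to fake samples, the
    # discriminator scores real vs. fake, and each is updated against the other.
    import torch
    import torch.nn as nn

    noise_dim, data_dim = 16, 64
    generator = nn.Sequential(nn.Linear(noise_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
    discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))
    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    def training_step(real_batch: torch.Tensor) -> None:
        batch = real_batch.size(0)
        ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

        # Discriminator: label real samples 1, generated samples 0.
        fake = generator(torch.randn(batch, noise_dim))
        d_loss = loss_fn(discriminator(real_batch), ones) + \
                 loss_fn(discriminator(fake.detach()), zeros)
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator: try to make the discriminator call its samples real.
        fake = generator(torch.randn(batch, noise_dim))
        g_loss = loss_fn(discriminator(fake), ones)
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    training_step(torch.randn(32, data_dim))  # stand-in "real" data
    ```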

    A second team of Nvidia researchers recently built a system that can automatically alter a street photo taken on a summer’s day so that it looks like a snowy winter scene. Researchers at the University of California, Berkeley, have designed another that learns to convert horses into zebras and Monets into Van Goghs. DeepMind, a London-based A.I. lab owned by Google, is exploring technology that can generate its own videos. And Adobe is fashioning similar machine learning techniques with an eye toward pushing them into products like Photoshop, its popular image design tool.

    Trained designers and engineers have long used technology like Photoshop and other programs to build realistic images from scratch. This is what movie effects houses do. But it is becoming easier for machines to learn how to generate these images on their own, said Durk Kingma, a researcher at OpenAI, the artificial intelligence lab founded by Tesla chief executive Elon Musk and others, who specializes in this kind of machine learning.

    “We now have a model that can generate faces that are more diverse and in some ways more realistic than what we could program by hand,” he said, referring to Nvidia’s work in Finland.

    But new concerns come with the power to create this kind of imagery.

    With so much attention on fake media these days, we could soon face an even wider range of fabricated images than we do today.

    “The concern is that these techniques will rise to the point where it becomes very difficult to discern truth from falsity,” said Tim Hwang, who previously oversaw A.I. policy at Google and is now director of the Ethics and Governance of Artificial Intelligence Fund, an effort to fund ethical A.I. research. “You might believe that accelerates problems we already have.”

    But many of us still put a certain amount of trust in photos and videos that we don’t necessarily put in text or word of mouth. Mr. Hwang believes the technology will evolve into a kind of A.I. arms race pitting those trying to deceive against those trying to identify the deception.

    Mr. Lehtinen downplays the effect his research will have on the spread of misinformation online. But he does say that, as time goes on, we may have to rethink the very nature of imagery. “We are approaching some fundamental questions,” he said.

    #Image #Fake_news #Post_truth #Intelligence_artificielle #AI_war #Désinformation

  • How to Regulate Artificial Intelligence - The New York Times
    https://www.nytimes.com/2017/09/01/opinion/artificial-intelligence-regulations-rules.html

    It’s natural to ask whether we should develop A.I. at all.

    I believe the answer is yes. But shouldn’t we take steps to at least slow down progress on A.I., in the interest of caution? The problem is that if we do so, then nations like China will overtake us. The A.I. horse has left the barn, and our best bet is to attempt to steer it. A.I. should not be weaponized, and any A.I. must have an impregnable “off switch.”

    Three “laws” for A.I.:
    – the owner is responsible for the A.I.’s actions
    – an A.I. or a robot must identify itself as such
    – an A.I. may share information only with its owner’s consent

    #Intelligence_artificielle #Réglementation

  • Why A.I. Is Just Not Funny - Facts So Romantic
    http://nautil.us/blog/why-ai-is-just-not-funny

    Although A.I. robots can pick up on jokes, they have a lot to learn about telling them. In the 2004 film I, Robot, Detective Del Spooner asks an A.I. named Sonny: “Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?” Sonny responds: “Can you?” Scientists have been working on answering Spooner’s question for the last decade with striking results. Researchers from Rutgers University, Facebook, and the College of Charleston have developed a system for generating original art called C.A.N. (Creative Adversarial Network). They “trained” C.A.N. on more than 81,000 paintings from 1,119 artists ranging from the 15th century to the 20th century. The A.I. experts wrote algorithms for C.A.N. to emulate painting styles such as (...)

  • A Computer Just Clobbered Four Pros At Poker | FiveThirtyEight
    https://fivethirtyeight.com/features/a-computer-just-clobbered-four-pros-at-poker
    https://espnfivethirtyeight.files.wordpress.com/2017/01/roeder-poker-update-1.png?quality=90&strip=all&

    About three weeks ago, I was in a Pittsburgh casino for the beginning of a 20-day man-versus-machine poker battle. Four top human pros were beginning to take on a state-of-the-art artificial intelligence program running on a brand new supercomputer in a game called heads-up no-limit Texas Hold ’em. The humans’ spirits were high as they played during the day and dissected the bot’s strategy over short ribs and glasses of wine late into the evening.

    On Monday evening, however, the match ended and the human pros were in the hole about $1.8 million. For some context, the players (four men and the machine, named Libratus) began each of the 120,000 hands with $20,000 in play money, and posted blinds of $50 and $100.

    ...

    Tuomas Sandholm, a Carnegie Mellon computer scientist who created the program with his Ph.D. student Noam Brown, was giddy last week on the match’s livestream, at one point cheering for his bot as it turned over a full house versus human pro Jason Les’s flush in a huge pot, and proudly comparing Libratus’s triumph to Deep Blue’s monumental win over Garry Kasparov in chess.

    And, indeed, some robot can now etch heads-up no-limit Texas Hold ’em (2017) alongside checkers (1995), chess (1997), Othello (1997), Scrabble (c. 2006), limit Hold ’em (2008), Jeopardy! (2011) and Go (2016) into the marble cenotaph of human-dominated intellectual pursuits.

    Brown told me that he was keen to tackle other versions of poker with his A.I. algorithms. What happens when a bot like this sits down at a table with many other players, rather than just a one-on-one foe, for example? Sandholm, on the other hand, is quick to say that this isn’t really about poker at all. “The AI’s algorithms are not for poker: they are game independent,” his daily email updates read. The other “games” the algorithms may be applied to in the future: “negotiation, cybersecurity, military setting, auctions, finance, strategic pricing, as well as steering evolution and biological adaptation.”
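
    The piece doesn’t explain the algorithms, but Libratus’s approach has been publicly described by its authors as regret-minimizing self-play, and the core update really is game independent: it only needs a payoff structure. As a hypothetical illustration (plain regret matching on rock-paper-scissors, not Libratus’s actual code or its poker abstractions):

    ```python
    import numpy as np

    # Regret matching in a two-player zero-sum matrix game (rock-paper-scissors).
    # Nothing below is specific to this game: swap in another payoff matrix and
    # the same update rule applies, which is the sense of "game independent".
    PAYOFF = np.array([[ 0, -1,  1],
                       [ 1,  0, -1],
                       [-1,  1,  0]])  # row player's payoff

    def strategy(regrets):
        positive = np.maximum(regrets, 0)
        total = positive.sum()
        return positive / total if total > 0 else np.ones(3) / 3

    regrets = [np.zeros(3), np.zeros(3)]
    strategy_sums = [np.zeros(3), np.zeros(3)]

    for _ in range(100_000):
        strats = [strategy(r) for r in regrets]
        actions = [np.random.choice(3, p=s) for s in strats]
        for p in range(2):
            payoff_matrix = PAYOFF if p == 0 else -PAYOFF.T
            utilities = payoff_matrix[:, actions[1 - p]]   # payoff of each action vs. the opponent's move
            regrets[p] += utilities - utilities[actions[p]]
            strategy_sums[p] += strats[p]

    print(strategy_sums[0] / strategy_sums[0].sum())  # average strategy approaches (1/3, 1/3, 1/3)
    ```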

    #transhumanisme #singularité #jeux

  • How Do You Feel? Affectiva’s AI Can Tell
    http://www.pcmag.com/news/349956/how-do-you-feel-affectivas-ai-can-tell

    Imagine powering up your digital device and—after a quick scan of your facial expression—having it respond with, “Hey there, what’s going on?” Massachusetts-based Affectiva is working on this type of “socio-emotive A.I.,” and PCMag met the company’s director of market development, Jim Deal, at Unity Technologies’ Unite 2016 conference recently. "Our CEO and co-founder, Dr. Rana el Kaliouby, always had a deep interest in building emotionally aware machines. After she got her PhD at Cambridge (...)

    #Affectiva #algorithme #émotions #facial #profiling

  • Will This “Neural Lace” Brain Implant Help Us Compete with AI? - Facts So Romantic
    http://nautil.us/blog/with-this-neural-lace-brain-implant-we-can-stay-as-smart-as-ai

    Solar-powered self-driving cars, reusable space ships, Hyperloop transportation, a mission to colonize Mars: Elon Musk is hell-bent on turning these once-far-fetched fantasies into reality. But none of these technologies has made him as leery as artificial intelligence. Earlier this summer at Code Conference 2016, Musk stated publicly that given the current rate of A.I. advancement, humans could ultimately expect to be left behind—cognitively, intellectually—“by a lot.” His solution to this unappealing fate is a novel brain-computer interface similar to the implantable “neural lace” described by the Scottish novelist Iain M. Banks in Look to Windward, part of his “Culture series” books. Along with serving as a rite of passage, it upgrades the human brain to be more competitive against A.I.’s (...)

  • The 100 Greatest Movie Robots of All Time :: Movies :: Lists :: Paste
    http://www.pastemagazine.com/articles/2015/11/the-100-greatest-movie-robots-of-all-time.html?a=1

    Before we begin, some ground rules:

    “Robots,” for the purposes of this list, fall into the following categories: Androids, cyborgs and intelligent automatons in general. When it comes to cyborgs, we’ve decided to err on the side of “mostly robot.” That means, despite Obi-Wan’s protestations that Darth Vader is “more machine than man,” for the purposes of this list, he’s a smidge too human.

    With apologies to HAL, J.A.R.V.I.S., MOTHER and the like, no disembodied, purely A.I. entities. The robot must have some kind of body—typically humanoid in shape (though minor exceptions regarding shape for especially awesome robots may appear).

    The entries must have appeared in a theatrically released movie. With additional apologies to all the Benders and cylons in pop culture, the focus here is on iconic film robots.

    Now let’s take a glimpse into cinema past and imagine the future that might have been… and may yet become.

    #cinema #robot #classement