/2020

  • Why 2020 was a pivotal, contradictory year for facial recognition
    https://www.technologyreview.com/2020/12/29/1015563/why-2020-was-a-pivotal-contradictory-year-for-facial-recognition

    The racial justice movement pushed problems with the technology into public consciousness—but despite scandals and bans, its growth isn’t slowing. America’s first confirmed wrongful arrest by facial recognition technology happened in January 2020. Robert Williams, a Black man, was arrested in his driveway just outside Detroit, with his wife and young daughter watching. He spent the night in jail. The next day in the questioning room, a detective slid a picture across the table to Williams of (...)

    #algorithme #CCTV #biométrie #racisme #facial #reconnaissance #vidéo-surveillance #BlackLivesMatter #discrimination #surveillance #Clearview #Microsoft #IBM #Amazon #lobbying (...)

    ##ACLU

  • How our data encodes systematic racism
    https://www.technologyreview.com/2020/12/10/1013617/racism-data-science-artificial-intelligence-ai-opinion

    Technologists must take responsibility for the toxic ideologies that our data sets and algorithms reflect. I’ve often been told, “The data does not lie.” However, that has never been my experience. For me, the data nearly always lies. Google Image search results for “healthy skin” show only light-skinned women, and a query on “Black girls” still returns pornography. The CelebA face data set has labels of “big nose” and “big lips” that are disproportionately assigned to darker-skinned female faces (...)

    #algorithme #racisme #données #biais #discrimination

  • Inside NSO, Israel’s billion-dollar spyware giant
    https://www.technologyreview.com/2020/08/19/1006458/nso-spyware-controversy-pegasus-human-rights

    The world’s most notorious surveillance company says it wants to clean up its act. Go on, we’re listening.

    Maâti Monjib speaks slowly, like a man who knows he’s being listened to.

    It’s the day of his 58th birthday when we speak, but there’s little celebration in his voice. “The surveillance is hellish,” Monjib tells me. “It is really difficult. It controls everything I do in my life.”

    A history professor at the University of Mohammed V in Rabat, Morocco, Monjib vividly remembers the day in 2017 when his life changed. Charged with endangering state security by the government he has fiercely and publicly criticized, he was sitting outside a courtroom when his iPhone suddenly lit up with a series of text messages from numbers he didn’t recognize. They contained links to salacious news, petitions, and even Black Friday shopping deals.

    A month later, an article accusing him of treason appeared on a popular national news site with close ties to Morocco’s royal rulers. Monjib was used to attacks, but now it seemed his harassers knew everything about him: another article included information about a pro-democracy event he was set to attend but had told almost no one about. One story even proclaimed that the professor “has no secrets from us.”

    He’d been hacked. The messages had all led to websites that researchers say were set up as lures to infect visitors’ devices with Pegasus, the most notorious spyware in the world.

    Pegasus is the blockbuster product of NSO Group, a secretive billion-dollar Israeli surveillance company. It is sold to law enforcement and intelligence agencies around the world, which use the company’s tools to choose a human target, infect the person’s phone with the spyware, and then take over the device. Once Pegasus is on your phone, it is no longer your phone.

    NSO sells Pegasus with the same pitch arms dealers use to sell conventional weapons, positioning it as a crucial aid in the hunt for terrorists and criminals. In an age of ubiquitous technology and strong encryption, such “lawful hacking” has emerged as a powerful tool for public safety when law enforcement needs access to data. NSO insists that the vast majority of its customers are European democracies, although since it doesn’t release client lists and the countries themselves remain silent, that has never been verified.

    Monjib’s case, however, is one of a long list of incidents in which Pegasus has been used as a tool of oppression. It has been linked to cases including the murder of Saudi journalist Jamal Khashoggi, the targeting of scientists and campaigners pushing for political reform in Mexico, and Spanish government surveillance of Catalan separatist politicians. Mexico and Spain have denied using Pegasus to spy on opponents, but accusations that they have done so are backed by substantial technical evidence.

    Some of that evidence is contained in a lawsuit filed last October in California by WhatsApp and its parent company, Facebook, alleging that Pegasus manipulated WhatsApp’s infrastructure to infect more than 1,400 cell phones. Investigators at Facebook found more than 100 human rights defenders, journalists, and public figures among the targets, according to court documents. Each call that was picked up, they discovered, sent malicious code through WhatsApp’s infrastructure and caused the recipient’s phone to download spyware from servers owned by NSO. This, WhatsApp argued, was a violation of American law.

    NSO has long faced such accusations with silence. Claiming that much of its business is an Israeli state secret, it has offered precious little public detail about its operations, customers, or safeguards.

    Now, though, the company suggests things are changing. In 2019, NSO, which was owned by a private equity firm, was sold back to its founders and another private equity firm, Novalpina, for $1 billion. The new owners decided on a fresh strategy: emerge from the shadows. The company hired elite public relations firms, crafted new human rights policies, and developed new self-governance documents. It even began showing off some of its other products, such as a covid-19 tracking system called Fleming, and Eclipse, which can hack drones deemed a security threat.

    Over several months, I’ve spoken with NSO leadership to understand how the company works and what it says it is doing to prevent human rights abuses carried out using its tools. I have spoken to its critics, who see it as a danger to democratic values; to those who urge more regulation of the hacking business; and to the Israeli regulators responsible for governing it today. The company’s leaders talked about NSO’s future and its policies and procedures for dealing with problems, and the company shared documents that detail its relationship with the agencies to which it sells Pegasus and other tools. What I found was a thriving arms dealer—inside the company, employees acknowledge that Pegasus is a genuine weapon—struggling with new levels of scrutiny that threaten the foundations of its entire industry.
    “A difficult task”

    From the first day Shmuel Sunray joined NSO as its general counsel, he faced one international incident after another. Hired just days after WhatsApp’s lawsuit was filed, he found other legal problems waiting on his desk as soon as he arrived. They all centered on the same basic accusation: NSO Group’s hacking tools are sold to, and can be abused by, rich and repressive regimes with little or no accountability.

    Sunray had plenty of experience with secrecy and controversy: his previous job was as vice president of a major weapons manufacturer. Over several conversations, he was friendly as he told me that he’s been instructed by the owners to change NSO’s culture and operations, making it more transparent and trying to prevent human rights abuses from happening. But he was also obviously frustrated by the secrecy that he felt prevented him from responding to critics.

    “It’s a difficult task,” Sunray told me over the phone from the company’s headquarters in Herzliya, north of Tel Aviv. “We understand the power of the tool; we understand the impact of misuse of the tool. We’re trying to do the right thing. We have real challenges dealing with government, intelligence agencies, confidentiality, operational necessities, operational limitations. It’s not a classic case of human rights abuse by a company, because we don’t operate the systems—we’re not involved in actual operations of the systems—but we understand there is a real risk of misuse from the customers. We’re trying to find the right balance.”

    This underpins NSO’s basic argument, one that is common among weapons manufacturers: the company is the creator of a technology that governments use, but it doesn’t attack anyone itself, so it can’t be held responsible.

    Still, according to Sunray, there are several layers of protection in place to try to make sure the wrong people don’t have access.
    Making a sale

    Like most other countries, Israel has export controls that require weapons manufacturers to be licensed and subject to government oversight. In addition, NSO does its own due diligence, says Sunray: its staff examine a country, look at its human rights record, and scrutinize its relationship with Israel. They assess the specific agency’s track record on corruption, safety, finance, and abuse—as well as factoring in how much it needs the tool.

    Sometimes negatives are weighed against positives. Morocco, for example, has a worsening human rights record but a lengthy history of cooperating with Israel and the West on security, as well as a genuine terrorism problem, so a sale was reportedly approved. By contrast, NSO has said that China, Russia, Iran, Cuba, North Korea, Qatar, and Turkey are among 21 nations that will never be customers.

    Finally, before a sale is made, NSO’s governance, risk, and compliance committee has to sign off. The company says the committee, made up of managers and shareholders, can decline sales or add conditions, such as technological restrictions, that are decided case by case.
    Preventing abuse

    Once a sale is agreed to, the company says, technological guardrails prevent certain kinds of abuse. For example, Pegasus does not allow American phone numbers to be infected, NSO says, and infected phones cannot even be physically located in the United States: if one does find itself within American borders, the Pegasus software is supposed to self-destruct.

    NSO says Israeli phone numbers are among others also protected, though who else gets protection and why remains unclear.

    When a report of abuse comes in, an ad hoc team of up to 10 NSO employees is assembled to investigate. They interview the customer about the allegations, and they request Pegasus data logs. These logs don’t contain the content the spyware extracted, like chats or emails—NSO insists it never sees specific intelligence—but do include metadata such as a list of all the phones the spyware tried to infect and their locations at the time.

    According to one recent contract I obtained, customers must “use the system only for the detection, prevention, and investigation of crimes and terrorism and ensure the system will not be used for human rights violations.” They must notify the company of potential misuse. NSO says it has terminated three contracts in the past for infractions including abuse of Pegasus, but it refuses to say which countries or agencies were involved or who the victims were.

    “We’re not naïve”

    Lack of transparency is not the only problem: the safeguards have limits. While the Israeli government can revoke NSO’s license for violations of export law, the regulators do not take it on themselves to look for abuse by potential customers and aren’t involved in the company’s abuse investigations.

    Many of the other procedures are merely reactive as well. NSO has no permanent internal abuse team, unlike almost any other billion-dollar tech firm, and most of its investigations are spun up only when an outside source such as Amnesty International or Citizen Lab claims there has been malfeasance. NSO staff interview the agencies and customers under scrutiny but do not talk to the alleged victims, and while the company often disputes the technical reports offered as evidence, it also claims that both state secrecy and business confidentiality prevent it from sharing more information.

    The Pegasus logs that are crucial to any abuse inquiry also raise plenty of questions. NSO Group’s customers are hackers who work for spy agencies; how hard would it be for them to tamper with the logs? In a statement, the company insisted this isn’t possible but declined to offer details.

    If the logs aren’t disputed, NSO and its customers will decide together whether targets are legitimate, whether genuine crimes have been committed, and whether surveillance was done under due process of law or whether autocratic regimes spied on opponents.

    Sunray, audibly exasperated, says he feels as if secrecy is forcing him to operate with his hands tied behind his back.

    “It’s frustrating,” he told me. “We’re not naïve. There have been misuses. There will be misuses. We sell to many governments. Even the US government—no government is perfect. Misuse can happen, and it should be addressed.”

    But Sunray also returns to the company’s standard response, the argument that underpins its defense in the WhatsApp lawsuit: NSO is a manufacturer, but it’s not the operator of the spyware. We built it but they did the hacking—and they are sovereign nations.

    That’s not enough for many critics. “No company that believes it can be the independent watchdog of their own products ever convinces me,” says Marietje Schaake, a Dutch politician and former member of the European Parliament. “The whole idea that they have their own mechanisms while they have no problem selling commercial spyware to whoever wants to buy it, knowing that it’s used against human rights defenders and journalists—I think it shows the lack of responsibility on the part of this company more than anything.”

    So why the internal push for more transparency now? Because the deluge of technical reports from human rights groups, the WhatsApp lawsuit, and increasing governmental scrutiny threaten NSO’s status quo. And if there is going to be a new debate over how the industry gets regulated, it pays to have a powerful voice.
    Growing scrutiny

    Lawful hacking and cyber-espionage have grown enormously as a business over the past decade, with no signs of retreat. NSO Group’s previous owners bought the company in 2014 for $130 million, less than one-seventh of the valuation it was sold for last year. The rest of the industry is expanding too, profiting from the spread of communications technology and deepening global instability. “There’s no doubt that any state has the right to buy this technology to fight crime and terrorism,” says Amnesty International’s deputy director, Danna Ingleton. “States are rightfully and lawfully able to use these tools. But that needs to be accompanied more with a regulatory system that prevents abuses and provides an accountability mechanism when abuse has happened.” Shining a much brighter light on the hacking industry, she argues, will allow for better regulation and more accountability.

    Earlier this year Amnesty International was in court in Israel arguing that the Ministry of Defense should revoke NSO’s license because of abuses of Pegasus. But just as the case was starting, officials from Amnesty and 29 other petitioners were told to leave the courtroom: a gag order was being placed on the proceedings at the ministry’s urging. Then, in July, a judge rejected the case outright.

    “I do not believe as a matter of principle and as a matter of law that NSO can claim a complete lack of responsibility for the way their tools are being used,” says United Nations special rapporteur Agnès Callamard. “That’s not how it works under international law.”

    Callamard advises the UN on extrajudicial executions and has been vocal about NSO Group and the spyware industry ever since it emerged that Pegasus was being used to spy on friends and associates of Khashoggi shortly before he was murdered. For her, the issue has life-or-death consequences.

    “We’re not calling for something radically new,” says Callamard. “We are saying that what’s in place at the moment is proving insufficient, and therefore governments or regulatory agencies need to move into a different gear quickly. The industry is expanding, and it should expand on the basis of the proper framework to regulate misuse. It’s important for global peace.”

    There have been calls for a temporary moratorium on sales until stronger regulation is enacted, but it’s not clear what that legal framework would look like. Unlike conventional arms, which are subject to various international laws, cyber weapons are currently not regulated by any worldwide arms control agreement. And while nonproliferation treaties have been suggested, there is little clarity on how they would measure existing capabilities, how monitoring or enforcement would work, or how the rules would keep up with rapid technological developments. Instead, most scrutiny today is happening at the national legal level.

    In the US, both the FBI and Congress are looking into possible hacks of American targets, while an investigation led by Senator Ron Wyden’s office wants to find out whether any Americans are involved in exporting surveillance technology to authoritarian governments. A recent draft US intelligence bill would require a government report on commercial spyware and surveillance technology.

    The WhatsApp lawsuit, meanwhile, has taken aim close to the heart of NSO’s business. The Silicon Valley giant argues that by targeting California residents—that is, WhatsApp and Facebook—NSO has given the court in San Francisco jurisdiction, and that the judge in the case can bar the Israeli company from future attempts to misuse WhatsApp’s and Facebook’s networks. That opens the door to an awful lot of possibilities: Apple, whose iPhone has been a paramount NSO target, could feasibly mount a similar legal attack. Google, too, has spotted NSO targeting Android devices.

    And financial damages are not the only sword hanging over NSO’s head. Such lawsuits also bring with them the threat of courtroom discovery, which has the potential to bring details of NSO’s business deals and customers into the public eye.

    “A lot depends on exactly how the court rules and how broadly it characterizes the violation NSO is alleged to have committed here,” says Alan Rozenshtein, a former Justice Department lawyer now at the University of Minnesota Law School. “At a minimum, if NSO loses this case, it calls into question all of those companies that make their products or make their living by finding flaws in messaging software and providing services exploiting those flaws. This will create enough legal uncertainty that I would imagine these would-be clients would think twice before contracting with them. You don’t know if the company will continue to operate, if they’ll get dragged to court, if your secrets will be exposed.” NSO declined to comment on the alleged WhatsApp hack, since it is still an active case.
    “We are always spied on”

    In Morocco, Maâti Monjib was subjected to at least four more hacking attacks throughout 2019, each more advanced than the one before. At some point, his phone browser was invisibly redirected to a suspicious domain that researchers suspect was used to silently install malware. Instead of something like a text message that can raise the alarm and leave a visible trace, this was a much quieter network injection attack, a tactic valued because it’s almost imperceptible except to expert investigators.

    On September 13, 2019, Monjib had lunch at home with his friend Omar Radi, a Moroccan journalist who is one of the regime’s sharpest critics. That very day, an investigation later found, Radi was hit with the same kind of network injection attacks that had snared Monjib. The hacking campaign against Radi lasted at least into January 2020, Amnesty International researchers said. He’s been subject to regular police harassment ever since.

    At least seven more Moroccans received warnings from WhatsApp about Pegasus being used to spy on their phones, including human rights activists, journalists, and politicians. Are these the kinds of legitimate spying targets—the terrorists and criminals—laid out in the contract that Morocco and all NSO customers sign?

    In December, Monjib and the other victims sent a letter to Morocco’s data protection authority asking for an investigation and action. Nothing formally came of it, but one of the men, the pro-democracy economist Fouad Abdelmoumni, says his friends high up at the agency told him the letter was hopeless and urged him to drop the matter. The Moroccan government, meanwhile, has responded by threatening to expel Amnesty International from the country.

    What’s happening in Morocco is emblematic of what’s happening around the world. While it’s clear that democracies are major beneficiaries of lawful hacking, a long and growing list of credible, detailed, technical, and public investigations shows Pegasus being misused by authoritarian regimes with long records of human rights abuse.

    “Morocco is a country under an authoritarian regime who believe people like Monjib and myself have to be destroyed,” says Abdelmoumni. “To destroy us, having access to all information is key. We always consider that we are spied on. All of our information is in the hands of the palace.”

    #Apple #NSO #Facebook #WhatsApp #iPhone #Pegasus #smartphone #spyware #activisme #journalisme #écoutes #hacking #surveillance #Amnesty (...)

    ##CitizenLab

  • Inside China’s unexpected quest to protect data privacy
    https://www.technologyreview.com/2020/08/19/1006441/china-data-privacy-hong-yanqing-gdpr

    A new privacy law would look a lot like Europe’s GDPR—but will it restrict state surveillance?

    Late in the summer of 2016, Xu Yuyu received a call that promised to change her life. Her college entrance examination scores, she was told, had won her admission to the English department of the Nanjing University of Posts and Telecommunications. Xu lived in the city of Linyi in Shandong, a coastal province in China, southeast of Beijing. She came from a poor family, singularly reliant on her father’s meager income. But her parents had painstakingly saved for her tuition; very few of her relatives had ever been to college.

    A few days later, Xu received another call telling her she had also been awarded a scholarship. To collect the 2,600 yuan ($370), she needed to first deposit a 9,900 yuan “activation fee” into her university account. Having applied for financial aid only days before, she wired the money to the number the caller gave her. That night, the family rushed to the police to report that they had been defrauded. Xu’s father later said his greatest regret was asking the officer whether they might still get their money back. The answer—“Likely not”—only exacerbated Xu’s devastation. On the way home she suffered a heart attack. She died in a hospital two days later.

    An investigation determined that while the first call had been genuine, the second had come from scammers who’d paid a hacker for Xu’s number, admissions status, and request for financial aid.

    For Chinese consumers all too familiar with having their data stolen, Xu became an emblem. Her death sparked a national outcry for greater data privacy protections. Only months before, the European Union had adopted the General Data Protection Regulation (GDPR), an attempt to give European citizens control over how their personal data is used. Meanwhile, Donald Trump was about to win the American presidential election, fueled in part by a campaign that relied extensively on voter data. That data included details on 87 million Facebook accounts, illicitly obtained by the consulting firm Cambridge Analytica. Chinese regulators and legal scholars followed these events closely.

    In the West, it’s widely believed that neither the Chinese government nor Chinese people care about privacy. US tech giants wield this supposed indifference to argue that onerous privacy laws would put them at a competitive disadvantage to Chinese firms. In his 2018 Senate testimony after the Cambridge Analytica scandal, Facebook’s CEO, Mark Zuckerberg, urged regulators not to clamp down too hard on technologies like face recognition. “We still need to make it so that American companies can innovate in those areas,” he said, “or else we’re going to fall behind Chinese competitors and others around the world.”

    In reality, this picture of Chinese attitudes to privacy is out of date. Over the last few years the Chinese government, seeking to strengthen consumers’ trust and participation in the digital economy, has begun to implement privacy protections that in many respects resemble those in America and Europe today.

    Even as the government has strengthened consumer privacy, however, it has ramped up state surveillance. It uses DNA samples and other biometrics, like face and fingerprint recognition, to monitor citizens throughout the country. It has tightened internet censorship and developed a “social credit” system, which punishes behaviors the authorities say weaken social stability. During the pandemic, it deployed a system of “health code” apps to dictate who could travel, based on their risk of carrying the coronavirus. And it has used a slew of invasive surveillance technologies in its harsh repression of Muslim Uighurs in the northwestern region of Xinjiang.

    This paradox has become a defining feature of China’s emerging data privacy regime, says Samm Sacks, a leading China scholar at Yale and New America, a think tank in Washington, DC. It raises a question: Can a system endure with strong protections for consumer privacy, but almost none against government snooping? The answer doesn’t affect only China. Its technology companies have an increasingly global footprint, and regulators around the world are watching its policy decisions.

    November 2000 arguably marks the birth of the modern Chinese surveillance state. That month, the Ministry of Public Security, the government agency that oversees daily law enforcement, announced a new project at a trade show in Beijing. The agency envisioned a centralized national system that would integrate both physical and digital surveillance using the latest technology. It was named Golden Shield.

    Eager to cash in, Western companies including American conglomerate Cisco, Finnish telecom giant Nokia, and Canada’s Nortel Networks worked with the agency on different parts of the project. They helped construct a nationwide database for storing information on all Chinese adults, and developed a sophisticated system for controlling information flow on the internet—what would eventually become the Great Firewall. Much of the equipment involved had in fact already been standardized to make surveillance easier in the US—a consequence of the Communications Assistance for Law Enforcement Act of 1994.

    Despite the standardized equipment, the Golden Shield project was hampered by data silos and turf wars within the Chinese government. Over time, the ministry’s pursuit of a singular, unified system devolved into two separate operations: a surveillance and database system, devoted to gathering and storing information, and the social-credit system, which some 40 government departments participate in. When people repeatedly do things that aren’t allowed—from jaywalking to engaging in business corruption—their social-credit score falls and they can be blocked from things like buying train and plane tickets or applying for a mortgage.

    In the same year the Ministry of Public Security announced Golden Shield, Hong Yanqing entered the ministry’s police university in Beijing. But after seven years of training, having received his bachelor’s and master’s degrees, Hong began to have second thoughts about becoming a policeman. He applied instead to study abroad. By the fall of 2007, he had moved to the Netherlands to begin a PhD in international human rights law, approved and subsidized by the Chinese government.

    Over the next four years, he familiarized himself with the Western practice of law through his PhD research and a series of internships at international organizations. He worked at the International Labor Organization on global workplace discrimination law and the World Health Organization on road safety in China. “It’s a very legalistic culture in the West—that really strikes me. People seem to go to court a lot,” he says. “For example, for human rights law, most of the textbooks are about the significant cases in court resolving human rights issues.”

    Hong found this to be strangely inefficient. He saw going to court as a final resort for patching up the law’s inadequacies, not a principal tool for establishing it in the first place. Legislation crafted more comprehensively and with greater forethought, he believed, would achieve better outcomes than a system patched together through a haphazard accumulation of case law, as in the US.

    After graduating, he carried these ideas back to Beijing in 2012, on the eve of Xi Jinping’s ascent to the presidency. Hong worked at the UN Development Program and then as a journalist for the People’s Daily, the largest newspaper in China, which is owned by the government.

    Xi began to rapidly expand the scope of government censorship. Influential commentators, or “Big Vs”—named for their verified accounts on social media—had grown comfortable criticizing and ridiculing the Chinese Communist Party. In the fall of 2013, the party arrested hundreds of microbloggers for what it described as “malicious rumor-mongering” and paraded a particularly influential one on national television to make an example of him.

    The moment marked the beginning of a new era of censorship. The following year, the Cyberspace Administration of China was founded. The new central agency was responsible for everything involved in internet regulation, including national security, media and speech censorship, and data protection. Hong left the People’s Daily and joined the agency’s department of international affairs. He represented it at the UN and other global bodies and worked on cybersecurity cooperation with other governments.

    By July 2015, the Cyberspace Administration had released a draft of its first law. The Cybersecurity Law, which entered into force in June of 2017, required that companies obtain consent from people to collect their personal information. At the same time, it tightened internet censorship by banning anonymous users—a provision enforced by regular government inspections of data from internet service providers.

    In the spring of 2016, Hong sought to return to academia, but the agency asked him to stay. The Cybersecurity Law had purposely left the regulation of personal data protection vague, but consumer data breaches and theft had reached unbearable levels. A 2016 study by the Internet Society of China found that 84% of those surveyed had suffered some leak of their data, including phone numbers, addresses, and bank account details. This was spurring a growing distrust of digital service providers that required access to personal information, such as ride-hailing, food-delivery, and financial apps. Xu Yuyu’s death poured oil on the flames.

    The government worried that such sentiments would weaken participation in the digital economy, which had become a central part of its strategy for shoring up the country’s slowing economic growth. The advent of GDPR also made the government realize that Chinese tech giants would need to meet global privacy norms in order to expand abroad.

    Hong was put in charge of a new task force that would write a Personal Information Protection Specification (PIPS) to help solve these challenges. The document, though nonbinding, would tell companies how regulators intended to implement the Cybersecurity Law. In the process, the government hoped, it would nudge them to adopt new norms for data protection by themselves.

    Hong’s task force set about translating every relevant document they could find into Chinese. They translated the privacy guidelines put out by the Organization for Economic Cooperation and Development and by its counterpart, the Asia-Pacific Economic Cooperation; they translated GDPR and the California Consumer Privacy Act. They even translated the 2012 White House Consumer Privacy Bill of Rights, introduced by the Obama administration but never made into law. All the while, Hong met regularly with European and American data protection regulators and scholars.

    Bit by bit, from the documents and consultations, a general choice emerged. “People were saying, in very simplistic terms, ‘We have a European model and the US model,’” Hong recalls. The two approaches diverged substantially in philosophy and implementation. Which one to follow became the task force’s first debate.

    At the core of the European model is the idea that people have a fundamental right to have their data protected. GDPR places the burden of proof on data collectors, such as companies, to demonstrate why they need the data. By contrast, the US model privileges industry over consumers. Businesses define for themselves what constitutes reasonable data collection; consumers only get to choose whether to use that business. The laws on data protection are also far more piecemeal than in Europe, divvied up among sectoral regulators and specific states.

    At the time, without a central law or single agency in charge of data protection, China’s model more closely resembled the American one. The task force, however, found the European approach compelling. “The European rule structure, the whole system, is more clear,” Hong says.

    But most of the task force members were representatives from Chinese tech giants, like Baidu, Alibaba, and Huawei, and they felt that GDPR was too restrictive. So they adopted its broad strokes—including its limits on data collection and its requirements on data storage and data deletion—and then loosened some of its language. GDPR’s principle of data minimization, for example, maintains that only necessary data should be collected in exchange for a service. PIPS allows room for other data collection relevant to the service provided.

    PIPS took effect in May 2018, the same month that GDPR finally took effect. But as Chinese officials watched the US upheaval over the Facebook and Cambridge Analytica scandal, they realized that a nonbinding agreement would not be enough. The Cybersecurity Law didn’t have a strong mechanism for enforcing data protection. Regulators could only fine violators up to 1,000,000 yuan ($140,000), an inconsequential amount for large companies. Soon after, the National People’s Congress, China’s top legislative body, voted to begin drafting a Personal Information Protection Law within its current five-year legislative period, which ends in 2023. It would strengthen data protection provisions, provide for tougher penalties, and potentially create a new enforcement agency.

    After Cambridge Analytica, says Hong, “the government agency understood, ‘Okay, if you don’t really implement or enforce those privacy rules, then you could have a major scandal, even affecting political things.’”

    The local police investigation of Xu Yuyu’s death eventually identified the scammers who had called her. It had been a gang of seven who’d cheated many other victims out of more than 560,000 yuan using illegally obtained personal information. The court ruled that Xu’s death had been a direct result of the stress of losing her family’s savings. Because of this, and his role in orchestrating tens of thousands of other calls, the ringleader, Chen Wenhui, 22, was sentenced to life in prison. The others received sentences between three and 15 years.

    Emboldened, Chinese media and consumers began more openly criticizing privacy violations. In March 2018, internet search giant Baidu’s CEO, Robin Li, sparked social-media outrage after suggesting that Chinese consumers were willing to “exchange privacy for safety, convenience, or efficiency.” “Nonsense,” wrote a social-media user, later quoted by the People’s Daily. “It’s more accurate to say [it is] impossible to defend [our privacy] effectively.”

    In late October 2019, social-media users once again expressed anger after photos began circulating of a school’s students wearing brainwave-monitoring headbands, supposedly to improve their focus and learning. The local educational authority eventually stepped in and told the school to stop using the headbands because they violated students’ privacy. A week later, a Chinese law professor sued a Hangzhou wildlife zoo for replacing its fingerprint-based entry system with face recognition, saying the zoo had failed to obtain his consent for storing his image.

    But the public’s growing sensitivity to infringements of consumer privacy has not led to many limits on state surveillance, nor even much scrutiny of it. As Maya Wang, a researcher at Human Rights Watch, points out, this is in part because most Chinese citizens don’t know the scale or scope of the government’s operations. In China, as in the US and Europe, there are broad public and national security exemptions to data privacy laws. The Cybersecurity Law, for example, allows the government to demand data from private actors to assist in criminal legal investigations. The Ministry of Public Security also accumulates massive amounts of data on individuals directly. As a result, data privacy in industry can be strengthened without significantly limiting the state’s access to information.

    The onset of the pandemic, however, has disturbed this uneasy balance.

    On February 11, Ant Financial, a financial technology giant headquartered in Hangzhou, a city southwest of Shanghai, released an app-building platform called AliPay Health Code. The same day, the Hangzhou government released an app it had built using the platform. The Hangzhou app asked people to self-report their travel and health information, and then gave them a color code of red, yellow, or green. Suddenly Hangzhou’s 10 million residents were all required to show a green code to take the subway, shop for groceries, or enter a mall. Within a week, local governments in over 100 cities had used AliPay Health Code to develop their own apps. Rival tech giant Tencent quickly followed with its own platform for building them.
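
    The exact scoring rules behind these health-code apps were never published. Purely as an illustration of the kind of rule-based triage described above, a minimal sketch might look like the following; the field names, thresholds, and 14-day window are assumptions for the example, not AliPay’s or Hangzhou’s actual logic.

    ```python
    # Hypothetical sketch only: the real AliPay Health Code rules were never
    # made public. It maps self-reported travel and health data to a color code.
    from dataclasses import dataclass
    from datetime import date, timedelta
    from typing import Optional

    @dataclass
    class SelfReport:
        has_symptoms: bool                # fever, cough, etc., as self-reported
        visited_high_risk_area: bool      # travel to a designated outbreak area
        last_travel_date: Optional[date]  # most recent trip outside the city

    def health_code(report: SelfReport, today: date) -> str:
        """Return 'red', 'yellow', or 'green' under assumed, not official, rules."""
        if report.has_symptoms or report.visited_high_risk_area:
            return "red"     # barred from the subway, shops, and malls
        if report.last_travel_date and (today - report.last_travel_date) < timedelta(days=14):
            return "yellow"  # recent travel: restricted pending a quarantine period
        return "green"       # the code residents must show to move around the city

    print(health_code(SelfReport(False, False, None), date(2020, 2, 18)))  # -> green
    ```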

    The apps made visible a worrying level of state surveillance and sparked a new wave of public debate. In March, Hu Yong, a journalism professor at Beijing University and an influential blogger on Weibo, argued that the government’s pandemic data collection had crossed a line. Not only had it led to instances of information being stolen, he wrote, but it had also opened the door to such data being used beyond its original purpose. “Has history ever shown that once the government has surveillance tools, it will maintain modesty and caution when using them?” he asked.

    Indeed, in late May, leaked documents revealed plans from the Hangzhou government to make a more permanent health-code app that would score citizens on behaviors like exercising, smoking, and sleeping. After a public outcry, city officials canceled the project. That state-run media had also published stories criticizing the app likely helped.

    The debate quickly made its way to the central government. That month, the National People’s Congress announced it intended to fast-track the Personal Information Protection Law. The scale of the data collected during the pandemic had made strong enforcement more urgent, delegates said, and highlighted the need to clarify the scope of the government’s data collection and data deletion procedures during special emergencies. By July, the legislative body had proposed a new “strict approval” process for government authorities to undergo before collecting data from private-sector platforms. The language again remains vague, to be fleshed out later—perhaps through another nonbinding document—but this move “could mark a step toward limiting the broad scope” of existing government exemptions for national security, wrote Sacks and fellow China scholars at New America.

    Hong similarly believes the discrepancy between rules governing industry and government data collection won’t last, and the government will soon begin to limit its own scope. “We cannot simply address one actor while leaving the other out,” he says. “That wouldn’t be a very scientific approach.”

    Other observers disagree. The government could easily make superficial efforts to address public backlash against visible data collection without really touching the core of the Ministry of Public Security’s national operations, says Wang, of Human Rights Watch. She adds that any laws would likely be enforced unevenly: “In Xinjiang, Turkic Muslims have no say whatsoever in how they’re treated.”

    Still, Hong remains an optimist. In July, he started a job teaching law at Beijing University, and he now maintains a blog on cybersecurity and data issues. Monthly, he meets with a budding community of data protection officers in China, who carefully watch how data governance is evolving around the world.

    #criminalité #Nokia_Siemens #fraude #Huawei #payement #Cisco #CambridgeAnalytica/Emerdata #Baidu #Alibaba #domination #bénéfices #BHATX #BigData #lutte #publicité (...)

    ##criminalité ##CambridgeAnalytica/Emerdata ##publicité ##[fr]Règlement_Général_sur_la_Protection_des_Données__RGPD_[en]General_Data_Protection_Regulation__GDPR_[nl]General_Data_Protection_Regulation__GDPR_ ##Nortel_Networks ##Facebook ##biométrie ##consommation ##génétique ##consentement ##facial ##reconnaissance ##empreintes ##Islam ##SocialCreditSystem ##surveillance ##TheGreatFirewallofChina ##HumanRightsWatch

  • “I started crying”: Inside Timnit Gebru’s last days at Google | MIT Technology Review
    https://www.technologyreview.com/2020/12/16/1014634/google-ai-ethics-lead-timnit-gebru-tells-story

    By now, we’ve all heard some version of the story. On December 2, after a protracted disagreement over the release of a research paper, Google forced out its ethical AI co-lead, Timnit Gebru. The paper was on the risks of large language models, AI models trained on staggering amounts of text data, which are a line of research core to Google’s business. Gebru, a leading voice in AI ethics, was one of the only Black women at Google Research.

    The move has since sparked a debate about growing corporate influence over AI, the long-standing lack of diversity in tech, and what it means to do meaningful AI ethics research. As of December 15, over 2,600 Google employees and 4,300 others in academia, industry, and civil society had signed a petition denouncing the dismissal of Gebru, calling it “unprecedented research censorship” and “an act of retaliation.”

    Gebru is known for foundational work in revealing AI discrimination, developing methods for documenting and auditing AI models, and advocating for greater diversity in research. In 2016, she cofounded the nonprofit Black in AI, which has become a central resource for civil rights activists, labor organizers, and leading AI ethics researchers, cultivating and highlighting Black AI research talent.

    Then in that document, I wrote that this has been extremely disrespectful to the Ethical AI team, and there needs to be a conversation, not just with Jeff and our team, and Megan and our team, but the whole of Research about respect for researchers and how to have these kinds of discussions. Nope. No engagement with that whatsoever.

    I cried, by the way. When I had that first meeting, which was Thursday before Thanksgiving, a day before I was going to go on vacation—when Megan told us that you have to retract this paper, I started crying. I was so upset because I said, I’m so tired of constant fighting here. I thought that if I just ignored all of this DEI [diversity, equity, and inclusion] hypocrisy and other stuff, and I just focused on my work, then at least I could get my work done. And now you’re coming for my work. So I literally started crying.

    You’ve mentioned that this is not just about you; it’s not just about Google. It’s a confluence of so many different issues. What does this particular experience say about tech companies’ influence on AI in general, and their capacity to actually do meaningful work in AI ethics?
    You know, there were a number of people comparing Big Tech and Big Tobacco, and how they were censoring research even though they knew the issues for a while. I push back on the academia-versus-tech dichotomy, because they both have the same sort of very racist and sexist paradigm. The paradigm that you learn and take to Google or wherever starts in academia. And people move. They go to industry and then they go back to academia, or vice versa. They’re all friends; they are all going to the same conferences.

    I don’t think the lesson is that there should be no AI ethics research in tech companies, but I think the lesson is that a) there needs to be a lot more independent research. We need to have more choices than just DARPA [the Defense Advanced Research Projects Agency] versus corporations. And b) there needs to be oversight of tech companies, obviously. At this point I just don’t understand how we can continue to think that they’re gonna self-regulate on DEI or ethics or whatever it is. They haven’t been doing the right thing, and they’re not going to do the right thing.

    I think academic institutions and conferences need to rethink their relationships with big corporations and the amount of money they’re taking from them. Some people were even wondering, for instance, if some of these conferences should have a “no censorship” code of conduct or something like that. So I think that there is a lot that these conferences and academic institutions can do. There’s too much of an imbalance of power right now.

    #Intelligence_artificielle #Timnit_Gebru #Google #Ethique

  • The coming war on the hidden algorithms that trap people in poverty | MIT Technology Review
    https://www.technologyreview.com/2020/12/04/1013068/algorithms-create-a-poverty-trap-lawyers-fight-back

    A growing group of lawyers are uncovering, navigating, and fighting the automated systems that deny the poor housing, jobs, and basic services.

    Credit-scoring algorithms are not the only ones that affect people’s economic well-being and access to basic services. Algorithms now decide which children enter foster care, which patients receive medical care, which families get access to stable housing. Those of us with means can pass our lives unaware of any of this. But for low-income individuals, the rapid growth and adoption of automated decision-making systems has created a hidden web of interlocking traps.

    Fortunately, a growing group of civil lawyers are beginning to organize around this issue. Borrowing a playbook from the criminal defense world’s pushback against risk-assessment algorithms, they’re seeking to educate themselves on these systems, build a community, and develop litigation strategies. “Basically every civil lawyer is starting to deal with this stuff, because all of our clients are in some way or another being touched by these systems,” says Michele Gilman, a clinical law professor at the University of Baltimore. “We need to wake up, get training. If we want to be really good holistic lawyers, we need to be aware of that.”

    “This is happening across the board to our clients,” she says. “They’re enmeshed in so many different algorithms that are barring them from basic services. And the clients may not be aware of that, because a lot of these systems are invisible.”

    Government agencies, on the other hand, are driven to adopt algorithms when they want to modernize their systems. The push to adopt web-based apps and digital tools began in the early 2000s and has continued with a move toward more data-driven automated systems and AI. There are good reasons to seek these changes. During the pandemic, many unemployment benefit systems struggled to handle the massive volume of new requests, leading to significant delays. Modernizing these legacy systems promises faster and more reliable results.

    But the software procurement process is rarely transparent, and thus lacks accountability. Public agencies often buy automated decision-making tools directly from private vendors. The result is that when systems go awry, the individuals affected—and their lawyers—are left in the dark. “They don’t advertise it anywhere,” says Julia Simon-Mishel, an attorney at Philadelphia Legal Assistance. “It’s often not written in any sort of policy guides or policy manuals. We’re at a disadvantage.”

    The lack of public vetting also makes the systems more prone to error. One of the most egregious malfunctions happened in Michigan in 2013. After a big effort to automate the state’s unemployment benefits system, the algorithm incorrectly flagged over 34,000 people for fraud. “It caused a massive loss of benefits,” Simon-Mishel says. “There were bankruptcies; there were unfortunately suicides. It was a whole mess.”

    Low-income individuals bear the brunt of the shift toward algorithms. They are the people most vulnerable to temporary economic hardships that get codified into consumer reports, and the ones who need and seek public benefits. Over the years, Gilman has seen more and more cases where clients risk entering a vicious cycle. “One person walks through so many systems on a day-to-day basis,” she says. “I mean, we all do. But the consequences of it are much more harsh for poor people and minorities.”

    She brings up a current case in her clinic as an example. A family member lost work because of the pandemic and was denied unemployment benefits because of an automated system failure. The family then fell behind on rent payments, which led their landlord to sue them for eviction. While the eviction won’t be legal because of the CDC’s moratorium, the lawsuit will still be logged in public records. Those records could then feed into tenant-screening algorithms, which could make it harder for the family to find stable housing in the future. Their failure to pay rent and utilities could also be a ding on their credit score, which once again has repercussions. “If they are trying to set up cell-phone service or take out a loan or buy a car or apply for a job, it just has these cascading ripple effects,” Gilman says.

    “Every case is going to turn into an algorithm case”

    In September, Gilman, who is currently a faculty fellow at the Data and Society research institute, released a report documenting all the various algorithms that poverty lawyers might encounter. Called Poverty Lawgorithms, it’s meant to be a guide for her colleagues in the field. Divided into specific practice areas like consumer law, family law, housing, and public benefits, it explains how to deal with issues raised by algorithms and other data-driven technologies within the scope of existing laws.

    Report: https://datasociety.net/wp-content/uploads/2020/09/Poverty-Lawgorithms-20200915.pdf

    #Algorithme #Pauvreté #Credit_score #Notation

  • We read the paper that forced Timnit Gebru out of Google. Here’s what it says | MIT Technology Review
    https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/?truid=a497ecb44646822921c70e7e051f7f1a

    The company’s star ethics researcher highlighted the risks of large language models, which are key to Google’s business.
    by Karen Hao, December 4, 2020

    On the evening of Wednesday, December 2, Timnit Gebru, the co-lead of Google’s ethical AI team, announced via Twitter that the company had forced her out.

    Gebru, a widely respected leader in AI ethics research, is known for coauthoring a groundbreaking paper that showed facial recognition to be less accurate at identifying women and people of color, which means its use can end up discriminating against them. She also cofounded the Black in AI affinity group, and champions diversity in the tech industry. The team she helped build at Google is one of the most diverse in AI, and includes many leading experts in their own right. Peers in the field envied it for producing critical work that often challenged mainstream AI practices.

    A series of tweets, leaked emails, and media articles showed that Gebru’s exit was the culmination of a conflict over another paper she co-authored. Jeff Dean, the head of Google AI, told colleagues in an internal email (which he has since put online) that the paper “didn’t meet our bar for publication” and that Gebru had said she would resign unless Google met a number of conditions, which it was unwilling to meet. Gebru tweeted that she had asked to negotiate “a last date” for her employment after she got back from vacation. She was cut off from her corporate email account before her return.

    Online, many other leaders in the field of AI ethics are arguing that the company pushed her out because of the inconvenient truths that she was uncovering about a core line of its research—and perhaps its bottom line. More than 1,400 Google staff and 1,900 other supporters have also signed a letter of protest.

    Many details of the exact sequence of events that led up to Gebru’s departure are not yet clear; both she and Google have declined to comment beyond their posts on social media. But MIT Technology Review obtained a copy of the research paper from one of the co-authors, Emily M. Bender, a professor of computational linguistics at the University of Washington. Though Bender asked us not to publish the paper itself because the authors didn’t want such an early draft circulating online, it gives some insight into the questions Gebru and her colleagues were raising about AI that might be causing Google concern.

    Titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” the paper lays out the risks of large language models—AIs trained on staggering amounts of text data. These have grown increasingly popular—and increasingly large—in the last three years. They are now extraordinarily good, under the right conditions, at producing what looks like convincing, meaningful new text—and sometimes at estimating meaning from language. But, says the introduction to the paper, “we ask whether enough thought has been put into the potential risks associated with developing them and strategies to mitigate these risks.”
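
    To give a concrete sense of what “producing what looks like convincing, meaningful new text” means in practice, here is a minimal sketch using the open-source Hugging Face transformers library and the small public GPT-2 model. It is only an illustration of prompt continuation, not one of the Google models the paper discusses, and the prompt and sampling settings are arbitrary choices for the example.

    ```python
    # Minimal text-generation sketch with a small public model (GPT-2) via the
    # Hugging Face transformers library; it simply continues a prompt.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    result = generator(
        "Large language models raise questions about",
        max_new_tokens=40,   # length of the generated continuation
        do_sample=True,      # sample tokens rather than greedy-decode
        temperature=0.8,     # mild randomness in the output
    )
    print(result[0]["generated_text"])
    ```
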
    The paper

    The paper, which builds off the work of other researchers, presents the history of natural-language processing, an overview of four main risks of large language models, and suggestions for further research. Since the conflict with Google seems to be over the risks, we’ve focused on summarizing those here.
    Environmental and financial costs

    Training large AI models consumes a lot of computer processing power, and hence a lot of electricity. Gebru and her coauthors refer to a 2019 paper from Emma Strubell and her collaborators on the carbon emissions and financial costs of large language models. It found that their energy consumption and carbon footprint have been exploding since 2017, as models have been fed more and more data.

    Strubell’s study found that one language model with a particular type of “neural architecture search” (NAS) method would have produced the equivalent of 626,155 pounds (284 metric tons) of carbon dioxide—about the lifetime output of five average American cars. A version of Google’s language model, BERT, which underpins the company’s search engine, produced 1,438 pounds of CO2 equivalent in Strubell’s estimate—nearly the same as a roundtrip flight between New York City and San Francisco.
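
    As a quick sanity check on these figures, using only the numbers quoted above and the standard pounds-to-kilograms conversion, a few lines of arithmetic reproduce the stated totals; the per-car number is simply the NAS total divided by five, as the comparison implies.

    ```python
    # Sanity-check the unit conversions for the figures quoted above.
    LB_TO_KG = 0.453592

    nas_lb = 626_155                        # NAS training estimate, pounds of CO2e
    nas_tonnes = nas_lb * LB_TO_KG / 1000   # ~284 metric tons, as stated
    per_car_lb = nas_lb / 5                 # ~125,000 lb implied per car lifetime

    bert_lb = 1_438                         # BERT training estimate, pounds of CO2e
    bert_kg = bert_lb * LB_TO_KG            # ~652 kg; the article likens this to one
                                            # New York-San Francisco round-trip flight

    print(f"NAS: {nas_tonnes:.0f} t CO2e; implied per-car lifetime: {per_car_lb:,.0f} lb")
    print(f"BERT: {bert_kg:.0f} kg CO2e")
    ```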

    Gebru’s draft paper points out that the sheer resources required to build and sustain such large AI models means they tend to benefit wealthy organizations, while climate change hits marginalized communities hardest. “It is past time for researchers to prioritize energy efficiency and cost to reduce negative environmental impact and inequitable access to resources,” they write.
    Massive data, inscrutable models

    Large language models are also trained on exponentially increasing amounts of text. This means researchers have sought to collect all the data they can from the internet, so there’s a risk that racist, sexist, and otherwise abusive language ends up in the training data.

    An AI model taught to view racist language as normal is obviously bad. The researchers, though, point out a couple of more subtle problems. One is that shifts in language play an important role in social change; the MeToo and Black Lives Matter movements, for example, have tried to establish a new anti-sexist and anti-racist vocabulary. An AI model trained on vast swaths of the internet won’t be attuned to the nuances of this vocabulary and won’t produce or interpret language in line with these new cultural norms.

    It will also fail to capture the language and the norms of countries and peoples that have less access to the internet and thus a smaller linguistic footprint online. The result is that AI-generated language will be homogenized, reflecting the practices of the richest countries and communities.

    Moreover, because the training datasets are so large, it’s hard to audit them to check for these embedded biases. “A methodology that relies on datasets too large to document is therefore inherently risky,” the researchers conclude. “While documentation allows for potential accountability, [...] undocumented training data perpetuates harm without recourse.”
    Research opportunity costs

    The researchers summarize the third challenge as the risk of “misdirected research effort.” Though most AI researchers acknowledge that large language models don’t actually understand language and are merely excellent at manipulating it, Big Tech can make money from models that manipulate language more accurately, so it keeps investing in them. “This research effort brings with it an opportunity cost,” Gebru and her colleagues write. Not as much effort goes into working on AI models that might achieve understanding, or that achieve good results with smaller, more carefully curated datasets (and thus also use less energy).
    Illusions of meaning

    The final problem with large language models, the researchers say, is that because they’re so good at mimicking real human language, it’s easy to use them to fool people. There have been a few high-profile cases, such as the college student who churned out AI-generated self-help and productivity advice on a blog, which went viral.

    The dangers are obvious: AI models could be used to generate misinformation about an election or the covid-19 pandemic, for instance. They can also go wrong inadvertently when used for machine translation. The researchers bring up an example: In 2017, Facebook mistranslated a Palestinian man’s post, which said “good morning” in Arabic, as “attack them” in Hebrew, leading to his arrest.
    Why it matters

    Gebru and Bender’s paper has six co-authors, four of whom are Google researchers. Bender asked that their names not be disclosed for fear of repercussions. (Bender, by contrast, is a tenured professor: “I think this is underscoring the value of academic freedom,” she says.)

    The paper’s goal, Bender says, was to take stock of the landscape of current research in natural-language processing. “We are working at a scale where the people building the things can’t actually get their arms around the data,” she said. “And because the upsides are so obvious, it’s particularly important to step back and ask ourselves, what are the possible downsides? … How do we get the benefits of this while mitigating the risk?”

    In his internal email, Dean, the Google AI head, said one reason the paper “didn’t meet our bar” was that it “ignored too much relevant research.” Specifically, he said it didn’t mention more recent work on how to make large language models more energy-efficient and mitigate problems of bias.

    However, the six collaborators drew on a wide breadth of scholarship. The paper’s citation list, with 128 references, is notably long. “It’s the sort of work that no individual or even pair of authors can pull off,” Bender said. “It really required this collaboration.”

    The version of the paper we saw does also nod to several research efforts on reducing the size and computational costs of large language models, and on measuring the embedded bias of models. It argues, however, that these efforts have not been enough. “I’m very open to seeing what other references we ought to be including,” Bender said.

    Nicolas Le Roux, a Google AI researcher in the Montreal office, later noted on Twitter that the reasoning in Dean’s email was unusual. “My submissions were always checked for disclosure of sensitive material, never for the quality of the literature review,” he said.

    Now might be a good time to remind everyone that the easiest way to discriminate is to make stringent rules, then to decide when and for whom to enforce them.
    My submissions were always checked for disclosure of sensitive material, never for the quality of the literature review.
    — Nicolas Le Roux (@le_roux_nicolas) December 3, 2020

    Dean’s email also says that Gebru and her colleagues gave Google AI only a day for an internal review of the paper before they submitted it to a conference for publication. He wrote that “our aim is to rival peer-reviewed journals in terms of the rigor and thoughtfulness in how we review research before publication.”

    I understand the concern over Timnit’s resignation from Google. She’s done a great deal to move the field forward with her research. I wanted to share the email I sent to Google Research and some thoughts on our research process. https://t.co/djUGdYwNMb
    — Jeff Dean (@JeffDean) December 4, 2020

    Bender noted that even so, the conference would still put the paper through a substantial review process: “Scholarship is always a conversation and always a work in progress,” she said.

    Others, including William Fitzgerald, a former Google PR manager, have further cast doubt on Dean’s claim:

    This is such a lie. It was part of my job on the Google PR team to review these papers. Typically we got so many we didn’t review them in time or a researcher would just publish & we wouldn’t know until afterwards. We NEVER punished people for not doing proper process. https://t.co/hNE7SOWSLS pic.twitter.com/Ic30sVgwtn
    — William Fitzgerald (@william_fitz) December 4, 2020

    Google pioneered much of the foundational research that has since led to the recent explosion in large language models. Google AI researchers invented the Transformer architecture in 2017, which serves as the basis for the company’s later model BERT, as well as OpenAI’s GPT-2 and GPT-3. BERT, as noted above, now also powers Google search, the company’s cash cow.

    Bender worries that Google’s actions could create “a chilling effect” on future AI ethics research. Many of the top experts in AI ethics work at large tech companies because that is where the money is. “That has been beneficial in many ways,” she says. “But we end up with an ecosystem that maybe has incentives that are not the very best ones for the progress of science for the world.”

    #Intelligence_artificielle #Google #Ethique #Timnit_Gebru

  • Microbes could be used to extract metals and minerals from space rocks | MIT Technology Review
    https://www.technologyreview.com/2020/11/10/1011935/microbes-extract-metals-minerals-space-rocks-mining/?truid=a497ecb44646822921c70e7e051f7f1a

    So, if I understand correctly, we are going to send bacteria into space to extract rare earths from space rocks, so that less weight has to be brought back to Earth.
    Still, that means we will be sending bacteria to uninhabited places. And to think people were outraged when an Israeli team sent tardigrades to the Moon.
    Yet another commons set to disappear under the pressure of an expansionist economy.

    New experiments on the International Space Station suggest that future space miners could use bacteria to acquire valuable resources.
    by Neel V. Patel
    November 10, 2020

    An illustration of asteroid Psyche, thought to be primarily made of metals. (ASU/Peter Rubin)

    A species of bacteria can successfully pull rare-earth elements out of rocks, even in microgravity environments, a study on the International Space Station has found. The new findings, published in Nature Communications today, suggest a new way we could one day use microbes to mine for valuable metals and minerals off Earth.

    Why bacteria: Single-celled organisms have evolved over time on Earth to extract nutrients and other essential compounds from rocks through specialized chemical reactions. These bacterial processes are harnessed to extract about 20% of the world’s copper and gold for human use. The scientists wanted to know whether the same processes would work in microgravity.

    The findings: BioRock was a series of 36 experiments that took place on the space station. An international team of scientists built what they call “biomining reactors”—tiny containers the size of matchboxes that contain small slices of basalt rock (igneous rock that’s usually found at or near the surface of Earth, and is quite common on the moon and Mars) submerged in a solution of bacteria.

    Up on the ISS, those bacteria were exposed to different gravity simulations (microgravity, Mars gravity, and Earth gravity) as they munched on the rocks for about three weeks, while researchers measured the rare-earth elements released by that activity. Of the three bacteria species studied, one—Sphingomonas desiccabilis—was capable of extracting elements like neodymium, cerium, and lanthanum about as effectively in lower-gravity environments as it does on Earth.

    So what: Microbes won’t replace standard mining technology if we ever mine for resources in space, but they could definitely speed things up. The team behind BioRock suggests that microbes could help accelerate mining on extraterrestrial bodies by as much as 400%, helping to separate metal powders and valuable minerals from other useful elements like oxygen. The fact that they seem able to withstand microgravity suggests these microbes could be a potentially cheap way to extract resources to make life in space more sustainable—and enable lengthy journeys and settlements on distant worlds.

    #Espace #Terres_rares #Bactéries #Espace #Communs

  • Live facial recognition is tracking kids suspected of being criminals
    https://www.technologyreview.com/2020/10/09/1009992/live-facial-recognition-is-tracking-kids-suspected-of-crime

    In Buenos Aires, the first known system of its kind is hunting down minors who appear in a national database of alleged offenders. In a national database in Argentina, tens of thousands of entries detail the names, birthdays, and national IDs of people suspected of crimes. The database, known as the Consulta Nacional de Rebeldías y Capturas (National Register of Fugitives and Arrests), or CONARC, began in 2009 as a part of an effort to improve law enforcement for serious crimes. But there (...)

    #algorithme #CCTV #biométrie #criminalité #données #facial #reconnaissance #vidéo-surveillance #enfants (...)

    ##criminalité ##surveillance

  • NASA will pay for moon rocks excavated by private companies | MIT Technology Review
    https://www.technologyreview.com/2020/09/10/1008310/nasa-pay-moon-rocks-lunar-samples-excavated-private-companies/?truid=a497ecb44646822921c70e7e051f7f1a

    Any commercial mission that can prove it has collected lunar samples stands to make up to $25,000.
    by Neel V. Patel

    NASA announced today that it was seeking proposals from private companies interested in collecting samples from the moon and making them available for purchase by the agency.

    The news: As part of the new initiative, one or more companies will launch a mission to the moon and collect between 50 and 500 grams of lunar regolith from the surface. If they can store the sample in a proper container and send pictures and data to NASA to prove the sample has been collected and can be brought to Earth safely, NASA will pay that company between $15,000 and $25,000.

    The company would receive 10% of its payment after its bid is selected by NASA, 10% after the mission launches, and the remaining 80% upon delivering the materials to NASA. The agency has yet to determine exactly how it will retrieve the sample, but the exchange would be expected to happen “in place” on the moon itself—meaning any participating company is only obligated to figure out how to get to the moon. NASA would retain sole ownership of the material upon transfer.
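
    As a rough illustration of that milestone structure, here is a short, hypothetical Python sketch; the percentages and the $15,000-to-$25,000 range come from the article, while the function name and labels are mine, not NASA's.

    # Hypothetical sketch of the 10% / 10% / 80% milestone split described above.
    # The dollar range comes from the article; nothing here is an official NASA schedule.

    def milestone_payments(total_award: float) -> dict:
        """Split a lunar-sample award into the milestones named in the article."""
        return {
            "bid selected": round(0.10 * total_award, 2),
            "mission launched": round(0.10 * total_award, 2),
            "sample delivered on the moon": round(0.80 * total_award, 2),
        }

    for award in (15_000, 25_000):  # the article's stated range
        print(f"${award:,} award -> {milestone_payments(award)}")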

    NEWS: @NASA is buying lunar soil from a commercial provider! It’s time to establish the regulatory certainty to extract and trade space resources. More: https://t.co/B1F5bS6pEy pic.twitter.com/oWuGHnB8ev
    — Jim Bridenstine (@JimBridenstine) September 10, 2020

    The samples could come from anywhere on the surface of the moon and could contain rock, dust, or ice. The agency wants to complete these exchanges before 2024.

    What’s in it for NASA: There’s an extremely high demand for lunar material among scientists. Nearly all the lunar material currently in NASA’s possession was collected during the Apollo program. While the initiative itself will only bring a small amount to Earth compared with the hundreds of kilograms gathered during Apollo, this could be the first step in establishing a new pipeline for lunar samples, in which NASA buys from the private sector instead of devoting resources to building and launching missions for that purpose.

    In a blog post published today, NASA administrator Jim Bridenstine said the new initiative is part of the agency’s larger goal with the Artemis program to bolster private-sector participation in space exploration. The agency is already working with several launch providers under its Commercial Lunar Payload Services (CLPS) program to deliver nearly two dozen scientific and technological payloads to the moon in the run-up to a crewed landing by the end of 2024. The 2024 landing itself is slated to utilize hardware built by private companies, most notably the lunar lander for taking humans to the surface.

    What’s in it for the company: $25,000 is paltry compensation for such a mission, so any companies that participate won’t be in this for money. Instead, it’s an incentive to test out new technologies, including those that could be later used to extract resources like water ice from the moon. The mission outlined in today’s announcement will only involve collecting and storing material from the surface, but that’s still something no private company has done before.

    Legal questions: Lastly, many of America’s larger lunar ambitions focus on establishing a moon mining industry and developing a marketplace that allows excavated resources to be bought and sold by different parties. Bridenstine alludes to these plans in his blog post, referencing President Trump’s April 2020 executive order that encourages the recovery and use of resources in outer space. That order was a follow-up to a law passed in 2015 outlining America’s position that US companies are allowed to own and sell resources they’ve extracted from extraterrestrial bodies. There’s still debate as to whether such policies conflict with the 1967 UN Outer Space Treaty.

    #Lune #Communs #Enclosure #Traité_espace #NASA

  • #Covid-19: the terrible lesson of #Manaus – {Sciences²}
    https://www.lemonde.fr/blog/huet/2020/09/24/covid-19-la-terrible-lecon-de-manaus

    A major city in Amazonia, Manaus, answers the question: how many deaths if SARS-CoV-2 is allowed to spread unchecked?

    [...]

    Can Manaus’s answer be extrapolated to other countries? Yes, provided we do not forget its “optimistic” side when set against a population like that of France, where people over 60 make up a much larger share. Thus, an MIT Technology Review article https://www.technologyreview.com/2020/09/22/1008709/brazil-manaus-covid-coronavirus-herd-immunity-pandemic covering the Manaus study estimates that the so-called #immunité_collective (herd immunity) strategy would cause at least 500,000 deaths in the United States. An absolute minimum, since that country already counts 200,000 (official) deaths attributed to Covid-19 while its population’s infection rate remains far below the one observed in Manaus, and since a “worst case” study instead arrives at 1.7 million deaths in the United States. That figure is in line with the calculations in the article by Arnaud Fontanet and Simon Cauchemez (of the Institut Pasteur in Paris) published in Nature Reviews Immunology, which concludes, for France, with an estimate of between 100,000 and 450,000 deaths under a herd-immunity strategy.

    The study of Manaus blood donors also brings a discouraging piece of information: the serological response (that is, the presence of antibodies) appears to decline with the time elapsed since infection. #immunité (immunity) would therefore wane fairly quickly over time.

  • Why Facebook’s political-ad ban is taking on the wrong problem
    https://www.technologyreview.com/2020/09/06/1008192/why-facebooks-political-ad-ban-is-taking-on-the-wrong-problem

    A moratorium on new political ads just before election day tackles one kind of challenge caused by social media. It’s just not the one that matters. When Mark Zuckerberg announced that Facebook would stop accepting political advertising in the week before the US presidential election, he was responding to widespread fear that social media has outsize power to change the balance of an election. Political campaigns have long believed that direct voter contact and personalized messaging are (...)

    #CambridgeAnalytica/Emerdata #Facebook #algorithme #manipulation #domination #élections (...)

    ##CambridgeAnalytica/Emerdata ##SocialNetwork

  • The long, complicated history of “people analytics”
    https://www.technologyreview.com/2020/08/19/1006365/if-then-lepore-review-simulmatics

    If you work for Bank of America, or the US Army, you might have used technology developed by Humanyze. The company grew out of research at MIT’s cross-disciplinary Media Lab and describes its products as “science-backed analytics to drive adaptability.” If that sounds vague, it might be deliberate. Among the things Humanyze sells to businesses are devices for snooping on employees, such as ID badges with embedded RFID tags, near-field-communication sensors, and built-in microphones that track (...)

    #BankofAmerica #Humanyze #USArmy #DoD #IBM #algorithme #capteur #RFID #militaire #compagnie #élections #prédiction #son #comportement #surveillance #travail (...)

    ##voix

  • The long, complicated history of “people analytics” | MIT Technology Review
    https://www.technologyreview.com/2020/08/19/1006365/if-then-lepore-review-simulmatics/?truid=a497ecb44646822921c70e7e051f7f1a

    If you work for Bank of America, or the US Army, you might have used technology developed by Humanyze. The company describes its products as “science-backed analytics to drive adaptability.”

    If that sounds vague, it might be deliberate. Among the things Humanyze sells to businesses are devices for snooping on employees, such as ID badges with embedded RFID tags and built-in microphones that track in granular detail the tone and volume (though not the actual words) of people’s conversations throughout the day. Humanyze uses the data to create an “Organizational Health Score,” which it promises is “a proven formula to accelerate change and drive improvement.”

    Or perhaps you work for one of the healthcare, retail, or financial-services companies that use software developed by Receptiviti. The Toronto-based company’s mission is to “help machines understand people” by scanning emails and Slack messages for linguistic hints of unhappiness. “We worry about the perception of Big Brother,” Receptiviti’s CEO recently told the Wall Street Journal. He prefers calling employee surveillance “corporate mindfulness.” (Orwell would have had something to say about that euphemism, too.)

    Such efforts at what their creators call “people analytics” are usually justified on the grounds of improving efficiency or the customer experience. In recent months, some governments and public health experts have advocated tracking and tracing applications as a means of stopping the spread of covid-19.

    But in embracing these technologies, businesses and governments often avoid answering crucial questions: Who should know what about you? Is what they know accurate? What should they be able to do with that information? And is it ever possible to devise a “proven formula” for assessing human behavior? Simulmatics, a now-defunct “people analytics” company, provides a cautionary tale, writes Christine Rosen, and confirms that all these ventures are based on a false belief that mathematical laws of human nature are real, in the way that laws of physics are.

    #Travail #Surveillance #Contrôle_social

  • Eight case studies on regulating biometric technology show us a path forward
    https://www.technologyreview.com/2020/09/04/1008164/ai-biometric-face-recognition-regulation-amba-kak

    A new report from the AI Now Institute reveals how different regulatory approaches work or fall short in protecting communities from surveillance. Amba Kak was in law school in India when the country rolled out the Aadhaar project in 2009. The national biometric ID system, conceived as a comprehensive identity program, sought to collect the fingerprints, iris scans, and photographs of all residents. It wasn’t long, Kak remembers, before stories about its devastating consequences began to (...)

    #Clearview #Facebook #biométrie #migration #[fr]Règlement_Général_sur_la_Protection_des_Données_(RGPD)[en]General_Data_Protection_Regulation_(GDPR)[nl]General_Data_Protection_Regulation_(GDPR) #consentement #données #facial #reconnaissance #iris #Aadhaar #discrimination (...)

    ##[fr]Règlement_Général_sur_la_Protection_des_Données__RGPD_[en]General_Data_Protection_Regulation__GDPR_[nl]General_Data_Protection_Regulation__GDPR_ ##empreintes ##pauvreté

  • Participation-washing could be the next dangerous fad in machine learning
    https://www.technologyreview.com/2020/08/25/1007589/participation-washing-ai-trends-opinion-machine-learning

    Many people already participate in the field’s work without recognition or pay. The AI community is finally waking up to the fact that machine learning can cause disproportionate harm to already oppressed and disadvantaged groups. We have activists and organizers to thank for that. Now, machine-learning researchers and scholars are looking for ways to make AI more fair, accountable, and transparent—but also, recently, more participatory. One of the most exciting and well-attended events at (...)

    #Amazon #AmazonWebServices-AWS #algorithme #CAPTCHA #GigEconomy #scraping #travail (...)

    ##éthique

  • Digital gardens let you cultivate your own little bit of the internet | MIT Technology Review
    https://www.technologyreview.com/2020/09/03/1007716/digital-gardens-let-you-cultivate-your-own-little-bit-of-the-internet/?truid=a497ecb44646822921c70e7e051f7f1a

    The return of the “personal home page”

    A growing number of people are creating individualized, creative sites that eschew the one-size-fits-all look and feel of social media.
    by Tanya Basu
    September 3, 2020

    Illustration of wild plants with flowers growing around screens. (Ms Tech | Wikimedia, Pixabay)

    Sara Garner had a nagging feeling something wasn’t quite right.

    A software engineer, she was revamping her personal site, but it just didn’t feel like her. Sure, it had the requisite links to her social media and her professional work, but it didn’t really reflect her personality. So she created a page focused on museums, which she is obsessed with. It’s still under construction, but she envisions a page that includes thoughts on her favorite museums, describes the emotions they evoked, and invites others to share their favorite museums and what they’ve learned.

    “I’m going for a feeling of wonderment, a connection across time,” she says.

    Welcome to the world of “digital gardens.” These creative reimaginings of blogs have quietly taken nerdier corners of the internet by storm. A growing movement of people are tooling with back-end code to create sites that are more collage-like and artsy, in the vein of Myspace and Tumblr—less predictable and formatted than Facebook and Twitter. Digital gardens explore a wide variety of topics and are frequently adjusted and changed to show growth and learning, particularly among people with niche interests. Through them, people are creating an internet that is less about connections and feedback, and more about quiet spaces they can call their own.
    “Everyone does their own weird thing”

    The movement might be gaining steam now, but its roots date back to 1998, when Mark Bernstein introduced the idea of the “hypertext garden,” arguing for spaces on the internet that let a person wade into the unknown. “Gardens … lie between farmland and wilderness,” he wrote. “The garden is farmland that delights the senses, designed for delight rather than commodity.” (His digital garden includes a recent review of a Bay Area carbonara dish and reflections on his favorite essays.)

    Some of the new wave of digital gardens discuss books and movies, with introspective journal entries; others offer thoughts on philosophy and politics. Some are works of art in themselves, visual masterpieces that invite the viewer to explore; others are simpler and more utilitarian, using Google Docs or Wordpress templates to share intensely personal lists. Avid readers in particular have embraced the concept, sharing creative, beautiful digital bookshelves that illustrate their reading journey.

    Nerding hard on digital gardens, personal wikis, and experimental knowledge systems with @_jonesian today.

    We have an epic collection going, check these out...

    1. @tomcritchlow’s Wikifolders: https://t.co/QnXw0vzbMG pic.twitter.com/9ri6g9hD93
    — Maggie Appleton (@Mappletons) April 15, 2020

    Beneath the umbrella term, however, digital gardens don’t follow rules. They’re not blogs, short for “weblogs,” a term that suggests a time-stamped record of thought. They’re not a social-media platform—connections are made, but often it’s through linking to other digital gardens, or gathering in forums like Reddit and Telegram to nerd out over code.

    Tom Critchlow, a consultant who has been cultivating his digital garden for years, spells out the main difference between old-school blogging and digital gardening. “With blogging, you’re talking to a large audience,” he says. “With digital gardening, you’re talking to yourself. You focus on what you want to cultivate over time.”

    What they have in common is that they can be edited at any time to reflect evolution and change. The idea is similar to editing a Wikipedia entry, though digital gardens are not meant to be the ultimate word on a topic. As a slower, clunkier way to explore the internet, they revel in not being the definitive source, just a source, says Mike Caulfield, a digital literacy expert at Washington State University.

    In fact, the whole point of digital gardens is that they can grow and change, and that various pages on the same topic can coexist. “It’s less about iterative learning and more about public learning,” says Maggie Appleton, a designer. Appleton’s digital garden, for example, includes thoughts on plant-based meat, book reviews, and digressions on Javascript and magical capitalism. It is “an open collection of notes, resources, sketches, and explorations I’m currently cultivating,” its introduction declares. “Some notes are Seedlings, some are budding, and some are fully grown Evergreen[s].”

    Appleton, who trained as an anthropologist, says she was drawn to digital gardens because of their depth. “The content is not on Twitter, and it’s never deleted,” she says. “Everyone does their own weird thing. The sky’s the limit.”

    That ethos of creativity and individuality was echoed by several people I spoke to. Some suggested that the digital garden was a backlash to the internet we’ve become grudgingly accustomed to, where things go viral, change is looked down upon, and sites are one-dimensional. Facebook and Twitter profiles have neat slots for photos and posts, but enthusiasts of digital gardens reject those fixed design elements. The sense of time and space to explore is key.

    Caulfield, who has researched misinformation and disinformation, wrote a blog post in 2015 on the “technopastoral,” in which he described the federated wiki structure promoted by computer programmer Ward Cunningham, who thought the internet should support a “chorus of voices” rather than the few rewarded on social media today.

    “The stream has dominated our lives since the mid-2000s,” Caulfield says. But it means people are either posting content or consuming it. And, Caulfield says, the internet as it stands rewards shock value and dumbing things down. “By engaging in digital gardening, you are constantly finding new connections, more depth and nuance,” he says. “What you write about is not a fossilized bit of commentary for a blog post. When you learn more, you add to it. It’s less about shock and rage; it’s more connective.” In an age of doom-scrolling and Zoom fatigue, some digital-garden enthusiasts say the internet they live in is, as Caulfield puts it, “optimistically hopeful.”

    While many people are searching for more intimate communities on the internet, not everyone can spin up a digital garden: you need to be able to do at least some rudimentary coding. Making a page from scratch affords more creative freedom than social-media and web-hosting sites that let you drag and drop elements onto your page, but it can be daunting and time-consuming.

    Chris Biscardi is trying to get rid of that barrier to entry with a text editor for digital gardens that’s still in its alpha stage. Called Toast, it’s “something you might experience with Wordpress,” he says.

    Ultimately, whether digital gardens will be an escapist remnant of 2020’s hellscape or wither in the face of easier social media remains to be seen. “I’m interested in seeing how it plays out,” Appleton says.

    “For some people it’s a reaction to social media, and for others it’s a trend,” Critchlow says. “Whether or not it will hit critical mass … that’s to be seen.”

    #Internet #Culture_numérique #Pages_personnelles #Blog

  • Inside China’s unexpected quest to protect data privacy | MIT Technology Review
    https://www.technologyreview.com/2020/08/19/1006441/china-data-privacy-hong-yanqing-gdpr/?truid=a497ecb44646822921c70e7e051f7f1a

    In the West, it’s widely believed that neither the Chinese government nor Chinese people care about privacy. US tech giants wield this supposed indifference to argue that onerous privacy laws would put them at a competitive disadvantage to Chinese firms.

    In reality, this picture of Chinese attitudes to privacy is out of date. Over the last few years the Chinese government, seeking to strengthen consumers’ trust and participation in the digital economy, has begun to implement privacy protections that in many respects resemble those in America and Europe today.

    Even as the government has strengthened consumer privacy, however, it has ramped up state surveillance. It uses DNA samples and other biometrics, like face and fingerprint recognition, to monitor citizens throughout the country.

    It has tightened internet censorship and developed a “social credit” system, which punishes behaviors the authorities say weaken social stability. During the pandemic, it deployed a system of “health code” apps to dictate who could travel, based on their risk of carrying the coronavirus. And it has used a slew of invasive surveillance technologies in its harsh repression of Muslim Uighurs in the northwestern region of Xinjiang.

    This paradox has become a defining feature of China’s emerging data privacy regime. It raises a question: Can a system endure with strong protections for consumer privacy, but almost none against government snooping? The answer doesn’t affect only China. Its technology companies have an increasingly global footprint, and regulators around the world are watching its policy decisions.

    #Chine #Vie_privée #Surveillance

  • Brazil is sliding into techno-authoritarianism | MIT Technology Review
    https://www.technologyreview.com/2020/08/19/1007094/brazil-bolsonaro-data-privacy-cadastro-base/?truid=a497ecb44646822921c70e7e051f7f1a

    For many years, Latin America’s largest democracy was a leader on data governance. In 1995, it created the Brazilian Internet Steering Committee, a multi-stakeholder body to help the country set principles for internet governance. In 2014, Dilma Rousseff’s government pioneered the Marco Civil (Civil Framework), an internet “bill of rights” lauded by Tim Berners-Lee, the inventor of the World Wide Web. Four years later, Brazil’s congress passed a data protection law, the LGPD, closely modeled on Europe’s GDPR.

    Recently, though, the country has veered down a more authoritarian path. Even before the pandemic, Brazil had begun creating an extensive data-collection and surveillance infrastructure. In October 2019, President Jair Bolsonaro signed a decree compelling all federal bodies to share most of the data they hold on Brazilian citizens, from health records to biometric information, and consolidate it in a vast master database, the Cadastro Base do Cidadão (Citizen’s Basic Register). With no debate or public consultation, the measure took many people by surprise.

    In lowering barriers to the exchange of information, the government says, it hopes to increase the quality and consistency of data it holds. This could—according to the official line—improve public services, cut down on voter fraud, and reduce bureaucracy. In a country with some 210 million people, such a system could speed up the delivery of social welfare and tax benefits, and make public policies more efficient.

    But critics have warned that under Bolsonaro’s far-right leadership, this concentration of data will be used to abuse personal privacy and civil liberties. And the covid-19 pandemic appears to be accelerating the country’s slide toward a surveillance state. Read the full story.

    #Brésil #Surveillance #Vie_privée #Législation