• Inside NSO, Israel’s billion-dollar spyware giant
    https://www.technologyreview.com/2020/08/19/1006458/nso-spyware-controversy-pegasus-human-rights

    The world’s most notorious surveillance company says it wants to clean up its act. Go on, we’re listening.

    Maâti Monjib speaks slowly, like a man who knows he’s being listened to.

    It’s the day of his 58th birthday when we speak, but there’s little celebration in his voice. “The surveillance is hellish,” Monjib tells me. “It is really difficult. It controls everything I do in my life.”

    A history professor at the University of Mohammed V in Rabat, Morocco, Monjib vividly remembers the day in 2017 when his life changed. Charged with endangering state security by the government he has fiercely and publicly criticized, he was sitting outside a courtroom when his iPhone suddenly lit up with a series of text messages from numbers he didn’t recognize. They contained links to salacious news, petitions, and even Black Friday shopping deals.

    A month later, an article accusing him of treason appeared on a popular national news site with close ties to Morocco’s royal rulers. Monjib was used to attacks, but now it seemed his harassers knew everything about him: another article included information about a pro-democracy event he was set to attend but had told almost no one about. One story even proclaimed that the professor “has no secrets from us.”

    He’d been hacked. The messages had all led to websites that researchers say were set up as lures to infect visitors’ devices with Pegasus, the most notorious spyware in the world.

    Pegasus is the blockbuster product of NSO Group, a secretive billion-dollar Israeli surveillance company. It is sold to law enforcement and intelligence agencies around the world, which use the company’s tools to choose a human target, infect the person’s phone with the spyware, and then take over the device. Once Pegasus is on your phone, it is no longer your phone.

    NSO sells Pegasus with the same pitch arms dealers use to sell conventional weapons, positioning it as a crucial aid in the hunt for terrorists and criminals. In an age of ubiquitous technology and strong encryption, such “lawful hacking” has emerged as a powerful tool for public safety when law enforcement needs access to data. NSO insists that the vast majority of its customers are European democracies, although since it doesn’t release client lists and the countries themselves remain silent, that has never been verified.

    Monjib’s case, however, is one of a long list of incidents in which Pegasus has been used as a tool of oppression. It has been linked to cases including the murder of Saudi journalist Jamal Khashoggi, the targeting of scientists and campaigners pushing for political reform in Mexico, and Spanish government surveillance of Catalan separatist politicians. Mexico and Spain have denied using Pegasus to spy on opponents, but accusations that they have done so are backed by substantial technical evidence.

    Some of that evidence is contained in a lawsuit filed last October in California by WhatsApp and its parent company, Facebook, alleging that Pegasus manipulated WhatsApp’s infrastructure to infect more than 1,400 cell phones. Investigators at Facebook found more than 100 human rights defenders, journalists, and public figures among the targets, according to court documents. Each call that was picked up, they discovered, sent malicious code through WhatsApp’s infrastructure and caused the recipient’s phone to download spyware from servers owned by NSO. This, WhatsApp argued, was a violation of American law.

    NSO has long faced such accusations with silence. Claiming that much of its business is an Israeli state secret, it has offered precious little public detail about its operations, customers, or safeguards.

    Now, though, the company suggests things are changing. In 2019, NSO, which was owned by a private equity firm, was sold back to its founders and another private equity firm, Novalpina, for $1 billion. The new owners decided on a fresh strategy: emerge from the shadows. The company hired elite public relations firms, crafted new human rights policies, and developed new self-governance documents. It even began showing off some of its other products, such as a covid-19 tracking system called Fleming, and Eclipse, which can hack drones deemed a security threat.

    Over several months, I’ve spoken with NSO leadership to understand how the company works and what it says it is doing to prevent human rights abuses carried out using its tools. I have spoken to its critics, who see it as a danger to democratic values; to those who urge more regulation of the hacking business; and to the Israeli regulators responsible for governing it today. The company’s leaders talked about NSO’s future and its policies and procedures for dealing with problems, and it shared documents that detail its relationship with the agencies to which it sells Pegasus and other tools. What I found was a thriving arms dealer—inside the company, employees acknowledge that Pegasus is a genuine weapon—struggling with new levels of scrutiny that threaten the foundations of its entire industry.

    “A difficult task”

    From the first day Shmuel Sunray joined NSO as its general counsel, he faced one international incident after another. Hired just days after WhatsApp’s lawsuit was filed, he found other legal problems waiting on his desk as soon as he arrived. They all centered on the same basic accusation: NSO Group’s hacking tools are sold to, and can be abused by, rich and repressive regimes with little or no accountability.

    Sunray had plenty of experience with secrecy and controversy: his previous job was as vice president of a major weapons manufacturer. Over several conversations, he was friendly as he told me that he’s been instructed by the owners to change NSO’s culture and operations, making it more transparent and trying to prevent human rights abuses from happening. But he was also obviously frustrated by the secrecy that he felt prevented him from responding to critics.

    “It’s a difficult task,” Sunray told me over the phone from the company’s headquarters in Herzliya, north of Tel Aviv. “We understand the power of the tool; we understand the impact of misuse of the tool. We’re trying to do the right thing. We have real challenges dealing with government, intelligence agencies, confidentiality, operational necessities, operational limitations. It’s not a classic case of human rights abuse by a company, because we don’t operate the systems—we’re not involved in actual operations of the systems—but we understand there is a real risk of misuse from the customers. We’re trying to find the right balance.”

    This underpins NSO’s basic argument, one that is common among weapons manufacturers: the company is the creator of a technology that governments use, but it doesn’t attack anyone itself, so it can’t be held responsible.

    Still, according to Sunray, there are several layers of protection in place to try to make sure the wrong people don’t have access.

    Making a sale

    Like most other countries, Israel has export controls that require weapons manufacturers to be licensed and subject to government oversight. In addition, NSO does its own due diligence, says Sunray: its staff examine a country, look at its human rights record, and scrutinize its relationship with Israel. They assess the specific agency’s track record on corruption, safety, finance, and abuse—and factor in how much it needs the tool.

    Sometimes negatives are weighed against positives. Morocco, for example, has a worsening human rights record but a lengthy history of cooperating with Israel and the West on security, as well as a genuine terrorism problem, so a sale was reportedly approved. By contrast, NSO has said that China, Russia, Iran, Cuba, North Korea, Qatar, and Turkey are among 21 nations that will never be customers.

    Finally, before a sale is made, NSO’s governance, risk, and compliance committee has to sign off. The company says the committee, made up of managers and shareholders, can decline sales or add conditions, such as technological restrictions, that are decided case by case.

    Preventing abuse

    Once a sale is agreed to, the company says, technological guardrails prevent certain kinds of abuse. For example, Pegasus does not allow American phone numbers to be infected, NSO says, and infected phones cannot even be physically located in the United States: if one does find itself within American borders, the Pegasus software is supposed to self-destruct.

    NSO says Israeli phone numbers, among others, are also protected, though who else gets protection, and why, remains unclear.

    When a report of abuse comes in, an ad hoc team of up to 10 NSO employees is assembled to investigate. They interview the customer about the allegations, and they request Pegasus data logs. These logs don’t contain the content the spyware extracted, like chats or emails—NSO insists it never sees specific intelligence—but do include metadata such as a list of all the phones the spyware tried to infect and their locations at the time.

    According to one recent contract I obtained, customers must “use the system only for the detection, prevention, and investigation of crimes and terrorism and ensure the system will not be used for human rights violations.” They must notify the company of potential misuse. NSO says it has terminated three contracts in the past for infractions including abuse of Pegasus, but it refuses to say which countries or agencies were involved or who the victims were.

    “We’re not naïve”

    Lack of transparency is not the only problem: the safeguards have limits. While the Israeli government can revoke NSO’s license for violations of export law, the regulators do not take it on themselves to look for abuse by potential customers and aren’t involved in the company’s abuse investigations.

    Many of the other procedures are merely reactive as well. NSO has no permanent internal abuse team, unlike almost any other billion-dollar tech firm, and most of its investigations are spun up only when an outside source such as Amnesty International or Citizen Lab claims there has been malfeasance. NSO staff interview the agencies and customers under scrutiny but do not talk to the alleged victims, and while the company often disputes the technical reports offered as evidence, it also claims that both state secrecy and business confidentiality prevent it from sharing more information.

    The Pegasus logs that are crucial to any abuse inquiry also raise plenty of questions. NSO Group’s customers are hackers who work for spy agencies; how hard would it be for them to tamper with the logs? In a statement, the company insisted this isn’t possible but declined to offer details.

    If the logs aren’t disputed, NSO and its customers will decide together whether targets are legitimate, whether genuine crimes have been committed, and whether surveillance was done under due process of law or whether autocratic regimes spied on opponents.

    Sunray, audibly exasperated, says he feels as if secrecy is forcing him to operate with his hands tied behind his back.

    “It’s frustrating,” he told me. “We’re not naïve. There have been misuses. There will be misuses. We sell to many governments. Even the US government—no government is perfect. Misuse can happen, and it should be addressed.”

    But Sunray also returns to the company’s standard response, the argument that underpins its defense in the WhatsApp lawsuit: NSO is a manufacturer, but it’s not the operator of the spyware. We built it but they did the hacking—and they are sovereign nations.

    That’s not enough for many critics. “No company that believes it can be the independent watchdog of their own products ever convinces me,” says Marietje Schaake, a Dutch politician and former member of the European Parliament. “The whole idea that they have their own mechanisms while they have no problem selling commercial spyware to whoever wants to buy it, knowing that it’s used against human rights defenders and journalists—I think it shows the lack of responsibility on the part of this company more than anything.”

    So why the internal push for more transparency now? Because the deluge of technical reports from human rights groups, the WhatsApp lawsuit, and increasing governmental scrutiny threaten NSO’s status quo. And if there is going to be a new debate over how the industry gets regulated, it pays to have a powerful voice.

    Growing scrutiny

    Lawful hacking and cyber-espionage have grown enormously as a business over the past decade, with no signs of retreat. NSO Group’s previous owners bought the company in 2014 for $130 million, less than one-seventh of the valuation it was sold for last year. The rest of the industry is expanding too, profiting from the spread of communications technology and deepening global instability. “There’s no doubt that any state has the right to buy this technology to fight crime and terrorism,” says Amnesty International’s deputy director, Danna Ingleton. “States are rightfully and lawfully able to use these tools. But that needs to be accompanied more with a regulatory system that prevents abuses and provides an accountability mechanism when abuse has happened.” Shining a much brighter light on the hacking industry, she argues, will allow for better regulation and more accountability.

    Earlier this year Amnesty International was in court in Israel arguing that the Ministry of Defense should revoke NSO’s license because of abuses of Pegasus. But just as the case was starting, officials from Amnesty and 29 other petitioners were told to leave the courtroom: a gag order was being placed on the proceedings at the ministry’s urging. Then, in July, a judge rejected the case outright.

    “I do not believe as a matter of principle and as a matter of law that NSO can claim a complete lack of responsibility for the way their tools are being used,” says United Nations special rapporteur Agnès Callamard. “That’s not how it works under international law.”

    Callamard advises the UN on extrajudicial executions and has been vocal about NSO Group and the spyware industry ever since it emerged that Pegasus was being used to spy on friends and associates of Khashoggi shortly before he was murdered. For her, the issue has life-or-death consequences.

    “We’re not calling for something radically new,” says Callamard. “We are saying that what’s in place at the moment is proving insufficient, and therefore governments or regulatory agencies need to move into a different gear quickly. The industry is expanding, and it should expand on the basis of the proper framework to regulate misuse. It’s important for global peace.”

    There have been calls for a temporary moratorium on sales until stronger regulation is enacted, but it’s not clear what that legal framework would look like. Unlike conventional arms, which are subject to various international laws, cyber weapons are currently not regulated by any worldwide arms control agreement. And while nonproliferation treaties have been suggested, there is little clarity on how they would measure existing capabilities, how monitoring or enforcement would work, or how the rules would keep up with rapid technological developments. Instead, most scrutiny today is happening at the national legal level.

    In the US, both the FBI and Congress are looking into possible hacks of American targets, while an investigation led by Senator Ron Wyden’s office wants to find out whether any Americans are involved in exporting surveillance technology to authoritarian governments. A recent draft US intelligence bill would require a government report on commercial spyware and surveillance technology.

    The WhatsApp lawsuit, meanwhile, has taken aim close to the heart of NSO’s business. The Silicon Valley giant argues that by targeting California residents—that is, WhatsApp and Facebook—NSO has given the court in San Francisco jurisdiction, and that the judge in the case can bar the Israeli company from future attempts to misuse WhatsApp’s and Facebook’s networks. That opens the door to an awful lot of possibilities: Apple, whose iPhone has been a paramount NSO target, could feasibly mount a similar legal attack. Google, too, has spotted NSO targeting Android devices.

    And financial damages are not the only sword hanging over NSO’s head. Such lawsuits also bring with them the threat of courtroom discovery, which has the potential to bring details of NSO’s business deals and customers into the public eye.

    “A lot depends on exactly how the court rules and how broadly it characterizes the violation NSO is alleged to have committed here,” says Alan Rozenshtein, a former Justice Department lawyer now at the University of Minnesota Law School. “At a minimum, if NSO loses this case, it calls into question all of those companies that make their products or make their living by finding flaws in messaging software and providing services exploiting those flaws. This will create enough legal uncertainty that I would imagine these would-be clients would think twice before contracting with them. You don’t know if the company will continue to operate, if they’ll get dragged to court, if your secrets will be exposed.” NSO declined to comment on the alleged WhatsApp hack, since it is still an active case.

    “We are always spied on”

    In Morocco, Maâti Monjib was subjected to at least four more hacking attacks throughout 2019, each more advanced than the one before. At some point, his phone browser was invisibly redirected to a suspicious domain that researchers suspect was used to silently install malware. Unlike a text message, which can raise the alarm and leave a visible trace, this was a much quieter network injection attack, a tactic valued because it’s almost imperceptible except to expert investigators.

    On September 13, 2019, Monjib had lunch at home with his friend Omar Radi, a Moroccan journalist who is one of the regime’s sharpest critics. That very day, an investigation later found, Radi was hit with the same kind of network injection attacks that had snared Monjib. The hacking campaign against Radi lasted at least into January 2020, Amnesty International researchers said. He’s been subject to regular police harassment ever since.

    At least seven more Moroccans received warnings from WhatsApp about Pegasus being used to spy on their phones, including human rights activists, journalists, and politicians. Are these the kinds of legitimate spying targets—the terrorists and criminals—laid out in the contract that Morocco and all NSO customers sign?

    In December, Monjib and the other victims sent a letter to Morocco’s data protection authority asking for an investigation and action. Nothing formally came of it, but one of the men, the pro-democracy economist Fouad Abdelmoumni, says his friends high up at the agency told him the letter was hopeless and urged him to drop the matter. The Moroccan government, meanwhile, has responded by threatening to expel Amnesty International from the country.

    What’s happening in Morocco is emblematic of what’s happening around the world. While it’s clear that democracies are major beneficiaries of lawful hacking, a long and growing list of credible, detailed, technical, and public investigations shows Pegasus being misused by authoritarian regimes with long records of human rights abuse.

    “Morocco is a country under an authoritarian regime who believe people like Monjib and myself have to be destroyed,” says Abdelmoumni. “To destroy us, having access to all information is key. We always consider that we are spied on. All of our information is in the hands of the palace.”

    #Apple #NSO #Facebook #WhatsApp #iPhone #Pegasus #smartphone #spyware #activisme #journalisme #écoutes #hacking #surveillance #Amnesty (...)

    ##CitizenLab

  • Inside China’s unexpected quest to protect data privacy
    https://www.technologyreview.com/2020/08/19/1006441/china-data-privacy-hong-yanqing-gdpr

    A new privacy law would look a lot like Europe’s GDPR—but will it restrict state surveillance?

    Late in the summer of 2016, Xu Yuyu received a call that promised to change her life. Her college entrance examination scores, she was told, had won her admission to the English department of the Nanjing University of Posts and Telecommunications. Xu lived in the city of Linyi in Shandong, a coastal province in China, southeast of Beijing. She came from a poor family, singularly reliant on her father’s meager income. But her parents had painstakingly saved for her tuition; very few of her relatives had ever been to college.

    A few days later, Xu received another call telling her she had also been awarded a scholarship. To collect the 2,600 yuan ($370), she needed to first deposit a 9,900 yuan “activation fee” into her university account. Having applied for financial aid only days before, she wired the money to the number the caller gave her. That night, the family rushed to the police to report that they had been defrauded. Xu’s father later said his greatest regret was asking the officer whether they might still get their money back. The answer—“Likely not”—only exacerbated Xu’s devastation. On the way home she suffered a heart attack. She died in a hospital two days later.

    An investigation determined that while the first call had been genuine, the second had come from scammers who’d paid a hacker for Xu’s number, admissions status, and request for financial aid.

    For Chinese consumers all too familiar with having their data stolen, Xu became an emblem. Her death sparked a national outcry for greater data privacy protections. Only months before, the European Union had adopted the General Data Protection Regulation (GDPR), an attempt to give European citizens control over how their personal data is used. Meanwhile, Donald Trump was about to win the American presidential election, fueled in part by a campaign that relied extensively on voter data. That data included details on 87 million Facebook accounts, illicitly obtained by the consulting firm Cambridge Analytica. Chinese regulators and legal scholars followed these events closely.

    In the West, it’s widely believed that neither the Chinese government nor Chinese people care about privacy. US tech giants wield this supposed indifference to argue that onerous privacy laws would put them at a competitive disadvantage to Chinese firms. In his 2018 Senate testimony after the Cambridge Analytica scandal, Facebook’s CEO, Mark Zuckerberg, urged regulators not to clamp down too hard on technologies like face recognition. “We still need to make it so that American companies can innovate in those areas,” he said, “or else we’re going to fall behind Chinese competitors and others around the world.”

    In reality, this picture of Chinese attitudes to privacy is out of date. Over the last few years the Chinese government, seeking to strengthen consumers’ trust and participation in the digital economy, has begun to implement privacy protections that in many respects resemble those in America and Europe today.

    Even as the government has strengthened consumer privacy, however, it has ramped up state surveillance. It uses DNA samples and other biometrics, like face and fingerprint recognition, to monitor citizens throughout the country. It has tightened internet censorship and developed a “social credit” system, which punishes behaviors the authorities say weaken social stability. During the pandemic, it deployed a system of “health code” apps to dictate who could travel, based on their risk of carrying the coronavirus. And it has used a slew of invasive surveillance technologies in its harsh repression of Muslim Uighurs in the northwestern region of Xinjiang.

    This paradox has become a defining feature of China’s emerging data privacy regime, says Samm Sacks, a leading China scholar at Yale and New America, a think tank in Washington, DC. It raises a question: Can a system endure with strong protections for consumer privacy, but almost none against government snooping? The answer doesn’t affect only China. Its technology companies have an increasingly global footprint, and regulators around the world are watching its policy decisions.

    November 2000 arguably marks the birth of the modern Chinese surveillance state. That month, the Ministry of Public Security, the government agency that oversees daily law enforcement, announced a new project at a trade show in Beijing. The agency envisioned a centralized national system that would integrate both physical and digital surveillance using the latest technology. It was named Golden Shield.

    Eager to cash in, Western companies including American conglomerate Cisco, Finnish telecom giant Nokia, and Canada’s Nortel Networks worked with the agency on different parts of the project. They helped construct a nationwide database for storing information on all Chinese adults, and developed a sophisticated system for controlling information flow on the internet—what would eventually become the Great Firewall. Much of the equipment involved had in fact already been standardized to make surveillance easier in the US—a consequence of the Communications Assistance for Law Enforcement Act of 1994.

    Despite the standardized equipment, the Golden Shield project was hampered by data silos and turf wars within the Chinese government. Over time, the ministry’s pursuit of a singular, unified system devolved into two separate operations: a surveillance and database system, devoted to gathering and storing information, and the social-credit system, which some 40 government departments participate in. When people repeatedly do things that aren’t allowed—from jaywalking to engaging in business corruption—their social-credit score falls and they can be blocked from things like buying train and plane tickets or applying for a mortgage.

    In the same year the Ministry of Public Security announced Golden Shield, Hong Yanqing entered the ministry’s police university in Beijing. But after seven years of training, having received his bachelor’s and master’s degrees, Hong began to have second thoughts about becoming a policeman. He applied instead to study abroad. By the fall of 2007, he had moved to the Netherlands to begin a PhD in international human rights law, approved and subsidized by the Chinese government.

    Over the next four years, he familiarized himself with the Western practice of law through his PhD research and a series of internships at international organizations. He worked at the International Labor Organization on global workplace discrimination law and the World Health Organization on road safety in China. “It’s a very legalistic culture in the West—that really strikes me. People seem to go to court a lot,” he says. “For example, for human rights law, most of the textbooks are about the significant cases in court resolving human rights issues.”

    Hong found this to be strangely inefficient. He saw going to court as a final resort for patching up the law’s inadequacies, not a principal tool for establishing it in the first place. Legislation crafted more comprehensively and with greater forethought, he believed, would achieve better outcomes than a system patched together through a haphazard accumulation of case law, as in the US.

    After graduating, he carried these ideas back to Beijing in 2012, on the eve of Xi Jinping’s ascent to the presidency. Hong worked at the UN Development Program and then as a journalist for the People’s Daily, the largest newspaper in China, which is owned by the government.

    Xi began to rapidly expand the scope of government censorship. Influential commentators, or “Big Vs”—named for their verified accounts on social media—had grown comfortable criticizing and ridiculing the Chinese Communist Party. In the fall of 2013, the party arrested hundreds of microbloggers for what it described as “malicious rumor-mongering” and paraded a particularly influential one on national television to make an example of him.

    The moment marked the beginning of a new era of censorship. The following year, the Cyberspace Administration of China was founded. The new central agency was responsible for everything involved in internet regulation, including national security, media and speech censorship, and data protection. Hong left the People’s Daily and joined the agency’s department of international affairs. He represented it at the UN and other global bodies and worked on cybersecurity cooperation with other governments.

    By July 2015, the Cyberspace Administration had released a draft of its first law. The Cybersecurity Law, which entered into force in June of 2017, required that companies obtain consent from people to collect their personal information. At the same time, it tightened internet censorship by banning anonymous users—a provision enforced by regular government inspections of data from internet service providers.

    In the spring of 2016, Hong sought to return to academia, but the agency asked him to stay. The Cybersecurity Law had purposely left the regulation of personal data protection vague, but consumer data breaches and theft had reached unbearable levels. A 2016 study by the Internet Society of China found that 84% of those surveyed had suffered some leak of their data, including phone numbers, addresses, and bank account details. This was spurring a growing distrust of digital service providers that required access to personal information, such as ride-hailing, food-delivery, and financial apps. Xu Yuyu’s death poured oil on the flames.

    The government worried that such sentiments would weaken participation in the digital economy, which had become a central part of its strategy for shoring up the country’s slowing economic growth. The advent of GDPR also made the government realize that Chinese tech giants would need to meet global privacy norms in order to expand abroad.

    Hong was put in charge of a new task force that would write a Personal Information Protection Specification (PIPS) to help solve these challenges. The document, though nonbinding, would tell companies how regulators intended to implement the Cybersecurity Law. In the process, the government hoped, it would nudge them to adopt new norms for data protection by themselves.

    Hong’s task force set about translating every relevant document they could find into Chinese. They translated the privacy guidelines put out by the Organization for Economic Cooperation and Development and by its counterpart, the Asia-Pacific Economic Cooperation; they translated GDPR and the California Consumer Privacy Act. They even translated the 2012 White House Consumer Privacy Bill of Rights, introduced by the Obama administration but never made into law. All the while, Hong met regularly with European and American data protection regulators and scholars.

    Bit by bit, from the documents and consultations, a general choice emerged. “People were saying, in very simplistic terms, ‘We have a European model and the US model,’” Hong recalls. The two approaches diverged substantially in philosophy and implementation. Which one to follow became the task force’s first debate.

    At the core of the European model is the idea that people have a fundamental right to have their data protected. GDPR places the burden of proof on data collectors, such as companies, to demonstrate why they need the data. By contrast, the US model privileges industry over consumers. Businesses define for themselves what constitutes reasonable data collection; consumers only get to choose whether to use that business. The laws on data protection are also far more piecemeal than in Europe, divvied up among sectoral regulators and specific states.

    At the time, without a central law or single agency in charge of data protection, China’s model more closely resembled the American one. The task force, however, found the European approach compelling. “The European rule structure, the whole system, is more clear,” Hong says.

    But most of the task force members were representatives from Chinese tech giants, like Baidu, Alibaba, and Huawei, and they felt that GDPR was too restrictive. So they adopted its broad strokes—including its limits on data collection and its requirements on data storage and data deletion—and then loosened some of its language. GDPR’s principle of data minimization, for example, maintains that only necessary data should be collected in exchange for a service. PIPS allows room for other data collection relevant to the service provided.

    PIPS came into effect in May 2018, the same month that GDPR finally took effect. But as Chinese officials watched the US upheaval over the Facebook and Cambridge Analytica scandal, they realized that a nonbinding agreement would not be enough. The Cybersecurity Law didn’t have a strong mechanism for enforcing data protection. Regulators could only fine violators up to 1,000,000 yuan ($140,000), an inconsequential amount for large companies. Soon after, the National People’s Congress, China’s top legislative body, voted to begin drafting a Personal Information Protection Law within its current five-year legislative period, which ends in 2023. It would strengthen data protection provisions, provide for tougher penalties, and potentially create a new enforcement agency.

    After Cambridge Analytica, says Hong, “the government agency understood, ‘Okay, if you don’t really implement or enforce those privacy rules, then you could have a major scandal, even affecting political things.’”

    The local police investigation of Xu Yuyu’s death eventually identified the scammers who had called her. It had been a gang of seven who’d cheated many other victims out of more than 560,000 yuan using illegally obtained personal information. The court ruled that Xu’s death had been a direct result of the stress of losing her family’s savings. Because of this, and his role in orchestrating tens of thousands of other calls, the ringleader, Chen Wenhui, 22, was sentenced to life in prison. The others received sentences between three and 15 years.

    Emboldened, Chinese media and consumers began more openly criticizing privacy violations. In March 2018, internet search giant Baidu’s CEO, Robin Li, sparked social-media outrage after suggesting that Chinese consumers were willing to “exchange privacy for safety, convenience, or efficiency.” “Nonsense,” wrote a social-media user, later quoted by the People’s Daily. “It’s more accurate to say [it is] impossible to defend [our privacy] effectively.”

    In late October 2019, social-media users once again expressed anger after photos began circulating of a school’s students wearing brainwave-monitoring headbands, supposedly to improve their focus and learning. The local educational authority eventually stepped in and told the school to stop using the headbands because they violated students’ privacy. A week later, a Chinese law professor sued a Hangzhou wildlife zoo for replacing its fingerprint-based entry system with face recognition, saying the zoo had failed to obtain his consent for storing his image.

    But the public’s growing sensitivity to infringements of consumer privacy has not led to many limits on state surveillance, nor even much scrutiny of it. As Maya Wang, a researcher at Human Rights Watch, points out, this is in part because most Chinese citizens don’t know the scale or scope of the government’s operations. In China, as in the US and Europe, there are broad public and national security exemptions to data privacy laws. The Cybersecurity Law, for example, allows the government to demand data from private actors to assist in criminal legal investigations. The Ministry of Public Security also accumulates massive amounts of data on individuals directly. As a result, data privacy in industry can be strengthened without significantly limiting the state’s access to information.

    The onset of the pandemic, however, has disturbed this uneasy balance.

    On February 11, Ant Financial, a financial technology giant headquartered in Hangzhou, a city southwest of Shanghai, released an app-building platform called AliPay Health Code. The same day, the Hangzhou government released an app it had built using the platform. The Hangzhou app asked people to self-report their travel and health information, and then gave them a color code of red, yellow, or green. Suddenly Hangzhou’s 10 million residents were all required to show a green code to take the subway, shop for groceries, or enter a mall. Within a week, local governments in over 100 cities had used AliPay Health Code to develop their own apps. Rival tech giant Tencent quickly followed with its own platform for building them.

    The apps made visible a worrying level of state surveillance and sparked a new wave of public debate. In March, Hu Yong, a journalism professor at Beijing University and an influential blogger on Weibo, argued that the government’s pandemic data collection had crossed a line. Not only had it led to instances of information being stolen, he wrote, but it had also opened the door to such data being used beyond its original purpose. “Has history ever shown that once the government has surveillance tools, it will maintain modesty and caution when using them?” he asked.

    Indeed, in late May, leaked documents revealed plans from the Hangzhou government to make a more permanent health-code app that would score citizens on behaviors like exercising, smoking, and sleeping. After a public outcry, city officials canceled the project. That state-run media had also published stories criticizing the app likely helped.

    The debate quickly made its way to the central government. That month, the National People’s Congress announced it intended to fast-track the Personal Information Protection Law. The scale of the data collected during the pandemic had made strong enforcement more urgent, delegates said, and highlighted the need to clarify the scope of the government’s data collection and data deletion procedures during special emergencies. By July, the legislative body had proposed a new “strict approval” process for government authorities to undergo before collecting data from private-sector platforms. The language again remains vague, to be fleshed out later—perhaps through another nonbinding document—but this move “could mark a step toward limiting the broad scope” of existing government exemptions for national security, wrote Sacks and fellow China scholars at New America.

    Hong similarly believes the discrepancy between rules governing industry and government data collection won’t last, and the government will soon begin to limit its own scope. “We cannot simply address one actor while leaving the other out,” he says. “That wouldn’t be a very scientific approach.”

    Other observers disagree. The government could easily make superficial efforts to address public backlash against visible data collection without really touching the core of the Ministry of Public Security’s national operations, says Wang, of Human Rights Watch. She adds that any laws would likely be enforced unevenly: “In Xinjiang, Turkic Muslims have no say whatsoever in how they’re treated.”

    Still, Hong remains an optimist. In July, he started a job teaching law at Beijing University, and he now maintains a blog on cybersecurity and data issues. Monthly, he meets with a budding community of data protection officers in China, who carefully watch how data governance is evolving around the world.

    #criminalité #Nokia_Siemens #fraude #Huawei #payement #Cisco #CambridgeAnalytica/Emerdata #Baidu #Alibaba #domination #bénéfices #BHATX #BigData #lutte #publicité (...)

    ##criminalité ##CambridgeAnalytica/Emerdata ##publicité ##[fr]Règlement_Général_sur_la_Protection_des_Données__RGPD_[en]General_Data_Protection_Regulation__GDPR_[nl]General_Data_Protection_Regulation__GDPR_ ##Nortel_Networks ##Facebook ##biométrie ##consommation ##génétique ##consentement ##facial ##reconnaissance ##empreintes ##Islam ##SocialCreditSystem ##surveillance ##TheGreatFirewallofChina ##HumanRightsWatch

  • The long, complicated history of “people analytics”
    https://www.technologyreview.com/2020/08/19/1006365/if-then-lepore-review-simulmatics

    If you work for Bank of America, or the US Army, you might have used technology developed by Humanyze. The company grew out of research at MIT’s cross-disciplinary Media Lab and describes its products as “science-backed analytics to drive adaptability.” If that sounds vague, it might be deliberate. Among the things Humanyze sells to businesses are devices for snooping on employees, such as ID badges with embedded RFID tags, near-field-communication sensors, and built-in microphones that track (...)

    #BankofAmerica #Humanyze #USArmy #DoD #IBM #algorithme #capteur #RFID #militaire #compagnie #élections #prédiction #son #comportement #surveillance #travail (...)

    ##voix

  • The long, complicated history of “people analytics” | MIT Technology Review
    https://www.technologyreview.com/2020/08/19/1006365/if-then-lepore-review-simulmatics/?truid=a497ecb44646822921c70e7e051f7f1a

    If you work for Bank of America, or the US Army, you might have used technology developed by Humanyze. The company describes its products as “science-backed analytics to drive adaptability.”

    If that sounds vague, it might be deliberate. Among the things Humanyze sells to businesses are devices for snooping on employees, such as ID badges with embedded RFID tags and built-in microphones that track in granular detail the tone and volume (though not the actual words) of people’s conversations throughout the day. Humanyze uses the data to create an “Organizational Health Score,” which it promises is “a proven formula to accelerate change and drive improvement.”

    Or perhaps you work for one of the healthcare, retail, or financial-­services companies that use software developed by Receptiviti. The Toronto-based company’s mission is to “help machines understand people” by scanning emails and Slack messages for linguistic hints of unhappiness. “We worry about the perception of Big Brother,” Receptiviti’s CEO recently told the Wall Street Journal. He prefers calling employee surveillance “corporate mindfulness.” (Orwell would have had something to say about that euphemism, too.)

    Such efforts at what its creators call “people analytics” are usually justified on the grounds of improving efficiency or the customer experience. In recent months, some governments and public health experts have advocated tracking and tracing applications as a means of stopping the spread of covid-19.

    But in embracing these technologies, businesses and governments often avoid answering crucial questions: Who should know what about you? Is what they know accurate? What should they be able to do with that information? And is it ever possible to devise a “proven formula” for assessing human behavior? Simulmatics, a now-defunct “people analytics” company, provides a cautionary tale, writes Christine Rosen, and confirms that all these ventures are based on a false belief that mathematical laws of human nature are real in the way that laws of physics are.

    #Travail #Surveillance #Contrôle_social

  • Inside China’s unexpected quest to protect data privacy | MIT Technology Review
    https://www.technologyreview.com/2020/08/19/1006441/china-data-privacy-hong-yanqing-gdpr/?truid=a497ecb44646822921c70e7e051f7f1a

    In the West, it’s widely believed that neither the Chinese government nor Chinese people care about privacy. US tech giants wield this supposed indifference to argue that onerous privacy laws would put them at a competitive disadvantage to Chinese firms.

    In reality, this picture of Chinese attitudes to privacy is out of date. Over the last few years the Chinese government, seeking to strengthen consumers’ trust and participation in the digital economy, has begun to implement privacy protections that in many respects resemble those in America and Europe today.

    Even as the government has strengthened consumer privacy, however, it has ramped up state surveillance. It uses DNA samples and other biometrics, like face and fingerprint recognition, to monitor citizens throughout the country.

    It has tightened internet censorship and developed a “social credit” system, which punishes behaviors the authorities say weaken social stability. During the pandemic, it deployed a system of “health code” apps to dictate who could travel, based on their risk of carrying the coronavirus. And it has used a slew of invasive surveillance technologies in its harsh repression of Muslim Uighurs in the northwestern region of Xinjiang.

    This paradox has become a defining feature of China’s emerging data privacy regime. It raises a question: Can a system endure with strong protections for consumer privacy, but almost none against government snooping? The answer doesn’t affect only China. Its technology companies have an increasingly global footprint, and regulators around the world are watching its policy decisions.

    #Chine #Vie_privée #Surveillance

  • Brazil is sliding into techno-authoritarianism | MIT Technology Review
    https://www.technologyreview.com/2020/08/19/1007094/brazil-bolsonaro-data-privacy-cadastro-base/?truid=a497ecb44646822921c70e7e051f7f1a

    For many years, Latin America’s largest democracy was a leader on data governance. In 1995, it created the Brazilian Internet Steering Committee, a multi-stakeholder body to help the country set principles for internet governance. In 2014, Dilma Rousseff’s government pioneered the Marco Civil (Civil Framework), an internet “bill of rights” lauded by Tim Berners-Lee, the inventor of the World Wide Web. Four years later, Brazil’s congress passed a data protection law, the LGPD, closely modeled on Europe’s GDPR.

    Recently, though, the country has veered down a more authoritarian path. Even before the pandemic, Brazil had begun creating an extensive data-collection and surveillance infrastructure. In October 2019, President Jair Bolsonaro signed a decree compelling all federal bodies to share most of the data they hold on Brazilian citizens, from health records to biometric information, and consolidate it in a vast master database, the Cadastro Base do Cidadão (Citizen’s Basic Register). With no debate or public consultation, the measure took many people by surprise.

    In lowering barriers to the exchange of information, the government says, it hopes to increase the quality and consistency of data it holds. This could—according to the official line—improve public services, cut down on voter fraud, and reduce bureaucracy. In a country with some 210 million people, such a system could speed up the delivery of social welfare and tax benefits, and make public policies more efficient.

    But critics have warned that under Bolsonaro’s far-right leadership, this concentration of data will be used to abuse personal privacy and civil liberties. And the covid-19 pandemic appears to be accelerating the country’s slide toward a surveillance state. Read the full story.

    #Brésil #Surveillance #Vie_privée #Législation

  • Podcast : Want consumer privacy ? Try China
    https://www.technologyreview.com/2020/08/19/1007425/data-privacy-china-gdpr

    Forget the idea that China doesn’t care about privacy—its citizens will soon have much greater consumer privacy protections than Americans. The narrative in the US that the Chinese don’t care about data privacy is simply misguided. It’s true that the Chinese government has built a sophisticated surveillance apparatus (with the help of Western companies), and continues to spy on its citizenry. But when it comes to what companies can do with people’s information, China is rapidly moving toward a (...)

    #Alibaba #Apple #ByteDance #Cisco #Google #Nokia_Siemens #Nortel_Networks #TikTok #Facebook #WeChat #Weibo #QRcode #smartphone #censure #BHATX #BigData #COVID-19 #GAFAM #santé #surveillance (...)

    ##santé ##[fr]Règlement_Général_sur_la_Protection_des_Données__RGPD_[en]General_Data_Protection_Regulation__GDPR_[nl]General_Data_Protection_Regulation__GDPR_