https://www.technologyreview.com

  • How to poison the data that Big Tech uses to surveil you
    https://www.technologyreview.com/2021/03/05/1020376/resist-big-tech-surveillance-data

    Algorithms are meaningless without good data. The public can exploit that to demand change. Every day, your life leaves a trail of digital breadcrumbs that tech giants use to track you. You send an email, order some food, stream a show. They get back valuable packets of data to build up their understanding of your preferences. That data is fed into machine-learning algorithms to target you with ads and recommendations. Google cashes your data in for over $120 billion a year of ad revenue. (...)

    #Google #algorithme #activisme #General_Data_Protection_Regulation_(GDPR) #BigData (...)

    ##microtargeting

    • Data Leverage: A Framework for Empowering the Public in its Relationship with Technology Companies

      https://arxiv.org/pdf/2012.09995.pdf

      Many powerful computing technologies rely on implicit and explicit data contributions from the public. This dependency suggests a potential source of leverage for the public in its relationship with technology companies: by reducing, stopping, redirecting, or otherwise manipulating data contributions, the public can reduce the effectiveness of many lucrative technologies. In this paper, we synthesize emerging research that seeks to better understand and help people action this data leverage. Drawing on prior work in areas including machine learning, human-computer interaction, and fairness and accountability in computing, we present a framework for understanding data leverage that highlights new opportunities to change technology company behavior related to privacy, economic inequality, content moderation and other areas of societal concern. Our framework also points towards ways that policymakers can bolster data leverage as a means of changing the balance of power between the public and tech companies.
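
      To make the abstract's central claim concrete: if enough contributors withhold or alter their data, model quality measurably drops. The toy experiment below is not from the paper; it is a minimal sketch, assuming scikit-learn and NumPy are installed, that flips a growing fraction of training labels and reports how a simple classifier's test accuracy degrades.

```python
# Toy illustration of data manipulation as leverage: flipping a fraction of
# training labels degrades a model trained on that data. Hypothetical example
# using scikit-learn; not code from the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for poison_rate in (0.0, 0.1, 0.3, 0.5):
    y_poisoned = y_train.copy()
    n_flip = int(poison_rate * len(y_poisoned))
    rng = np.random.RandomState(0)
    flip_idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]  # flip the 0/1 labels

    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    acc = model.score(X_test, y_test)
    print(f"poisoned fraction {poison_rate:.0%}: test accuracy {acc:.2f}")
```

      Crude label flipping is only a stand-in for the broader actions the authors describe (reducing, stopping, redirecting, or otherwise manipulating contributions), but it shows why collective data manipulation translates into leverage.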

  • Is the new boom in digital art sales a genuine opportunity or a trap? | MIT Technology Review
    https://www.technologyreview.com/2021/03/25/1021215/nft-artists-scams-profit-environment-blockchain

    Artists are jumping into a market that will pay thousands for their work. But they’re running into scams, environmental concerns, and crypto hype.

    Anna Podedworna first heard about NFTs a month or so ago, when a fellow artist sent her an Instagram message trying to convince her to get on board. She found it off-putting, like a pitch for a pyramid scheme, even though she thought he had the best of intentions. NFTs, or non-fungible tokens, are basically just a way of selling and buying anything digital, including art, supported by cryptocurrency. Despite Podedworna’s initial reaction, she started researching whether they might provide some alternative income.
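
    For readers new to the mechanics: an NFT is, at bottom, a unique token ID recorded on a blockchain that maps to an owner's address and a metadata URI pointing at the artwork. The toy in-memory ledger below is a hypothetical sketch of that data model, loosely inspired by the ERC-721 pattern; it is not how any real marketplace or contract is implemented, and the wallet names and URI are invented.

```python
# Hypothetical sketch of the core NFT idea: a unique token ID mapped to an
# owner and a metadata URI. Real NFTs live on a blockchain (e.g. ERC-721
# contracts); this in-memory version only illustrates the data model.
from dataclasses import dataclass, field

@dataclass
class Token:
    token_id: int
    owner: str
    metadata_uri: str  # typically points at a JSON file describing the artwork

@dataclass
class ToyNFTLedger:
    tokens: dict[int, Token] = field(default_factory=dict)
    next_id: int = 1

    def mint(self, owner: str, metadata_uri: str) -> int:
        token_id = self.next_id
        self.tokens[token_id] = Token(token_id, owner, metadata_uri)
        self.next_id += 1
        return token_id

    def transfer(self, token_id: int, sender: str, recipient: str) -> None:
        token = self.tokens[token_id]
        if token.owner != sender:
            raise PermissionError("only the current owner can transfer a token")
        token.owner = recipient

ledger = ToyNFTLedger()
tid = ledger.mint("artist_wallet", "ipfs://example-metadata-hash")  # invented URI
ledger.transfer(tid, "artist_wallet", "collector_wallet")
print(ledger.tokens[tid])
```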

    She’s still on the fence, but NFTs have become an unavoidable subject for anyone earning a living as a creative person online. Some promise that NFTs are part of a digital revolution that will democratize fame and give creators control. Others point to the environmental impact of crypto and worry about unrealistic expectations set by, say, the news that digital artist Beeple had sold a JPG of his collected works for $69 million in a Christie’s auction.

    Newcomers must untangle practical, logistical, and ethical conundrums if they want to enter the fray before the current wave of interest passes. And there’s a question lingering in the background: Is the NFT craze benefiting digital artists, or are artists helping to make wealthy cryptocurrency holders even richer?

    #NFT #Art_numérique #Cryptoart #Arnaque #Cryptomonnaies #Idéologie_propriétaire

  • Scientists plan to drop limits on how far human embryos are grown in the lab | MIT Technology Review
    https://www.technologyreview.com/2021/03/16/1020879/scientists-14-day-limit-stem-cell-human-embryo-research/?truid=a497ecb44646822921c70e7e051f7f1a

    As technology for manipulating embryonic life accelerates, researchers want to get rid of their biggest stop sign.

    Antonio Regalado
    March 16, 2021

    Pushing the limits: For the last 40 years, scientists have agreed never to allow human embryos to develop beyond two weeks in their labs. Now a key scientific body is ready to do away with the 14-day limit. The International Society for Stem Cell Research has prepared draft recommendations to move such research out of a category of “prohibited” scientific activities and into a class of research that can be permitted after ethics review and depending on national regulations.

    Why? Scientists are motivated to grow embryos longer in order to study—and potentially manipulate—the development process. They believe discoveries could come from studying embryos longer, for example improvements to IVF or finding clues to the causes of birth defects. But such techniques raise the possibility of someday gestating animals outside the womb until birth, a concept called ectogenesis. And the long-term growth of embryos could create a platform to explore the genetic engineering of humans.

    #Cellules_souches #Biotechnologies #Embryons_humains #Hubris

  • He got Facebook hooked on AI. Now he can’t fix its misinformation addiction, by Karen Hao | MIT Technology Review
    https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation

    The company’s AI algorithms gave it an insatiable habit for lies and hate speech. Now the man who built them can’t fix the problem.

    #facebook #AI

  • MIT Technology Review: We reveal our 10 Breakthrough Technologies of 2021

    For the last 20 years, MIT Technology Review has compiled an annual selection of the year’s most important technologies. Today, we unveil this year’s list. Some, such as mRNA vaccines, are already changing our lives, while others are still a few years off. As always, three things are true of our list. It is eclectic; some of the innovations on it are clearly making an impact now, while some have yet to do so; and many of them have the potential to do harm as well as good. Whether or not they come to represent progress 20 years from now depends on how they’re used—and, of course, on how we’re defining progress by then. Taken together, we believe this list represents a glimpse into our collective future.

    Here are our 10 breakthrough technologies of 2021:

    Messenger RNA vaccines. The two most effective vaccines against the coronavirus are based on messenger RNA, a technology that has been in the works for 20 years and could transform medicine, leading to vaccines against various infectious diseases, including malaria.

    GPT-3. Large natural-language computer models that learn to write and speak are a big step toward AI that can better understand and interact with the world. GPT-3 is by far the largest—and most literate—to date.

    TikTok recommendation algorithms. These algorithms have changed the way people become famous online. The ability of new creators to get a lot of views very quickly—and the ease with which users can discover so many kinds of content—have contributed to the app’s stunning growth.

    Lithium-metal batteries. Electric vehicles are expensive, and you can only drive them a few hundred miles before they need to recharge. Lithium-metal batteries, as opposed to the existing lithium-ion, could boost the range of an EV by 80%.

    Data trusts. A data trust is a legal entity that collects and manages people’s personal data on their behalf. They could offer a potential solution to long-standing problems in privacy and security.

    Green hydrogen. Hydrogen has always been an intriguing possible replacement for fossil fuels, but up to now it’s been made from natural gas; the process is dirty and energy intensive. The rapidly dropping cost of solar and wind power means green hydrogen is now cheap enough to be practical.

    Digital contact tracing. Although it hasn’t lived up to the hype in this pandemic, especially in the US, digital contact tracing could not only help us prepare for the next pandemic but also carry over to other areas of healthcare.

    Hyper-accurate positioning. While GPS is accurate to within 5 to 10 meters, new hyper-accurate positioning technologies have accuracies within a few millimeters. That could be transformative for delivery robots and self-driving cars.

    Remote everything. The pandemic forced the world to go remote. The knock-on effects for work, play, healthcare and much else besides are huge.

    Multi-skilled AI. AI currently lacks the ability, found even in young children, to learn how the world works and apply that general knowledge to new situations. That’s changing.

    Read more about each of these technologies, and read the latest issue of MIT Technology Review, all about progress.

    #Technologies

  • This is how we lost control of our faces | MIT Technology Review
    https://www.technologyreview.com/2021/02/05/1017388/ai-deep-learning-facial-recognition-data-history

    The largest ever study of facial-recognition data shows how much the rise of deep learning has fueled a loss of privacy.
    by Karen Hao
    February 5, 2021

    In 1964, mathematician and computer scientist Woodrow Bledsoe first attempted the task of matching suspects’ faces to mugshots. He measured out the distances between different facial features in printed photographs and fed them into a computer program. His rudimentary successes would set off decades of research into teaching machines to recognize human faces.
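
    Bledsoe's method amounted to representing each face as a small vector of hand-measured distances and retrieving the closest stored vector. A minimal nearest-neighbor sketch of that idea follows; the feature names and measurements are invented for illustration, not taken from his work.

```python
# Minimal sketch of 1960s-style face matching: represent each face as a
# vector of measured distances between features and pick the nearest mugshot.
# All measurements below are invented for illustration.
import math

mugshots = {
    "suspect_a": [62.0, 41.5, 33.0, 88.0],  # e.g. eye spacing, nose-to-mouth, etc.
    "suspect_b": [58.5, 44.0, 30.5, 92.0],
    "suspect_c": [65.0, 39.0, 35.5, 85.5],
}

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def best_match(query, database):
    # Return the stored face whose measurement vector is closest to the query.
    return min(database, key=lambda name: euclidean(query, database[name]))

probe = [61.0, 42.0, 33.5, 87.0]  # measurements taken from a new photograph
print(best_match(probe, mugshots))  # -> "suspect_a"
```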

    Now a new study shows just how much this enterprise has eroded our privacy. It hasn’t just fueled an increasingly powerful tool of surveillance. The latest generation of deep-learning-based facial recognition has completely disrupted our norms of consent.

    People were extremely cautious about collecting, documenting, and verifying face data in the early days, says Deborah Raji, a co-author of the study. “Now we don’t care anymore. All of that has been abandoned,” she says. “You just can’t keep track of a million faces. After a certain point, you can’t even pretend that you have control.”

    A history of facial-recognition data

    The researchers identified four major eras of facial recognition, each driven by an increasing desire to improve the technology. The first phase, which ran until the 1990s, was largely characterized by manually intensive and computationally slow methods.

    But then, spurred by the realization that facial recognition could track and identify individuals more effectively than fingerprints, the US Department of Defense pumped $6.5 million into creating the first large-scale face data set. Over 15 photography sessions in three years, the project captured 14,126 images of 1,199 individuals. The Face Recognition Technology (FERET) database was released in 1996.

    The following decade saw an uptick in academic and commercial facial-recognition research, and many more data sets were created. The vast majority were sourced through photo shoots like FERET’s and had full participant consent. Many also included meticulous metadata, Raji says, such as the age and ethnicity of subjects, or illumination information. But these early systems struggled in real-world settings, which drove researchers to seek larger and more diverse data sets.

    In 2007, the release of the Labeled Faces in the Wild (LFW) data set opened the floodgates to data collection through web search. Researchers began downloading images directly from Google, Flickr, and Yahoo without concern for consent. LFW also relaxed standards around the inclusion of minors, using photos found with search terms like “baby,” “juvenile,” and “teen” to increase diversity. This process made it possible to create significantly larger data sets in a short time, but facial recognition still faced many of the same challenges as before. This pushed researchers to seek yet more methods and data to overcome the technology’s poor performance.

    Then, in 2014, Facebook used its user photos to train a deep-learning model called DeepFace. While the company never released the data set, the system’s superhuman performance elevated deep learning to the de facto method for analyzing faces. This is when manual verification and labeling became nearly impossible as data sets grew to tens of millions of photos, says Raji. It’s also when really strange phenomena started appearing, like auto-generated labels that include offensive terminology.

    The way the data sets were used began to change around this time, too. Instead of trying to match individuals, new models began focusing more on classification. “Instead of saying, ‘Is this a photo of Karen? Yes or no,’ it turned into ‘Let’s predict Karen’s internal personality, or her ethnicity,’ and boxing people into these categories,” Raji says.

    Amba Kak, the global policy director at AI Now, who did not participate in the research, says the paper offers a stark picture of how the biometrics industry has evolved. Deep learning may have rescued the technology from some of its struggles, but “that technological advance also has come at a cost,” she says. “It’s thrown up all these issues that we now are quite familiar with: consent, extraction, IP issues, privacy.”

    Raji says her investigation into the data has made her gravely concerned about deep-learning-based facial recognition.

    “It’s so much more dangerous,” she says. “The data requirement forces you to collect incredibly sensitive information about, at minimum, tens of thousands of people. It forces you to violate their privacy. That in itself is a basis of harm. And then we’re hoarding all this information that you can’t control to build something that likely will function in ways you can’t even predict. That’s really the nature of where we’re at.”

    #Reconnaissance_faciale #éthique #Histoire_numérique #Surveillance

  • The space tourism we were promised is finally here—sort of | MIT Technology Review
    https://www.technologyreview.com/2021/02/03/1017255/space-tourism-finally-here-sort-of-spacex-inspiration4/?truid=a497ecb44646822921c70e7e051f7f1a

    SpaceX weathered the onset of the covid-19 pandemic last year to become the first private company to launch astronauts into space using a commercial spacecraft.

    It’s poised to build on that success with another huge milestone before 2021 is over. On Monday, the company announced plans to launch the first “all-civilian” mission into orbit by the end of the year. Called Inspiration4, the mission will take billionaire Jared Isaacman, a trained pilot and the CEO of digital payments company Shift4Payments, plus three others into low Earth orbit via a Crew Dragon vehicle for two to four days, possibly longer.

    Inspiration4 includes a charity element: Isaacman (the sole buyer of the mission and its “commander”) has donated $100 million to St. Jude Children’s Research Hospital, in Memphis, and is attempting to raise at least $100 million more from public donors. One seat is going to a “St. Jude ambassador” who has already been chosen. But the two others are still up for grabs: one will be raffled off to someone who donates at least $10 to St. Jude, while the other will go to a business entrepreneur chosen through a competition held by Shift4Payments.

    “This is an important milestone towards enabling access to space for everyone,” SpaceX CEO Elon Musk told reporters on Monday. “It is only through missions like this that we’re able to bring the cost down over time and make space accessible to all.”

    Inspiration4 marks SpaceX’s fourth scheduled private mission in the next few years. The other three include a collaboration with Axiom Space to use Crew Dragon to take four people for an eight-day stay aboard the International Space Station (now scheduled for no earlier than January 2022); another Crew Dragon mission into orbit later that year for four private citizens through tourism company Space Adventures; and Japanese billionaire Yusaku Maezawa’s #dearMoon mission around the moon in 2023 for himself plus seven to 10 others aboard the Starship spacecraft.

    SpaceX has never really billed itself as a space tourism company as aggressively as Blue Origin and Virgin Galactic have. While Crew Dragon goes all the way into low-Earth orbit, Virgin Galactic’s SpaceShipTwo and Blue Origin’s New Shepard vehicles just go into suborbital space, offering a taste of microgravity and a view of the Earth from high above for just a few minutes—but for way less money. And yet, in building a business that goes even farther, with higher launch costs and the need for more powerful rockets, SpaceX already has four more private missions on the books than any other company does.

    When Crew Dragon first took NASA astronauts into space last year, one of the biggest questions to come up was whether customers outside NASA would actually be interested in going.

    “A lot of people believe there is a market for space tourism,” says Howard McCurdy, a space policy expert at American University in Washington, DC. “But right now it’s at the very high end. As transportation capabilities improve, the hope is that the costs will come down. That begs the question of whether or not you can sustain a new space company on space tourism alone. I think that’s questionable.”

    So why has SpaceX’s expansion into the private mission scene gone so well so far? Part of it must be that it’s such an attractive brand to partner with at the moment. But even if a market does not materialize soon to make private missions a profitable venture, SpaceX doesn’t need to be concerned. It has plenty of other ways to make money.

    “I’m not sure Elon Musk cares much if he makes money through this business,” says McCurdy. “But he’s very good at leveraging and financing his operations.” SpaceX launches satellites for government and commercial customers around the world; it’s got contracts with NASA for taking cargo and astronauts alike to the space station; and it’s ramping up progress on building out the Starlink constellation and should start offering internet services to customers sometime this year.

    “It really reduces your risk when you can have multiple sources of revenue and business for an undertaking that’s based upon the single leap of rockets and space technologies,” says McCurdy. “The market for space tourism is not large enough to sustain a commercial space company. When combined with government contracts, private investments, and foreign sales it starts to become sustainable.”

    Space tourism, especially to low-Earth orbit, will still remain incredibly expensive for the foreseeable future. And that underscores the issue of equity. “If we’re going into space, who’s the ‘we’?” asks McCurdy. “Is it just the top 1% of the top 1%?”

    The lottery concept addresses this to some extent and offers opportunities to ordinary people, but it won’t be enough on its own. Space tourism, and the rest of the space industry, still needs a sustainable model that can invite more people to participate.

    For now, SpaceX appears to be leading the drive to popularize space tourism. And competitors don’t necessarily need to emulate SpaceX’s business model precisely in order to catch up. Robert Goehlich, a Germany-based space tourism expert at Embry-Riddle Aeronautical University, notes that space tourism itself is already multifaceted, encompassing suborbital flights, orbital flights, space station flights, space hotel flights, and moon flights. The market for one, such as cheaper suborbital flights, does not necessarily face the same constraints as the others.

    Still, there is no question this could be the year private missions become a reality. “We’ve waited a long time for space tourism,” says McCurdy. “We’re going to get a chance this year to see if it works as expected.”

    #Espace #Commercialisation #Tourisme #Enclosures

  • How our data encodes systematic racism
    https://www.technologyreview.com/2020/12/10/1013617/racism-data-science-artificial-intelligence-ai-opinion

    Technologists must take responsibility for the toxic ideologies that our data sets and algorithms reflect. I’ve often been told, “The data does not lie.” However, that has never been my experience. For me, the data nearly always lies. Google Image search results for “healthy skin” show only light-skinned women, and a query on “Black girls” still returns pornography. The CelebA face data set has labels of “big nose” and “big lips” that are disproportionately assigned to darker-skinned female faces (...)

    #algorithme #racisme #données #biais #discrimination

  • Inside NSO, Israel’s billion-dollar spyware giant
    https://www.technologyreview.com/2020/08/19/1006458/nso-spyware-controversy-pegasus-human-rights

    The world’s most notorious surveillance company says it wants to clean up its act. Go on, we’re listening.

    Maâti Monjib speaks slowly, like a man who knows he’s being listened to.

    It’s the day of his 58th birthday when we speak, but there’s little celebration in his voice. “The surveillance is hellish,” Monjib tells me. “It is really difficult. It controls everything I do in my life.”

    A history professor at the University of Mohammed V in Rabat, Morocco, Monjib vividly remembers the day in 2017 when his life changed. Charged with endangering state security by the government he has fiercely and publicly criticized, he was sitting outside a courtroom when his iPhone suddenly lit up with a series of text messages from numbers he didn’t recognize. They contained links to salacious news, petitions, and even Black Friday shopping deals.

    A month later, an article accusing him of treason appeared on a popular national news site with close ties to Morocco’s royal rulers. Monjib was used to attacks, but now it seemed his harassers knew everything about him: another article included information about a pro-democracy event he was set to attend but had told almost no one about. One story even proclaimed that the professor “has no secrets from us.”

    He’d been hacked. The messages had all led to websites that researchers say were set up as lures to infect visitors’ devices with Pegasus, the most notorious spyware in the world.

    Pegasus is the blockbuster product of NSO Group, a secretive billion-dollar Israeli surveillance company. It is sold to law enforcement and intelligence agencies around the world, which use the company’s tools to choose a human target, infect the person’s phone with the spyware, and then take over the device. Once Pegasus is on your phone, it is no longer your phone.

    NSO sells Pegasus with the same pitch arms dealers use to sell conventional weapons, positioning it as a crucial aid in the hunt for terrorists and criminals. In an age of ubiquitous technology and strong encryption, such “lawful hacking” has emerged as a powerful tool for public safety when law enforcement needs access to data. NSO insists that the vast majority of its customers are European democracies, although since it doesn’t release client lists and the countries themselves remain silent, that has never been verified.

    Monjib’s case, however, is one of a long list of incidents in which Pegasus has been used as a tool of oppression. It has been linked to cases including the murder of Saudi journalist Jamal Khashoggi, the targeting of scientists and campaigners pushing for political reform in Mexico, and Spanish government surveillance of Catalan separatist politicians. Mexico and Spain have denied using Pegasus to spy on opponents, but accusations that they have done so are backed by substantial technical evidence.

    NSO’s basic argument is that it is the creator of a technology that governments use, but that since it doesn’t attack anyone itself, it can’t be held responsible.

    Some of that evidence is contained in a lawsuit filed last October in California by WhatsApp and its parent company, Facebook, alleging that Pegasus manipulated WhatsApp’s infrastructure to infect more than 1,400 cell phones. Investigators at Facebook found more than 100 human rights defenders, journalists, and public figures among the targets, according to court documents. Each call that was picked up, they discovered, sent malicious code through WhatsApp’s infrastructure and caused the recipient’s phone to download spyware from servers owned by NSO. This, WhatsApp argued, was a violation of American law.

    NSO has long faced such accusations with silence. Claiming that much of its business is an Israeli state secret, it has offered precious little public detail about its operations, customers, or safeguards.

    Now, though, the company suggests things are changing. In 2019, NSO, which was owned by a private equity firm, was sold back to its founders and another private equity firm, Novalpina, for $1 billion. The new owners decided on a fresh strategy: emerge from the shadows. The company hired elite public relations firms, crafted new human rights policies, and developed new self-­governance documents. It even began showing off some of its other products, such as a covid-19 tracking system called Fleming, and Eclipse, which can hack drones deemed a security threat.

    Over several months, I’ve spoken with NSO leadership to understand how the company works and what it says it is doing to prevent human rights abuses carried out using its tools. I have spoken to its critics, who see it as a danger to democratic values; to those who urge more regulation of the hacking business; and to the Israeli regulators responsible for governing it today. The company’s leaders talked about NSO’s future and its policies and procedures for dealing with problems, and it shared documents that detail its relationship with the agencies to which it sells Pegasus and other tools. What I found was a thriving arms dealer—inside the company, employees acknowledge that Pegasus is a genuine weapon—struggling with new levels of scrutiny that threaten the foundations of its entire industry.

    “A difficult task”

    From the first day Shmuel Sunray joined NSO as its general counsel, he faced one international incident after another. Hired just days after WhatsApp’s lawsuit was filed, he found other legal problems waiting on his desk as soon as he arrived. They all centered on the same basic accusation: NSO Group’s hacking tools are sold to, and can be abused by, rich and repressive regimes with little or no accountability.

    Sunray had plenty of experience with secrecy and controversy: his previous job was as vice president of a major weapons manufacturer. Over several conversations, he was friendly as he told me that he’s been instructed by the owners to change NSO’s culture and operations, making it more transparent and trying to prevent human rights abuses from happening. But he was also obviously frustrated by the secrecy that he felt prevented him from responding to critics.

    “It’s a difficult task,” Sunray told me over the phone from the company’s headquarters in Herzliya, north of Tel Aviv. “We understand the power of the tool; we understand the impact of misuse of the tool. We’re trying to do the right thing. We have real challenges dealing with government, intelligence agencies, confidentiality, operational necessities, operational limitations. It’s not a classic case of human rights abuse by a company, because we don’t operate the systems—we’re not involved in actual operations of the systems—but we understand there is a real risk of misuse from the customers. We’re trying to find the right balance.”

    This underpins NSO’s basic argument, one that is common among weapons manufacturers: the company is the creator of a technology that governments use, but it doesn’t attack anyone itself, so it can’t be held responsible.

    Still, according to Sunray, there are several layers of protection in place to try to make sure the wrong people don’t have access.

    Making a sale

    Like most other countries, Israel has export controls that require weapons manufacturers to be licensed and subject to government oversight. In addition, NSO does its own due diligence, says Sunray: its staff examine a country, look at its human rights record, and scrutinize its relationship with Israel. They assess the specific agency’s track record on corruption, safety, finance, and abuse—as well as factoring in how much it needs the tool.

    Sometimes negatives are weighed against positives. Morocco, for example, has a worsening human rights record but a lengthy history of cooperating with Israel and the West on security, as well as a genuine terrorism problem, so a sale was reportedly approved. By contrast, NSO has said that China, Russia, Iran, Cuba, North Korea, Qatar, and Turkey are among 21 nations that will never be customers.

    Finally, before a sale is made, NSO’s governance, risk, and compliance committee has to sign off. The company says the committee, made up of managers and shareholders, can decline sales or add conditions, such as technological restrictions, that are decided case by case.

    Preventing abuse

    Once a sale is agreed to, the company says, technological guardrails prevent certain kinds of abuse. For example, Pegasus does not allow American phone numbers to be infected, NSO says, and infected phones cannot even be physically located in the United States: if one does find itself within American borders, the Pegasus software is supposed to self-destruct.

    NSO says Israeli phone numbers are among others also protected, though who else gets protection and why remains unclear.

    When a report of abuse comes in, an ad hoc team of up to 10 NSO employees is assembled to investigate. They interview the customer about the allegations, and they request Pegasus data logs. These logs don’t contain the content the spyware extracted, like chats or emails—NSO insists it never sees specific intelligence—but do include metadata such as a list of all the phones the spyware tried to infect and their locations at the time.

    According to one recent contract I obtained, customers must “use the system only for the detection, prevention, and investigation of crimes and terrorism and ensure the system will not be used for human rights violations.” They must notify the company of potential misuse. NSO says it has terminated three contracts in the past for infractions including abuse of Pegasus, but it refuses to say which countries or agencies were involved or who the victims were.

    “We’re not naïve”

    Lack of transparency is not the only problem: the safeguards have limits. While the Israeli government can revoke NSO’s license for violations of export law, the regulators do not take it upon themselves to look for abuse by potential customers and aren’t involved in the company’s abuse investigations.

    Many of the other procedures are merely reactive as well. NSO has no permanent internal abuse team, unlike almost any other billion-dollar tech firm, and most of its investigations are spun up only when an outside source such as Amnesty International or Citizen Lab claims there has been malfeasance. NSO staff interview the agencies and customers under scrutiny but do not talk to the alleged victims, and while the company often disputes the technical reports offered as evidence, it also claims that both state secrecy and business confidentiality prevent it from sharing more information.

    The Pegasus logs that are crucial to any abuse inquiry also raise plenty of questions. NSO Group’s customers are hackers who work for spy agencies; how hard would it be for them to tamper with the logs? In a statement, the company insisted this isn’t possible but declined to offer details.

    If the logs aren’t disputed, NSO and its customers will decide together whether targets are legitimate, whether genuine crimes have been committed, and whether surveillance was done under due process of law or whether autocratic regimes spied on opponents.

    Sunray, audibly exasperated, says he feels as if secrecy is forcing him to operate with his hands tied behind his back.

    “It’s frustrating,” he told me. “We’re not naïve. There have been misuses. There will be misuses. We sell to many governments. Even the US government—no government is perfect. Misuse can happen, and it should be addressed.”

    But Sunray also returns to the company’s standard response, the argument that underpins its defense in the WhatsApp lawsuit: NSO is a manufacturer, but it’s not the operator of the spyware. We built it but they did the hacking—and they are sovereign nations.

    That’s not enough for many critics. “No company that believes it can be the independent watchdog of their own products ever convinces me,” says Marietje Schaake, a Dutch politician and former member of the European Parliament. “The whole idea that they have their own mechanisms while they have no problem selling commercial spyware to whoever wants to buy it, knowing that it’s used against human rights defenders and journalists—I think it shows the lack of responsibility on the part of this company more than anything.”

    So why the internal push for more transparency now? Because the deluge of technical reports from human rights groups, the WhatsApp lawsuit, and increasing governmental scrutiny threaten NSO’s status quo. And if there is going to be a new debate over how the industry gets regulated, it pays to have a powerful voice.

    Growing scrutiny

    Lawful hacking and cyber-espionage have grown enormously as a business over the past decade, with no signs of retreat. NSO Group’s previous owners bought the company in 2014 for $130 million, less than one-seventh of the valuation it was sold for last year. The rest of the industry is expanding too, profiting from the spread of communications technology and deepening global instability. “There’s no doubt that any state has the right to buy this technology to fight crime and terrorism,” says Amnesty International’s deputy director, Danna Ingleton. “States are rightfully and lawfully able to use these tools. But that needs to be accompanied more with a regulatory system that prevents abuses and provides an accountability mechanism when abuse has happened.” Shining a much brighter light on the hacking industry, she argues, will allow for better regulation and more accountability.

    Earlier this year Amnesty International was in court in Israel arguing that the Ministry of Defense should revoke NSO’s license because of abuses of Pegasus. But just as the case was starting, officials from Amnesty and 29 other petitioners were told to leave the courtroom: a gag order was being placed on the proceedings at the ministry’s urging. Then, in July, a judge rejected the case outright.

    “I do not believe as a matter of principle and as a matter of law that NSO can claim a complete lack of responsibility for the way their tools are being used,” says United Nations special rapporteur Agnès Callamard. “That’s not how it works under international law.”

    Callamard advises the UN on extrajudicial executions and has been vocal about NSO Group and the spyware industry ever since it emerged that Pegasus was being used to spy on friends and associates of Khashoggi shortly before he was murdered. For her, the issue has life-or-death consequences.

    “We’re not calling for something radically new,” says Callamard. “We are saying that what’s in place at the moment is proving insufficient, and therefore governments or regulatory agencies need to move into a different gear quickly. The industry is expanding, and it should expand on the basis of the proper framework to regulate misuse. It’s important for global peace.”

    There have been calls for a temporary moratorium on sales until stronger regulation is enacted, but it’s not clear what that legal framework would look like. Unlike conventional arms, which are subject to various international laws, cyber weapons are currently not regulated by any worldwide arms control agreement. And while nonproliferation treaties have been suggested, there is little clarity on how they would measure existing capabilities, how monitoring or enforcement would work, or how the rules would keep up with rapid technological developments. Instead, most scrutiny today is happening at the national legal level.

    In the US, both the FBI and Congress are looking into possible hacks of American targets, while an investigation led by Senator Ron Wyden’s office wants to find out whether any Americans are involved in exporting surveillance technology to authoritarian governments. A recent draft US intelligence bill would require a government report on commercial spyware and surveillance technology.

    The WhatsApp lawsuit, meanwhile, has taken aim close to the heart of NSO’s business. The Silicon Valley giant argues that by targeting California residents—that is, WhatsApp and Facebook—NSO has given the court in San Francisco jurisdiction, and that the judge in the case can bar the Israeli company from future attempts to misuse WhatsApp’s and Facebook’s networks. That opens the door to an awful lot of possibilities: Apple, whose iPhone has been a paramount NSO target, could feasibly mount a similar legal attack. Google, too, has spotted NSO targeting Android devices.

    And financial damages are not the only sword hanging over NSO’s head. Such lawsuits also bring with them the threat of courtroom discovery, which has the potential to bring details of NSO’s business deals and customers into the public eye.

    “A lot depends on exactly how the court rules and how broadly it characterizes the violation NSO is alleged to have committed here,” says Alan Rozenshtein, a former Justice Department lawyer now at the University of Minnesota Law School. “At a minimum, if NSO loses this case, it calls into question all of those companies that make their products or make their living by finding flaws in messaging software and providing services exploiting those flaws. This will create enough legal uncertainty that I would imagine these would-be clients would think twice before contracting with them. You don’t know if the company will continue to operate, if they’ll get dragged to court, if your secrets will be exposed.” NSO declined to comment on the alleged WhatsApp hack, since it is still an active case.

    “We are always spied on”

    In Morocco, Maâti Monjib was subjected to at least four more hacking attacks throughout 2019, each more advanced than the one before. At some point, his phone browser was invisibly redirected to a suspicious domain that researchers suspect was used to silently install malware. Instead of something like a text message that can raise the alarm and leaves a visible trace, this one was a much quieter network injection attack, a tactic valued because it’s almost imperceptible except to expert investigators.

    On September 13, 2019, Monjib had lunch at home with his friend Omar Radi, a Moroccan journalist who is one of the regime’s sharpest critics. That very day, an investigation later found, Radi was hit with the same kind of network injection attacks that had snared Monjib. The hacking campaign against Radi lasted at least into January 2020, Amnesty International researchers said. He’s been subject to regular police harassment ever since.

    At least seven more Moroccans received warnings from WhatsApp about Pegasus being used to spy on their phones, including human rights activists, journalists, and politicians. Are these the kinds of legitimate spying targets—the terrorists and criminals—laid out in the contract that Morocco and all NSO customers sign?

    In December, Monjib and the other victims sent a letter to Morocco’s data protection authority asking for an investigation and action. Nothing formally came of it, but one of the men, the pro-democracy economist Fouad Abdelmoumni, says his friends high up at the agency told him the letter was hopeless and urged him to drop the matter. The Moroccan government, meanwhile, has responded by threatening to expel Amnesty International from the country.

    What’s happening in Morocco is emblematic of what’s happening around the world. While it’s clear that democracies are major beneficiaries of lawful hacking, a long and growing list of credible, detailed, technical, and public investigations shows Pegasus being misused by authoritarian regimes with long records of human rights abuse.

    “Morocco is a country under an authoritarian regime who believe people like Monjib and myself have to be destroyed,” says Abdelmoumni. “To destroy us, having access to all information is key. We always consider that we are spied on. All of our information is in the hands of the palace.”

    #Apple #NSO #Facebook #WhatsApp #iPhone #Pegasus #smartphone #spyware #activisme #journalisme #écoutes #hacking #surveillance #Amnesty (...)

    ##CitizenLab

  • This is how Facebook’s AI looks for bad stuff
    https://www.technologyreview.com/2019/11/29/131792/this-is-how-facebooks-ai-looks-for-bad-stuff

    The context: The vast majority of Facebook’s moderation is now done automatically by the company’s machine-learning systems, reducing the amount of harrowing content its moderators have to review. In its latest community standards enforcement report, published earlier this month, the company claimed that 98% of terrorist videos and photos are removed before anyone has the chance to see them, let alone report them. So, what are we seeing here? The company has been training its (...)

    #MetropolitanPolice #Facebook #algorithme #anti-terrorisme #modération #reconnaissance #vidéo-surveillance #forme (...)

    ##surveillance

  • Singapore’s police now have access to contact tracing data | MIT Technology Review
    https://www.technologyreview.com/2021/01/05/1015734/singapore-contact-tracing-police-data-covid/?truid=a497ecb44646822921c70e7e051f7f1a

    Contact tracing apps and systems around the world have faced longstanding questions about privacy and trust.
    by Mia Sato
    January 5, 2021

    [Photo: In Singapore, people standing in a long line, holding smartphones and wearing face masks. Singapore Press via AP Images]

    The news: Police will be able to access data collected by Singapore’s covid-19 contact tracing system for use in criminal investigations, a senior official said on Monday. The announcement contradicts the privacy policy originally outlined when the government launched its TraceTogether app in March 2020, and is being criticized as a backpedal just after participation in contact tracing was made mandatory.

    Officials said that while the policy had stated that data would “only be used solely for the purpose of contact tracing of persons possibly exposed to covid-19,” the legal reality in Singapore is that police can access any data for criminal investigations—and that contact tracing data was no different. The app’s privacy policy was changed on January 4, 2021, to clarify “how the Criminal Procedure Code applies to all data under Singapore’s jurisdiction.”

    Early mover: TraceTogether is accessed via a smartphone app or a small wearable device, and is used by nearly 80% of Singapore’s 5.7 million residents. It was the first of the major Bluetooth contact tracing apps unveiled in the spring of 2020, and its data is more centralized than the Apple-Google system used in many other places around the world. Singapore ruled out using the Apple-Google system itself because officials there said they wanted more detailed infection information. Participation in contact tracing was once voluntary, but the government rolled that back late last year, and there are now mandatory check-ins at most places where people work, shop, and gather.
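
    The privacy stakes turn on where the contact graph lives. The sketch below is a deliberately simplified, hypothetical model of a centralized flow in the spirit of TraceTogether (not the actual protocol): a server issues tokens tied to identities, phones log the tokens they hear over Bluetooth, and uploaded logs let the server reconstruct who met whom, which is exactly the kind of record a criminal investigation could later request.

```python
# Simplified, hypothetical model of a *centralized* contact tracing flow.
# The server keeps the token-to-identity mapping, so uploaded encounter
# logs reveal the contact graph to whoever controls (or can compel) it.
import secrets

class CentralServer:
    def __init__(self):
        self.token_to_user = {}   # sensitive mapping: token -> identity
        self.encounters = []      # reconstructed contact graph

    def issue_token(self, user_id):
        token = secrets.token_hex(8)
        self.token_to_user[token] = user_id
        return token

    def upload_log(self, uploader_token, seen_tokens):
        uploader = self.token_to_user[uploader_token]
        for t in seen_tokens:
            self.encounters.append((uploader, self.token_to_user.get(t, "unknown")))

server = CentralServer()
alice_token = server.issue_token("alice")
bob_token = server.issue_token("bob")

# Alice's phone heard Bob's token over Bluetooth; after a positive test,
# her log is uploaded and the server can now see the contact.
server.upload_log(alice_token, [bob_token])
print(server.encounters)  # [('alice', 'bob')]
```

    In the decentralized Apple-Google design, by contrast, matching happens on the device, so no central party ends up holding this mapping.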

    The country’s approach to the pandemic has been forceful in many ways, not just when it comes to contact tracing technology. For example, people caught without a mask in public face large fines.

    Why it matters: Our Covid Tracing Tracker notes the privacy policies for dozens of apps around the world that notify users of potential exposure to covid-19. Although Singapore’s general attitudes about data privacy may not mirror what’s happening elsewhere, contact tracing apps around the world have raised questions of user privacy since the first were launched last year. The news from Singapore hits on activists’ and ethicists’ concerns about data misuse, and groups like Human Rights Watch have outlined how surveillance could further hurt already marginalized communities.

    In a recent essay in the journal Science, bioethicists Alessandro Blasimme and Effy Vayena of ETH Zurich in Switzerland said that the “piecemeal creation of public trust” was an important missing ingredient if we want more people to use these apps.

    Data is still important: This isn’t the first time the use of contact tracing data has intersected with law enforcement. Last July, German restaurants, bars and patrons raised objections when it was reported that police used information collected in contact tracing efforts to track down witnesses in investigations. And in late December 2020, New York Governor Andrew Cuomo signed a law that prohibits law enforcement and immigration authorities from accessing contact tracing data. Groups like the New York Civil Liberties Union, Electronic Frontier Foundation and New York Immigration Coalition applauded the move.

    #Covid #Data_trackers #Surveillance #Singapour

  • Inside China’s unexpected quest to protect data privacy
    https://www.technologyreview.com/2020/08/19/1006441/china-data-privacy-hong-yanqing-gdpr

    A new privacy law would look a lot like Europe’s GDPR—but will it restrict state surveillance?

    Late in the summer of 2016, Xu Yuyu received a call that promised to change her life. Her college entrance examination scores, she was told, had won her admission to the English department of the Nanjing University of Posts and Telecommunications. Xu lived in the city of Linyi in Shandong, a coastal province in China, southeast of Beijing. She came from a poor family, singularly reliant on her father’s meager income. But her parents had painstakingly saved for her tuition; very few of her relatives had ever been to college.

    A few days later, Xu received another call telling her she had also been awarded a scholarship. To collect the 2,600 yuan ($370), she needed to first deposit a 9,900 yuan “activation fee” into her university account. Having applied for financial aid only days before, she wired the money to the number the caller gave her. That night, the family rushed to the police to report that they had been defrauded. Xu’s father later said his greatest regret was asking the officer whether they might still get their money back. The answer—“Likely not”—only exacerbated Xu’s devastation. On the way home she suffered a heart attack. She died in a hospital two days later.

    An investigation determined that while the first call had been genuine, the second had come from scammers who’d paid a hacker for Xu’s number, admissions status, and request for financial aid.

    For Chinese consumers all too familiar with having their data stolen, Xu became an emblem. Her death sparked a national outcry for greater data privacy protections. Only months before, the European Union had adopted the General Data Protection Regulation (GDPR), an attempt to give European citizens control over how their personal data is used. Meanwhile, Donald Trump was about to win the American presidential election, fueled in part by a campaign that relied extensively on voter data. That data included details on 87 million Facebook accounts, illicitly obtained by the consulting firm Cambridge Analytica. Chinese regulators and legal scholars followed these events closely.

    In the West, it’s widely believed that neither the Chinese government nor Chinese people care about privacy. US tech giants wield this supposed indifference to argue that onerous privacy laws would put them at a competitive disadvantage to Chinese firms. In his 2018 Senate testimony after the Cambridge Analytica scandal, Facebook’s CEO, Mark Zuckerberg, urged regulators not to clamp down too hard on technologies like face recognition. “We still need to make it so that American companies can innovate in those areas,” he said, “or else we’re going to fall behind Chinese competitors and others around the world.”

    In reality, this picture of Chinese attitudes to privacy is out of date. Over the last few years the Chinese government, seeking to strengthen consumers’ trust and participation in the digital economy, has begun to implement privacy protections that in many respects resemble those in America and Europe today.

    Even as the government has strengthened consumer privacy, however, it has ramped up state surveillance. It uses DNA samples and other biometrics, like face and fingerprint recognition, to monitor citizens throughout the country. It has tightened internet censorship and developed a “social credit” system, which punishes behaviors the authorities say weaken social stability. During the pandemic, it deployed a system of “health code” apps to dictate who could travel, based on their risk of carrying the coronavirus. And it has used a slew of invasive surveillance technologies in its harsh repression of Muslim Uighurs in the northwestern region of Xinjiang.

    This paradox has become a defining feature of China’s emerging data privacy regime, says Samm Sacks, a leading China scholar at Yale and New America, a think tank in Washington, DC. It raises a question: Can a system endure with strong protections for consumer privacy, but almost none against government snooping? The answer doesn’t affect only China. Its technology companies have an increasingly global footprint, and regulators around the world are watching its policy decisions.

    November 2000 arguably marks the birth of the modern Chinese surveillance state. That month, the Ministry of Public Security, the government agency that oversees daily law enforcement, announced a new project at a trade show in Beijing. The agency envisioned a centralized national system that would integrate both physical and digital surveillance using the latest technology. It was named Golden Shield.

    Eager to cash in, Western companies including American conglomerate Cisco, Finnish telecom giant Nokia, and Canada’s Nortel Networks worked with the agency on different parts of the project. They helped construct a nationwide database for storing information on all Chinese adults, and developed a sophisticated system for controlling information flow on the internet—what would eventually become the Great Firewall. Much of the equipment involved had in fact already been standardized to make surveillance easier in the US—a consequence of the Communications Assistance for Law Enforcement Act of 1994.

    Despite the standardized equipment, the Golden Shield project was hampered by data silos and turf wars within the Chinese government. Over time, the ministry’s pursuit of a singular, unified system devolved into two separate operations: a surveillance and database system, devoted to gathering and storing information, and the social-credit system, which some 40 government departments participate in. When people repeatedly do things that aren’t allowed—from jaywalking to engaging in business corruption—their social-credit score falls and they can be blocked from things like buying train and plane tickets or applying for a mortgage.

    In the same year the Ministry of Public Security announced Golden Shield, Hong Yanqing entered the ministry’s police university in Beijing. But after seven years of training, having received his bachelor’s and master’s degrees, Hong began to have second thoughts about becoming a policeman. He applied instead to study abroad. By the fall of 2007, he had moved to the Netherlands to begin a PhD in international human rights law, approved and subsidized by the Chinese government.

    Over the next four years, he familiarized himself with the Western practice of law through his PhD research and a series of internships at international organizations. He worked at the International Labor Organization on global workplace discrimination law and the World Health Organization on road safety in China. “It’s a very legalistic culture in the West—that really strikes me. People seem to go to court a lot,” he says. “For example, for human rights law, most of the textbooks are about the significant cases in court resolving human rights issues.”

    Hong found this to be strangely inefficient. He saw going to court as a final resort for patching up the law’s inadequacies, not a principal tool for establishing it in the first place. Legislation crafted more comprehensively and with greater forethought, he believed, would achieve better outcomes than a system patched together through a haphazard accumulation of case law, as in the US.

    After graduating, he carried these ideas back to Beijing in 2012, on the eve of Xi Jinping’s ascent to the presidency. Hong worked at the UN Development Program and then as a journalist for the People’s Daily, the largest newspaper in China, which is owned by the government.

    Xi began to rapidly expand the scope of government censorship. Influential commentators, or “Big Vs”—named for their verified accounts on social media—had grown comfortable criticizing and ridiculing the Chinese Communist Party. In the fall of 2013, the party arrested hundreds of microbloggers for what it described as “malicious rumor-mongering” and paraded a particularly influential one on national television to make an example of him.

    The moment marked the beginning of a new era of censorship. The following year, the Cyberspace Administration of China was founded. The new central agency was responsible for everything involved in internet regulation, including national security, media and speech censorship, and data protection. Hong left the People’s Daily and joined the agency’s department of international affairs. He represented it at the UN and other global bodies and worked on cybersecurity cooperation with other governments.

    By July 2015, the Cyberspace Administration had released a draft of its first law. The Cybersecurity Law, which entered into force in June of 2017, required that companies obtain consent from people to collect their personal information. At the same time, it tightened internet censorship by banning anonymous users—a provision enforced by regular government inspections of data from internet service providers.

    In the spring of 2016, Hong sought to return to academia, but the agency asked him to stay. The Cybersecurity Law had purposely left the regulation of personal data protection vague, but consumer data breaches and theft had reached unbearable levels. A 2016 study by the Internet Society of China found that 84% of those surveyed had suffered some leak of their data, including phone numbers, addresses, and bank account details. This was spurring a growing distrust of digital service providers that required access to personal information, such as ride-hailing, food-delivery, and financial apps. Xu Yuyu’s death poured oil on the flames.

    The government worried that such sentiments would weaken participation in the digital economy, which had become a central part of its strategy for shoring up the country’s slowing economic growth. The advent of GDPR also made the government realize that Chinese tech giants would need to meet global privacy norms in order to expand abroad.

    Hong was put in charge of a new task force that would write a Personal Information Protection Specification (PIPS) to help solve these challenges. The document, though nonbinding, would tell companies how regulators intended to implement the Cybersecurity Law. In the process, the government hoped, it would nudge them to adopt new norms for data protection by themselves.

    Hong’s task force set about translating every relevant document they could find into Chinese. They translated the privacy guidelines put out by the Organization for Economic Cooperation and Development and by its counterpart, the Asia-Pacific Economic Cooperation; they translated GDPR and the California Consumer Privacy Act. They even translated the 2012 White House Consumer Privacy Bill of Rights, introduced by the Obama administration but never made into law. All the while, Hong met regularly with European and American data protection regulators and scholars.

    Bit by bit, from the documents and consultations, a general choice emerged. “People were saying, in very simplistic terms, ‘We have a European model and the US model,’” Hong recalls. The two approaches diverged substantially in philosophy and implementation. Which one to follow became the task force’s first debate.

    At the core of the European model is the idea that people have a fundamental right to have their data protected. GDPR places the burden of proof on data collectors, such as companies, to demonstrate why they need the data. By contrast, the US model privileges industry over consumers. Businesses define for themselves what constitutes reasonable data collection; consumers only get to choose whether to use that business. The laws on data protection are also far more piecemeal than in Europe, divvied up among sectoral regulators and specific states.

    At the time, without a central law or single agency in charge of data protection, China’s model more closely resembled the American one. The task force, however, found the European approach compelling. “The European rule structure, the whole system, is more clear,” Hong says.

    But most of the task force members were representatives from Chinese tech giants, like Baidu, Alibaba, and Huawei, and they felt that GDPR was too restrictive. So they adopted its broad strokes—including its limits on data collection and its requirements on data storage and data deletion—and then loosened some of its language. GDPR’s principle of data minimization, for example, maintains that only necessary data should be collected in exchange for a service. PIPS allows room for other data collection relevant to the service provided.
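
    To make the contrast concrete, here is a minimal, purely illustrative sketch of the two standards; the service definition and field names below are invented for this example and are not taken from either legal text.

    ```python
    # Hypothetical illustration of strict data minimization (GDPR-style)
    # versus a looser "relevant to the service" test (PIPS-style).
    # Field names and the service definition are invented for this sketch.

    NECESSARY = {"pickup_location", "dropoff_location", "payment_token"}   # needed to deliver a ride
    RELEVANT = NECESSARY | {"trip_history", "device_model"}                # also "relevant" to the service

    def gdpr_style_collect(requested: set) -> set:
        """Keep only fields strictly necessary to provide the service."""
        return requested & NECESSARY

    def pips_style_collect(requested: set) -> set:
        """Also allow fields merely relevant to the service provided."""
        return requested & RELEVANT

    requested = {"pickup_location", "dropoff_location", "payment_token",
                 "trip_history", "device_model", "contact_list"}

    print(sorted(gdpr_style_collect(requested)))  # strictly necessary fields only
    print(sorted(pips_style_collect(requested)))  # necessary plus "relevant" fields
    ```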

    PIPS took effect in May 2018, the same month that GDPR itself came into force. But as Chinese officials watched the US upheaval over the Facebook and Cambridge Analytica scandal, they realized that a nonbinding agreement would not be enough. The Cybersecurity Law didn’t have a strong mechanism for enforcing data protection. Regulators could only fine violators up to 1,000,000 yuan ($140,000), an inconsequential amount for large companies. Soon after, the National People’s Congress, China’s top legislative body, voted to begin drafting a Personal Information Protection Law within its current five-year legislative period, which ends in 2023. It would strengthen data protection provisions, provide for tougher penalties, and potentially create a new enforcement agency.

    After Cambridge Analytica, says Hong, “the government agency understood, ‘Okay, if you don’t really implement or enforce those privacy rules, then you could have a major scandal, even affecting political things.’”

    The local police investigation of Xu Yuyu’s death eventually identified the scammers who had called her. It had been a gang of seven who’d cheated many other victims out of more than 560,000 yuan using illegally obtained personal information. The court ruled that Xu’s death had been a direct result of the stress of losing her family’s savings. Because of this, and his role in orchestrating tens of thousands of other calls, the ringleader, Chen Wenhui, 22, was sentenced to life in prison. The others received sentences between three and 15 years.

    Emboldened, Chinese media and consumers began more openly criticizing privacy violations. In March 2018, internet search giant Baidu’s CEO, Robin Li, sparked social-media outrage after suggesting that Chinese consumers were willing to “exchange privacy for safety, convenience, or efficiency.” “Nonsense,” wrote a social-media user, later quoted by the People’s Daily. “It’s more accurate to say [it is] impossible to defend [our privacy] effectively.”

    In late October 2019, social-media users once again expressed anger after photos began circulating of a school’s students wearing brainwave-monitoring headbands, supposedly to improve their focus and learning. The local educational authority eventually stepped in and told the school to stop using the headbands because they violated students’ privacy. A week later, a Chinese law professor sued a Hangzhou wildlife zoo for replacing its fingerprint-based entry system with face recognition, saying the zoo had failed to obtain his consent for storing his image.

    But the public’s growing sensitivity to infringements of consumer privacy has not led to many limits on state surveillance, nor even much scrutiny of it. As Maya Wang, a researcher at Human Rights Watch, points out, this is in part because most Chinese citizens don’t know the scale or scope of the government’s operations. In China, as in the US and Europe, there are broad public and national security exemptions to data privacy laws. The Cybersecurity Law, for example, allows the government to demand data from private actors to assist in criminal legal investigations. The Ministry of Public Security also accumulates massive amounts of data on individuals directly. As a result, data privacy in industry can be strengthened without significantly limiting the state’s access to information.

    The onset of the pandemic, however, has disturbed this uneasy balance.

    On February 11, Ant Financial, a financial technology giant headquartered in Hangzhou, a city southwest of Shanghai, released an app-building platform called AliPay Health Code. The same day, the Hangzhou government released an app it had built using the platform. The Hangzhou app asked people to self-report their travel and health information, and then gave them a color code of red, yellow, or green. Suddenly Hangzhou’s 10 million residents were all required to show a green code to take the subway, shop for groceries, or enter a mall. Within a week, local governments in over 100 cities had used AliPay Health Code to develop their own apps. Rival tech giant Tencent quickly followed with its own platform for building them.
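
    As a rough sketch of how such an app might turn self-reported answers into a color code, consider the rule-based example below; the actual scoring logic behind Alipay Health Code was never made public, so every field and threshold here is an assumption for illustration only.

    ```python
    # Hypothetical rule-based health-code assignment.
    # The real Alipay/Tencent scoring rules were not published;
    # all fields and thresholds below are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class SelfReport:
        has_symptoms: bool
        close_contact_with_case: bool
        visited_high_risk_area: bool
        days_since_return: int  # days since returning from travel

    def health_code(report: SelfReport) -> str:
        """Return 'red', 'yellow', or 'green' from self-reported data."""
        if report.has_symptoms or report.close_contact_with_case:
            return "red"      # quarantine expected, barred from public spaces
        if report.visited_high_risk_area and report.days_since_return < 14:
            return "yellow"   # restricted until the observation window passes
        return "green"        # required to ride the subway, shop, or enter a mall

    print(health_code(SelfReport(False, False, True, 5)))    # -> yellow
    print(health_code(SelfReport(False, False, False, 30)))  # -> green
    ```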

    The apps made visible a worrying level of state surveillance and sparked a new wave of public debate. In March, Hu Yong, a journalism professor at Beijing University and an influential blogger on Weibo, argued that the government’s pandemic data collection had crossed a line. Not only had it led to instances of information being stolen, he wrote, but it had also opened the door to such data being used beyond its original purpose. “Has history ever shown that once the government has surveillance tools, it will maintain modesty and caution when using them?” he asked.

    Indeed, in late May, leaked documents revealed plans from the Hangzhou government to make a more permanent health-code app that would score citizens on behaviors like exercising, smoking, and sleeping. After a public outcry, city officials canceled the project. The fact that state-run media had also published stories criticizing the app likely helped.

    The debate quickly made its way to the central government. That month, the National People’s Congress announced it intended to fast-track the Personal Information Protection Law. The scale of the data collected during the pandemic had made strong enforcement more urgent, delegates said, and highlighted the need to clarify the scope of the government’s data collection and data deletion procedures during special emergencies. By July, the legislative body had proposed a new “strict approval” process for government authorities to undergo before collecting data from private-sector platforms. The language again remains vague, to be fleshed out later—perhaps through another nonbinding document—but this move “could mark a step toward limiting the broad scope” of existing government exemptions for national security, wrote Sacks and fellow China scholars at New America.

    Hong similarly believes the discrepancy between rules governing industry and government data collection won’t last, and the government will soon begin to limit its own scope. “We cannot simply address one actor while leaving the other out,” he says. “That wouldn’t be a very scientific approach.”

    Other observers disagree. The government could easily make superficial efforts to address public backlash against visible data collection without really touching the core of the Ministry of Public Security’s national operations, says Wang, of Human Rights Watch. She adds that any laws would likely be enforced unevenly: “In Xinjiang, Turkic Muslims have no say whatsoever in how they’re treated.”

    Still, Hong remains an optimist. In July, he started a job teaching law at Beijing University, and he now maintains a blog on cybersecurity and data issues. Monthly, he meets with a budding community of data protection officers in China, who carefully watch how data governance is evolving around the world.

    #criminalité #Nokia_Siemens #fraude #Huawei #payement #Cisco #CambridgeAnalytica/Emerdata #Baidu #Alibaba #domination #bénéfices #BHATX #BigData #lutte #publicité (...)

    ##criminalité ##CambridgeAnalytica/Emerdata ##publicité ##[fr]Règlement_Général_sur_la_Protection_des_Données__RGPD_[en]General_Data_Protection_Regulation__GDPR_[nl]General_Data_Protection_Regulation__GDPR_ ##Nortel_Networks ##Facebook ##biométrie ##consommation ##génétique ##consentement ##facial ##reconnaissance ##empreintes ##Islam ##SocialCreditSystem ##surveillance ##TheGreatFirewallofChina ##HumanRightsWatch

  • “I started crying”: Inside Timnit Gebru’s last days at Google | MIT Technology Review
    https://www.technologyreview.com/2020/12/16/1014634/google-ai-ethics-lead-timnit-gebru-tells-story

    By now, we’ve all heard some version of the story. On December 2, after a protracted disagreement over the release of a research paper, Google forced out its ethical AI co-lead, Timnit Gebru. The paper was on the risks of large language models, AI models trained on staggering amounts of text data, which are a line of research core to Google’s business. Gebru, a leading voice in AI ethics, was one of the only Black women at Google Research.

    The move has since sparked a debate about growing corporate influence over AI, the long-standing lack of diversity in tech, and what it means to do meaningful AI ethics research. As of December 15, over 2,600 Google employees and 4,300 others in academia, industry, and civil society had signed a petition denouncing the dismissal of Gebru, calling it “unprecedented research censorship” and “an act of retaliation.”

    Gebru is known for foundational work in revealing AI discrimination, developing methods for documenting and auditing AI models, and advocating for greater diversity in research. In 2016, she cofounded the nonprofit Black in AI, which has become a central resource for civil rights activists, labor organizers, and leading AI ethics researchers, cultivating and highlighting Black AI research talent.

    Then in that document, I wrote that this has been extremely disrespectful to the Ethical AI team, and there needs to be a conversation, not just with Jeff and our team, and Megan and our team, but the whole of Research about respect for researchers and how to have these kinds of discussions. Nope. No engagement with that whatsoever.

    I cried, by the way. When I had that first meeting, which was Thursday before Thanksgiving, a day before I was going to go on vacation—when Megan told us that you have to retract this paper, I started crying. I was so upset because I said, I’m so tired of constant fighting here. I thought that if I just ignored all of this DEI [diversity, equity, and inclusion] hypocrisy and other stuff, and I just focused on my work, then at least I could get my work done. And now you’re coming for my work. So I literally started crying.

    You’ve mentioned that this is not just about you; it’s not just about Google. It’s a confluence of so many different issues. What does this particular experience say about tech companies’ influence on AI in general, and their capacity to actually do meaningful work in AI ethics?
    You know, there were a number of people comparing Big Tech and Big Tobacco, and how they were censoring research even though they knew the issues for a while. I push back on the academia-versus-tech dichotomy, because they both have the same sort of very racist and sexist paradigm. The paradigm that you learn and take to Google or wherever starts in academia. And people move. They go to industry and then they go back to academia, or vice versa. They’re all friends; they are all going to the same conferences.

    I don’t think the lesson is that there should be no AI ethics research in tech companies, but I think the lesson is that a) there needs to be a lot more independent research. We need to have more choices than just DARPA [the Defense Advanced Research Projects Agency] versus corporations. And b) there needs to be oversight of tech companies, obviously. At this point I just don’t understand how we can continue to think that they’re gonna self-regulate on DEI or ethics or whatever it is. They haven’t been doing the right thing, and they’re not going to do the right thing.

    I think academic institutions and conferences need to rethink their relationships with big corporations and the amount of money they’re taking from them. Some people were even wondering, for instance, if some of these conferences should have a “no censorship” code of conduct or something like that. So I think that there is a lot that these conferences and academic institutions can do. There’s too much of an imbalance of power right now.

    #Intelligence_artificielle #Timnit_Gebru #Google #Ethique

  • The coming war on the hidden algorithms that trap people in poverty | MIT Technology Review
    https://www.technologyreview.com/2020/12/04/1013068/algorithms-create-a-poverty-trap-lawyers-fight-back

    A growing group of lawyers are uncovering, navigating, and fighting the automated systems that deny the poor housing, jobs, and basic services.

    Credit-scoring algorithms are not the only ones that affect people’s economic well-being and access to basic services. Algorithms now decide which children enter foster care, which patients receive medical care, which families get access to stable housing. Those of us with means can pass our lives unaware of any of this. But for low-income individuals, the rapid growth and adoption of automated decision-making systems has created a hidden web of interlocking traps.

    Fortunately, a growing group of civil lawyers are beginning to organize around this issue. Borrowing a playbook from the criminal defense world’s pushback against risk-assessment algorithms, they’re seeking to educate themselves on these systems, build a community, and develop litigation strategies. “Basically every civil lawyer is starting to deal with this stuff, because all of our clients are in some way or another being touched by these systems,” says Michele Gilman, a clinical law professor at the University of Baltimore. “We need to wake up, get training. If we want to be really good holistic lawyers, we need to be aware of that.”

    “This is happening across the board to our clients,” she says. “They’re enmeshed in so many different algorithms that are barring them from basic services. And the clients may not be aware of that, because a lot of these systems are invisible.”

    Government agencies, on the other hand, are driven to adopt algorithms when they want to modernize their systems. The push to adopt web-based apps and digital tools began in the early 2000s and has continued with a move toward more data-driven automated systems and AI. There are good reasons to seek these changes. During the pandemic, many unemployment benefit systems struggled to handle the massive volume of new requests, leading to significant delays. Modernizing these legacy systems promises faster and more reliable results.

    But the software procurement process is rarely transparent, and thus lacks accountability. Public agencies often buy automated decision-making tools directly from private vendors. The result is that when systems go awry, the individuals affected—and their lawyers—are left in the dark. “They don’t advertise it anywhere,” says Julia Simon-Mishel, an attorney at Philadelphia Legal Assistance. “It’s often not written in any sort of policy guides or policy manuals. We’re at a disadvantage.”

    The lack of public vetting also makes the systems more prone to error. One of the most egregious malfunctions happened in Michigan in 2013. After a big effort to automate the state’s unemployment benefits system, the algorithm incorrectly flagged over 34,000 people for fraud. “It caused a massive loss of benefits,” Simon-Mishel says. “There were bankruptcies; there were unfortunately suicides. It was a whole mess.”

    Low-income individuals bear the brunt of the shift toward algorithms. They are the people most vulnerable to temporary economic hardships that get codified into consumer reports, and the ones who need and seek public benefits. Over the years, Gilman has seen more and more cases where clients risk entering a vicious cycle. “One person walks through so many systems on a day-to-day basis,” she says. “I mean, we all do. But the consequences of it are much more harsh for poor people and minorities.”

    She brings up a current case in her clinic as an example. A family member lost work because of the pandemic and was denied unemployment benefits because of an automated system failure. The family then fell behind on rent payments, which led their landlord to sue them for eviction. While the eviction can’t legally proceed because of the CDC’s moratorium, the lawsuit will still be logged in public records. Those records could then feed into tenant-screening algorithms, which could make it harder for the family to find stable housing in the future. Their failure to pay rent and utilities could also be a ding on their credit score, which once again has repercussions. “If they are trying to set up cell-phone service or take out a loan or buy a car or apply for a job, it just has these cascading ripple effects,” Gilman says.

    “Every case is going to turn into an algorithm case”

    In September, Gilman, who is currently a faculty fellow at the Data and Society research institute, released a report documenting all the various algorithms that poverty lawyers might encounter. Called Poverty Lawgorithms, it’s meant to be a guide for her colleagues in the field. Divided into specific practice areas like consumer law, family law, housing, and public benefits, it explains how to deal with issues raised by algorithms and other data-driven technologies within the scope of existing laws.

    Report: https://datasociety.net/wp-content/uploads/2020/09/Poverty-Lawgorithms-20200915.pdf

    #Algorithme #Pauvreté #Credit_score #Notation

  • We read the paper that forced Timnit Gebru out of Google. Here’s what it says | MIT Technology Review
    https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/?truid=a497ecb44646822921c70e7e051f7f1a

    The company’s star ethics researcher highlighted the risks of large language models, which are key to Google’s business.
    by Karen Hao, December 4, 2020

    On the evening of Wednesday, December 2, Timnit Gebru, the co-lead of Google’s ethical AI team, announced via Twitter that the company had forced her out.

    Gebru, a widely respected leader in AI ethics research, is known for coauthoring a groundbreaking paper that showed facial recognition to be less accurate at identifying women and people of color, which means its use can end up discriminating against them. She also cofounded the Black in AI affinity group, and champions diversity in the tech industry. The team she helped build at Google is one of the most diverse in AI, and includes many leading experts in their own right. Peers in the field envied it for producing critical work that often challenged mainstream AI practices.

    A series of tweets, leaked emails, and media articles showed that Gebru’s exit was the culmination of a conflict over another paper she co-authored. Jeff Dean, the head of Google AI, told colleagues in an internal email (which he has since put online) that the paper “didn’t meet our bar for publication” and that Gebru had said she would resign unless Google met a number of conditions, which it was unwilling to meet. Gebru tweeted that she had asked to negotiate “a last date” for her employment after she got back from vacation. She was cut off from her corporate email account before her return.

    Online, many other leaders in the field of AI ethics are arguing that the company pushed her out because of the inconvenient truths that she was uncovering about a core line of its research—and perhaps its bottom line. More than 1,400 Google staff and 1,900 other supporters have also signed a letter of protest.

    Many details of the exact sequence of events that led up to Gebru’s departure are not yet clear; both she and Google have declined to comment beyond their posts on social media. But MIT Technology Review obtained a copy of the research paper from one of the co-authors, Emily M. Bender, a professor of computational linguistics at the University of Washington. Though Bender asked us not to publish the paper itself because the authors didn’t want such an early draft circulating online, it gives some insight into the questions Gebru and her colleagues were raising about AI that might be causing Google concern.

    Titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” the paper lays out the risks of large language models—AIs trained on staggering amounts of text data. These have grown increasingly popular—and increasingly large—in the last three years. They are now extraordinarily good, under the right conditions, at producing what looks like convincing, meaningful new text—and sometimes at estimating meaning from language. But, says the introduction to the paper, “we ask whether enough thought has been put into the potential risks associated with developing them and strategies to mitigate these risks.”
    The paper

    The paper, which builds off the work of other researchers, presents the history of natural-language processing, an overview of four main risks of large language models, and suggestions for further research. Since the conflict with Google seems to be over the risks, we’ve focused on summarizing those here.
    Environmental and financial costs

    Training large AI models consumes a lot of computer processing power, and hence a lot of electricity. Gebru and her coauthors refer to a 2019 paper from Emma Strubell and her collaborators on the carbon emissions and financial costs of large language models. It found that their energy consumption and carbon footprint have been exploding since 2017, as models have been fed more and more data.

    Strubell’s study found that one language model with a particular type of “neural architecture search” (NAS) method would have produced the equivalent of 626,155 pounds (284 metric tons) of carbon dioxide—about the lifetime output of five average American cars. A version of Google’s language model, BERT, which underpins the company’s search engine, produced 1,438 pounds of CO2 equivalent in Strubell’s estimate—nearly the same as a roundtrip flight between New York City and San Francisco.
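
    As a quick sanity check on the unit conversions in those figures (the per-car and per-flight comparisons themselves come from Strubell’s paper, not from this calculation), the arithmetic works out as follows:

    ```python
    # Unit-conversion check for the figures quoted from Strubell et al. (2019).
    LB_PER_KG = 2.20462

    nas_lbs = 626_155   # CO2-equivalent of the NAS experiment, in pounds
    bert_lbs = 1_438    # CO2-equivalent of training BERT, in pounds

    nas_tonnes = nas_lbs / LB_PER_KG / 1_000    # pounds -> kilograms -> metric tons
    bert_tonnes = bert_lbs / LB_PER_KG / 1_000

    print(f"NAS experiment: {nas_tonnes:.0f} metric tons CO2e")        # ~284, as stated above
    print(f"Per car (one of five lifetimes): {nas_tonnes / 5:.0f} t")  # ~57 t per average American car
    print(f"BERT training run: {bert_tonnes:.2f} metric tons CO2e")    # ~0.65
    ```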

    Gebru’s draft paper points out that the sheer resources required to build and sustain such large AI models means they tend to benefit wealthy organizations, while climate change hits marginalized communities hardest. “It is past time for researchers to prioritize energy efficiency and cost to reduce negative environmental impact and inequitable access to resources,” they write.
    Massive data, inscrutable models

    Large language models are also trained on exponentially increasing amounts of text. This means researchers have sought to collect all the data they can from the internet, so there’s a risk that racist, sexist, and otherwise abusive language ends up in the training data.

    An AI model taught to view racist language as normal is obviously bad. The researchers, though, point out a couple of more subtle problems. One is that shifts in language play an important role in social change; the MeToo and Black Lives Matter movements, for example, have tried to establish a new anti-sexist and anti-racist vocabulary. An AI model trained on vast swaths of the internet won’t be attuned to the nuances of this vocabulary and won’t produce or interpret language in line with these new cultural norms.

    It will also fail to capture the language and the norms of countries and peoples that have less access to the internet and thus a smaller linguistic footprint online. The result is that AI-generated language will be homogenized, reflecting the practices of the richest countries and communities.

    Moreover, because the training datasets are so large, it’s hard to audit them to check for these embedded biases. “A methodology that relies on datasets too large to document is therefore inherently risky,” the researchers conclude. “While documentation allows for potential accountability, [...] undocumented training data perpetuates harm without recourse.”
    Research opportunity costs

    The researchers summarize the third challenge as the risk of “misdirected research effort.” Though most AI researchers acknowledge that large language models don’t actually understand language and are merely excellent at manipulating it, Big Tech can make money from models that manipulate language more accurately, so it keeps investing in them. “This research effort brings with it an opportunity cost,” Gebru and her colleagues write. Not as much effort goes into working on AI models that might achieve understanding, or that achieve good results with smaller, more carefully curated datasets (and thus also use less energy).
    Illusions of meaning

    The final problem with large language models, the researchers say, is that because they’re so good at mimicking real human language, it’s easy to use them to fool people. There have been a few high-profile cases, such as the college student who churned out AI-generated self-help and productivity advice on a blog, which went viral.

    The dangers are obvious: AI models could be used to generate misinformation about an election or the covid-19 pandemic, for instance. They can also go wrong inadvertently when used for machine translation. The researchers bring up an example: In 2017, Facebook mistranslated a Palestinian man’s post, which said “good morning” in Arabic, as “attack them” in Hebrew, leading to his arrest.
    Why it matters

    Gebru and Bender’s paper has six co-authors, four of whom are Google researchers. Bender asked to avoid disclosing their names for fear of repercussions. (Bender, by contrast, is a tenured professor: “I think this is underscoring the value of academic freedom,” she says.)

    The paper’s goal, Bender says, was to take stock of the landscape of current research in natural-language processing. “We are working at a scale where the people building the things can’t actually get their arms around the data,” she said. “And because the upsides are so obvious, it’s particularly important to step back and ask ourselves, what are the possible downsides? … How do we get the benefits of this while mitigating the risk?”

    In his internal email, Dean, the Google AI head, said one reason the paper “didn’t meet our bar” was that it “ignored too much relevant research.” Specifically, he said it didn’t mention more recent work on how to make large language models more energy-efficient and mitigate problems of bias.

    However, the six collaborators drew on a wide breadth of scholarship. The paper’s citation list, with 128 references, is notably long. “It’s the sort of work that no individual or even pair of authors can pull off,” Bender said. “It really required this collaboration.”

    The version of the paper we saw does also nod to several research efforts on reducing the size and computational costs of large language models, and on measuring the embedded bias of models. It argues, however, that these efforts have not been enough. “I’m very open to seeing what other references we ought to be including,” Bender said.

    Nicolas Le Roux, a Google AI researcher in the Montreal office, later noted on Twitter that the reasoning in Dean’s email was unusual. “My submissions were always checked for disclosure of sensitive material, never for the quality of the literature review,” he said.

    Now might be a good time to remind everyone that the easiest way to discriminate is to make stringent rules, then to decide when and for whom to enforce them.
    My submissions were always checked for disclosure of sensitive material, never for the quality of the literature review.
    — Nicolas Le Roux (@le_roux_nicolas) December 3, 2020

    Dean’s email also says that Gebru and her colleagues gave Google AI only a day for an internal review of the paper before they submitted it to a conference for publication. He wrote that “our aim is to rival peer-reviewed journals in terms of the rigor and thoughtfulness in how we review research before publication.”

    I understand the concern over Timnit’s resignation from Google. She’s done a great deal to move the field forward with her research. I wanted to share the email I sent to Google Research and some thoughts on our research process.https://t.co/djUGdYwNMb
    — Jeff Dean (@JeffDean) December 4, 2020

    Bender noted that even so, the conference would still put the paper through a substantial review process: “Scholarship is always a conversation and always a work in progress,” she said.

    Others, including William Fitzgerald, a former Google PR manager, have further cast doubt on Dean’s claim:

    This is such a lie. It was part of my job on the Google PR team to review these papers. Typically we got so many we didn’t review them in time or a researcher would just publish & we wouldn’t know until afterwards. We NEVER punished people for not doing proper process. https://t.co/hNE7SOWSLS pic.twitter.com/Ic30sVgwtn
    — William Fitzgerald (@william_fitz) December 4, 2020

    Google pioneered much of the foundational research that has since led to the recent explosion in large language models. Google AI first developed the Transformer language model in 2017; it serves as the basis for the company’s later model BERT, as well as OpenAI’s GPT-2 and GPT-3. BERT, as noted above, now also powers Google search, the company’s cash cow.

    Bender worries that Google’s actions could create “a chilling effect” on future AI ethics research. Many of the top experts in AI ethics work at large tech companies because that is where the money is. “That has been beneficial in many ways,” she says. “But we end up with an ecosystem that maybe has incentives that are not the very best ones for the progress of science for the world.”

    #Intelligence_artificielle #Google #Ethique #Timnit_Gebru

  • Microbes could be used to extract metals and minerals from space rocks | MIT Technology Review
    https://www.technologyreview.com/2020/11/10/1011935/microbes-extract-metals-minerals-space-rocks-mining/?truid=a497ecb44646822921c70e7e051f7f1a

    So, if I understand correctly, we are going to send bacteria into space to extract rare earth elements from space rocks, so that less weight has to be brought back to Earth.
    Still, that means sending bacteria to uninhabited places. And to think people were outraged when an Israeli team sent tardigrades to the Moon.
    Yet another commons set to disappear under the pressure of an expansionist economy.

    New experiments on the International Space Station suggest that future space miners could use bacteria to acquire valuable resources.
    by Neel V. Patel, November 10, 2020
    An illustration of the asteroid Psyche, thought to be primarily made of metals. (ASU/Peter Rubin)

    A species of bacteria can successfully pull out rare Earth elements from rocks, even in microgravity environments, a study on the International Space Station has found. The new findings, published in Nature Communications today, suggest a new way we could one day use microbes to mine for valuable metals and minerals off Earth.

    Why bacteria: Single-celled organisms have evolved over time on Earth to extract nutrients and other essential compounds from rocks through specialized chemical reactions. These bacterial processes are harnessed to extract about 20% of the world’s copper and gold for human use. The scientists wanted to know if they worked in microgravity too.

    The findings: BioRock was a series of 36 experiments that took place on the space station. An international team of scientists built what they call “biomining reactors”—tiny containers the size of matchboxes that contain small slices of basalt rock (igneous rock that’s usually found at or near the surface of Earth, and is quite common on the moon and Mars) submerged in a solution of bacteria.

    Up on the ISS those bacteria were exposed to different gravity simulations (microgravity, Mars gravity, and Earth gravity) as they munched on the rocks for about three weeks, while researchers measured the rare Earth elements released from that activity. Of the three bacteria species studied, one—Sphingomonas desiccabilis—was capable of extracting elements like neodymium, cerium, and lanthanum about as effectively in lower-gravity environments as they do on Earth.

    So what: Microbes won’t replace standard mining technology if we ever mine for resources in space, but they could definitely speed things up. The team behind BioRock suggests that microbes could help accelerate mining on extraterrestrial bodies by as much as 400%, helping to separate metal powders and valuable minerals from other useful elements like oxygen. The fact that they seem able to withstand microgravity suggests these microbes could be a potentially cheap way to extract resources to make life in space more sustainable—and enable lengthy journeys and settlements on distant worlds.

    #Espace #Terres_rares #Bactéries #Espace #Communs

  • How China got a head start in fintech, and why the West won’t catch up
    https://www.technologyreview.com/2018/12/19/138354/how-china-got-a-head-start-in-fintech-and-why-the-west-wont-catch-

    Payment apps like Alipay and WeChat transformed daily life in China. The West won’t see a similar payments revolution—and that might even be a good thing. In 2013 I moved from Paris to Beijing to study China’s financial system. I stayed for two years and became fluent enough to translate economics books from Mandarin into English and give talks on monetary policy in Mandarin. But I never really felt I fit in until I visited again and Alipay finally approved me (foreigners can have a hard (...)

    #Alibaba #Apple #Google #Tencent #WeChat #Alipay #payement #QRcode #technologisme #domination #finance (...)

    ##surveillance

  • Live facial recognition is tracking kids suspected of being criminals
    https://www.technologyreview.com/2020/10/09/1009992/live-facial-recognition-is-tracking-kids-suspected-of-crime

    In Buenos Aires, the first known system of its kind is hunting down minors who appear in a national database of alleged offenders. In a national database in Argentina, tens of thousands of entries detail the names, birthdays, and national IDs of people suspected of crimes. The database, known as the Consulta Nacional de Rebeldías y Capturas (National Register of Fugitives and Arrests), or CONARC, began in 2009 as a part of an effort to improve law enforcement for serious crimes. But there (...)

    #algorithme #CCTV #biométrie #criminalité #données #facial #reconnaissance #vidéo-surveillance #enfants (...)

    ##criminalité ##surveillance