https://www.technologyreview.com

  • TikTok changed the shape of some people’s faces without asking | MIT Technology Review
    https://www.technologyreview.com/2021/06/10/1026074/tiktok-mandatory-beauty-filter-bug/?truid=a497ecb44646822921c70e7e051f7f1a

    Users noticed what appeared to be a beauty filter they hadn’t requested—and which they couldn’t turn off.
    by Abby Ohlheiser, June 10, 2021
    A user opening TikTok on his iPhone
    Lorenzo Di Cola/NurPhoto via AP

    “That’s not my face,” Tori Dawn thought, after opening TikTok to make a video in late May. The jaw reflected back on the screen was wrong, slimmer and more feminine. And when they waved their hand in front of the camera, blocking most of their face from the lens, their jaw appeared to pop back to normal. Was their skin also a little softer?

    On further investigation, it seemed as if the image was being run through a beauty filter in the TikTok app. Normally, Dawn keeps those filters off in livestreams and videos to around 320,000 followers. But as they flipped through the app’s settings, there was no way to disable the effect: it seemed to be permanently in place, subtly feminizing Dawn’s features.

    “My face is pretty androgynous and I like my jawline,” Dawn said in an interview. “So when I saw that it was popping in and out, I’m like ‘why would they do that, why?’ This is one of the only things that I like about my face. Why would you do that?”

    Beauty filters are now a part of life online, allowing users to opt in to changing the face they present to the world on social media. Filters can widen eyes, plump up lips, apply makeup, and change the shape of the face, among other things. But it’s usually a choice, not forced on users—which is why Dawn and others who encountered this strange effect were so angry and disturbed by it.

    Dawn told their followers about it in a video. “As long as that’s still a thing,” Dawn said, showing their jaw popping in and out on screen, “I don’t feel comfortable making videos because this is not what I look like, and I don’t know how to fix it.” The video got more than 300,000 views, they said, and was shared and duetted by other users who noticed the same thing.

    In the video’s caption, Dawn wrote: “congrats tiktok I am super uncomfortable and disphoric now cuz of whatever the fuck this shit is.”

    “Is that why I’ve been kind of looking like an alien lately?” said one.

    “Tiktok. Fix this,” said another.

    Videos like these circulated for days in late May, as a portion of TikTok’s users looked into the camera and saw a face that wasn’t their own. As the videos spread, many users wondered whether the company was secretly testing out a beauty filter on some users.
    An odd, temporary issue

    I’m a TikTok lurker, not a maker, so it was only after seeing Dawn’s video that I decided to see if the effect appeared on my own camera. Once I started making a video, the change to my jaw shape was obvious. I suspected, but couldn’t tell for sure, that my skin had been smoothed as well. I sent a video of it in action to coworkers and my Twitter followers, asking them to open the app and try the same thing on their own phones: from their responses, I learned that the effect only seemed to impact Android phones. I reached out to TikTok, and the effect stopped appearing two days later. The company later acknowledged in a short statement that there was an issue that had been resolved, but did not provide further details.

    On the surface it was an odd, temporary issue that affected some users and not others. But it was also forcibly changing people’s appearances—an important glitch for an app that is used by around 100 million people in the US. So I also sent the video to Amy Niu, a PhD candidate at the University of Wisconsin who studies the psychological impact of beauty filters. She pointed out that in China, and some other places, some apps add a subtle beauty filter by default. When Niu uses apps like WeChat, she can only really tell that a filter is in place by comparing a photo of herself using her camera to the image produced in the app.

    A couple of months ago, she said, she downloaded the Chinese version of TikTok, called Douyin. “When I turned off the beauty mode and filters, I can still see an adjustment to my face,” she said.

    Having beauty filters in an app isn’t necessarily a bad thing, Niu said, but app designers have a responsibility to consider how those filters will be used, and how they will change the people who use them. Even if it was a temporary bug, it could have an impact on how people see themselves.

    “People’s internalization of beauty standards, their own body image or whether they will intensify their appearance concerns,” Niu said, are all considerations.

    For Dawn, the strange facial effect was just one more thing to add to the list of frustrations with TikTok: “It’s been very reminiscent of a relationship with a narcissist because they love bomb you one minute, they’re giving you all these followers and all this attention and it feels so good,” they said. “And then for some reason they just, they’re just like, we’re cutting you off.”

    #Beauty_filters #Image_de_soi #Filtres #Image

  • Here’s what China wants from its next space station | MIT Technology Review
    https://www.technologyreview.com/2021/04/30/1024371/china-space-station-tianhe-1-iss

    “From my perspective, the Chinese government’s number one goal is its own survival,” says Hines. “And so these projects are very much aligned with those domestic interests, even if they don’t make a ton of sense in broader geopolitical considerations or have much in the way of scientific contributions.”

  • Police in Ogden, Utah and small cities around the US are using these surveillance technologies | MIT Technology Review
    https://www.technologyreview.com/2021/04/19/1022893/police-surveillance-tactics-cameras-rtcc/?truid=a497ecb44646822921c70e7e051f7f1a

    Police departments want to know as much as they legally can. But does ever-greater surveillance technology serve the public interest?

    At a conference in New Orleans in 2007, Jon Greiner, then the chief of police in Ogden, Utah, heard a presentation by the New York City Police Department about a sophisticated new data hub called a “real time crime center.” Reams of information rendered in red and green splotches, dotted lines, and tiny yellow icons appeared as overlays on an interactive map of New York City: Murders. Shootings. Road closures.

    In the early 1990s, the NYPD had pioneered CompStat, a system that aimed to discern patterns in crime data and has since been widely adopted by large police departments around the country. With the real time crime center, the idea was to go a step further: What if dispatchers could use the department’s vast trove of data to inform the police response to incidents as they occurred?

    In 2021, it might be simpler to ask what can’t be mapped. Law enforcement agencies today have access to powerful new engines of data processing and association. Police agencies in major cities are already using facial recognition to identify suspects—sometimes falsely—and deploying predictive policing to define patrol routes.

    Around the country, the expansion of police technology has followed a similar pattern, driven more by conversations between police agencies and their vendors than between police and the public they serve. The question is: where do we draw the line? And who gets to decide?

    #Police #Prédiction #Smart_city



  • Why 2020 was a pivotal, contradictory year for facial recognition
    https://www.technologyreview.com/2020/12/29/1015563/why-2020-was-a-pivotal-contradictory-year-for-facial-recognition

    The racial justice movement pushed problems with the technology into public consciousness—but despite scandals and bans, its growth isn’t slowing. America’s first confirmed wrongful arrest by facial recognition technology happened in January 2020. Robert Williams, a Black man, was arrested in his driveway just outside Detroit, with his wife and young daughter watching. He spent the night in jail. The next day in the questioning room, a detective slid a picture across the table to Williams of (...)

    #algorithme #CCTV #biométrie #racisme #facial #reconnaissance #vidéo-surveillance #BlackLivesMatter #discrimination #surveillance #Clearview #Microsoft #IBM #Amazon #lobbying (...)

    ##ACLU

  • The new lawsuit that shows facial recognition is officially a civil rights issue | MIT Technology Review
    https://www.technologyreview.com/2021/04/14/1022676/robert-williams-facial-recognition-lawsuit-aclu-detroit-police/?truid=a497ecb44646822921c70e7e051f7f1a

    Robert Williams, who was wrongfully arrested because of a faulty facial recognition match, is asking for the technology to be banned.

    The news: On January 9, 2020, Detroit Police wrongfully arrested a Black man named Robert Williams due to a bad match from their department’s facial recognition system. Two more cases of false arrest have since been made public. Both men are also Black, and both have taken legal action to try to rectify the situation. Now Williams is following in their path and going further—not only by suing the Detroit Police for his wrongful arrest, but by trying to get the technology banned.

    The details: On Tuesday, the ACLU and the University of Michigan Law School’s Civil Rights Litigation Initiative filed a lawsuit on behalf of Williams, alleging that the arrest violated his Fourth Amendment rights and Michigan’s civil rights law. The suit requests compensation, greater transparency about the use of facial recognition, and that the Detroit Police Department stop using all facial recognition technology, either directly or indirectly.

    The significance: Racism within American law enforcement makes the use of facial recognition, which has been proven to misidentify Black people at much higher rates, even more concerning.

    #Reconnaissance_faciale #Racisme #Droits_humains #Intelligence_artificielle

  • Facebook’s ad algorithms are still excluding women from seeing jobs
    https://www.technologyreview.com/2021/04/09/1022217/facebook-ad-algorithm-sex-discrimination

    Its ad-delivery system is excluding women from opportunities without regard to their qualifications. That would be illegal under US employment law. Facebook is withholding certain job ads from women because of their gender, according to the latest audit of its ad service. The audit, conducted by independent researchers at the University of Southern California (USC), reveals that Facebook’s ad-delivery system shows different job ads to women and men even though the jobs require the same (...)

    #Facebook #sexisme #algorithme #biais #discrimination #femmes #travail

  • The NYPD used Clearview’s controversial facial recognition tool. Here’s what you need to know
    https://www.technologyreview.com/2021/04/09/1022240/clearview-ai-nypd-emails

    Newly released emails show New York police have been widely using the controversial Clearview AI facial recognition system—and making misleading statements about it. It’s been a busy week for Clearview AI, the controversial facial recognition company that uses 3 billion photos scraped from the web to power a search engine for faces. On April 6, Buzzfeed News published a database of over 1,800 entities—including state and local police and other taxpayer-funded agencies such as health-care (...)

    #Clearview #algorithme #CCTV #biométrie #police #facial #reconnaissance #vidéo-surveillance #surveillance (...)

    ##NYPD

  • How beauty filters took over social media
    https://www.technologyreview.com/2021/04/02/1021635/beauty-filters-young-girls-augmented-reality-social-media

    The most widespread use of augmented reality isn’t in gaming: it’s the face filters on social media. The result? A mass experiment on girls and young women. Veronica started using filters to edit pictures of herself on social media when she was 14 years old. She remembers everyone in her middle school being excited by the technology when it became available, and they had fun playing with it. “It was kind of a joke,” she says. “People weren’t trying to look good when they used the filters.” (...)

    #TikTok #Facebook #Instagram #MySpace #Snapchat #algorithme #technologisme #beauté #femmes #jeunesse #selfie (...)

    ##beauté ##SocialNetwork

  • How to poison the data that Big Tech uses to surveil you
    https://www.technologyreview.com/2021/03/05/1020376/resist-big-tech-surveillance-data

    Algorithms are meaningless without good data. The public can exploit that to demand change. Every day, your life leaves a trail of digital breadcrumbs that tech giants use to track you. You send an email, order some food, stream a show. They get back valuable packets of data to build up their understanding of your preferences. That data is fed into machine-learning algorithms to target you with ads and recommendations. Google cashes your data in for over $120 billion a year of ad revenue. (...)

    #Google #algorithme #activisme #[fr]Règlement_Général_sur_la_Protection_des_Données_(RGPD)[en]General_Data_Protection_Regulation_(GDPR)[nl]General_Data_Protection_Regulation_(GDPR) #BigData (...)

    ##[fr]Règlement_Général_sur_la_Protection_des_Données__RGPD_[en]General_Data_Protection_Regulation__GDPR_[nl]General_Data_Protection_Regulation__GDPR_ ##microtargeting

    • Data Leverage: A Framework for Empowering the Public in its Relationship with Technology Companies

      https://arxiv.org/pdf/2012.09995.pdf

      Many powerful computing technologies rely on implicit and explicit data contributions from the public. This dependency suggests a potential source of leverage for the public in its relationship with technology companies: by reducing, stopping, redirecting, or otherwise manipulating data contributions, the public can reduce the effectiveness of many lucrative technologies. In this paper, we synthesize emerging research that seeks to better understand and help people action this data leverage. Drawing on prior work in areas including machine learning, human-computer interaction, and fairness and accountability in computing, we present a framework for understanding data leverage that highlights new opportunities to change technology company behavior related to privacy, economic inequality, content moderation and other areas of societal concern. Our framework also points towards ways that policymakers can bolster data leverage as a means of changing the balance of power between the public and tech companies.

  • Is the new boom in digital art sales a genuine opportunity or a trap? | MIT Technology Review
    https://www.technologyreview.com/2021/03/25/1021215/nft-artists-scams-profit-environment-blockchain

    Artists are jumping into a market that will pay thousands for their work. But they’re running into scams, environmental concerns, and crypto hype.

    Anna Podedworna first heard about NFTs a month or so ago, when a fellow artist sent her an Instagram message trying to convince her to get on board. She found it really off-putting, like a pitch for a pyramid scheme. He had the best of intentions, she thought: NFTs, or non-fungible tokens, are basically a way of buying and selling anything digital, including art, built on cryptocurrency technology. Despite Podedworna’s initial reaction, she started researching whether they might provide some alternative income.

    She’s still on the fence, but NFTs have become an unavoidable subject for anyone earning a living as a creative person online. Some promise that NFTs are part of a digital revolution that will democratize fame and give creators control. Others point to the environmental impact of crypto and worry about unrealistic expectations set by, say, the news that digital artist Beeple had sold a JPG of his collected works for $69 million in a Christie’s auction.

    Newcomers must untangle practical, logistical, and ethical conundrums if they want to enter the fray before the current wave of interest passes. And there’s a question lingering in the background: Is the NFT craze benefiting digital artists, or are artists helping to make wealthy cryptocurrency holders even richer?

    #NFT #Art_numérique #Cryptoart #Arnaque #Cryptomonnaies #Idéologie_propriétaire

  • Scientists plan to drop limits on how far human embryos are grown in the lab | MIT Technology Review
    https://www.technologyreview.com/2021/03/16/1020879/scientists-14-day-limit-stem-cell-human-embryo-research/?truid=a497ecb44646822921c70e7e051f7f1a

    As technology for manipulating embryonic life accelerates, researchers want to get rid of their biggest stop sign.

    by Antonio Regalado, March 16, 2021

    Pushing the limits: For the last 40 years, scientists have agreed never to allow human embryos to develop beyond two weeks in their labs. Now a key scientific body is ready to do away with the 14-day limit. The International Society for Stem Cell Research has prepared draft recommendations to move such research out of a category of “prohibited” scientific activities and into a class of research that can be permitted after ethics review and depending on national regulations.

    Why? Scientists are motivated to grow embryos longer in order to study—and potentially manipulate—the development process. They believe discoveries could come from studying embryos longer—for example, improvements to IVF or clues to the causes of birth defects. But such techniques raise the possibility of someday gestating animals outside the womb until birth, a concept called ectogenesis. And the long-term growth of embryos could create a platform to explore the genetic engineering of humans.

    #Cellules_souches #Biotechnologies #Embryons_humains #Hubris

  • He got Facebook hooked on AI. Now he can’t fix its misinformation addiction, by Karen Hao | MIT Technology Review
    https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation

    The company’s AI algorithms gave it an insatiable habit for lies and hate speech. Now the man who built them can’t fix the problem.

    #facebook #AI

  • MIT Technology Review : We reveal our 10 Breakthrough Technologies of 2021

    For the last 20 years, MIT Technology Review has compiled an annual selection of the year’s most important technologies. Today, we unveil this year’s list. Some, such as mRNA vaccines, are already changing our lives, while others are still a few years off. As always, three things are true of our list. It is eclectic; some of the innovations on it are clearly making an impact now, while some have yet to do so; and many of them have the potential to do harm as well as good. Whether or not they come to represent progress 20 years from now depends on how they’re used—and, of course, on how we’re defining progress by then. Taken together, we believe this list represents a glimpse into our collective future.

    Here are our 10 breakthrough technologies of 2021:

    Messenger RNA vaccines. The two most effective vaccines against the coronavirus are based on messenger RNA, a technology that has been in the works for 20 years and could transform medicine, leading to vaccines against various infectious diseases, including malaria.

    GPT-3. Large natural-language computer models that learn to write and speak are a big step toward AI that can better understand and interact with the world. GPT-3 is by far the largest—and most literate—to date.

    TikTok recommendation algorithms. These algorithms have changed the way people become famous online. The ability of new creators to get a lot of views very quickly—and the ease with which users can discover so many kinds of content—have contributed to the app’s stunning growth.

    Lithium-metal batteries. Electric vehicles are expensive, and you can only drive them a few hundred miles before they need to recharge. Lithium-metal batteries, as opposed to the existing lithium-ion, could boost the range of an EV by 80%.

    Data trusts. A data trust is a legal entity that collects and manages people’s personal data on their behalf. Data trusts could offer a potential solution to long-standing problems in privacy and security.

    Green hydrogen. Hydrogen has always been an intriguing possible replacement for fossil fuels, but up to now it’s been made from natural gas; the process is dirty and energy intensive. The rapidly dropping cost of solar and wind power means green hydrogen is now cheap enough to be practical.

    Digital contact tracing. Although it hasn’t lived up to the hype in this pandemic, especially in the US, digital contact tracing could not only help us prepare for the next pandemic but also carry over to other areas of healthcare.

    Hyper-accurate positioning. While GPS is accurate to within 5 to 10 meters, new hyper-accurate positioning technologies have accuracies within a few millimeters. That could be transformative for delivery robots and self-driving cars.

    Remote everything. The pandemic forced the world to go remote. The knock-on effects for work, play, healthcare and much else besides are huge.

    Multi-skilled AI. AI currently lacks the ability, found even in young children, to learn how the world works and apply that general knowledge to new situations. That’s changing.

    Read more about each of these technologies, and read the latest issue of MIT Technology Review, all about progress. Not a subscriber? Now’s your chance! Prices range from just $50 to $100 a year for you to get access to fantastic, award-winning journalism about what’s now and what’s next in technology.

    #Technologies

  • This is how we lost control of our faces | MIT Technology Review
    https://www.technologyreview.com/2021/02/05/1017388/ai-deep-learning-facial-recognition-data-history

    The largest ever study of facial-recognition data shows how much the rise of deep learning has fueled a loss of privacy.
    by Karen Hao, February 5, 2021

    In 1964, mathematician and computer scientist Woodrow Bledsoe first attempted the task of matching suspects’ faces to mugshots. He measured out the distances between different facial features in printed photographs and fed them into a computer program. His rudimentary successes would set off decades of research into teaching machines to recognize human faces.

    Now a new study shows just how much this enterprise has eroded our privacy. It hasn’t just fueled an increasingly powerful tool of surveillance. The latest generation of deep-learning-based facial recognition has completely disrupted our norms of consent.

    People were extremely cautious about collecting, documenting, and verifying face data in the early days, says Inioluwa Deborah Raji, a co-author of the study. “Now we don’t care anymore. All of that has been abandoned,” she says. “You just can’t keep track of a million faces. After a certain point, you can’t even pretend that you have control.”

    A history of facial-recognition data

    The researchers identified four major eras of facial recognition, each driven by an increasing desire to improve the technology. The first phase, which ran until the 1990s, was largely characterized by manually intensive and computationally slow methods.

    But then, spurred by the realization that facial recognition could track and identify individuals more effectively than fingerprints, the US Department of Defense pumped $6.5 million into creating the first large-scale face data set. Over 15 photography sessions in three years, the project captured 14,126 images of 1,199 individuals. The Face Recognition Technology (FERET) database was released in 1996.

    The following decade saw an uptick in academic and commercial facial-recognition research, and many more data sets were created. The vast majority were sourced through photo shoots like FERET’s and had full participant consent. Many also included meticulous metadata, Raji says, such as the age and ethnicity of subjects, or illumination information. But these early systems struggled in real-world settings, which drove researchers to seek larger and more diverse data sets.

    In 2007, the release of the Labeled Faces in the Wild (LFW) data set opened the floodgates to data collection through web search. Researchers began downloading images directly from Google, Flickr, and Yahoo without concern for consent. LFW also relaxed standards around the inclusion of minors, using photos found with search terms like “baby,” “juvenile,” and “teen” to increase diversity. This process made it possible to create significantly larger data sets in a short time, but facial recognition still faced many of the same challenges as before. This pushed researchers to seek yet more methods and data to overcome the technology’s poor performance.

    Then, in 2014, Facebook used its user photos to train a deep-learning model called DeepFace. While the company never released the data set, the system’s superhuman performance elevated deep learning to the de facto method for analyzing faces. This is when manual verification and labeling became nearly impossible as data sets grew to tens of millions of photos, says Raji. It’s also when really strange phenomena started appearing, like auto-generated labels that included offensive terminology.


    The way the data sets were used began to change around this time, too. Instead of trying to match individuals, new models began focusing more on classification. “Instead of saying, ‘Is this a photo of Karen? Yes or no,’ it turned into ‘Let’s predict Karen’s internal personality, or her ethnicity,’ and boxing people into these categories,” Raji says.

    Amba Kak, the global policy director at AI Now, who did not participate in the research, says the paper offers a stark picture of how the biometrics industry has evolved. Deep learning may have rescued the technology from some of its struggles, but “that technological advance also has come at a cost,” she says. “It’s thrown up all these issues that we now are quite familiar with: consent, extraction, IP issues, privacy.”

    Raji says her investigation into the data has made her gravely concerned about deep-learning-based facial recognition.

    “It’s so much more dangerous,” she says. “The data requirement forces you to collect incredibly sensitive information about, at minimum, tens of thousands of people. It forces you to violate their privacy. That in itself is a basis of harm. And then we’re hoarding all this information that you can’t control to build something that likely will function in ways you can’t even predict. That’s really the nature of where we’re at.”

    #Reconnaissance_faciale #éthique #Histoire_numérique #Surveillance

  • The space tourism we were promised is finally here—sort of | MIT Technology Review
    https://www.technologyreview.com/2021/02/03/1017255/space-tourism-finally-here-sort-of-spacex-inspiration4/?truid=a497ecb44646822921c70e7e051f7f1a

    SpaceX weathered the onset of the covid-19 pandemic last year to become the first private company to launch astronauts into space using a commercial spacecraft.

    It’s poised to build on that success with another huge milestone before 2021 is over. On Monday, the company announced plans to launch the first “all-civilian” mission into orbit by the end of the year. Called Inspiration4, the mission will take billionaire Jared Isaacman, a trained pilot and the CEO of digital payments company Shift4Payments, plus three others into low Earth orbit via a Crew Dragon vehicle for two to four days, possibly longer.

    Inspiration4 includes a charity element: Isaacman (the sole buyer of the mission and its “commander”) has donated $100 million to St. Jude Children’s Research Hospital, in Memphis, and is attempting to raise at least $100 million more from public donors. One seat is going to a “St. Jude ambassador” who has already been chosen. But the other two are still up for grabs: one will be raffled off to someone who donates at least $10 to St. Jude, while the other will go to an entrepreneur chosen through a competition held by Shift4Payments.

    “This is an important milestone towards enabling access to space for everyone,” SpaceX CEO Elon Musk told reporters on Monday. “It is only through missions like this that we’re able to bring the cost down over time and make space accessible to all.”

    Inspiration4 marks SpaceX’s fourth scheduled private mission in the next few years. The other three include a collaboration with Axiom Space to use Crew Dragon to take four people for an eight-day stay aboard the International Space Station (now scheduled for no earlier than January 2022); another Crew Dragon mission into orbit later that year for four private citizens through tourism company Space Adventures; and Japanese billionaire Yusaku Maezawa’s #dearMoon mission around the moon in 2023 for himself plus seven to 10 others aboard the Starship spacecraft.

    SpaceX has never really billed itself as a space tourism company as aggressively as Blue Origin and Virgin Galactic have. While Crew Dragon goes all the way into low-Earth orbit, Virgin Galactic’s SpaceShipTwo and Blue Origin’s New Shepard vehicles just go into suborbital space, offering a taste of microgravity and a view of the Earth from high above for just a few minutes—but for way less money. And yet, in building a business that goes even farther, with higher launch costs and the need for more powerful rockets, SpaceX already has four more private missions on the books than any other company does.

    When Crew Dragon first took NASA astronauts into space last year, one of the biggest questions to come up was whether customers outside NASA would actually be interested in going.

    “A lot of people believe there is a market for space tourism,” says Howard McCurdy, a space policy expert at American University in Washington, DC. “But right now it’s at the very high end. As transportation capabilities improve, the hope is that the costs will come down. That begs the question of whether or not you can sustain a new space company on space tourism alone. I think that’s questionable.”

    So why has SpaceX’s expansion into the private mission scene gone so well so far? Part of it must be that it’s such an attractive brand to partner with at the moment. But even if a market does not materialize soon to make private missions a profitable venture, SpaceX doesn’t need to be concerned. It has plenty of other ways to make money.

    “I’m not sure Elon Musk cares much if he makes money through this business,” says McCurdy. “But he’s very good at leveraging and financing his operations.” SpaceX launches satellites for government and commercial customers around the world; it’s got contracts with NASA for taking cargo and astronauts alike to the space station; and it’s ramping up the build-out of the Starlink constellation and should start offering internet service to customers sometime this year.

    “It really reduces your risk when you can have multiple sources of revenue and business for an undertaking that’s based upon the single leap of rockets and space technologies,” says McCurdy. “The market for space tourism is not large enough to sustain a commercial space company. When combined with government contracts, private investments, and foreign sales it starts to become sustainable.”

    Space tourism, especially to low-Earth orbit, will still remain incredibly expensive for the foreseeable future. And that underscores the issue of equity. “If we’re going into space, who’s the ‘we’?” asks McCurdy. “Is it just the top 1% of the top 1%?”

    The lottery concept addresses this to some extent and offers opportunities to ordinary people, but it won’t be enough on its own. Space tourism, and the rest of the space industry, still needs a sustainable model that can invite more people to participate.

    For now, SpaceX appears to be leading the drive to popularize space tourism. And competitors don’t necessarily need to emulate SpaceX’s business model precisely in order to catch up. Robert Goehlich, a Germany-based space tourism expert at Embry-Riddle Aeronautical University, notes that space tourism is already multifaceted, encompassing suborbital flights, orbital flights, space station flights, space hotel flights, and moon flights. The market for one, such as cheaper suborbital flights, does not necessarily face the same constraints as the others.

    Still, there is no question this could be the year private missions become a reality. “We’ve waited a long time for space tourism,” says McCurdy. “We’re going to get a chance this year to see if it works as expected.”

    #Espace #Commercialisation #Tourisme #Enclosures

  • How our data encodes systematic racism
    https://www.technologyreview.com/2020/12/10/1013617/racism-data-science-artificial-intelligence-ai-opinion

    Technologists must take responsibility for the toxic ideologies that our data sets and algorithms reflect. I’ve often been told, “The data does not lie.” However, that has never been my experience. For me, the data nearly always lies. Google Image search results for “healthy skin” show only light-skinned women, and a query on “Black girls” still returns pornography. The CelebA face data set has labels of “big nose” and “big lips” that are disproportionately assigned to darker-skinned female faces (...)

    #algorithme #racisme #données #biais #discrimination

  • Inside NSO, Israel’s billion-dollar spyware giant
    https://www.technologyreview.com/2020/08/19/1006458/nso-spyware-controversy-pegasus-human-rights

    The world’s most notorious surveillance company says it wants to clean up its act. Go on, we’re listening.

    Maâti Monjib speaks slowly, like a man who knows he’s being listened to.

    It’s the day of his 58th birthday when we speak, but there’s little celebration in his voice. “The surveillance is hellish,” Monjib tells me. “It is really difficult. It controls everything I do in my life.”

    A history professor at the University of Mohammed V in Rabat, Morocco, Monjib vividly remembers the day in 2017 when his life changed. Charged with endangering state security by the government he has fiercely and publicly criticized, he was sitting outside a courtroom when his iPhone suddenly lit up with a series of text messages from numbers he didn’t recognize. They contained links to salacious news, petitions, and even Black Friday shopping deals.

    A month later, an article accusing him of treason appeared on a popular national news site with close ties to Morocco’s royal rulers. Monjib was used to attacks, but now it seemed his harassers knew everything about him: another article included information about a pro-democracy event he was set to attend but had told almost no one about. One story even proclaimed that the professor “has no secrets from us.”

    He’d been hacked. The messages had all led to websites that researchers say were set up as lures to infect visitors’ devices with Pegasus, the most notorious spyware in the world.

    Pegasus is the blockbuster product of NSO Group, a secretive billion-dollar Israeli surveillance company. It is sold to law enforcement and intelligence agencies around the world, which use the company’s tools to choose a human target, infect the person’s phone with the spyware, and then take over the device. Once Pegasus is on your phone, it is no longer your phone.

    NSO sells Pegasus with the same pitch arms dealers use to sell conventional weapons, positioning it as a crucial aid in the hunt for terrorists and criminals. In an age of ubiquitous technology and strong encryption, such “lawful hacking” has emerged as a powerful tool for public safety when law enforcement needs access to data. NSO insists that the vast majority of its customers are European democracies, although since it doesn’t release client lists and the countries themselves remain silent, that has never been verified.

    Monjib’s case, however, is one of a long list of incidents in which Pegasus has been used as a tool of oppression. It has been linked to cases including the murder of Saudi journalist Jamal Khashoggi, the targeting of scientists and campaigners pushing for political reform in Mexico, and Spanish government surveillance of Catalan separatist politicians. Mexico and Spain have denied using Pegasus to spy on opponents, but accusations that they have done so are backed by substantial technical evidence.


    Some of that evidence is contained in a lawsuit filed last October in California by WhatsApp and its parent company, Facebook, alleging that Pegasus manipulated WhatsApp’s infrastructure to infect more than 1,400 cell phones. Investigators at Facebook found more than 100 human rights defenders, journalists, and public figures among the targets, according to court documents. Each call that was picked up, they discovered, sent malicious code through WhatsApp’s infrastructure and caused the recipient’s phone to download spyware from servers owned by NSO. This, WhatsApp argued, was a violation of American law.

    NSO has long faced such accusations with silence. Claiming that much of its business is an Israeli state secret, it has offered precious little public detail about its operations, customers, or safeguards.

    Now, though, the company suggests things are changing. In 2019, NSO, which was owned by a private equity firm, was sold back to its founders and another private equity firm, Novalpina, for $1 billion. The new owners decided on a fresh strategy: emerge from the shadows. The company hired elite public relations firms, crafted new human rights policies, and developed new self-governance documents. It even began showing off some of its other products, such as a covid-19 tracking system called Fleming, and Eclipse, which can hack drones deemed a security threat.

    Over several months, I’ve spoken with NSO leadership to understand how the company works and what it says it is doing to prevent human rights abuses carried out using its tools. I have spoken to its critics, who see it as a danger to democratic values; to those who urge more regulation of the hacking business; and to the Israeli regulators responsible for governing it today. The company’s leaders talked about NSO’s future and its policies and procedures for dealing with problems, and it shared documents that detail its relationship with the agencies to which it sells Pegasus and other tools. What I found was a thriving arms dealer—inside the company, employees acknowledge that Pegasus is a genuine weapon—struggling with new levels of scrutiny that threaten the foundations of its entire industry.
    “A difficult task”

    From the first day Shmuel Sunray joined NSO as its general counsel, he faced one international incident after another. Hired just days after WhatsApp’s lawsuit was filed, he found other legal problems waiting on his desk as soon as he arrived. They all centered on the same basic accusation: NSO Group’s hacking tools are sold to, and can be abused by, rich and repressive regimes with little or no accountability.

    Sunray had plenty of experience with secrecy and controversy: his previous job was as vice president of a major weapons manufacturer. Over several conversations, he was friendly as he told me that he’s been instructed by the owners to change NSO’s culture and operations, making it more transparent and trying to prevent human rights abuses from happening. But he was also obviously frustrated by the secrecy that he felt prevented him from responding to critics.

    “It’s a difficult task,” Sunray told me over the phone from the company’s headquarters in Herzliya, north of Tel Aviv. “We understand the power of the tool; we understand the impact of misuse of the tool. We’re trying to do the right thing. We have real challenges dealing with government, intelligence agencies, confidentiality, operational necessities, operational limitations. It’s not a classic case of human rights abuse by a company, because we don’t operate the systems—we’re not involved in actual operations of the systems—but we understand there is a real risk of misuse from the customers. We’re trying to find the right balance.”

    This underpins NSO’s basic argument, one that is common among weapons manufacturers: the company is the creator of a technology that governments use, but it doesn’t attack anyone itself, so it can’t be held responsible.

    Still, according to Sunray, there are several layers of protection in place to try to make sure the wrong people don’t have access.
    Making a sale

    Like most other countries, Israel has export controls that require weapons manufacturers to be licensed and subject to government oversight. In addition, NSO does its own due diligence, says Sunray: its staff examine a country, look at its human rights record, and scrutinize its relationship with Israel. They assess the specific agency’s track record on corruption, safety, finance, and abuse—as well as factoring in how much it needs the tool.

    Sometimes negatives are weighed against positives. Morocco, for example, has a worsening human rights record but a lengthy history of cooperating with Israel and the West on security, as well as a genuine terrorism problem, so a sale was reportedly approved. By contrast, NSO has said that China, Russia, Iran, Cuba, North Korea, Qatar, and Turkey are among 21 nations that will never be customers.

    Finally, before a sale is made, NSO’s governance, risk, and compliance committee has to sign off. The company says the committee, made up of managers and shareholders, can decline sales or add conditions, such as technological restrictions, that are decided case by case.
    Preventing abuse

    Once a sale is agreed to, the company says, technological guardrails prevent certain kinds of abuse. For example, Pegasus does not allow American phone numbers to be infected, NSO says, and infected phones cannot even be physically located in the United States: if one does find itself within American borders, the Pegasus software is supposed to self-destruct.

    NSO says Israeli phone numbers are among others also protected, though who else gets protection and why remains unclear.

    When a report of abuse comes in, an ad hoc team of up to 10 NSO employees is assembled to investigate. They interview the customer about the allegations, and they request Pegasus data logs. These logs don’t contain the content the spyware extracted, like chats or emails—NSO insists it never sees specific intelligence—but do include metadata such as a list of all the phones the spyware tried to infect and their locations at the time.

    According to one recent contract I obtained, customers must “use the system only for the detection, prevention, and investigation of crimes and terrorism and ensure the system will not be used for human rights violations.” They must notify the company of potential misuse. NSO says it has terminated three contracts in the past for infractions including abuse of Pegasus, but it refuses to say which countries or agencies were involved or who the victims were.

    “We’re not naïve”

    Lack of transparency is not the only problem: the safeguards have limits. While the Israeli government can revoke NSO’s license for violations of export law, the regulators do not take it on themselves to look for abuse by potential customers and aren’t involved in the company’s abuse investigations.

    Many of the other procedures are merely reactive as well. NSO has no permanent internal abuse team, unlike almost any other billion-dollar tech firm, and most of its investigations are spun up only when an outside source such as Amnesty International or Citizen Lab claims there has been malfeasance. NSO staff interview the agencies and customers under scrutiny but do not talk to the alleged victims, and while the company often disputes the technical reports offered as evidence, it also claims that both state secrecy and business confidentiality prevent it from sharing more information.

    The Pegasus logs that are crucial to any abuse inquiry also raise plenty of questions. NSO Group’s customers are hackers who work for spy agencies; how hard would it be for them to tamper with the logs? In a statement, the company insisted this isn’t possible but declined to offer details.

    If the logs aren’t disputed, NSO and its customers will decide together whether targets are legitimate, whether genuine crimes have been committed, and whether surveillance was done under due process of law or whether autocratic regimes spied on opponents.

    Sunray, audibly exasperated, says he feels as if secrecy is forcing him to operate with his hands tied behind his back.

    “It’s frustrating,” he told me. “We’re not naïve. There have been misuses. There will be misuses. We sell to many governments. Even the US government—no government is perfect. Misuse can happen, and it should be addressed.”

    But Sunray also returns to the company’s standard response, the argument that underpins its defense in the WhatsApp lawsuit: NSO is a manufacturer, but it’s not the operator of the spyware. We built it but they did the hacking—and they are sovereign nations.

    That’s not enough for many critics. “No company that believes it can be the independent watchdog of their own products ever convinces me,” says Marietje Schaake, a Dutch politician and former member of the European Parliament. “The whole idea that they have their own mechanisms while they have no problem selling commercial spyware to whoever wants to buy it, knowing that it’s used against human rights defenders and journalists—I think it shows the lack of responsibility on the part of this company more than anything.”

    So why the internal push for more transparency now? Because the deluge of technical reports from human rights groups, the WhatsApp lawsuit, and increasing governmental scrutiny threaten NSO’s status quo. And if there is going to be a new debate over how the industry gets regulated, it pays to have a powerful voice.
    Growing scrutiny

    Lawful hacking and cyber-espionage have grown enormously as a business over the past decade, with no signs of retreat. NSO Group’s previous owners bought the company in 2014 for $130 million, less than one-seventh of the valuation it was sold for last year. The rest of the industry is expanding too, profiting from the spread of communications technology and deepening global instability. “There’s no doubt that any state has the right to buy this technology to fight crime and terrorism,” says Amnesty International’s deputy director, Danna Ingleton. “States are rightfully and lawfully able to use these tools. But that needs to be accompanied more with a regulatory system that prevents abuses and provides an accountability mechanism when abuse has happened.” Shining a much brighter light on the hacking industry, she argues, will allow for better regulation and more accountability.

    Earlier this year Amnesty International was in court in Israel arguing that the Ministry of Defense should revoke NSO’s license because of abuses of Pegasus. But just as the case was starting, officials from Amnesty and 29 other petitioners were told to leave the courtroom: a gag order was being placed on the proceedings at the ministry’s urging. Then, in July, a judge rejected the case outright.

    “I do not believe as a matter of principle and as a matter of law that NSO can claim a complete lack of responsibility for the way their tools are being used,” says United Nations special rapporteur Agnès Callamard. “That’s not how it works under international law.”

    Callamard advises the UN on extrajudicial executions and has been vocal about NSO Group and the spyware industry ever since it emerged that Pegasus was being used to spy on friends and associates of Khashoggi shortly before he was murdered. For her, the issue has life-or-death consequences.


    “We’re not calling for something radically new,” says Callamard. “We are saying that what’s in place at the moment is proving insufficient, and therefore governments or regulatory agencies need to move into a different gear quickly. The industry is expanding, and it should expand on the basis of the proper framework to regulate misuse. It’s important for global peace.”

    There have been calls for a temporary moratorium on sales until stronger regulation is enacted, but it’s not clear what that legal framework would look like. Unlike conventional arms, which are subject to various international laws, cyber weapons are currently not regulated by any worldwide arms control agreement. And while nonproliferation treaties have been suggested, there is little clarity on how they would measure existing capabilities, how monitoring or enforcement would work, or how the rules would keep up with rapid technological developments. Instead, most scrutiny today is happening at the national legal level.

    In the US, both the FBI and Congress are looking into possible hacks of American targets, while an investigation led by Senator Ron Wyden’s office wants to find out whether any Americans are involved in exporting surveillance technology to authoritarian governments. A recent draft US intelligence bill would require a government report on commercial spyware and surveillance technology.

    The WhatsApp lawsuit, meanwhile, has taken aim close to the heart of NSO’s business. The Silicon Valley giant argues that by targeting California residents—that is, WhatsApp and Facebook—NSO has given the court in San Francisco jurisdiction, and that the judge in the case can bar the Israeli company from future attempts to misuse WhatsApp’s and Facebook’s networks. That opens the door to an awful lot of possibilities: Apple, whose iPhone has been a paramount NSO target, could feasibly mount a similar legal attack. Google, too, has spotted NSO targeting Android devices.

    And financial damages are not the only sword hanging over NSO’s head. Such lawsuits also bring with them the threat of courtroom discovery, which has the potential to bring details of NSO’s business deals and customers into the public eye.

    “A lot depends on exactly how the court rules and how broadly it characterizes the violation NSO is alleged to have committed here,” says Alan Rozenshtein, a former Justice Department lawyer now at the University of Minnesota Law School. “At a minimum, if NSO loses this case, it calls into question all of those companies that make their products or make their living by finding flaws in messaging software and providing services exploiting those flaws. This will create enough legal uncertainty that I would imagine these would-be clients would think twice before contracting with them. You don’t know if the company will continue to operate, if they’ll get dragged to court, if your secrets will be exposed.” NSO declined to comment on the alleged WhatsApp hack, since it is still an active case.
    “We are always spied on”

    In Morocco, Maâti Monjib was subjected to at least four more hacking attacks throughout 2019, each more advanced than the one before. At some point, his phone browser was invisibly redirected to a suspicious domain that researchers suspect was used to silently install malware. Instead of something like a text message that can raise the alarm and leave a visible trace, this one was a much quieter network injection attack, a tactic valued because it’s almost imperceptible except to expert investigators.

    On September 13, 2019, Monjib had lunch at home with his friend Omar Radi, a Moroccan journalist who is one of the regime’s sharpest critics. That very day, an investigation later found, Radi was hit with the same kind of network injection attacks that had snared Monjib. The hacking campaign against Radi lasted at least into January 2020, Amnesty International researchers said. He’s been subject to regular police harassment ever since.

    At least seven more Moroccans received warnings from WhatsApp about Pegasus being used to spy on their phones, including human rights activists, journalists, and politicians. Are these the kinds of legitimate spying targets—the terrorists and criminals—laid out in the contract that Morocco and all NSO customers sign?

    In December, Monjib and the other victims sent a letter to Morocco’s data protection authority asking for an investigation and action. Nothing formally came of it, but one of the men, the pro-democracy economist Fouad Abdelmoumni, says his friends high up at the agency told him the letter was hopeless and urged him to drop the matter. The Moroccan government, meanwhile, has responded by threatening to expel Amnesty International from the country.

    What’s happening in Morocco is emblematic of what’s happening around the world. While it’s clear that democracies are major beneficiaries of lawful hacking, a long and growing list of credible, detailed, technical, and public investigations shows Pegasus being misused by authoritarian regimes with long records of human rights abuse.

    “Morocco is a country under an authoritarian regime who believe people like Monjib and myself have to be destroyed,” says Abdelmoumni. “To destroy us, having access to all information is key. We always consider that we are spied on. All of our information is in the hands of the palace.”

    #Apple #NSO #Facebook #WhatsApp #iPhone #Pegasus #smartphone #spyware #activisme #journalisme #écoutes #hacking #surveillance #Amnesty (...)

    ##CitizenLab