2021

  • Behind the painstaking process of creating Chinese computer fonts | MIT Technology Review
    https://www.technologyreview.com/2021/05/31/1025599/history-first-chinese-digital-computer-fonts

    Bruce Rosenblum switched on his Apple II, which rang out a high F note followed by the clatter of the floppy drive. After a string of thock thock keystrokes, the 12-inch Sanyo monitor began to phosphoresce. A green grid appeared, 16 units wide and 16 units tall. This was “Gridmaster,” a program Bruce had cooked up in the programming language BASIC to build one of the world’s first Chinese digital fonts. He was developing the font for an experimental machine called the Sinotype III, which was among the first personal computers to handle Chinese-language input and output.

    At the time, in the late 1970s and early 1980s, there were no personal computers being built in China. So to make a “Chinese” PC, Rosenblum’s team was reprogramming an Apple II to operate in Chinese. His list of tasks was long. He had to program an operating system from scratch, since Apple II’s DOS 3.3 simply wouldn’t allow the inputting and outputting of Chinese-character texts. Likewise, he had to program the Chinese word processor itself, a job he worked on tirelessly for months.
    A photograph of the Sinotype III monitor shows the Gridmaster program and the digitization process of the Chinese character 电 (dian, electricity).
    LOUIS ROSENBLUM COLLECTION, STANFORD UNIVERSITY LIBRARY SPECIAL COLLECTIONS

    While Gridmaster may have been a simple program, the task that it would be used to accomplish—creating digital bitmaps of thousands of Chinese characters—posed profound design challenges. In fact, creating the font for Sinotype III—a machine developed by the Graphics Arts Research Foundation (GARF) in Cambridge, Massachusetts—took far longer than programming the computer itself. Without a font, there would be no way to display Chinese characters on screen, or to output them on the machine’s dot-matrix printer.

    For each Chinese character, designers had to make 256 separate decisions, one for each potential pixel in the bitmap. (A bitmap is a way of storing images digitally—whether as a JPEG, GIF, BMP, or other file format—using a grid of pixels that together make up a symbol or an image.) Multiplied across thousands of characters, this amounted to literally hundreds of thousands of decisions in a development process that took more than two years to complete.
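    To make that storage cost concrete, here is a minimal Python sketch (an illustration only, not the Sinotype III’s actual data format or its BASIC code) that packs one 16-by-16 grid of on/off pixels into 32 bytes:

      # Illustration only: each cell of the 16 x 16 grid is one yes/no decision
      # (one bit), so a finished character packs into 256 bits = 32 bytes.
      GRID = 16

      def pack_bitmap(rows):
          """Pack 16 strings of '.'/'#' (pixel off/on) into 32 bytes, row by row."""
          assert len(rows) == GRID and all(len(r) == GRID for r in rows)
          out = bytearray()
          for row in rows:
              value = 0
              for cell in row:
                  value = (value << 1) | (cell == "#")
              out += value.to_bytes(2, "big")   # 16 pixels per row -> 2 bytes
          return bytes(out)

      blank = ["." * GRID] * GRID          # even an empty glyph costs the full grid
      print(len(pack_bitmap(blank)))       # 32 bytes per character
      print(GRID * GRID)                   # 256 pixel decisions per character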

    Programming Gridmaster—which in hindsight Rosenblum described to me as “clunky to use, at best”—enabled his father, Louis Rosenblum, and GARF to farm out the responsibility of creating the digital font. Using any Apple II machine, and running Gridmaster off a floppy disc, data entry temps could create and save new Chinese character bitmaps, remotely. Once these bitmaps were created and stored, the Rosenblums could install them on the Sinotype III by using a second program (also designed by Bruce) that ingested them and their corresponding input codes into the system’s database.

    Sinotype III was never commercially released. Nevertheless, the painstaking work that went into its development—including the development of this bitmap Chinese font—was central to a complex global effort to solve a vexing engineering puzzle: how to equip a computer to handle Chinese, one of the most widely used languages on Earth.
    A photograph of a Sinotype III monitor displaying the Chinese bitmap font.
    LOUIS ROSENBLUM COLLECTION, STANFORD UNIVERSITY LIBRARY SPECIAL COLLECTIONS

    At the advent of computing and word processing in the West, engineers and designers determined that a low-resolution digital font for English could be built upon a 5-by-7 bitmap grid—requiring only five bytes of memory per symbol. Storing all 128 low-resolution characters in the American Standard Code for Information Interchange (ASCII), which includes every letter in the English alphabet, the numerals 0 through 9, and common punctuation symbols, required just 640 bytes of memory—a tiny fraction of, for example, the Apple II’s 64 kilobytes of onboard memory.

    But there are tens of thousands of Chinese characters, and a 5-by-7 grid was too small to make them legible. Chinese required a grid of 16 by 16 or larger—i.e., at least 32 bytes of memory (256 bits) per character. Were one to imagine a font containing 70,000 low-resolution Chinese characters, the total memory requirement would exceed two megabytes. Even a font containing only 8,000 of the most common Chinese characters would require approximately 256 kilobytes just to store the bitmaps. That was four times the total memory capacity of most off-the-shelf personal computers in the early 1980s.
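    A quick back-of-the-envelope check of those figures, in the same rough decimal units the article uses:

      # Back-of-the-envelope check of the memory figures quoted above.
      ASCII_GLYPH_BYTES = 5                  # a 5 x 7 glyph is 35 bits, padded to 5 bytes
      CHINESE_GLYPH_BYTES = 16 * 16 // 8     # 256 bits = 32 bytes per character

      print(128 * ASCII_GLYPH_BYTES)                     # 640 bytes for all of low-res ASCII
      print(70_000 * CHINESE_GLYPH_BYTES / 1_000_000)    # 2.24, i.e. over two megabytes
      print(8_000 * CHINESE_GLYPH_BYTES / 1_000)         # 256.0, i.e. roughly 256 kilobytes
      print(8_000 * CHINESE_GLYPH_BYTES / (64 * 1024))   # ~3.9, roughly four Apple IIs' worth of RAM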

    As serious as these memory challenges were, the most taxing problems confronting low-res Chinese font production in the 1970s and 1980s were ones of aesthetics and design. Long before anyone sat down with a program like Gridmaster, the lion’s share of work took place off the computer, using pen, paper, and correction fluid.

    Designers spent years trying to fashion bitmaps that fulfilled the low-memory requirements and preserved a modicum of calligraphic elegance. Among those who created this character set, whether by hand-drawing drafts of bitmaps for specific Chinese characters or digitizing them using Gridmaster, were Lily Huan-Ming Ling (凌焕銘) and Ellen Di Giovanni.
    Draft bitmap drawings of Chinese characters for the Sinotype III font.
    LOUIS ROSENBLUM COLLECTION, STANFORD UNIVERSITY LIBRARY SPECIAL COLLECTIONS

    The core problem that designers faced was translating between two radically different ways of writing Chinese: the hand-drawn character, produced with pen or brush, and the bitmap glyph, produced with an array of pixels arranged on two axes. Designers had to decide how (and whether) they were going to try to re-create certain orthographic features of handwritten Chinese, such as entrance strokes, stroke tapering, and exit strokes.

    In the case of the Sinotype III font, the process of designing and digitizing low-resolution Chinese bitmaps was thoroughly documented. One of the most fascinating archival sources from this period is a binder full of grids with hand-drawn hash marks all over them—sketches that would later be digitized into bitmaps for many thousands of Chinese characters. Each of these characters was carefully laid out and, in most cases, edited by Louis Rosenblum and GARF, using correction fluid to erase any “bits” the editor disagreed with. Over top of the initial set of green hash marks, then, a second set of red hash marks indicated the “final” draft. Only then did the work of data entry begin.
    A close-up of a draft bitmap drawing of bei (背, back, rear) showing edits made using correction fluid.
    LOUIS ROSENBLUM COLLECTION, STANFORD UNIVERSITY LIBRARY SPECIAL COLLECTIONS

    Given the sheer number of bitmaps that the team needed to design—at least 3,000 (and ideally many more) if the machine had any hopes of fulfilling consumers’ needs—one might assume that the designers looked for ways to streamline their work. One way they could have done this, for example, would have been to duplicate Chinese radicals—the base components of a character—when they appeared in roughly the same location, size, and orientation from one character to another. When producing the many dozens of common Chinese characters containing the “woman radical” (女), for example, the team at GARF could have (and, in theory, should have) created just one standard bitmap, and then replicated it within every character in which that radical appeared.

    No such mechanistic decisions were made, however, as the archival materials show. On the contrary, Louis Rosenblum insisted that designers adjust each of these components—often in nearly imperceptible ways—to ensure they were in harmony with the overall character in which they appeared.

    In the bitmaps for juan (娟, graceful) and mian (娩, to deliver), for example—each of which contains the woman radical—that radical has been changed ever so slightly. In the character juan, the middle section of the woman radical occupies a horizontal span of six pixels, as compared with five pixels in the character mian. At the same time, however, the bottom-right curve of the woman radical extends outward just one pixel further in the character mian, and in the character juan that stroke does not extend at all.
    The bitmap characters for juan (娟, graceful) and mian (娩, to deliver) from the Sinotype III font, recreated by the author.
    LOUIS ROSENBLUM COLLECTION, STANFORD UNIVERSITY LIBRARY SPECIAL COLLECTIONS

    Across the entire font, this level of precision was the rule rather than the exception.

    When we juxtapose the draft bitmap drawings against their final forms, we see that more changes have been made. In the draft version of luo (罗, collect, net), for example, the bottom-left stroke extends downward at a perfect 45° angle before tapering into the digitized version of an outstroke. In the final version, however, the curve has been “flattened,” beginning at 45° but then leveling out.
    A comparison of two draft versions of the character luo (罗, collect, net).
    LOUIS ROSENBLUM COLLECTION, STANFORD UNIVERSITY LIBRARY SPECIAL COLLECTIONS

    Despite the seemingly small space in which designers had to work, they had to make a staggering number of choices. And every one of these decisions affected every other decision they made for a specific character, since adding even one pixel often changed the overall horizontal and vertical balance.

    The unforgiving size of the grid impinged upon the designers’ work in other, unexpected ways. We see this most clearly in the devilish problem of achieving symmetry. Symmetrical layouts—which abound in Chinese characters—were especially difficult to represent in low-resolution frameworks because, by the rules of mathematics, creating symmetry requires odd-sized spatial zones. Bitmap grids with even dimensions (such as the 16-by-16 grid) made symmetry impossible. GARF managed to achieve symmetry by, in many cases, using only a portion of the overall grid: just a 15-by-15 region within the overall 16-by-16 grid. This reduced the amount of usable space even further.
    Symmetry and asymmetry in the characters shan (山, mountain), zhong (中, middle), ri (日, sun), and tian (田, field).
    LOUIS ROSENBLUM COLLECTION, STANFORD UNIVERSITY LIBRARY SPECIAL COLLECTIONS
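    A small illustration of why the even-sized grid fights symmetry (my own sketch, not GARF’s): a mirror-symmetric stroke needs a single centre column to reflect around, and only an odd width has one.

      # An axis of vertical mirror symmetry must sit on one centre column,
      # which exists only when the width is odd.
      for width in (16, 15):
          print(width, "has a single centre column:", width % 2 == 1)

      # Centring a symmetric 15-pixel span inside a 16-pixel row leaves one
      # column unused, which is effectively what working in 15 x 15 means.
      row = "." + ".#.#..###..#.#."   # 1 unused column + a symmetric 15-pixel span
      span = row[1:]
      assert span == span[::-1]       # the 15-pixel span reads the same mirrored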

    The story becomes even more complex when we begin to compare the bitmap fonts created by different companies or creators for different projects. Consider the water radical (氵) as it appeared in the Sinotype III font (below and on the right), as opposed to another early Chinese font created by H.C. Tien (on the left), a Chinese-American psychotherapist and entrepreneur who experimented with Chinese computing in the 1970s and 1980s.
    A comparison of the water radical (氵) as it appeared in the Sinotype III font (right) versus an early Chinese font created by H.C. Tien (left).
    LOUIS ROSENBLUM COLLECTION, STANFORD UNIVERSITY LIBRARY SPECIAL COLLECTIONS

    As minor as the above examples might seem, each represented yet another decision (among thousands) that the GARF design team had to make, whether during the drafting or the digitization phase.

    Low resolution did not stay “low” for long, of course. Computing advances gave rise to ever denser bitmaps, ever faster processing speeds, and ever diminishing costs for memory. In our current age of 4K resolution, retina displays, and more, it may be hard to appreciate the artistry—both aesthetic and technical—that went into the creation of early Chinese bitmap fonts, as limited as they were. But it was problem-solving like this that ultimately made computing, new media, and the internet accessible to one-sixth of the global population.

    Tom Mullaney is a professor of Chinese history at Stanford University, a Guggenheim fellow, and the Kluge Chair in Technology and Society at the Library of Congress. He is the author or lead editor of six books, including The Chinese Typewriter, Your Computer Is on Fire, and the forthcoming The Chinese Computer—the first comprehensive history of Chinese-language computing.
    by Tom Mullaney

    #Chine #Caractères #Bitmap #Histoire_informatique #Tom_Mullaney

  • Troll farms reached 140 million Americans a month on Facebook before 2020 election | MIT Technology Review
    https://www.technologyreview.com/2021/09/16/1035851/facebook-troll-farms-report-us-2020-election

    As of October 2019, around 15,000 Facebook pages with a majority US audience were being run out of Kosovo and Macedonia, known bad actors during the 2016 election.
    Collectively, those troll-farm pages—which the report treats as a single page for comparison purposes—reached 140 million US users monthly and 360 million global users weekly.

    (this dates from a year and a bit ago)

  • The biggest technology failures of 2021 | MIT Technology Review
    https://www.technologyreview.com/2021/12/29/1043061/the-worst-technology-of-2021/?truid=a497ecb44646822921c70e7e051f7f1a

    We’ve never relied more on technology to solve our problems than we do now. Sometimes it works. Vaccines against covid-19 have cut the death toll. We’ve got virus tests and drugs, too.

    But this isn’t the story about what worked in 2021. This is MIT Technology Review’s annual list of cases where innovation went wrong. From the metaverse to Alzheimer’s drugs, the technologies on this list are the ones that didn’t work (or that worked too well), the Eurekas we wish no one had ever had, the inventions spawned by the dark side of the human intellect. Read on.

    Biogen’s Alzheimer’s drug

    The best kind of medicine is inexpensive, safe, and effective. Think of setting a bone in a cast, filling a cavity, or administering a $2 polio vaccine. The worst medicine of 2021 is exactly the opposite. It’s Aduhelm—an Alzheimer’s drug that went on sale in June in the US at a yearly cost of around $56,400, without much evidence it helps patients, but with substantial risk of serious brain swelling.

    The drug, sold by Biogen, is an antibody that attaches to brain plaques. Aduhelm flopped in a large human trial, which showed no concrete benefit to patients with the brain disease. Yet the company and the US Food and Drug Administration decided to move forward in June, over the objections of the agency’s expert advisors. Several resigned. One, Aaron Kesselheim, called the episode “probably the worst drug approval decision in recent US history.”

    Yes, we need new treatments for Alzheimer’s. But this approval marked a concerning trend toward approving drugs using a weaker type of evidence known as “surrogate markers.” Because Aduhelm causes a measurable reduction in brain plaques—a marker of dementia—the FDA concluded there was “reasonable likelihood” it would benefit patients. One problem with such guesswork is that no one knows whether these plaques cause disease or are just among its symptoms.

    Aduhelm, the first new Alzheimer’s drug in 20 years, is already a fiasco. Few patients are getting it, Biogen’s sales are minuscule, and at least one person has died from brain swelling. Since the approval, the company has cut the drug’s price in half, and its research chief has abruptly resigned.

    Read more: “How an Unproven Alzheimer’s Drug Got Approved,” New York Times.

    Zillow’s house-buying algorithm

    “Don’t get high on your own supply” is a familiar business maxim. The real estate listing company Zillow did exactly that, with catastrophic results.

    The company’s real-estate listing site is popular, and so are its computer-generated house values, known as “Zestimates.” The company’s error was using its estimates to purchase homes itself, sight unseen, in order to flip them and collect transaction fees. Zillow soon learned that its algorithm didn’t correctly forecast changes in housing prices. And that wasn’t the only problem.

    Zillow was competing with other digital bidders, known as “iBuyers.” So it did what any house hunter determined to make a deal would do: it overpaid. By this year, Zillow was listing hundreds of homes for less than its own purchase price. In November, the company shuttered its iBuying unit Zillow Offers, cut 2,000 jobs, and took a $500 million write-off in what the Wall Street Journal termed “one of the sharpest recent American corporate retreats.”

    Zillow will stick to its original business of selling advertisements to real estate brokers. Its Zestimates still have a home on the site.

    Read more: “What Went Wrong with Zillow? A Real-Estate Algorithm Derailed Its Big Bet,” Wall Street Journal

    Ransomware

    Ransomware is malicious software that kidnaps a company’s computer files by encrypting them. Criminals then demand money to restore access. It’s a booming business. Ransomware hit a new record in 2021 with more than 500 million attacks, according to cybersecurity company SonicWall.

    The problem came to wider attention on May 7, 2021, when a ransomware group called DarkSide locked the files of Colonial Pipeline, which operates 5,500 miles of gasoline and fuel pipes stretching between Houston and New York. The company quickly paid more than $4 million in Bitcoin, but the disruption still caused temporary chaos at gas stations on the US East Coast.

    By attacking critical infrastructure, the gang drew more attention than it expected. The FBI tracked and seized back about half the Bitcoin ransom, and DarkSide later announced on its website that it was going out of business.

    As long as people pay ransoms, however, the criminals will be back.

    Space tourism

    If you’ve ever been to the Louvre in Paris, you’ve seen the crowds of wealthy tourists waving iPhones at the Mona Lisa, even if they can barely see it. The famous painting is now just a bucket-list item. Get there, snap a selfie, and then on to the next “experience.”

    Now a snapshot floating above planet Earth is what’s on the wish list for a few billionaires and their buddies. It’s called “space tourism,” but we wonder what the point is. Wikipedia defines it as “human space travel for recreational purposes.”

    It’s not exactly new: the first paying customer launched in 1984 on the space shuttle. But this year the trend expanded in clouds of burnt fuel as Virgin Galactic founder Richard Branson and then Jeff Bezos, the founder of Amazon, each rode vehicles up to the edges of space.

    It’s all about an exclusive experience. But, like lots of favorite tourist spots, it could soon get crowded up there.

    Blue Origin, the space company started by Bezos, plans an “orbital reef,” a kind of office park circling the planet where people rent space to make films. On Virgin’s website, Branson says the reason for his space plane—with rides costing $200,000 and up—is to get “millions of kids all over the world” excited about “the possibility of them going to space one day.” Get your selfie sticks ready.

    Beauty filters

    This year, Facebook rebranded itself as “Meta,” signaling Mark Zuckerberg’s bet on the emerging virtual worlds of work and play. The appeal of digital reality is that you can be anyone and do anything.

    But early experience with one form of augmented reality at scale shows that different isn’t always better. We’re talking about beauty filters—apps that let people, often young girls, smooth their skin, thin their noses, and enlarge their eyes in digital images. These apps are not just gimmicks, like those that give you bunny ears. For some young women, they enforce false images they can’t live up to. The message kids are getting is not “Be yourself.”

    Beauty apps are available on Snapchat, TikTok, and Meta’s Instagram—and millions are using them. Meta has already barred some apps that encourage extreme weight loss or plastic surgery, acknowledging some problems. But this year a whistleblower, Frances Haugen, stepped forward to say that Zuckerberg’s company had further data showing that addictive use of Instagram—constantly posting images, seeking likes, and making comparisons—“harms children” and creates “a toxic environment for teens.”

    People feel bad when they use it, but they can’t stop. Beauty filters that make people look good but feel unhappy are a troubling start for the metaverse.

    Read more: “Beauty filters are changing the way young girls see themselves,” MIT Technology Review

    by Antonio Regalado

    #Technologie #Echec #Antisocial

  • The metaverse has a groping problem already | MIT Technology Review
    https://www.technologyreview.com/2021/12/16/1042516/the-metaverse-has-a-groping-problem

    But not everything has been warm and fuzzy. According to Meta, on November 26, a beta tester reported something deeply troubling: she had been groped by a stranger on Horizon Worlds.

    #facebook #meta #metaverse #meta_metaverse #agression_sexuelle #vr #réalité_virtuelle #sécurité #insécurité

  • How #Facebook and #Google fund global misinformation | MIT Technology Review
    https://www.technologyreview.com/2021/11/20/1039076/facebook-google-disinformation-clickbait

    The tech giants are paying millions of dollars to the operators of clickbait pages, bankrolling the deterioration of #information ecosystems around the world.

    #putaclic #démocraties #états-unis « #leadership »

  • Why you should be more concerned about internet shutdowns | MIT Technology Review
    https://www.technologyreview.com/2021/09/09/1035237/internet-shutdowns-censorship-exponential-jigsaw-google

    Deliberate internet shutdowns enacted by governments around the world are increasing in frequency and sophistication, according to a recent report. The study, published by Google’s Jigsaw project with the digital rights nonprofit Access Now and the censorship measurement company Censored Planet, says internet shutdowns are growing “exponentially”: out of nearly 850 shutdowns documented over the last 10 years, 768 have happened since 2016.

    “Internet shutdown” describes a category of activity to curtail access to information. I think when most people use the term, they’re referring to this total shutdown of the internet—which indeed we see, especially in certain countries over the last several years. But there is a spectrum of threats that are subtler but, in some ways, just as damaging as a total internet blackout. As this international consensus grows against complete internet shutdowns, we’re seeing an increase in this subtler, more targeted, and more low-grade shutting down and censorship.

  • Mathematicians are deploying algorithms to stop gerrymandering | MIT Technology Review
    https://www.technologyreview.com/2021/08/12/1031567/mathematicians-algorithms-stop-gerrymandering/?truid=a497ecb44646822921c70e7e051f7f1a

    With the 2020 US Census data release, states start the process of redrawing district maps. New computational tools will help hold politicians to account.
    by Siobhan Roberts, August 12, 2021
    conceptual illustration of a map being cut up and taped together
    Alexander Glandien

    The maps for US congressional and state legislative races often resemble electoral bestiaries, with bizarrely shaped districts emerging from wonky hybrids of counties, precincts, and census blocks.

    It’s the drawing of these maps, more than anything—more than voter suppression laws, more than voter fraud—that determines how votes translate into who gets elected. “You can take the same set of votes, with different district maps, and get very different outcomes,” says Jonathan Mattingly, a mathematician at Duke University in the purple state of North Carolina. “The question is, if the choice of maps is so important to how we interpret these votes, which map should we choose, and how should we decide if someone has done a good job in choosing that map?”
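    A toy numerical example of that point (mine, not from the article): give the same 50 voters, 30 for party A and 20 for party B, to two different five-district maps, and A’s result swings from a sweep to a minority of seats.

      # Toy illustration: identical votes, two different maps, very different outcomes.
      def seats_for_a(districts):
          """Count districts in which party A holds a majority."""
          return sum(1 for d in districts if d.count("A") > len(d) // 2)

      # Map 1: A's 60% of the vote is spread evenly, so A wins every district 6 to 4.
      map1 = [["A"] * 6 + ["B"] * 4 for _ in range(5)]

      # Map 2: A voters are packed into two lopsided districts and cracked elsewhere,
      # so B wins three of five districts despite losing the overall vote.
      map2 = [["B"] * 6 + ["A"] * 4 for _ in range(3)] + [["A"] * 9 + ["B"] for _ in range(2)]

      print(seats_for_a(map1), "of 5 seats for A")   # 5
      print(seats_for_a(map2), "of 5 seats for A")   # 2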

    Over recent months, Mattingly and like-minded mathematicians have been busy in anticipation of a data release expected today, August 12, from the US Census Bureau. Every decade, new census data launches the decennial redistricting cycle—state legislators (or sometimes appointed commissions) draw new maps, moving district lines to account for demographic shifts.

    In preparation, mathematicians are sharpening new algorithms—open-source tools, developed over recent years—that detect and counter gerrymandering, the egregious practice giving rise to those bestiaries, whereby politicians rig the maps and skew the results to favor one political party over another. Republicans have openly declared that with this redistricting cycle they intend to gerrymander a path to retaking the US House of Representatives in 2022.

    Lizard politics

    The term “gerrymander” dates to 1812, when a Massachusetts district drawn to the advantage of Governor Elbridge Gerry was so strangely shaped that it was likened to a salamander. Thus, to “gerrymander” is to manipulate district boundaries with a political agenda, and thereby manipulate election outcomes.

    The use of computers to generate and gerrymander electoral maps became relatively common in the 1990s, although early redistricting software was prohibitively expensive, costing $500,000 to $1 million. Now the industry standard is Maptitude, made by Caliper. When the first Maptitude for Redistricting package was released, in the late 1990s, it cost $2,999. The current price ranges from $1,000 to $10,000, depending on the user’s needs.

    That the technology had advanced by leaps and bounds since the previous redistricting cycle only supercharged the outcome. “It made the gerrymanders drawn that year so much more lasting and enduring than any other gerrymanders in our nation’s history,” says David Daley, a journalist who has chronicled partisan gerrymandering. “It’s the sophistication of the computer software, the speed of the computers, the amount of data available, that makes it possible for partisan mapmakers to put their maps through 60 or 70 different iterations and to really refine and optimize the partisan performance of those maps.”

    As Michael Li, a redistricting expert at the Brennan Center for Justice at New York University’s law school, puts it: “What used to be a dark art is now a dark science.” And when manipulated maps are implemented in an election, he says, they are nearly impossible to overcome.

    “The five justices on the Supreme Court are the only ones who seemed to have trouble seeing how the math and models worked,” says Li. “State and other federal courts managed to apply it—this was not beyond the intellectual ability of the courts to handle, any more than a complex sex discrimination case is, or a complex securities fraud case. But five justices of the Supreme Court said, ‘This is too hard for us.’”

    “They also said, ‘This is not for us to fix—this is for the states to fix; this is for Congress to fix; it’s not for us to fix,’” says Li.
    Will it matter?

    As Daley sees it, the Supreme Court decision gives state lawmakers “a green light and no speed limit when it comes to the kind of partisan gerrymanders that they can enact when map-making later this month.” At the same time, he says, “the technology has improved to such a place that we can now use [it] to see through the technology-driven gerrymanders that are created by lawmakers.”
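    One common approach behind the detection tools Daley alludes to (not spelled out in this excerpt) is ensemble comparison: generate a large number of neutral, computer-drawn maps, see how many seats each would have produced from the same votes, and ask whether the enacted map is an extreme outlier. A heavily simplified sketch, reusing the toy electorate from above; real tools add contiguity, population balance, and other legal constraints:

      import random

      # Heavily simplified ensemble test: is the enacted map an outlier among
      # thousands of randomly drawn maps of the same voters?
      random.seed(0)
      voters = ["A"] * 30 + ["B"] * 20

      def seats_for_a(districts):
          return sum(1 for d in districts if d.count("A") > len(d) // 2)

      def random_map(voters, n_districts=5):
          shuffled = random.sample(voters, len(voters))
          size = len(voters) // n_districts
          return [shuffled[i * size:(i + 1) * size] for i in range(n_districts)]

      ensemble = [seats_for_a(random_map(voters)) for _ in range(10_000)]
      enacted = 2   # seats for A under the packed-and-cracked toy map above
      share = sum(s <= enacted for s in ensemble) / len(ensemble)
      print(f"Share of neutral maps at least as bad for A: {share:.2%}")
      # A vanishingly small share is the statistical signature of a gerrymander.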

    #Election #Manipulation #Démocratie #Gerrymandering

  • TikTok changed the shape of some people’s faces without asking | MIT Technology Review
    https://www.technologyreview.com/2021/06/10/1026074/tiktok-mandatory-beauty-filter-bug/?truid=a497ecb44646822921c70e7e051f7f1a

    Users noticed what appeared to be a beauty filter they hadn’t requested—and which they couldn’t turn off.
    by Abby Ohlheiser, June 10, 2021
    A user opening TikTok on his iPhone
    Lorenzo Di Cola/NurPhoto via AP

    “That’s not my face,” Tori Dawn thought, after opening TikTok to make a video in late May. The jaw reflected back on the screen was wrong, slimmer and more feminine. And when they waved their hand in front of the camera, blocking most of their face from the lens, their jaw appeared to pop back to normal. Was their skin also a little softer?

    On further investigation, it seemed as if the image was being run through a beauty filter in the TikTok app. Normally, Dawn keeps those filters off in livestreams and videos to around 320,000 followers. But as they flipped around the app’s settings, there was no way to disable the effect: it seemed to be permanently in place, subtly feminizing Dawn’s features.

    “My face is pretty androgynous and I like my jawline,” Dawn said in an interview. “So when I saw that it was popping in and out, I’m like ‘why would they do that, why?’ This is one of the only things that I like about my face. Why would you do that?”

    Beauty filters are now a part of life online, allowing users to opt in to changing the face they present to the world on social media. Filters can widen eyes, plump up lips, apply makeup, and change the shape of the face, among other things. But it’s usually a choice, not forced on users—which is why Dawn and others who encountered this strange effect were so angry and disturbed by it.

    Dawn told their followers about it in a video. “As long as that’s still a thing,” Dawn said, showing their jaw popping in and out on screen, “I don’t feel comfortable making videos because this is not what I look like, and I don’t know how to fix it.” The video got more than 300,000 views, they said, and was shared and duetted by other users who noticed the same thing.

    congrats tiktok I am super uncomfortable and disphoric now cuz of whatever the fuck this shit is

    “Is that why I’ve been kind of looking like an alien lately?” said one.

    “Tiktok. Fix this,” said another.

    Videos like these circulated for days in late May, as a portion of TikTok’s users looked into the camera and saw a face that wasn’t their own. As the videos spread, many users wondered whether the company was secretly testing out a beauty filter on some users.
    An odd, temporary issue

    I’m a TikTok lurker, not a maker, so it was only after seeing Dawn’s video that I decided to see if the effect appeared on my own camera. Once I started making a video, the change to my jaw shape was obvious. I suspected, but couldn’t tell for sure, that my skin had been smoothed as well. I sent a video of it in action to coworkers and my Twitter followers, asking them to open the app and try the same thing on their own phones: from their responses, I learned that the effect only seemed to impact Android phones. I reached out to TikTok, and the effect stopped appearing two days later. The company later acknowledged in a short statement that there was an issue that had been resolved, but did not provide further details.

    On the surface it was an odd, temporary issue that affected some users and not others. But it was also forcibly changing people’s appearances—an important glitch for an app that is used by around 100 million people in the US. So I also sent the video to Amy Niu, a PhD candidate at the University of Wisconsin who studies the psychological impact of beauty filters. She pointed out that in China, and some other places, some apps add a subtle beauty filter by default. When Niu uses apps like WeChat, she can only really tell that a filter is in place by comparing a photo of herself using her camera to the image produced in the app.

    A couple months ago, she said, she downloaded the Chinese version of TikTok, called Douyin. “When I turned off the beauty mode and filters, I can still see an adjustment to my face,” she said.

    Having beauty filters in an app isn’t necessarily a bad thing, Niu said, but app designers have a responsibility to consider how those filters will be used, and how they will change the people who use them. Even if it was a temporary bug, it could have an impact on how people see themselves.

    “People’s internalization of beauty standards, their own body image or whether they will intensify their appearance concerns,” Niu said, are all considerations.

    For Dawn, the strange facial effect was just one more thing to add to the list of frustrations with TikTok: “It’s been very reminiscent of a relationship with a narcissist because they love bomb you one minute, they’re giving you all these followers and all this attention and it feels so good,” they said. “And then for some reason they just, they’re just like, we’re cutting you off.”

    #Beauty_filters #Image_de_soi #Filtres #Image

  • Here’s what China wants from its next space station | MIT Technology Review
    https://www.technologyreview.com/2021/04/30/1024371/china-space-station-tianhe-1-iss

    “From my perspective, the Chinese government’s number one goal is its own survival,” says Hines. “And so these projects are very much aligned with those domestic interests, even if they don’t make a ton of sense in broader geopolitical considerations or have much in the way of scientific contributions.”

  • Police in Ogden, Utah and small cities around the US are using these surveillance technologies | MIT Technology Review
    https://www.technologyreview.com/2021/04/19/1022893/police-surveillance-tactics-cameras-rtcc/?truid=a497ecb44646822921c70e7e051f7f1a

    Police departments want to know as much as they legally can. But does ever-greater surveillance technology serve the public interest?

    At a conference in New Orleans in 2007, Jon Greiner, then the chief of police in Ogden, Utah, heard a presentation by the New York City Police Department about a sophisticated new data hub called a “real time crime center.” Reams of information rendered in red and green splotches, dotted lines, and tiny yellow icons appeared as overlays on an interactive map of New York City: Murders. Shootings. Road closures.

    In the early 1990s, the NYPD had pioneered a system called CompStat that aimed to discern patterns in crime data; it has since been widely adopted by large police departments around the country. With the real time crime center, the idea was to go a step further: What if dispatchers could use the department’s vast trove of data to inform the police response to incidents as they occurred?

    In 2021, it might be simpler to ask what can’t be mapped. Law enforcement agencies today have access to powerful new engines of data processing and association. Police agencies in major cities are already using facial recognition to identify suspects—sometimes falsely—and deploying predictive policing to define patrol routes.

    Around the country, the expansion of police technology has followed a similar pattern, driven more by conversations between police agencies and their vendors than between police and the public they serve. The question is: where do we draw the line? And who gets to decide?

    #Police #Prédiction #Smart_city

  • Police in Ogden, Utah and small cities around the US are using these surveillance technologies
    https://www.technologyreview.com/2021/04/19/1022893/police-surveillance-tactics-cameras-rtcc

    Police departments want to know as much as they legally can. But does ever-greater surveillance technology serve the public interest? At a conference in New Orleans in 2007, Jon Greiner, then the chief of police in Ogden, Utah, heard a presentation by the New York City Police Department about a sophisticated new data hub called a “real time crime center.” Reams of information rendered in red and green splotches, dotted lines, and tiny yellow icons appeared as overlays on an interactive map (...)

    #NYPD #algorithme #CCTV #police #criminalité #prédiction #vidéo-surveillance #surveillance

    ##criminalité

  • The new lawsuit that shows facial recognition is officially a civil rights issue
    https://www.technologyreview.com/2021/04/14/1022676/robert-williams-facial-recognition-lawsuit-aclu-detroit-police

    Robert Williams, who was wrongfully arrested because of a faulty facial recognition match, is asking for the technology to be banned. On January 9, 2020, Detroit police drove to the suburb of Farmington Hill and arrested Robert Williams in his driveway while his wife and young daughters looked on. Williams, a Black man, was accused of stealing watches from a luxury store. He was held overnight in jail. During questioning, an officer showed Williams a picture of a suspect. His response, he (...)

    #algorithme #CCTV #biométrie #procès #racisme #facial #reconnaissance #biais #discrimination (...)

    ##ACLU

  • The new lawsuit that shows facial recognition is officially a civil rights issue | MIT Technology Review
    https://www.technologyreview.com/2021/04/14/1022676/robert-williams-facial-recognition-lawsuit-aclu-detroit-police/?truid=a497ecb44646822921c70e7e051f7f1a

    Robert Williams, who was wrongfully arrested because of a faulty facial recognition match, is asking for the technology to be banned.

    The news: On January 9, 2020, Detroit Police wrongfully arrested a Black man named Robert Williams due to a bad match from their department’s facial recognition system. Two more instances of false arrests have since been made public. Both are also Black men, and both have taken legal action to try rectifying the situation. Now Williams is following in their path and going further—not only by suing the Detroit Police for his wrongful arrest, but by trying to get the technology banned.

    The details: On Tuesday, the ACLU and the University of Michigan Law School’s Civil Rights Litigation Initiative filed a lawsuit on behalf of Williams, alleging that his arrest violated Williams’s Fourth Amendment rights and was in defiance of Michigan’s civil rights law. The suit requests compensation, greater transparency about the use of facial recognition, and that the Detroit Police Department stop using all facial recognition technology, either directly or indirectly.

    The significance: Racism within American law enforcement makes the use of facial recognition, which has been proven to misidentify Black people at much higher rates, even more concerning.

    #Reconnaissance_faciale #Racisme #Droits_humains #Intelligence_artificielle

  • Facebook’s ad algorithms are still excluding women from seeing jobs
    https://www.technologyreview.com/2021/04/09/1022217/facebook-ad-algorithm-sex-discrimination

    Its ad-delivery system is excluding women from opportunities without regard to their qualifications. That would be illegal under US employment law. Facebook is withholding certain job ads from women because of their gender, according to the latest audit of its ad service. The audit, conducted by independent researchers at the University of Southern California (USC), reveals that Facebook’s ad-delivery system shows different job ads to women and men even though the jobs require the same (...)

    #Facebook #sexisme #algorithme #biais #discrimination #femmes #travail

  • The NYPD used Clearview’s controversial facial recognition tool. Here’s what you need to know
    https://www.technologyreview.com/2021/04/09/1022240/clearview-ai-nypd-emails

    Newly-released emails show New York police have been widely using the controversial Clearview AI facial recognition system—and making misleading statements about it. It’s been a busy week for Clearview AI, the controversial facial recognition company that uses 3 billion photos scraped from the web to power a search engine for faces. On April 6, Buzzfeed News published a database of over 1,800 entities—including state and local police and other taxpayer-funded agencies such as health-care (...)

    #Clearview #algorithme #CCTV #biométrie #police #facial #reconnaissance #vidéo-surveillance #surveillance (...)

    ##NYPD

  • How beauty filters took over social media
    https://www.technologyreview.com/2021/04/02/1021635/beauty-filters-young-girls-augmented-reality-social-media

    The most widespread use of augmented reality isn’t in gaming: it’s the face filters on social media. The result? A mass experiment on girls and young women. Veronica started using filters to edit pictures of herself on social media when she was 14 years old. She remembers everyone in her middle school being excited by the technology when it became available, and they had fun playing with it. “It was kind of a joke,” she says. “People weren’t trying to look good when they used the filters.” (...)

    #TikTok #Facebook #Instagram #MySpace #Snapchat #algorithme #technologisme #beauté #femmes #jeunesse #selfie (...)

    ##beauté ##SocialNetwork

  • How to poison the data that Big Tech uses to surveil you
    https://www.technologyreview.com/2021/03/05/1020376/resist-big-tech-surveillance-data

    Algorithms are meaningless without good data. The public can exploit that to demand change. Every day, your life leaves a trail of digital breadcrumbs that tech giants use to track you. You send an email, order some food, stream a show. They get back valuable packets of data to build up their understanding of your preferences. That data is fed into machine-learning algorithms to target you with ads and recommendations. Google cashes your data in for over $120 billion a year of ad revenue. (...)

    #Google #algorithme #activisme #[fr]Règlement_Général_sur_la_Protection_des_Données_(RGPD)[en]General_Data_Protection_Regulation_(GDPR)[nl]General_Data_Protection_Regulation_(GDPR) #BigData (...)

    ##[fr]Règlement_Général_sur_la_Protection_des_Données__RGPD_[en]General_Data_Protection_Regulation__GDPR_[nl]General_Data_Protection_Regulation__GDPR_ ##microtargeting

    • Data Leverage: A Framework for Empowering the Public in its Relationship with Technology Companies

      https://arxiv.org/pdf/2012.09995.pdf

      Many powerful computing technologies rely on implicit and explicit data contributions from the public. This dependency suggests a potential source of leverage for the public in its relationship with technology companies: by reducing, stopping, redirecting, or otherwise manipulating data contributions, the public can reduce the effectiveness of many lucrative technologies. In this paper, we synthesize emerging research that seeks to better understand and help people action this data leverage. Drawing on prior work in areas including machine learning, human-computer interaction, and fairness and accountability in computing, we present a framework for understanding data leverage that highlights new opportunities to change technology company behavior related to privacy, economic inequality, content moderation and other areas of societal concern. Our framework also points towards ways that policymakers can bolster data leverage as a means of changing the balance of power between the public and tech companies.

  • Is the new boom in digital art sales a genuine opportunity or a trap? | MIT Technology Review
    https://www.technologyreview.com/2021/03/25/1021215/nft-artists-scams-profit-environment-blockchain

    Artists are jumping into a market that will pay thousands for their work. But they’re running into scams, environmental concerns, and crypto hype.

    Anna Podedworna first heard about NFTs a month or so ago, when a fellow artist sent her an Instagram message trying to convince her to get on board. She found it really off-putting, like a pitch for a pyramid scheme. He had the best of intentions, she thought: NFTs, or non-fungible tokens, are basically just a way of selling and buying anything digital, including art, that’s supported by cryptocurrency. Despite Podedworna’s initial reaction, she started researching whether they might provide some alternative income.

    She’s still on the fence, but NFTs have become an unavoidable subject for anyone earning a living as a creative person online. Some promise that NFTs are part of a digital revolution that will democratize fame and give creators control. Others point to the environmental impact of crypto and worry about unrealistic expectations set by, say, the news that digital artist Beeple had sold a JPG of his collected works for $69 million in a Christie’s auction.

    Newcomers must untangle practical, logistical, and ethical conundrums if they want to enter the fray before the current wave of interest passes. And there’s a question lingering in the background: Is the NFT craze benefiting digital artists, or are artists helping to make wealthy cryptocurrency holders even richer?

    #NFT #Art_numérique #Cryptoart #Arnaque #Cryptomonnaies #Idéologie_propriétaire

  • Scientists plan to drop limits on how far human embryos are grown in the lab | MIT Technology Review
    https://www.technologyreview.com/2021/03/16/1020879/scientists-14-day-limit-stem-cell-human-embryo-research/?truid=a497ecb44646822921c70e7e051f7f1a

    As technology for manipulating embryonic life accelerates, researchers want to get rid of their biggest stop sign.

    Antonio Regalado
    March 16, 2021

    Pushing the limits: For the last 40 years, scientists have agreed never to allow human embryos to develop beyond two weeks in their labs. Now a key scientific body is ready to do away with the 14-day limit. The International Society for Stem Cell Research has prepared draft recommendations to move such research out of a category of “prohibited” scientific activities and into a class of research that can be permitted after ethics review and depending on national regulations.

    Why? Scientists are motivated to grow embryos longer in order to study—and potentially manipulate—the development process. They believe discoveries could come from studying embryos longer, for example improvements to IVF or finding clues to the causes of birth defects. But such techniques raise the possibility of someday gestating animals outside the womb until birth, a concept called ectogenesis. And the long-term growth of embryos could create a platform to explore the genetic engineering of humans.

    #Cellules_souches #Biotechnologies #Embryons_humains #Hubris

  • He got Facebook hooked on AI. Now he can’t fix its misinformation addiction, by Karen Hao | MIT Technology Review
    https://www.technologyreview.com/2021/03/11/1020600/facebook-responsible-ai-misinformation

    The company’s AI algorithms gave it an insatiable habit for lies and hate speech. Now the man who built them can’t fix the problem.

    #facebook #AI