• Troll farms reached 140 million Americans a month on Facebook before 2020 election | MIT Technology Review

    As of October 2019, around 15,000 Facebook pages with a majority-US audience were being run out of Kosovo and Macedonia, both known sources of bad actors during the 2016 election.
    Collectively, those troll-farm pages—which the report treats as a single page for comparison purposes—reached 140 million US users monthly and 360 million global users weekly.

    (this dates from a year and a bit ago)

  • A startup says it’s begun releasing particles in the atmosphere, in an effort to tweak the climate | MIT Technology Review

    A startup claims it has launched weather balloons that may have released reflective sulfur particles in the stratosphere, potentially crossing a controversial barrier in the field of solar geoengineering.

    Geoengineering refers to deliberate efforts to manipulate the climate by reflecting more sunlight back into space, mimicking a natural process that occurs in the aftermath of large volcanic eruptions. In theory, spraying sulfur and similar particles in sufficient quantities could potentially ease global warming.

    It’s not technically difficult to release such compounds into the stratosphere. But scientists have mostly (though not entirely) refrained from carrying out even small-scale outdoor experiments. And it’s not clear that any have yet injected materials into that specific layer of the atmosphere in the context of geoengineering-related research.

    That’s in part because it’s highly controversial. Little is known about the real-world effect of such deliberate interventions at large scales, but they could have dangerous side effects. The impacts could also be worse in some regions than others, which could provoke geopolitical conflicts.

    #géoingénierie #climat #startup #écologie #solutionnisme_technologique #ingénieurs

  • The biggest technology failures of 2022 | MIT Technology Review

    We’re back with our latest list of the worst technologies of the year. Think of these as anti-breakthroughs, the sort of mishaps, misuses, miscues, and bad ideas that lead to technology failure. This year’s disastrous accomplishments range from deadly pharmaceutical chemistry to a large language model that was jeered off the internet.

    One theme that emerges from our disaster list is how badly policy—the rules, processes, institutions, and ideals that govern technology’s use—can let us down. In China, a pervasive system of pandemic controls known as “zero covid” came to an abrupt and unexpected end. On Twitter, Elon Musk intentionally destroyed the site’s governing policies, replacing them with a puckish and arbitrary mix of free speech, personal vendettas, and appeals to the right wing of US politics. In the US, policy failures were evident in the highest levels of overdose deaths ever recorded, many of them due to a 60-year-old chemical compound: fentanyl.

    The impact of these technologies could be measured in the number of people affected. More than a billion people in China are now being exposed to the virus for the first time; 335 million on Twitter are watching Musk’s antics; and fentanyl killed 70,000 in the US. In each of these messes, there are important lessons about why technology fails.

    #Technologie #Régulation

  • The AI myth Western lawmakers get wrong | MIT Technology Review

    While the US and the EU may differ on how to regulate tech, their lawmakers seem to agree on one thing: the West needs to ban AI-powered social scoring.

    As they understand it, social scoring is a practice in which authoritarian governments—specifically China—rank people’s trustworthiness and punish them for undesirable behaviors, such as stealing or not paying back loans. Essentially, it’s seen as a dystopian superscore assigned to each citizen. 

    The EU is currently negotiating a new law called the AI Act, which will ban member states, and maybe even private companies, from implementing such a system.

    The trouble is, it’s “essentially banning thin air,” says Vincent Brussee, an analyst at the Mercator Institute for China Studies, a German think tank.

    Back in 2014, China announced a six-year plan to build a system rewarding actions that build trust in society and penalizing the opposite. Eight years on, it has only just released a draft law that tries to codify past social credit pilots and guide future implementation. 

    There have been some contentious local experiments, such as one in the small city of Rongcheng in 2013, which gave every resident a starting personal credit score of 1,000 that could be increased or decreased according to how their actions were judged. People are now able to opt out, and the local government has removed some controversial criteria. 

    But these have not gained wider traction elsewhere and do not apply to the entire Chinese population. There is no countrywide, all-seeing social credit system with algorithms that rank people.

    As my colleague Zeyi Yang explains, “the reality is, that terrifying system doesn’t exist, and the central government doesn’t seem to have much appetite to build it, either.” 

    What has been implemented is mostly pretty low-tech. It’s a “mix of attempts to regulate the financial credit industry, enable government agencies to share data with each other, and promote state-sanctioned moral values,” Zeyi writes. 

    Kendra Schaefer, a partner at Trivium China, a Beijing-based research consultancy, who compiled a report on the subject for the US government, couldn’t find a single case in which data collection in China led to automated sanctions without human intervention. The South China Morning Post found that in Rongcheng, human “information gatherers” would walk around town and write down people’s misbehavior using a pen and paper. 

    The myth originates from a pilot program called Sesame Credit, developed by Chinese tech company Alibaba. This was an attempt to assess people’s creditworthiness using customer data at a time when the majority of Chinese people didn’t have a credit card, says Brussee. The effort became conflated with the social credit system as a whole in what Brussee describes as a “game of Chinese whispers.” And the misunderstanding took on a life of its own. 

    The irony is that while US and European politicians depict this as a problem stemming from authoritarian regimes, systems that rank and penalize people are already in place in the West. Algorithms designed to automate decisions are being rolled out en masse and used to deny people housing, jobs, and basic services. 

    For example, in Amsterdam, authorities have used an algorithm to rank young people from disadvantaged neighborhoods according to their likelihood of becoming a criminal. They claim the aim is to prevent crime and help offer better, more targeted support. 

    But in reality, human rights groups argue, it has increased stigmatization and discrimination. The young people who end up on this list face more stops from police, home visits from authorities, and more stringent supervision from school and social workers.

    It’s easy to take a stand against a dystopian algorithm that doesn’t really exist. But as lawmakers in both the EU and the US strive to build a shared understanding of AI governance, they would do better to look closer to home. Americans do not even have a federal privacy law that would offer some basic protections against algorithmic decision making. 

    There is also a dire need for governments to conduct honest, thorough audits of the way authorities and companies use AI to make decisions about our lives. They might not like what they find—but that makes it all the more crucial for them to look.

    #Chine #Crédit_social

  • China just announced a new social credit law. Here’s what it says. | MIT Technology Review

    The West has largely gotten China’s social credit system wrong. But draft legislation introduced in November offers a more accurate picture of the reality.
    By Zeyi Yang
    November 22, 2022


    It’s easier to talk about what China’s social credit system isn’t than what it is. Ever since 2014, when China announced a six-year plan to build a system to reward actions that build trust in society and penalize the opposite, it has been one of the most misunderstood things about China in Western discourse. Now, with new documents released in mid-November, there’s an opportunity to correct the record.

    For most people outside China, the words “social credit system” conjure up an instant image: a Black Mirror–esque web of technologies that automatically score all Chinese citizens according to what they did right and wrong. But the reality is, that terrifying system doesn’t exist, and the central government doesn’t seem to have much appetite to build it, either. 

    Instead, the system that the central government has been slowly working on is a mix of attempts to regulate the financial credit industry, enable government agencies to share data with each other, and promote state-sanctioned moral values—however vague that last goal in particular sounds. There’s no evidence yet that this system has been abused for widespread social control (though it remains possible that it could be wielded to restrict individual rights). 

    While local governments have been much more ambitious with their innovative regulations, causing more controversies and public pushback, the countrywide social credit system will still take a long time to materialize. And China is now closer than ever to defining what that system will look like. On November 14, several top government agencies collectively released a draft law on the Establishment of the Social Credit System, the first attempt to systematically codify past experiments on social credit and, theoretically, guide future implementation. 

    Yet the draft law still left observers with more questions than answers. 

    “This draft doesn’t reflect a major sea change at all,” says Jeremy Daum, a senior fellow of the Yale Law School Paul Tsai China Center who has been tracking China’s social credit experiment for years. It’s not a meaningful shift in strategy or objective, he says. 

    Rather, the law stays close to local rules that Chinese cities like Shanghai have released and enforced in recent years on things like data collection and punishment methods—just giving them a stamp of central approval. It also doesn’t answer lingering questions that scholars have about the limitations of local rules. “This is largely incorporating what has been out there, to the point where it doesn’t really add a whole lot of value,” Daum adds. 

    So what is China’s current system actually like? Do people really have social credit scores? Is there any truth to the image of artificial-intelligence-powered social control that dominates Western imagination? 

    First of all, what is “social credit”?
    When the Chinese government talks about social credit, the term covers two different things: traditional financial creditworthiness and “social creditworthiness,” which draws data from a larger variety of sectors.

    The former is a familiar concept in the West: it documents individuals’ or businesses’ financial history and predicts their ability to pay back future loans. Because the market economy in modern China is much younger, the country has lacked a reliable system for looking up other people’s and companies’ financial records. Building such a system, aimed at helping banks and other market players make business decisions, is an essential and not very controversial mission. Most Chinese policy documents refer to this type of credit with a specific word: “征信” (zhengxin, which some scholars translate as “credit reporting”).

    The latter—“social creditworthiness”—is what raises more eyebrows. Basically, the Chinese government is saying there needs to be a higher level of trust in society, and to nurture that trust, the government is fighting corruption, telecom scams, tax evasion, false advertising, academic plagiarism, product counterfeiting, pollution … almost everything. And not only will individuals and companies be held accountable; legal institutions and government agencies will be as well.

    This is where things start to get confusing. The government seems to believe that all these problems are loosely tied to a lack of trust, and that building trust requires a one-size-fits-all solution. So just as financial credit scoring helps assess a person’s creditworthiness, it thinks, some form of “social credit” can help people assess others’ trustworthiness in other respects. 

    As a result, so-called “social” credit scoring is often lumped together with financial credit scoring in policy discussions, even though it’s a much younger field with little precedent in other societies. 

    What makes it extra confusing is that in practice, local governments have sometimes mixed up these two. So you may see a regulation talking about how non-financial activities will hurt your financial credit, or vice versa. (In just one example, the province of Liaoning said in August that it’s exploring how to reward blood donation in the financial credit system.) 

    But on a national level, the country seems to want to keep the two mostly separate, and in fact, the new draft law addresses them with two different sets of rules.

    Has the government built a system that is actively regulating these two types of credit?
    The short answer is no. Initially, back in 2014, the plan was to have a national system tracking all “social credit” ready by 2020. Now it’s almost 2023, and the long-anticipated legal framework for the system was just released in the November 2022 draft law. 

    That said, the government has mostly figured out the financial part. The zhengxin system—first released to the public in 2006 and significantly updated in 2020—is essentially the Chinese equivalent of American credit bureaus’ scoring and is maintained by the country’s central bank. It records the financial history of 1.14 billion Chinese individuals (and gives them credit scores), as well as almost 100 million companies (though it doesn’t give them scores). 

    On the social side, however, regulations have been patchy and vague. To date, the national government has built only a system focused on companies, not individuals, which aggregates data on corporate regulation compliance from different government agencies. Kendra Schaefer, head of tech policy research at the Beijing-based consultancy Trivium China, has described it in a report for the US government’s US-China Economic and Security Review Commission as “roughly equivalent to the IRS, FBI, EPA, USDA, FDA, HHS, HUD, Department of Energy, Department of Education, and every courthouse, police station, and major utility company in the US sharing regulatory records across a single platform.” The result is openly searchable by any Chinese citizen on a recently built website called Credit China.

    But there is some data on people and other types of organizations there, too. The same website also serves as a central portal for over three dozen (sometimes very specific) databases, including lists of individuals who have defaulted on a court judgment, Chinese universities that are legitimate, companies that are approved to build robots, and hospitals found to have conducted insurance fraud. Nevertheless, the curation seems so random that it’s hard to see how people could use the portal as a consistent or comprehensive source of data.

    How will a social credit system affect Chinese people’s everyday lives?
    The idea is to be both a carrot and a stick. So an individual or company with a good credit record in all regulatory areas should receive preferential treatment when dealing with the government—like being put on a priority list for subsidies. At the same time, individuals or companies with bad credit records will be punished by having their information publicly displayed, and they will be banned from participating in government procurement bids, consuming luxury goods, and leaving the country.

    The government published a comprehensive list detailing the permissible punishment measures last year. Some measures are more controversial; for example, individuals who have failed to pay compensation decided by the court are restricted from traveling by plane or sending their children to costly private schools, on the grounds that these constitute luxury consumption. The new draft law upholds a commitment that this list will be updated regularly. 

    So is there a centralized social credit score computed for every Chinese citizen?
    No. Contrary to popular belief, there’s no central social credit score for individuals. And frankly, the Chinese central government has never talked about wanting one. 

    So why do people, particularly in the West, think there is? 
    Well, since the central government has given little guidance on how to build a social credit system that works in non-financial areas, even in the latest draft law, it has opened the door for cities and even small towns to experiment with their own solutions. 

    As a result, many local governments are introducing pilot programs that seek to define what social credit regulation looks like, and some have become very contentious.

    The best example is Rongcheng, a small city with only half a million in population that has implemented probably the most famous social credit scoring system in the world. In 2013, the city started giving every resident a base personal credit score of 1,000 that can be influenced by their good and bad deeds. For example, in a 2016 rule that has since been overhauled, the city decided that “spreading harmful information on WeChat, forums, and blogs” meant subtracting 50 points, while “winning a national-level sports or cultural competition” meant adding 40 points. In one extreme case, one resident lost 950 points in the span of three weeks for repeatedly distributing letters online about a medical dispute.
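    The mechanics described above amount to a simple points ledger. A minimal sketch of such a scheme, using only the rule names and point values quoted in this article (the real municipal rules were far more extensive, and these function and variable names are purely illustrative):

    ```python
    # Illustrative sketch of a Rongcheng-style points ledger.
    # Rule names and point values come from the examples quoted in the
    # article; everything else here is an assumption for illustration.

    BASE_SCORE = 1000

    # Deeds mapped to point adjustments (negative = deduction).
    RULES = {
        "spreading harmful information online": -50,
        "winning a national-level sports or cultural competition": +40,
    }

    def apply_deeds(deeds, base=BASE_SCORE):
        """Return the score after applying each recorded deed in order."""
        score = base
        for deed in deeds:
            score += RULES.get(deed, 0)  # unlisted deeds leave the score unchanged
        return score

    resident_score = apply_deeds([
        "winning a national-level sports or cultural competition",
        "spreading harmful information online",
    ])
    print(resident_score)  # 1000 + 40 - 50 = 990
    ```

    The point is how little machinery is involved: a base number, a lookup table of sanctioned and rewarded behaviors, and simple addition, which is consistent with the article's broader argument that the system is low-tech.
    
    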

    Such scoring systems have had very limited impact in China, since they have never been elevated to provincial or national levels. But when news of pilot programs like Rongcheng’s spread to the West, it understandably rang an alarm for activist groups and media outlets—some of which mistook it as applicable to the whole population. Prominent figures like George Soros and Mike Pence further amplified that false idea. 

    How do we know those pilot programs won’t become official rules for the whole country?
    No one can be 100% sure of that, but it’s worth remembering that the Chinese central government has actually been pushing back on local governments’ rogue actions when it comes to social credit regulations. 

    In December 2020, China’s state council published a policy guidance responding to reports that local governments were using the social credit system as justification for punishing even trivial actions like jaywalking, recycling incorrectly, and not wearing masks. The guidance asks local governments to punish only behaviors that are already illegal under China’s current legislative system and not expand beyond that. 

    “When [many local governments] encountered issues that are hard to regulate through business regulations, they hoped to draw support from solutions involving credits,” said Lian Weiliang, an official at China’s top economic planning authority, at a press conference on December 25, 2020. “These measures are not only incompatible with the rule of law, but also incompatible with the need of building creditworthiness in the long run.” 

    And the central government’s pushback seems to have worked. In Rongcheng’s case, the city updated its local regulation on social credit scores and allowed residents to opt out of the scoring program; it also removed some controversial criteria for score changes. 

    Is there any advanced technology, like artificial intelligence, involved in the system?
    For the most part, no. This is another common myth about China’s social credit system: people imagine that to keep track of over a billion people’s social behaviors, there must be a mighty central algorithm that can collect and process the data.

    But that’s not true. Since there is no central system scoring everyone, there’s not even a need for that kind of powerful algorithm. Experts on China’s social credit system say that the entire infrastructure is surprisingly low-tech. While Chinese officials sometimes name-drop technologies like blockchain and artificial intelligence when talking about the system, they never talk in detail about how these technologies might be utilized. If you check out the Credit China website, it’s no more than a digitized library of separate databases. 

    “There is no known instance in which automated data collection leads to the automated application of sanctions without the intervention of human regulators,” wrote Schaefer in the report. Sometimes the human intervention can be particularly primitive, like the “information gatherers” in Rongcheng, who walk around the village and write down fellow villagers’ good deeds by pen.

    However, as the national system is being built, it does appear there’s the need for some technological element, mostly to pool data among government agencies. If Beijing wants to enable every government agency to make enforcement decisions based on records collected by other government agencies, that requires building a massive infrastructure for storing, exchanging, and processing the data. 

    To this end, the latest draft law talks about the need to use “diverse methods such as statistical methods, modeling, and field certification” to conduct credit assessments and combine data from different government agencies. “It gives only the vaguest hint that it’s a little more tech-y,” says Daum.

    How are Chinese tech companies involved in this system?
    Because the system is so low-tech, the involvement of Chinese tech companies has been peripheral. “Big tech companies and small tech companies … play very different roles, and they take very different strategies,” says Shazeda Ahmed, a postdoctoral researcher at Princeton University, who spent several years in China studying how tech companies are involved in the social credit system.

    Smaller companies, contracted by city or provincial governments, largely built the system’s tech infrastructure, like databases and data centers. On the other hand, large tech companies, particularly social platforms, have helped the system spread its message. Alibaba, for instance, helps the courts deliver judgment decisions through the delivery addresses it collects via its massive e-commerce platform. And Douyin, the Chinese version of TikTok, partnered with a local court in China to publicly shame individuals who defaulted on court judgments. But these tech behemoths aren’t really involved in core functions, like contributing data or compiling credit appraisals.

    “They saw it as almost like a civic responsibility or corporate social responsibility: if you broke the law in this way, we will take this data from the Supreme People’s Court, and we will punish you on our platform,” says Ahmed.

    There are also Chinese companies, like Alibaba’s fintech arm Ant Group, that have built private financial credit scoring products. But the result, like Alibaba’s Sesame Credit, is more like a loyalty rewards program, according to several scholars. Since the Sesame Credit score is mostly calculated on the basis of users’ purchase history and lending activities on Alibaba’s own platforms, the score is not reliable enough to be used by external financial institutions and has very limited effect on individuals.

    Given all this, should we still be concerned about the implications of building a social credit system in China?
    Yes. Even if there isn’t a scary algorithm that scores every citizen, the social credit system can still be problematic.

    The Chinese government did emphasize that all social-credit-related punishment has to adhere to existing laws, but laws themselves can be unjust in the first place. “Saying that the system is an extension of the law only means that it is no better or worse than the laws it enforces. As China turns its focus increasingly to people’s social and cultural lives, further regulating the content of entertainment, education, and speech, those rules will also become subject to credit enforcement,” Daum wrote in a 2021 article.

    Moreover, “this was always about making people honest to the government, and not necessarily to each other,” says Ahmed. When moral issues like honesty are turned into legal issues, the state ends up having the sole authority in deciding who’s trustworthy. One tactic Chinese courts have used in holding “discredited individuals” accountable is encouraging their friends and family to report their assets in exchange for rewards. “Are you making society more trustworthy by ratting out your neighbor? Or are you building distrust in your very local community?” she asks.

    But at the end of the day, the social credit system does not (yet) exemplify abuse of advanced technologies like artificial intelligence, and it’s important to evaluate it on the facts. The government is currently seeking public feedback on the November draft document for one month, though there’s no expected date for when it will pass and become law. It could still take years to see the final product of a nationwide social credit system.

    #Chine #Crédit_social

  • YouTube is launching Shorts videos for your TV | MIT Technology Review

    YouTube Shorts, the video website’s TikTok-like feature, has become one of its latest obsessions, with more than 1.5 billion users watching short-form content on their devices every month.

    And now YouTube wants to expand that number by bringing full-screen, vertical videos to your TV, MIT Technology Review can reveal.

    From today, users worldwide will see a row of Shorts videos high up on the display in YouTube’s smart TV apps. The videos, which will be integrated into the standard homepage of YouTube’s TV app and sit alongside longer, landscape videos, are surfaced on the basis of previous watch history, much as in the YouTube Shorts tab on cell phones and the YouTube website.

    “It is challenging taking a format that’s traditionally a mobile format and finding the right way to bring it to life on TV,” says Brynn Evans, UX director for the YouTube app on TV.

    The time spent developing the TV app integration is testament to the importance of Shorts to YouTube, says Melanie Fitzgerald, UX director at YouTube Community and Shorts. “Seeing the progression of short-form video over several years, from Vine to TikTok to Instagram and to YouTube, it’s very clear this format is here to stay.”

    One major challenge the designers behind YouTube Shorts’ TV integration had to consider was the extent to which Shorts videos should be allowed to autoplay. At present, the initial design will require viewers to manually scroll through Shorts videos once they’re playing and move on to the next one by pressing the up and down arrows on their TV remote.

    “One piece we were playing with was how much do we want this to be a fully lean-back experience, where you turn it on and Shorts cycle through,” says Evans, whose team decided against that option at launch but does not rule out adding it in future iterations.

    The design presents a single Shorts video at a time in the center of the TV screen, surrounded by white space that changes color depending on the overall look of the video.

    One thing YouTube didn’t test—at least as of now? Filling the white space with ads. YouTube spokesperson Susan Cadrecha tells MIT Tech Review that the experience will initially be ad-free. The spokesperson did say that ads would likely be added at some point, but how those would be integrated into the Shorts on TV experience was not clear.

    Likewise, the YouTube Shorts team is investigating how to integrate comments into TV viewing for future iterations of the app. “For a mobile format like this, you’d be able to maybe use your phone as a companion and leave some comments and they can appear on TV,” says Evans.

    YouTube’s announcement follows TikTok’s own move into developing a TV app. First launched in February 2021 in France, Germany, and the UK and expanded into the United States and elsewhere in November that year, TikTok’s smart TV app hasn’t largely altered how the main app works. (Nor, arguably, has it become an irreplaceable part of people’s living room habits.)

    However, the shift to fold Shorts into the YouTube experience on TV suggests how important YouTube feels the short-form model is to its future. “It’s very clearly a battle for attention across devices,” says Andrew A. Rosen, founder and principal at the media analysis firm Parqor. “The arrival of Shorts and TikTok on connected TVs makes the competitive landscape that much more complex.” Having ceded a head start to TikTok, YouTube now seems determined to play catchup.

    The team behind the initiative still isn’t fully certain how adding short-form video into the YouTube on TV experience will be embraced. “It still remains to be seen how and when people will consume Shorts,” admits Evans—though she tells MIT Tech Review that informal polling and qualitative surveys, plus tests within the Google community, suggest “a very positive impression of Shorts from people who are watching YouTube on TV.” (YouTube declined to share its own data on how much time the average user currently spends watching YouTube content on TV but did point to Nielsen data showing that viewers worldwide spent 700 million hours a day on that activity.)

    “Will it be a game-changer in the living room? Yes and no,” says Rosen. “Yes in the sense that it will turn 15-second to 60-second clips into competition for every legacy media streaming service, and Netflix is betting billions on content to be consumed on those same TVs. No, because it’s not primed to become a new default of consumption.”
    by Chris Stokel-Walker

    #YouTube #Shorts #Télévision #Médias #Média_formats

  • Here’s how a Twitter engineer says it will break in the coming weeks | MIT Technology Review

    One insider says the company’s current staffing isn’t able to sustain the platform.

    Chris Stokel-Walker
    November 8, 2022

    On November 4, just hours after Elon Musk fired half of the 7,500 employees previously working at Twitter, some people began to see small signs that something was wrong with everyone’s favorite hellsite. And they saw it through retweets.

    Twitter introduced retweets in 2009, turning an organic thing people were already doing—pasting someone else’s username and tweet, preceded by the letters RT—into a software function. In the years since, the retweet and its distant cousin the quote tweet (which launched in April 2015) have become two of the most common mechanics on Twitter.
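    The manual convention that the 2009 feature replaced can be sketched in a couple of lines (the handle and text below are made up for illustration):

    ```python
    # Sketch of the pre-2009 "manual retweet" convention described above:
    # users pasted another account's handle and tweet, preceded by "RT".

    def manual_retweet(username: str, text: str) -> str:
        """Format a tweet the way early users reshared content by hand."""
        return f"RT @{username}: {text}"

    print(manual_retweet("example_user", "Hello, world"))
    # RT @example_user: Hello, world
    ```

    That string-pasting habit is exactly what resurfaced for some users when the software retweet briefly broke.
    
    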

    But on Friday, a few users who pressed the retweet button saw the years roll back to 2009. Manual retweets, as they were called, were back.

    The return of the manual retweet wasn’t Elon Musk’s latest attempt to appease users. Instead, it was the first public crack in the edifice of Twitter’s code base—a blip on the seismometer that warns of a bigger earthquake to come.

    A massive tech platform like Twitter is built upon many interdependent parts. “The larger catastrophic failures are a little more titillating, but the biggest risk is the smaller things starting to degrade,” says Ben Krueger, a site reliability engineer who has more than two decades of experience in the tech industry. “These are very big, very complicated systems.” Krueger says one 2017 presentation from Twitter staff includes a statistic suggesting that more than half of the back-end infrastructure was dedicated to storing data.

    While many of Musk’s detractors may hope the platform goes through the equivalent of thermonuclear destruction, the collapse of something like Twitter happens gradually. For those who know what to look for, gradual breakdowns are a warning sign that a larger crash could be imminent. And that’s what’s happening now.
    It’s the small things

    Whether it’s manual RTs appearing for a moment before retweets slowly morph into their standard form, ghostly follower counts that race ahead of the number of people actually following you, or replies that simply refuse to load, small bugs are appearing at Twitter’s periphery. Even Twitter’s rules, which Musk linked to on November 7, went offline temporarily under the load of millions of eyeballs. In short, it’s becoming unreliable.

    Estimates from Bot Sentinel suggest that more than 875,000 users deactivated their accounts between October 27 and November 1, while half a million more were suspended.

    “Sometimes you’ll get notifications that are a little off,” says one engineer currently working at Twitter, who’s concerned about the way the platform is reacting after vast swathes of his colleagues who were previously employed to keep the site running smoothly were fired. (That last sentence is why the engineer has been granted anonymity to talk for this story.) After struggling with downtime during its “Fail Whale” days, Twitter eventually became lauded for its team of site reliability engineers, or SREs. Yet this team has been decimated in the aftermath of Musk’s takeover. “It’s small things, at the moment, but they do really add up as far as the perception of stability,” says the engineer.

    The small suggestions of something wrong will amplify and multiply as time goes on, he predicts—in part because the skeleton staff remaining to handle these issues will quickly burn out. “Round-the-clock is detrimental to quality, and we’re already kind of seeing this,” he says.

    Twitter’s remaining engineers have largely been tasked with keeping the site stable over the last few days, since the new CEO decided to get rid of a significant chunk of the staff maintaining its code base. As the company tries to return to some semblance of normalcy, more of their time will be spent addressing Musk’s (often taxing) whims for new products and features, rather than keeping what’s already there running.

    This is particularly problematic, says Krueger, for a site like Twitter, which can have unforeseen spikes in user traffic and interest. Krueger contrasts Twitter with online retail sites, where companies can prepare for big traffic events like Black Friday with some predictability. “When it comes to Twitter, they have the possibility of having a Black Friday on any given day at any time of the day,” he says. “At any given day, some news event can happen that can have significant impact on the conversation.” Responding to that is harder to do when you lay off up to 80% of your SREs—a figure Krueger says has been bandied about within the industry but which MIT Technology Review has been unable to confirm. The Twitter engineer agreed that the percentage sounded “plausible.”

    That engineer doesn’t see a route out of the issue—other than reversing the layoffs (which the company has reportedly already attempted to roll back somewhat). “If we’re going to be pushing at a breakneck pace, then things will break,” he says. “There’s no way around that. We’re accumulating technical debt much faster than before—almost as fast as we’re accumulating financial debt.”
    The list grows longer

    He presents a dystopian future where issues pile up as the backlog of maintenance tasks and fixes grows longer and longer. “Things will be broken. Things will be broken more often. Things will be broken for longer periods of time. Things will be broken in more severe ways,” he says. “Everything will compound until, eventually, it’s not usable.”

    Twitter’s collapse into an unusable wreck is some time off, the engineer says, but the telltale signs of process rot are already there. It starts with the small things: “Bugs in whatever part of whatever client they’re using; whatever service in the back end they’re trying to use. They’ll be small annoyances to start, but as the back-end fixes are being delayed, things will accumulate until people will eventually just give up.”

    Krueger says that Twitter won’t blink out of life, but we’ll start to see a greater number of tweets not loading, and accounts coming into and out of existence seemingly at a whim. “I would expect anything that’s writing data on the back end to possibly have slowness, timeouts, and a lot more subtle types of failure conditions,” he says. “But they’re often more insidious. And they also generally take a lot more effort to track down and resolve. If you don’t have enough engineers, that’s going to be a significant problem.”

    The juddering manual retweets and faltering follower counts are indications that this is already happening. Twitter engineers have designed fail-safes the platform can fall back on, so that functionality doesn’t go totally offline; instead, cut-down versions are served. That’s what we’re seeing, says Krueger.
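    The fallback behavior Krueger describes—serving a degraded version of a feature rather than letting it fail outright—is a standard site-reliability pattern. A minimal sketch of it, assuming hypothetical function names, cache contents, and user IDs (this is an illustration of the pattern, not Twitter’s actual code):

```python
# A minimal sketch of the "serve a degraded version" fail-safe pattern.
# The service names, cache contents, and IDs here are all hypothetical.

_CACHE = {12345: 1041}  # last follower counts written by a periodic batch job

def fetch_follower_count(user_id):
    """Full path: query the live counter service (simulated as overloaded)."""
    raise TimeoutError("counter service overloaded")

def cached_follower_count(user_id):
    """Degraded path: return a possibly stale cached value."""
    return _CACHE.get(user_id, 0)

def follower_count_with_fallback(user_id):
    # Try the accurate, expensive path first; rather than failing the
    # whole page render, fall back to the stale cached value.
    try:
        return fetch_follower_count(user_id)
    except (TimeoutError, ConnectionError):
        return cached_follower_count(user_id)

print(follower_count_with_fallback(12345))  # → 1041 (stale but usable)
```

    A degraded path like this would also explain symptoms such as “ghostly” follower counts: the page still renders, but the number shown can lag behind reality.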

    Alongside the minor malfunctions, the Twitter engineer believes that there’ll be significant outages on the horizon, thanks in part to Musk’s drive to reduce Twitter’s cloud computing server load in an attempt to claw back up to $3 million a day in infrastructure costs. Reuters reports that this project, which came from Musk’s war room, is called the “Deep Cuts Plan.” One of Reuters’s sources called the idea “delusional,” while Alan Woodward, a cybersecurity professor at the University of Surrey, says that “unless they’ve massively overengineered the current system, the risk of poorer capacity and availability seems a logical conclusion.”
    Brain drain

    Meanwhile, when things do go kaput, there’s no longer the institutional knowledge to quickly fix issues as they arise. “A lot of the people I saw who were leaving after Friday have been there nine, 10, 11 years, which is just ridiculous for a tech company,” says the Twitter engineer. As those individuals walked out of Twitter offices, decades of knowledge about how its systems worked disappeared with them. (Those within Twitter, and those watching from the sidelines, have previously argued that Twitter’s knowledge base is overly concentrated in the minds of a handful of programmers, some of whom have been fired.)

    To be fair, much of that institutional knowledge was already aging out of relevance before Musk took over.

    Unfortunately, teams stripped back to their bare bones (according to those remaining at Twitter) include the tech writers’ team. “We had good documentation because of [that team],” says the engineer. No longer. When things go wrong, it’ll be harder to find out what has happened.

    Getting answers will be harder externally as well. The communications team has been cut down from somewhere between 80 and 100 people to just two, according to one former team member MIT Technology Review spoke to. “There’s too much for them to do, and they don’t speak enough languages to deal with the press as they need to,” says the engineer.

    When MIT Technology Review reached out to Twitter for this story, the email went unanswered.

    Musk’s recent criticism of Mastodon, the open-source alternative to Twitter that has piled on users in the days since the entrepreneur took control of the platform, invites the suggestion that those in glass houses shouldn’t throw stones. The Twitter CEO tweeted, then quickly deleted, a post telling users, “If you don’t like Twitter anymore, there is awesome site [sic] called Masterbatedone [sic].” Accompanying the words was a photo of his laptop screen, open to Paul Krugman’s Mastodon profile, showing the economics columnist trying multiple times to post. Despite Musk’s attempt to highlight Mastodon’s unreliability, its success has been remarkable: nearly half a million people have signed up since Musk took over Twitter.

    It’s happening at the same time that the first cracks in Twitter’s edifice are starting to show. It’s just the beginning, expects Krueger. “I would expect to start seeing significant public-facing problems with the technology within six months,” he says. “And I feel like that’s a generous estimate.”
    by Chris Stokel-Walker

    #Twitter #Equipe_technique

  • The smart city is a perpetually unrealized utopia | MIT Technology Review

    While urban theorists somewhat myopically trace the concept of the “smart city” back to the 1990s, when IBM arguably first coined the term, the CAB’s research represents one of the earliest large-scale efforts to model the urban environment through “big data.” Utilizing a combination of computerized data gathering and storage, statistical cluster analysis techniques, aerial-based color infrared photography (what we today call remote sensing), and direct “on the ground” (i.e., driving around the city) validation of the aerial images, the CAB’s analysis was decidedly different from previous attempts. The CAB partitioned the city into clusters representing social-geographic features that sound straight out of today’s social media playbook: “LA singles,” “the urban poor,” “1950s-styled suburbs.” What the cluster analysis truly revealed were correlations between socioeconomic forces that could be used as predictors for which neighborhoods were falling into poverty and “urban blight.”
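    The kind of statistical cluster analysis the CAB pioneered can be illustrated with a toy example: grouping census tracts by socioeconomic variables so that similar neighborhoods fall into the same cluster. The tract names, variables, and numbers below are invented for illustration; the CAB’s actual data and methods were far richer:

```python
import random

# Hypothetical census tracts: (median income in $1,000s, share of renters in %).
# Invented numbers, purely to illustrate the clustering technique.
TRACTS = {
    "A": (32, 78), "B": (35, 74), "C": (30, 81),   # low income, mostly renters
    "D": (88, 22), "E": (95, 18), "F": (90, 25),   # high income, mostly owners
}

def nearest(point, centroids):
    """Index of the centroid closest to point (squared Euclidean distance)."""
    return min(range(len(centroids)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(point, centroids[i])))

def kmeans(points, k, iters=10, seed=1):
    """Bare-bones k-means: assign each point to its nearest centroid,
    then recompute each centroid as the mean of its cluster."""
    centroids = random.Random(seed).sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[nearest(p, centroids)].append(p)
        centroids = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

centroids = kmeans(list(TRACTS.values()), k=2)
labels = {name: nearest(xy, centroids) for name, xy in TRACTS.items()}
# Tracts A, B, C end up in one cluster and D, E, F in the other —
# the machine "discovers" groupings like "the urban poor" vs. "1950s-styled suburbs".
```

    With clusters in hand, an analyst can then look for correlations between cluster membership and outcomes such as falling property values—exactly the kind of predictor the CAB was after.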

    Though innovative for the time, the CAB’s harnessing of punch cards and computer-based databases was not an isolated endeavor. It was part of a much larger set of postwar experiments focused on reimagining the urban through computational processes. The urban theorist Kevin Lynch’s The Image of the City (1960) spurred years of research into cognitive science on how we map typological elements in urban space (paths, edges, nodes, districts, and landmarks). Cyberneticians such as Jay Forrester at MIT sought to apply complex systems dynamics by way of computer simulations to understand the feedback loops within urban development, involving everything from population and housing to the influence of industry on growth. With Forrester, Lynch, and others, the foundations for smart cities were being laid, just as sensing and computing were entering into the public consciousness.

    The contemporary vision of the smart city is by now well known. It is, in the words of IBM, “one of instrumentation, interconnectedness, and intelligence.” “Instrumentation” refers to sensor technologies, while “interconnectedness” describes the integration of sensor data into computational platforms “that allow the communication of such information among various city services.” A smart city is only as good as the imagined intelligence that it either produces or extracts. The larger question, however, is what role human intelligence has in the network of “complex analytics, modeling, optimization, visualization services, and last but certainly not least, AI” that IBM announced. The company actually trademarked the term “smarter cities” in November 2011, underlining the reality that such cities would no longer fully belong to those who inhabited them.

    When we assume that data is more important than the people who created it, we reduce the scope and potential of what diverse human bodies can bring to the “smart city” of the present and future. But the real “smart” city consists not only of commodity flows and information networks generating revenue streams for the likes of Cisco or Amazon. The smartness comes from the diverse human bodies of different genders, cultures, and classes whose rich, complex, and even fragile identities ultimately make the city what it is.

    Chris Salter is an artist and professor of immersive arts at the Zurich University of the Arts. His newest book, Sensing Machines: How Sensors Shape Our Everyday Life, has just been published by MIT Press.

    #Smart_cities #Senseurs #Réseaux #Urbanisme

  • A new vision of artificial intelligence for the people | MIT Technology Review

    Data sovereignty is thus the latest example of Indigenous resistance—against colonizers, against the nation-state, and now against big tech companies. “The nomenclature might be new, the context might be new, but it builds on a very old history,” Kukutai says.

  • How the AI industry profits from catastrophe | MIT Technology Review

    Appen is among dozens of companies that offer data-labeling services for the AI industry. If you’ve bought groceries on Instacart or looked up an employer on Glassdoor, you’ve benefited from such labeling behind the scenes. Most profit-maximizing algorithms, which underpin e-commerce sites, voice assistants, and self-driving cars, are based on deep learning, an AI technique that relies on scores of labeled examples to expand its capabilities. 

    The insatiable demand has created a need for a broad base of cheap labor to manually tag videos, sort photos, and transcribe audio. The market value of sourcing and coordinating that “ghost work,” as it was memorably dubbed by anthropologist Mary Gray and computational social scientist Siddharth Suri, is projected to reach $13.7 billion by 2030.

    Venezuela’s crisis has been a boon for these companies, which suddenly gained some of the cheapest labor ever available. But for Venezuelans like Fuentes, the rise of this fast-growing new industry in her country has been a mixed blessing. On one hand, it’s been a lifeline for those without any other options. On the other, it’s left them vulnerable to exploitation as corporations have lowered their pay, suspended their accounts, or discontinued programs in an ongoing race to offer increasingly low-cost services to Silicon Valley.

    “There are huge power imbalances,” says Julian Posada, a PhD candidate at the University of Toronto who studies data annotators in Latin America. “Platforms decide how things are done. They make the rules of the game.”

    To a growing chorus of experts, the arrangement echoes a colonial past when empires exploited the labor of more vulnerable countries and extracted profit from them, further impoverishing them of the resources they needed to grow and develop.

    It was, of all things, the old-school auto giants that caused the data-labeling industry to explode.

    German car manufacturers, like Volkswagen and BMW, were panicked that the Teslas and Ubers of the world threatened to bring down their businesses. So they did what legacy companies do when they encounter fresh-faced competition: they wrote blank checks to keep up.

    The tech innovation of choice was the self-driving car. The auto giants began pouring billions into its development, says Schmidt, pushing the need for data annotation to new levels.

    Like all AI models built on deep learning, self-driving cars need millions, if not billions, of labeled examples to be taught to “see.” These examples come in the form of hours of video footage: every frame is carefully annotated to identify road markings, vehicles, pedestrians, trees, and trash cans for the car to follow or avoid. But unlike AI models that might categorize clothes or recommend news articles, self-driving cars require the highest levels of annotation accuracy. One too many mislabeled frames can be the difference between life and death.

    For over a decade, Amazon’s crowdworking platform Mechanical Turk, or MTurk, had reigned supreme. Launched in 2005, it was the de facto way for companies to access low-wage labor willing to do piecemeal work. But MTurk was also a generalist platform: as such, it produced varied results and couldn’t guarantee a baseline of quality.

    For some tasks, Scale first runs client data through its own AI systems to produce preliminary labels before posting the results to Remotasks, where human workers correct the errors. For others, according to company training materials reviewed by MIT Technology Review, the company sends the data straight to the platform. Typically, one layer of human workers takes a first pass at labeling; then another reviews the work. Each worker’s pay is tied to speed and accuracy, which pushes them to complete tasks quickly but carefully.
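    The workflow described above—machine pre-labels, a first human pass, a second review pass, and pay tied to both speed and accuracy—might be sketched like this. The field names, rates, and pay formula are invented for illustration; Scale’s real pipeline and pay rules are proprietary:

```python
# A toy sketch of a two-pass labeling review flow with pay tied to speed
# and accuracy. All names, rates, and formulas here are hypothetical.

def review_accuracy(first_pass, second_pass):
    """Share of first-pass labels that the reviewing worker upheld."""
    matches = sum(a == b for a, b in zip(first_pass, second_pass))
    return matches / len(second_pass)

def payout(base_rate, accuracy, seconds_taken, target_seconds=60):
    """Hypothetical pay rule: scale base pay by measured accuracy,
    with a bonus for finishing under a target time."""
    speed_bonus = 1.2 if seconds_taken <= target_seconds else 1.0
    return round(base_rate * accuracy * speed_bonus, 4)

machine_labels = ["car", "car", "tree"]          # preliminary model output
first_pass  = ["car", "pedestrian", "tree"]      # worker corrects the model
second_pass = ["car", "pedestrian", "sign"]      # reviewer overrides one label

acc = review_accuracy(first_pass, second_pass)   # 2 of 3 labels upheld
print(payout(base_rate=0.05, accuracy=acc, seconds_taken=45))  # → 0.04
```

    A formula like this makes concrete why workers race the clock: a disagreement with the reviewer—however the dispute is resolved—directly cuts the payout for the task.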

    Initially, Scale sought contractors in the Philippines and Kenya. Both were natural fits, with histories of outsourcing, populations that speak excellent English, and, crucially, low wages. However, around the same time, competitors such as Appen, Hive Micro, and Mighty AI’s Spare5 began to see a dramatic rise in signups from Venezuela, according to Schmidt’s research. By mid-2018, an estimated 200,000 Venezuelans had registered for Hive Micro and Spare5, making up 75% of their respective workforces.

    The group now pools tasks together. Anytime a task appears in one member’s queue, that person copies the task-specific URL to everyone else. Anyone who clicks it can then claim the task as their own, even if it never showed up in their own queue. The system isn’t perfect. Each task has a limited number of units, such as the number of images that need to be labeled, which disappear faster when multiple members claim the same task in parallel. But Fuentes says so long as she’s clicked the link before it goes away, the platform will let her complete whatever units are left, and Appen will pay. “We all help each other out,” she says.

    The group also keeps track of which client IDs should be avoided. Some clients are particularly harsh in grading task performance, which can cause a devastating account suspension. Nearly every member of the group has experienced at least one, Fuentes says. When it happens, you lose your access not only to new tasks but to any earnings that haven’t been withdrawn.

    When it happened to Fuentes, she received an email saying she had completed a task with “dishonest answers.” When she appealed, customer service confirmed it was an administrative error. But it still took months of pleading, using Google Translate to write messages in English, before her account was reinstated, according to communications reviewed by MIT Technology Review. (“We … have several initiatives in place to increase the response time,” Golden says. “The reality is that we have thousands of requests a day and respond based on priority.”)

    Simala Leonard, a computer science student at the University of Nairobi who studies AI and worked several months on Remotasks, says the pay for data annotators is “totally unfair.” Google’s and Tesla’s self-driving-car programs are worth billions, he says, and algorithm developers who work on the technology are rewarded with six-figure salaries.

    In parallel with the rise of platforms like Scale, newer data-labeling companies have sought to establish a higher standard for working conditions. They bill themselves as ethical alternatives, offering stable wages and benefits, good on-the-job training, and opportunities for career growth and promotion.

    But this model still accounts for only a tiny slice of the market. “Maybe it improves the lives of 50 workers,” says Milagros Miceli, a PhD candidate at the Technical University of Berlin who studies two such companies, “but it doesn’t mean that this type of economy as it’s structured works in the long run.”

    Such companies are also constrained by players willing to race to the bottom. To keep their prices competitive, the firms similarly source workers from impoverished and marginalized populations—low-income youth, refugees, people with disabilities—who remain just as vulnerable to exploitation, Miceli says.

    #Intelligence_artificelle #Annotation #Tags #Etiquetage #Nouvelle_exploitation #Data_colonialisme

  • Online “auctions” of women are just the latest attacks on Muslims in India | MIT Technology Review

    Qurat-Ul-Ain Rehbar, a journalist based in Indian-administered Kashmir, was traveling when a friend called to tell her that she had been put up for sale. She was told that someone had taken a publicly available picture and created a profile, describing her as the “deal of the day” in a fake auction. 

    Rehbar was one of more than 100 Muslim women whose names and photographs were displayed on the fake auction site, which was hosted anonymously on GitHub in early January. 

    Following a massive social media backlash, GitHub took down the website, which was called “Bulli Bai”—a slur against Muslim women. But the event was only one of the latest online incidents targeting Muslims in India—and Muslim women in particular, many of whom have been vocal about the rising tide of Hindu nationalism since Prime Minister Narendra Modi came to power in 2014. 

    In July of last year, another fake auction site, called “Sulli Deals,” displayed profiles of more than 80 Muslim women. In the social audio app Clubhouse, Hindu men are “auctioning” off parts of Muslim women’s bodies and openly issuing rape threats. And in December, Hindu leaders organized an event in the city of Haridwar calling for genocide against Muslims. Soon after, videos containing provocative speeches went viral on social media.  

    In the first few weeks of January, police made arrests related to both online auction sites. But all told, critics say, the Indian government is not doing nearly enough to stem the targeting of Muslim women online. “If our government continues to remain silent in the face of this kind of hate-mongering, the message it will send out is that such criminal behavior targeting minorities will go unpunished,” says Geeta Seshu, founder of the Free Speech Collective, an initiative of journalists, lawyers, and civil society activists.

    An independent hate-crime tracker documented more than 400 hate crimes against Muslims in India over four years, until the Twitter account of the researcher running it was suspended in 2021.

    Muslim women targeted by the auction sites have included journalists, activists, lawyers, politicians, radio hosts, pilots, and scholars; they’re active on social media and speak out about issues, and specifically about rising Islamophobia in India. “I think the attack was to silence those who are vocal on social media,” Rehbar says. “This was a hate crime against Muslim women particularly.”

    Law enforcement has moved slowly in these cases, especially last year’s Sulli Deals case, says N.S. Nappinai, a lawyer with the Supreme Court of India and founder of Cyber Saathi, an initiative focusing on cybersecurity. “If law enforcement had acted faster, the copycat may possibly have been avoided,” Nappinai says.

    The slow action is part of a larger pattern, says Meenakshi Ganguly, South Asia director at Human Rights Watch. Authorities are quick to accuse government critics, she says, but “hate speech and violent actions by government supporters are seldom prosecuted.” 

    Social media companies, which have the ability to take down offensive posts and stem misinformation, are not filling the void. “Tech companies take down content based on their community guidelines and local laws. In this case both were violated,” says Krishnesh Bapat, a Centre for Communication Governance Digital Fellow at the Internet Freedom Foundation in Delhi. “GitHub, to the best of my knowledge, does not proactively take down content. It does so only after it receives a complaint and took longer in this case.” GitHub did not respond to a request for comment about its policies.

    In India almost all forms of online harassment fall under the general category of cyberbullying. India’s Information Technology Act, 2000, commonly known as the Cyber Law, governs online abuse. The act was intended to address e-commerce but was adjusted in 2008 to cover cybercrimes as well. Harassment can also fall under the country’s overall penal code, says Nappinai, which can help protect victims in serious cases. 

    Nevertheless, some say the country’s online laws need revision. Anushka Jain, a lawyer with the Internet Freedom Foundation, believes the digital world has changed too much for the law to be effective. “Some of the provisions of the [Cyber] Act have become redundant and incapable of addressing the currently persisting issues and rapidly evolving changes and threats,” she says. The government, she adds, needs a holistic approach to cyber policy, including stricter laws. 

    In addition to harassment, Muslims in India are also struggling with misinformation online. For example, last September, ID Fresh, a halal-certified food products company owned by a Muslim family, faced a large-scale misinformation campaign on social media claiming that the company mixes cow bones and calf rennet to increase the volume of ready-to-cook batter and urging “every single Hindu” to avoid the products. The company faced a boycott and saw its sales drop; it had to launch its own campaign in response to set the record straight. 

    So far, there seems to be little movement to change the situation from either tech companies or the Indian government. That has left little remedy for victims like commercial pilot Hana Mohsin Khan, who took to Twitter to express her anger when she saw her picture in the January auction. “Muslim women were yet again targeted. Yet again there will be no action,” she wrote. “We are caught in a never ending cycle of anger and anguish. Every. Single. Day.”

    Safina Nabi is an independent multimedia journalist from South Asia based in Kashmir.

    #Islamophobie #Inde #Femmes_enchères #Machisme

  • Why the balance of power in tech is shifting toward workers | MIT Technology Review

    A record number of tech worker unions formed in the US last year. They’re part of a global effort.

    Something has changed for the tech giants. Even as they continue to hold tremendous influence in our daily lives, a growing accountability movement has begun to check their power. The movement is being led, in large part, by tech workers themselves, who are seeking reform of how these companies do business, treat their employees, and conduct themselves as global citizens.

    Concerns and anger over tech companies’ impact in the world is nothing new, of course. What’s changed is that workers are increasingly getting organized.

    To understand how advocacy and organizing within the tech industry work now, you have to go back to 2018, the year of the Techlash. Three important things happened that year. First, a Cambridge Analytica whistleblower came forward with allegations of data misuse at Facebook. Then thousands of Google employees fought against Project Maven, an AI initiative created to enhance military drones. The year culminated in a massive, global Google walkout spurred by the New York Times’s revelation of a $90 million exit payout to Android creator Andy Rubin following allegations of sexual misconduct.

    “The walkout, I think, cleared a space for everybody to scream in the streets,” says Claire Stapleton, one of the organizers.

    Read the full story.

    —Jane Lytvynenko

    #Syndicalisme #Economie_numérique #Plateformes #Travail

  • This company says it’s developing a system that can recognize your face from just your DNA | MIT Technology Review

    A police officer is at the scene of a murder. No witnesses. No camera footage. No obvious suspects or motives. Just a bit of hair on the sleeve of the victim’s jacket. DNA from the cells of one strand is copied and compared against a database. No match comes back, and the case goes cold.

    Corsight AI, a facial recognition subsidiary of the Israeli AI company Cortica, purports to be devising a solution for that sort of situation by using DNA to create a model of a face that can then be run through a facial recognition system. It is a task that experts in the field regard as scientifically untenable.

    Corsight unveiled its “DNA to Face” product in a presentation by chief executive officer Robert Watts and executive vice president Ofer Ronen intended to court financiers at the Imperial Capital Investors Conference in New York City on December 15. It was part of the company’s overall product road map, which also included movement and voice recognition. The tool “constructs a physical profile by analyzing genetic material collected in a DNA sample,” according to a company slide deck viewed by surveillance research group IPVM and shared with MIT Technology Review.
    [Photo: a slide from Corsight’s investor presentation showing its product road map, which features “voice to face,” “DNA to face,” and “movement” as expansions of its face recognition capabilities.]

    Corsight declined a request to answer questions about the presentation and its product road map. “We are not engaging with the press at the moment as the details of what we are doing are company confidential,” Watts wrote in an email.

    But marketing materials show that the company is focused on government and law enforcement applications for its technology. Its advisory board consists only of James Woolsey, a former director of the CIA, and Oliver Revell, a former assistant director of the FBI.


    The science that would be needed to support such a system doesn’t yet exist, however, and experts say the product would exacerbate the ethical, privacy, and bias problems facial recognition technology already causes. More worryingly, it’s a signal of the industry’s ambitions for the future, where face detection becomes one facet of a broader effort to identify people by any available means—even inaccurate ones.

    This story was jointly reported with Don Maye of IPVM, who said, “This presentation was the first time IPVM became aware of a company attempting to commercialize a face recognition product associated with a DNA sample.”
    A checkered past

    Corsight’s idea is not entirely new. Human Longevity, a “genomics-based, health intelligence” company founded by Silicon Valley celebrities Craig Venter and Peter Diamandis, claimed to have used DNA to predict faces in 2017. MIT Technology Review reported then that experts, however, were doubtful. A former employee of Human Longevity said the company can’t pick a person out of a crowd using a genome, and Yaniv Erlich, chief science officer of the genealogy platform MyHeritage, published a response laying out major flaws in the research.

    A small DNA informatics company, Parabon NanoLabs, provides law enforcement agencies with physical depictions of people derived from DNA samples through a product line called Snapshot, which includes genetic genealogy as well as 3D renderings of a face. (Parabon publishes some cases on its website with comparisons between photos of people the authorities are interested in finding and renderings the company has produced.)

    Parabon’s computer-generated composites also come with a set of phenotypic characteristics, like eye and skin color, that are given a confidence score. For example, a composite might say that there’s an 80% chance the person being sought has blue eyes. Forensic artists also amend the composites to create finalized face models that incorporate descriptions of nongenetic factors, like weight and age, whenever possible.

    Parabon’s website claims its software is helping solve an average of one case per week, and Ellen McRae Greytak, the company’s director of bioinformatics, says it has solved over 600 cases in the past seven years, though most are solved with genetic genealogy rather than composite analysis. Greytak says the company has come under criticism for not publishing its proprietary methods and data; she attributes that to a “business decision.”

    Parabon does not package face recognition AI with its phenotyping service, and it stipulates that its law enforcement clients should not use the images it generates from DNA samples as an input into face recognition systems.

    Parabon’s technology “doesn’t tell you the exact number of millimeters between the eyes or the ratio between the eyes, nose, and mouth,” Greytak says. Without that sort of precision, facial recognition algorithms cannot deliver accurate results—but deriving such precise measurements from DNA would require fundamentally new scientific discoveries, she says, and “the papers that have tried to do prediction at that level have not had a lot of success.” Greytak says Parabon only predicts the general shape of someone’s face (though the scientific feasibility of such prediction has also been questioned).

    Police have been known to run forensic sketches based on witness descriptions through facial recognition systems. A 2019 study from Georgetown Law’s Center on Privacy and Technology found that at least half a dozen police agencies in the US “permit, if not encourage” using forensic sketches, either hand drawn or computer generated, as input photos for face recognition systems. AI experts have warned that such a process likely leads to lower levels of accuracy.

    Corsight has also been criticized in the past for exaggerating the capabilities and accuracy of its face recognition system, which it calls the “most ethical facial recognition system for highly challenging conditions,” according to a slide deck available online. In a technology demo for IPVM last November, Corsight CEO Rob Watts said that the system can “identify someone with a face mask—not just with a face mask, but with a ski mask.” IPVM reported that running Corsight’s AI on a masked face returned a 65% confidence score (Corsight’s own measure of how likely it is that a captured face will be matched in its database), and noted that the mask was more accurately described as a balaclava or neck gaiter than as a ski mask with only mouth and eye cutouts.

    Broader issues with face recognition technology’s accuracy have been well-documented (including by MIT Technology Review). They are more pronounced when photographs are poorly lit or taken at extreme angles, and when the subjects have darker skin, are women, or are very old or very young. Privacy advocates and the public have also criticized facial recognition technology, particularly systems like Clearview AI that scrape social media as part of their matching engine.

    Law enforcement use of the technology is particularly fraught—Boston, Minneapolis, and San Francisco are among the many cities that have banned it. Amazon and Microsoft have stopped selling facial recognition products to police groups, and IBM has taken its face recognition software off the market.

    “The idea that you’re going to be able to create something with the level of granularity and fidelity that’s necessary to run a face match search—to me, that’s preposterous,” says Albert Fox Cahn, a civil rights lawyer and executive director of the Surveillance Technology Oversight Project, who works extensively on issues related to face recognition systems. “That is pseudoscience.”

    Dzemila Sero, a researcher in the Computational Imaging Group of Centrum Wiskunde & Informatica, the national research institute for mathematics and computer science in the Netherlands, says the science to support such a system is not yet sufficiently developed, at least not publicly. Sero says the catalog of genes required to produce accurate depictions of faces from DNA samples is currently incomplete, citing Human Longevity’s 2017 study.

    In addition, factors like the environment and aging have substantial effects on faces that can’t be captured through DNA phenotyping, and research has shown that individual genes don’t affect the appearance of someone’s face as much as gender and ancestry do. “Premature attempts to implement this technique would likely undermine trust and support for genomic research and garner no societal benefit,” she told MIT Technology Review in an email.

    Sero has studied the reverse concept of Corsight’s system—“face to DNA” rather than “DNA to face”—by matching a set of 3D photographs with a DNA sample. In a paper in Nature, Sero and her team reported accuracy rates between 80% and 83%. Sero says her work should not be used by prosecutors as incriminating evidence, however, and that “these methods also raise undeniable risks of further racial disparities in criminal justice that warrant caution against premature application of the techniques until proper safeguards are in place.”

    Law enforcement depends on DNA data sets, predominantly the free ancestry website GEDmatch, which was instrumental in the search for the notorious “Golden State Killer.” But even DNA sampling, once considered the only form of scientifically rigorous forensic evidence by the US National Research Council, has recently come under criticism for problems with accuracy.

    Fox Cahn, who is currently suing the New York Police Department to obtain records related to bias in its use of facial recognition technology, says the impact of Corsight’s hypothetical system would be disastrous. “Gaming out the impact this is going to have, it augments every failure case for facial recognition,” says Fox Cahn. “It’s easy to imagine how this could be used in truly frightening and Orwellian ways.”
    The future of face recognition tech

    Despite such concerns, the market for face recognition technology is growing, and companies are jockeying for customers. Corsight is just one of many offering photo-matching services with flashy new features, regardless of whether they’ve been shown to work.

    Many of these new products look to integrate face recognition with another form of recognition. The Russia-based facial recognition company NtechLab, for example, offers systems that identify people based on their license plates as well as facial features, and founder Artem Kuharenko told MIT Technology Review last year that its algorithms try to “extract as much information from the video stream as possible.” In these systems, facial recognition becomes just one part of an apparatus that can identify people by a range of techniques, fusing personal information across connected databases into a sort of data panopticon.

    Corsight’s DNA to face system appears to be the company’s foray into building a futuristic, comprehensive surveillance package it can offer to potential buyers. But even as the market for such technologies expands, Corsight and others are at increased risk of commercializing surveillance technologies plagued by bias and inaccuracy.
    by Tate Ryan-Mosley

    #ADN #Police_scientifique #Reconnaissance_faciale #Hubris_technologique #Société_de_contrôle #Surveillance

  • The biggest technology failures of 2021 | MIT Technology Review

    We’ve never relied more on technology to solve our problems than we do now. Sometimes it works. Vaccines against covid-19 have cut the death toll. We’ve got virus tests and drugs, too.

    But this isn’t the story about what worked in 2021. This is MIT Technology Review’s annual list of cases where innovation went wrong. From the metaverse to Alzheimer’s drugs, the technologies on this list are the ones that didn’t work (or that worked too well), the Eurekas we wish no one had ever had, the inventions spawned by the dark side of the human intellect. Read on.

    Biogen’s Alzheimer’s drug

    The best kind of medicine is inexpensive, safe, and effective. Think of setting a bone in a cast, filling a cavity, or administering a $2 polio vaccine. The worst medicine of 2021 is exactly the opposite. It’s Aduhelm—an Alzheimer’s drug that went on sale in June in the US at a yearly cost of around $56,400, without much evidence it helps patients, but with substantial risk of serious brain swelling.

    The drug, sold by Biogen, is an antibody that attaches to brain plaques. Aduhelm flopped in a large human trial, which showed no concrete benefit to patients with the brain disease. Yet the company and the US Food and Drug Administration decided to move forward in June, over the objections of the agency’s expert advisors. Several resigned. One, Aaron Kesselheim, called the episode “probably the worst drug approval decision in recent US history.”

    Yes, we need new treatments for Alzheimer’s. But this approval marked a concerning trend toward approving drugs using a weaker type of evidence known as “surrogate markers.” Because Aduhelm causes a measurable reduction in brain plaques—a marker of dementia—the FDA concluded there was “reasonable likelihood” it would benefit patients. One problem with such guesswork is that no one knows whether these plaques cause disease or are just among its symptoms.

    Aduhelm, the first new Alzheimer’s drug in 20 years, is already a fiasco. Few patients are getting it, Biogen’s sales are minuscule, and at least one person has died from brain swelling. Since the approval, the company has cut the drug’s price in half, and its research chief has abruptly resigned.

    Read more: “How an Unproven Alzheimer’s Drug Got Approved,” New York Times.

    Zillow’s house-buying algorithm

    “Don’t get high on your own supply” is a familiar business maxim. The real estate listing company Zillow did exactly that, with catastrophic results.

    The company’s real-estate listing site is popular, and so are its computer-generated house values, known as “Zestimates.” The company’s error was using its estimates to purchase homes itself, sight unseen, in order to flip them and collect transaction fees. Zillow soon learned that its algorithm didn’t correctly forecast changes in housing prices. And that wasn’t the only problem.

    Zillow was competing with other digital bidders, known as “iBuyers.” So it did what any house hunter determined to make a deal would do: it overpaid. By this year, Zillow was listing hundreds of homes for less than its own purchase price. In November, the company shuttered its iBuying unit Zillow Offers, cut 2,000 jobs, and took a $500 million write-off in what the Wall Street Journal termed “one of the sharpest recent American corporate retreats.”
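    Zillow’s pricing model is proprietary, but the failure mode this episode illustrates, known in auction theory as the winner’s curse or adverse selection, is easy to reproduce in a toy simulation (all numbers below are invented for illustration, not drawn from Zillow’s data): even a perfectly unbiased estimate loses money on average once sellers accept only the offers that meet or beat what their home is really worth.

```python
import random

random.seed(42)

def simulate_ibuyer(n_homes=50_000, est_sd=30_000.0):
    """Winner's-curse sketch: the buyer offers its noisy but unbiased
    estimate of each home's value; owners accept only offers at or
    above true value, so every completed purchase is an overpayment."""
    total_overpaid, bought = 0.0, 0
    for _ in range(n_homes):
        true_value = random.uniform(200_000, 600_000)
        offer = random.gauss(true_value, est_sd)  # unbiased estimate
        if offer >= true_value:                   # the seller accepts
            bought += 1
            total_overpaid += offer - true_value
    return bought / n_homes, total_overpaid / bought

accept_rate, avg_overpayment = simulate_ibuyer()
# About half of offers are accepted, and each accepted offer overpays
# by roughly est_sd * sqrt(2/pi), despite the unbiased model.
print(accept_rate, round(avg_overpayment))
```

    The selection effect, not the estimator’s bias, drives the loss: the purchases the algorithm actually completes are precisely the ones where its estimate erred high.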

    Zillow will stick to its original business of selling advertisements to real estate brokers. Its Zestimates still have a home on the site.

    Read more: “What Went Wrong with Zillow? A Real-Estate Algorithm Derailed Its Big Bet,” Wall Street Journal.


    Ransomware

    Ransomware is malicious software that kidnaps a company’s computer files by encrypting them. Criminals then demand money to restore access. It’s a booming business. Ransomware hit a new record in 2021 with more than 500 million attacks, according to cybersecurity company SonicWall.

    The problem came to wider attention on May 7, 2021, when a ransomware group called DarkSide locked the files of Colonial Pipeline, which operates 5,500 miles of gasoline and fuel pipes stretching between Houston and New York. The company quickly paid more than $4 million in Bitcoin, but the disruption still caused temporary chaos at gas stations on the US East Coast.

    By attacking critical infrastructure, the gang drew more attention than it expected. The FBI tracked and seized back about half the Bitcoin ransom, and DarkSide later announced on its website that it was going out of business.

    As long as people pay ransoms, however, the criminals will be back.

    Space tourism

    If you’ve ever been to the Louvre in Paris, you’ve seen the crowds of wealthy tourists waving iPhones at the Mona Lisa, even if they can barely see it. The famous painting is now just a bucket-list item. Get there, snap a selfie, and then on to the next “experience.”

    Now a snapshot floating above planet Earth is what’s on the wish list for a few billionaires and their buddies. It’s called “space tourism,” but we wonder what the point is. Wikipedia defines it as “human space travel for recreational purposes.”

    It’s not exactly new: the first paying customer launched in 1984 on the space shuttle. But this year the trend expanded in clouds of burnt fuel as Virgin Galactic founder Richard Branson and then Jeff Bezos, the founder of Amazon, each rode vehicles up to the edges of space.

    It’s all about an exclusive experience. But, like lots of favorite tourist spots, it could soon get crowded up there.

    Blue Origin, the space company started by Bezos, plans an “orbital reef,” a kind of office park circling the planet where people rent space to make films. On Virgin’s website, Branson says the reason for his space plane—with rides costing $200,000 and up—is to get “millions of kids all over the world” excited about “the possibility of them going to space one day.” Get your selfie sticks ready.

    Beauty filters

    This year, Facebook rebranded itself as “Meta,” signaling Mark Zuckerberg’s bet on the emerging virtual worlds of work and play. The appeal of digital reality is that you can be anyone and do anything.

    But early experience with one form of augmented reality at scale shows that different isn’t always better. We’re talking about beauty filters—apps that let people, often young girls, smooth their skin, thin their noses, and enlarge their eyes in digital images. These apps are not just gimmicks, like those that give you bunny ears. For some young women, they enforce false images they can’t live up to. The message kids are getting is not “Be yourself.”

    Beauty apps are available on Snapchat, TikTok, and Meta’s Instagram—and millions are using them. Meta has already barred some apps that encourage extreme weight loss or plastic surgery, acknowledging some problems. But this year a whistleblower, Frances Haugen, stepped forward to say that Zuckerberg’s company had further data showing that addictive use of Instagram—constantly posting images, seeking likes, and making comparisons—“harms children” and creates “a toxic environment for teens.”

    People feel bad when they use it, but they can’t stop. Beauty filters that make people look good but feel unhappy are a troubling start for the metaverse.

    Read more: “Beauty filters are changing the way young girls see themselves,” MIT Technology Review

    by Antonio Regalado

    #Technologie #Echec #Antisocial

  • The metaverse has a groping problem already | MIT Technology Review

    But not everything has been warm and fuzzy. According to Meta, on November 26, a beta tester reported something deeply troubling: she had been groped by a stranger on Horizon Worlds.

    #facebook #meta #metaverse #meta_metaverse #agression_sexuelle #vr #réalité_virtuelle #sécurité #insécurité

  • How #Facebook and #Google fund global misinformation | MIT Technology Review

    The tech giants are paying millions of dollars to the operators of clickbait pages, bankrolling the deterioration of #information ecosystems around the world.

    #putaclic #démocraties #états-unis « #leadership »

  • Why you should be more concerned about internet shutdowns | MIT Technology Review

    Deliberate internet shutdowns enacted by governments around the world are increasing in frequency and sophistication, according to a recent report. The study, published by Google’s Jigsaw project with the digital rights nonprofit Access Now and the censorship measurement company Censored Planet, says internet shutdowns are growing “exponentially”: out of nearly 850 shutdowns documented over the last 10 years, 768 have happened since 2016.

    “Internet shutdown” describes a category of activity to curtail access to information. I think when most people use the term, they’re referring to this total shutdown of the internet—which indeed we see, especially in certain countries over the last several years. But there is a spectrum of threats that are subtler but, in some ways, just as damaging as a total internet blackout. As this international consensus grows against complete internet shutdowns, we’re seeing an increase in this subtler, more targeted, and more low-grade shutting down and censorship.

  • Mathematicians are deploying algorithms to stop gerrymandering | MIT Technology Review

    With the 2020 US Census data release, states start the process of redrawing district maps. New computational tools will help hold politicians to account.

    Siobhan Roberts

    August 12, 2021

    The maps for US congressional and state legislative races often resemble electoral bestiaries, with bizarrely shaped districts emerging from wonky hybrids of counties, precincts, and census blocks.

    It’s the drawing of these maps, more than anything—more than voter suppression laws, more than voter fraud—that determines how votes translate into who gets elected. “You can take the same set of votes, with different district maps, and get very different outcomes,” says Jonathan Mattingly, a mathematician at Duke University in the purple state of North Carolina. “The question is, if the choice of maps is so important to how we interpret these votes, which map should we choose, and how should we decide if someone has done a good job in choosing that map?”
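    Mattingly’s point can be made concrete with a toy example (precinct numbers invented for illustration): hold the votes fixed, redraw the district lines, and the seat count changes.

```python
# A toy example: identical precinct votes, two different district maps,
# two different seat outcomes.

# Party A's votes in each of six precincts of 100 voters apiece.
votes_a = [70, 70, 35, 35, 35, 55]  # 300 of 600 votes: a 50/50 electorate

def seats_won(district_map, votes):
    """Count districts in which party A takes a majority of the votes."""
    seats = 0
    for district in district_map:
        district_votes = sum(votes[p] for p in district)
        if district_votes * 2 > 100 * len(district):
            seats += 1
    return seats

# "Packing": A's two strongest precincts share a district, wasting votes.
map_packed = [[0, 1], [2, 3], [4, 5]]
# Spreading those strong precincts across districts flips a second seat.
map_spread = [[0, 2], [1, 3], [4, 5]]

print(seats_won(map_packed, votes_a))  # 1 seat for party A
print(seats_won(map_spread, votes_a))  # 2 seats, from the same votes
```

    Party A wins half the votes either way, but the packed map wastes its supporters in one lopsided district, while the second map converts the very same ballots into a second seat.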

    Over recent months, Mattingly and like-minded mathematicians have been busy in anticipation of a data release expected today, August 12, from the US Census Bureau. Every decade, new census data launches the decennial redistricting cycle—state legislators (or sometimes appointed commissions) draw new maps, moving district lines to account for demographic shifts.

    In preparation, mathematicians are sharpening new algorithms—open-source tools, developed over recent years—that detect and counter gerrymandering, the egregious practice giving rise to those bestiaries, whereby politicians rig the maps and skew the results to favor one political party over another. Republicans have openly declared that with this redistricting cycle they intend to gerrymander a path to retaking the US House of Representatives in 2022.
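    The open-source detection tools mentioned above typically rely on ensemble, or outlier, analysis: generate a large sample of maps drawn under neutral rules, then ask where the enacted map’s partisan outcome falls in that distribution. The sketch below is a deliberately stripped-down version, with random pairings of invented toy precincts standing in for the Markov-chain sampling of real, contiguous, population-balanced districts that tools like GerryChain perform.

```python
import random

random.seed(0)

# Party A's votes in six precincts of 100 voters each (invented numbers).
votes_a = [70, 70, 35, 35, 35, 55]

def seats_won(district_map, votes):
    """Districts in which party A holds a majority."""
    return sum(
        1 for district in district_map
        if sum(votes[p] for p in district) * 2 > 100 * len(district)
    )

def random_plan(n_precincts=6, district_size=2):
    """A neutral baseline plan: precincts paired at random. Real tools
    sample contiguous, balanced districts with MCMC instead."""
    order = random.sample(range(n_precincts), n_precincts)
    return [order[i:i + district_size]
            for i in range(0, n_precincts, district_size)]

# Distribution of outcomes over an ensemble of neutral plans.
ensemble = [seats_won(random_plan(), votes_a) for _ in range(2000)]

# Suppose the enacted plan packs A's strongholds into one district.
enacted = [[0, 1], [2, 3], [4, 5]]
enacted_seats = seats_won(enacted, votes_a)

# What fraction of neutral plans treat party A at least this badly?
tail = sum(s <= enacted_seats for s in ensemble) / len(ensemble)
print(f"enacted: {enacted_seats} seat(s); "
      f"{tail:.0%} of neutral plans as bad or worse")
```

    In this tiny example roughly one neutral plan in five treats party A as badly as the packed plan, so it is not an extreme outlier; at realistic scale, a gerrymandered map typically lands far out in the tail of the ensemble, which is the statistical signature experts present in court.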

    Lizard politics

    The term “gerrymander” dates to 1812, when a Massachusetts district drawn to the advantage of Governor Elbridge Gerry was so strangely shaped that it was likened to a salamander. Thus, to “gerrymander” is to manipulate district boundaries with a political agenda, and thereby manipulate election outcomes.

    The use of computers to generate and gerrymander electoral maps became relatively common in the 1990s, although early redistricting software was prohibitively expensive, costing $500,000 to $1 million. Now the industry standard is Maptitude, made by Caliper. When the first Maptitude for Redistricting package was released, in the late 1990s, it cost $2,999. The current price ranges from $1,000 to $10,000, depending on the user’s needs.

    That the technology had advanced by leaps and bounds since the previous redistricting cycle only supercharged the outcome. “It made the gerrymanders drawn that year so much more lasting and enduring than any other gerrymanders in our nation’s history,” says David Daley, a journalist who has written extensively about gerrymandering. “It’s the sophistication of the computer software, the speed of the computers, the amount of data available, that makes it possible for partisan mapmakers to put their maps through 60 or 70 different iterations and to really refine and optimize the partisan performance of those maps.”

    As Michael Li, a redistricting expert at the Brennan Center for Justice at New York University’s law school, puts it: “What used to be a dark art is now a dark science.” And when manipulated maps are implemented in an election, he says, they are nearly impossible to overcome.

    “The five justices on the Supreme Court are the only ones who seemed to have trouble seeing how the math and models worked,” says Li. “State and other federal courts managed to apply it—this was not beyond the intellectual ability of the courts to handle, any more than a complex sex discrimination case is, or a complex securities fraud case. But five justices of the Supreme Court said, ‘This is too hard for us.’”

    “They also said, ‘This is not for us to fix—this is for the states to fix; this is for Congress to fix; it’s not for us to fix,’” says Li.
    Will it matter?

    As Daley sees it, the Supreme Court decision gives state lawmakers “a green light and no speed limit when it comes to the kind of partisan gerrymanders that they can enact when map-making later this month.” At the same time, he says, “the technology has improved to such a place that we can now use [it] to see through the technology-driven gerrymanders that are created by lawmakers.”

    #Election #Manipulation #Démocratie #Gerrymandering

  • TikTok changed the shape of some people’s faces without asking | MIT Technology Review

    Users noticed what appeared to be a beauty filter they hadn’t requested—and which they couldn’t turn off.

    Abby Ohlheiser
    June 10, 2021

    “That’s not my face,” Tori Dawn thought, after opening TikTok to make a video in late May. The jaw reflected back on the screen was wrong, slimmer and more feminine. And when they waved their hand in front of the camera, blocking most of their face from the lens, their jaw appeared to pop back to normal. Was their skin also a little softer?

    On further investigation, it seemed as if the image was being run through a beauty filter in the TikTok app. Normally, Dawn keeps those filters off in livestreams and videos to around 320,000 followers. But as they flipped around the app’s settings, there was no way to disable the effect: it seemed to be permanently in place, subtly feminizing Dawn’s features.

    “My face is pretty androgynous and I like my jawline,” Dawn said in an interview. “So when I saw that it was popping in and out, I’m like ‘why would they do that, why?’ This is one of the only things that I like about my face. Why would you do that?”

    Beauty filters are now a part of life online, allowing users to opt in to changing the face they present to the world on social media. Filters can widen eyes, plump up lips, apply makeup, and change the shape of the face, among other things. But it’s usually a choice, not forced on users—which is why Dawn and others who encountered this strange effect were so angry and disturbed by it.

    Dawn told their followers about it in a video. “As long as that’s still a thing,” Dawn said, showing their jaw popping in and out on screen, “I don’t feel comfortable making videos because this is not what I look like, and I don’t know how to fix it.” The video got more than 300,000 views, they said, and was shared and duetted by other users who noticed the same thing.

    Dawn captioned the video: “congrats tiktok I am super uncomfortable and disphoric now cuz of whatever the fuck this shit is.”

    “Is that why I’ve been kind of looking like an alien lately?” said one.

    “Tiktok. Fix this,” said another.

    Videos like these circulated for days in late May, as a portion of TikTok’s users looked into the camera and saw a face that wasn’t their own. As the videos spread, many users wondered whether the company was secretly testing out a beauty filter on some users.
    An odd, temporary issue

    I’m a TikTok lurker, not a maker, so it was only after seeing Dawn’s video that I decided to see if the effect appeared on my own camera. Once I started making a video, the change to my jaw shape was obvious. I suspected, but couldn’t tell for sure, that my skin had been smoothed as well. I sent a video of it in action to coworkers and my Twitter followers, asking them to open the app and try the same thing on their own phones: from their responses, I learned that the effect only seemed to impact Android phones. I reached out to TikTok, and the effect stopped appearing two days later. The company later acknowledged in a short statement that there was an issue that had been resolved, but did not provide further details.

    On the surface it was an odd, temporary issue that affected some users and not others. But it was also forcibly changing people’s appearances—an important glitch for an app that is used by around 100 million people in the US. So I also sent the video to Amy Niu, a PhD candidate at the University of Wisconsin who studies the psychological impact of beauty filters. She pointed out that in China, and some other places, some apps add a subtle beauty filter by default. When Niu uses apps like WeChat, she can only really tell that a filter is in place by comparing a photo of herself using her camera to the image produced in the app.

    A couple months ago, she said, she downloaded the Chinese version of TikTok, called Douyin. “When I turned off the beauty mode and filters, I can still see an adjustment to my face,” she said.

    Having beauty filters in an app isn’t necessarily a bad thing, Niu said, but app designers have a responsibility to consider how those filters will be used, and how they will change the people who use them. Even if it was a temporary bug, it could have an impact on how people see themselves.

    “People’s internalization of beauty standards, their own body image or whether they will intensify their appearance concerns,” Niu said, are all considerations.

    For Dawn, the strange facial effect was just one more thing to add to the list of frustrations with TikTok: “It’s been very reminiscent of a relationship with a narcissist because they love bomb you one minute, they’re giving you all these followers and all this attention and it feels so good,” they said. “And then for some reason they just, they’re just like, we’re cutting you off.”

    #Beauty_filters #Image_de_soi #Filtres #Image

  • Here’s what China wants from its next space station | MIT Technology Review

    “From my perspective, the Chinese government’s number one goal is its own survival,” says Hines. “And so these projects are very much aligned with those domestic interests, even if they don’t make a ton of sense in broader geopolitical considerations or have much in the way of scientific contributions.”