• The AI myth Western lawmakers get wrong | MIT Technology Review
    https://www.technologyreview.com/2022/11/29/1063777/the-ai-myth-western-lawmakers-get-wrong

    While the US and the EU may differ on how to regulate tech, their lawmakers seem to agree on one thing: the West needs to ban AI-powered social scoring.

    As they understand it, social scoring is a practice in which authoritarian governments—specifically China—rank people’s trustworthiness and punish them for undesirable behaviors, such as stealing or not paying back loans. Essentially, it’s seen as a dystopian superscore assigned to each citizen. 

    The EU is currently negotiating a new law called the AI Act, which will ban member states, and maybe even private companies, from implementing such a system.

    The trouble is, it’s “essentially banning thin air,” says Vincent Brussee, an analyst at the Mercator Institute for China Studies, a German think tank.

    Back in 2014, China announced a six-year plan to build a system rewarding actions that build trust in society and penalizing the opposite. Eight years on, it’s only just released a draft law that tries to codify past social credit pilots and guide future implementation. 

    There have been some contentious local experiments, such as one in the small city of Rongcheng in 2013, which gave every resident a starting personal credit score of 1,000 that could be raised or lowered depending on how their actions were judged. People are now able to opt out, and the local government has removed some controversial criteria.

    But these have not gained wider traction elsewhere and do not apply to the entire Chinese population. There is no countrywide, all-seeing social credit system with algorithms that rank people.

    As my colleague Zeyi Yang explains, “the reality is, that terrifying system doesn’t exist, and the central government doesn’t seem to have much appetite to build it, either.” 

    What has been implemented is mostly pretty low-tech. It’s a “mix of attempts to regulate the financial credit industry, enable government agencies to share data with each other, and promote state-sanctioned moral values,” Zeyi writes. 

    Kendra Schaefer, a partner at Trivium China, a Beijing-based research consultancy, who compiled a report on the subject for the US government, couldn’t find a single case in which data collection in China led to automated sanctions without human intervention. The South China Morning Post found that in Rongcheng, human “information gatherers” would walk around town and write down people’s misbehavior using a pen and paper. 

    The myth originates from a pilot program called Sesame Credit, developed by Chinese tech company Alibaba. This was an attempt to assess people’s creditworthiness using customer data at a time when the majority of Chinese people didn’t have a credit card, says Brussee. The effort became conflated with the social credit system as a whole in what Brussee describes as a “game of Chinese whispers.” And the misunderstanding took on a life of its own. 

    The irony is that while US and European politicians depict this as a problem stemming from authoritarian regimes, systems that rank and penalize people are already in place in the West. Algorithms designed to automate decisions are being rolled out en masse and used to deny people housing, jobs, and basic services. 

    For example, in Amsterdam, authorities have used an algorithm to rank young people from disadvantaged neighborhoods according to their likelihood of becoming a criminal. They claim the aim is to prevent crime and help offer better, more targeted support.

    But in reality, human rights groups argue, it has increased stigmatization and discrimination. The young people who end up on this list face more stops from police, home visits from authorities, and more stringent supervision from school and social workers.

    It’s easy to take a stand against a dystopian algorithm that doesn’t really exist. But as lawmakers in both the EU and the US strive to build a shared understanding of AI governance, they would do better to look closer to home. Americans do not even have a federal privacy law that would offer some basic protections against algorithmic decision making. 

    There is also a dire need for governments to conduct honest, thorough audits of the way authorities and companies use AI to make decisions about our lives. They might not like what they find—but that makes it all the more crucial for them to look.

    #Chine #Crédit_social

  • China just announced a new social credit law. Here’s what it says. | MIT Technology Review
    https://www.technologyreview.com/2022/11/22/1063605/china-announced-a-new-social-credit-law-what-does-it-mean

    The West has largely gotten China’s social credit system wrong. But draft legislation introduced in November offers a more accurate picture of the reality.
    By Zeyi Yang
    November 22, 2022

    It’s easier to talk about what China’s social credit system isn’t than what it is. Ever since 2014, when China announced a six-year plan to build a system to reward actions that build trust in society and penalize the opposite, it has been one of the most misunderstood things about China in Western discourse. Now, with new documents released in mid-November, there’s an opportunity to correct the record.

    For most people outside China, the words “social credit system” conjure up an instant image: a Black Mirror–esque web of technologies that automatically score all Chinese citizens according to what they did right and wrong. But the reality is, that terrifying system doesn’t exist, and the central government doesn’t seem to have much appetite to build it, either. 

    Instead, the system that the central government has been slowly working on is a mix of attempts to regulate the financial credit industry, enable government agencies to share data with each other, and promote state-sanctioned moral values—however vague that last goal in particular sounds. There’s no evidence yet that this system has been abused for widespread social control (though it remains possible that it could be wielded to restrict individual rights). 

    While local governments have been much more ambitious with their innovative regulations, causing more controversies and public pushback, the countrywide social credit system will still take a long time to materialize. And China is now closer than ever to defining what that system will look like. On November 14, several top government agencies collectively released a draft law on the Establishment of the Social Credit System, the first attempt to systematically codify past experiments on social credit and, theoretically, guide future implementation. 

    Yet the draft law still left observers with more questions than answers. 

    “This draft doesn’t reflect a major sea change at all,” says Jeremy Daum, a senior fellow at Yale Law School’s Paul Tsai China Center who has been tracking China’s social credit experiment for years. It’s not a meaningful shift in strategy or objective, he says.

    Rather, the law stays close to local rules that Chinese cities like Shanghai have released and enforced in recent years on things like data collection and punishment methods—just giving them a stamp of central approval. It also doesn’t answer lingering questions that scholars have about the limitations of local rules. “This is largely incorporating what has been out there, to the point where it doesn’t really add a whole lot of value,” Daum adds. 

    So what is China’s current system actually like? Do people really have social credit scores? Is there any truth to the image of artificial-intelligence-powered social control that dominates Western imagination? 

    First of all, what is “social credit”?
    When the Chinese government talks about social credit, the term covers two different things: traditional financial creditworthiness and “social creditworthiness,” which draws data from a larger variety of sectors.

    The former is a familiar concept in the West: it documents individuals’ or businesses’ financial history and predicts their ability to pay back future loans. Because the market economy in modern China is much younger, the country lacks a reliable system for looking up other people’s and companies’ financial records. Building such a system, aimed at helping banks and other market players make business decisions, is an essential and not very controversial mission. Most Chinese policy documents refer to this type of credit with a specific word: “征信” (zhengxin, which some scholars have translated as “credit reporting”).

    The latter—“social creditworthiness”—is what raises more eyebrows. Basically, the Chinese government is saying there needs to be a higher level of trust in society, and to nurture that trust, the government is fighting corruption, telecom scams, tax evasion, false advertising, academic plagiarism, product counterfeiting, pollution … almost everything. And not only will individuals and companies be held accountable, but legal institutions and government agencies will as well.

    This is where things start to get confusing. The government seems to believe that all these problems are loosely tied to a lack of trust, and that building trust requires a one-size-fits-all solution. So just as financial credit scoring helps assess a person’s creditworthiness, it thinks, some form of “social credit” can help people assess others’ trustworthiness in other respects. 

    As a result, so-called “social” credit scoring is often lumped together with financial credit scoring in policy discussions, even though it’s a much younger field with little precedent in other societies. 

    What makes it extra confusing is that in practice, local governments have sometimes mixed up these two. So you may see a regulation talking about how non-financial activities will hurt your financial credit, or vice versa. (In just one example, the province of Liaoning said in August that it’s exploring how to reward blood donation in the financial credit system.) 

    But on a national level, the country seems to want to keep the two mostly separate, and in fact, the new draft law addresses them with two different sets of rules.

    Has the government built a system that is actively regulating these two types of credit?
    The short answer is no. Initially, back in 2014, the plan was to have a national system tracking all “social credit” ready by 2020. Now it’s almost 2023, and the long-anticipated legal framework for the system was just released in the November 2022 draft law. 

    That said, the government has mostly figured out the financial part. The zhengxin system—first released to the public in 2006 and significantly updated in 2020—is essentially the Chinese equivalent of American credit bureaus’ scoring and is maintained by the country’s central bank. It records the financial history of 1.14 billion Chinese individuals (and gives them credit scores), as well as almost 100 million companies (though it doesn’t give them scores). 

    On the social side, however, regulations have been patchy and vague. To date, the national government has built only a system focused on companies, not individuals, which aggregates data on corporate regulation compliance from different government agencies. Kendra Schaefer, head of tech policy research at the Beijing-based consultancy Trivium China, has described it in a report for the US government’s US-China Economic and Security Review Commission as “roughly equivalent to the IRS, FBI, EPA, USDA, FDA, HHS, HUD, Department of Energy, Department of Education, and every courthouse, police station, and major utility company in the US sharing regulatory records across a single platform.” The result is openly searchable by any Chinese citizen on a recently built website called Credit China.

    But there is some data on people and other types of organizations there, too. The same website also serves as a central portal for over three dozen (sometimes very specific) databases, including lists of individuals who have defaulted on a court judgment, Chinese universities that are legitimate, companies that are approved to build robots, and hospitals found to have conducted insurance fraud. Nevertheless, the curation seems so random that it’s hard to see how people could use the portal as a consistent or comprehensive source of data.

    How will a social credit system affect Chinese people’s everyday lives?
    The idea is to be both a carrot and a stick. So an individual or company with a good credit record in all regulatory areas should receive preferential treatment when dealing with the government—like being put on a priority list for subsidies. At the same time, individuals or companies with bad credit records will be punished by having their information publicly displayed, and they will be banned from participating in government procurement bids, consuming luxury goods, and leaving the country.

    The government published a comprehensive list detailing the permissible punishment measures last year. Some measures are more controversial; for example, individuals who have failed to pay compensation decided by the court are restricted from traveling by plane or sending their children to costly private schools, on the grounds that these constitute luxury consumption. The new draft law upholds a commitment that this list will be updated regularly. 

    So is there a centralized social credit score computed for every Chinese citizen?
    No. Contrary to popular belief, there’s no central social credit score for individuals. And frankly, the Chinese central government has never talked about wanting one. 

    So why do people, particularly in the West, think there is? 
    Well, since the central government has given little guidance on how to build a social credit system that works in non-financial areas, even in the latest draft law, it has opened the door for cities and even small towns to experiment with their own solutions. 

    As a result, many local governments are introducing pilot programs that seek to define what social credit regulation looks like, and some have become very contentious.

    The best example is Rongcheng, a small city with a population of only half a million that has implemented probably the most famous social credit scoring system in the world. In 2013, the city started giving every resident a base personal credit score of 1,000 that could be raised or lowered according to their good and bad deeds. For example, in a 2016 rule that has since been overhauled, the city decided that “spreading harmful information on WeChat, forums, and blogs” meant subtracting 50 points, while “winning a national-level sports or cultural competition” meant adding 40 points. In one extreme case, one resident lost 950 points in the span of three weeks for repeatedly distributing letters online about a medical dispute.
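
    To make that mechanism concrete, here is a minimal, purely illustrative sketch of the kind of point ledger Rongcheng’s pilot describes. Only the 1,000-point base and the two example deltas come from the reporting above; the rule names, data structures, and arithmetic are assumptions for illustration, not the city’s actual system.

    ```python
    # Illustrative only: a point ledger with a fixed base score and rule-defined
    # deltas, loosely modeled on the Rongcheng pilot described above.
    BASE_SCORE = 1000

    # The two deltas cited from the (since overhauled) 2016 rule; any other
    # entries would be assumptions.
    RULE_DELTAS = {
        "spreading harmful information online": -50,
        "winning a national-level sports or cultural competition": +40,
    }

    def apply_events(events, base=BASE_SCORE, deltas=RULE_DELTAS):
        """Return the score after applying each recorded event in order."""
        score = base
        for event in events:
            score += deltas.get(event, 0)  # unrecognized events leave the score unchanged
        return score

    # Nineteen -50 deductions wipe out 950 points, consistent with the extreme
    # case cited above of a resident losing 950 points in three weeks.
    print(apply_events(["spreading harmful information online"] * 19))  # 50
    ```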

    Such scoring systems have had very limited impact in China, since they have never been elevated to provincial or national levels. But when news of pilot programs like Rongcheng’s spread to the West, it understandably rang an alarm for activist groups and media outlets—some of which mistook it as applicable to the whole population. Prominent figures like George Soros and Mike Pence further amplified that false idea. 

    How do we know those pilot programs won’t become official rules for the whole country?
    No one can be 100% sure of that, but it’s worth remembering that the Chinese central government has actually been pushing back on local governments’ rogue actions when it comes to social credit regulations. 

    In December 2020, China’s state council published a policy guidance responding to reports that local governments were using the social credit system as justification for punishing even trivial actions like jaywalking, recycling incorrectly, and not wearing masks. The guidance asks local governments to punish only behaviors that are already illegal under China’s current legislative system and not expand beyond that. 

    “When [many local governments] encountered issues that are hard to regulate through business regulations, they hoped to draw support from solutions involving credits,” said Lian Weiliang, an official at China’s top economic planning authority, at a press conference on December 25, 2020. “These measures are not only incompatible with the rule of law, but also incompatible with the need of building creditworthiness in the long run.” 

    And the central government’s pushback seems to have worked. In Rongcheng’s case, the city updated its local regulation on social credit scores and allowed residents to opt out of the scoring program; it also removed some controversial criteria for score changes. 

    Is there any advanced technology, like artificial intelligence, involved in the system?
    For the most part, no. This is another common myth about China’s social credit system: people imagine that to keep track of over a billion people’s social behaviors, there must be a mighty central algorithm that can collect and process the data.

    But that’s not true. Since there is no central system scoring everyone, there’s not even a need for that kind of powerful algorithm. Experts on China’s social credit system say that the entire infrastructure is surprisingly low-tech. While Chinese officials sometimes name-drop technologies like blockchain and artificial intelligence when talking about the system, they never talk in detail about how these technologies might be utilized. If you check out the Credit China website, it’s no more than a digitized library of separate databases. 

    “There is no known instance in which automated data collection leads to the automated application of sanctions without the intervention of human regulators,” wrote Schaefer in the report. Sometimes the human intervention can be particularly primitive, like the “information gatherers” in Rongcheng, who walk around the village and write down fellow villagers’ good deeds by pen.

    However, as the national system is built out, there does appear to be a need for some technological element, mostly to pool data among government agencies. If Beijing wants to enable every government agency to make enforcement decisions based on records collected by other government agencies, that requires building a massive infrastructure for storing, exchanging, and processing the data.

    To this end, the latest draft law talks about the need to use “diverse methods such as statistical methods, modeling, and field certification” to conduct credit assessments and combine data from different government agencies. “It gives only the vaguest hint that it’s a little more tech-y,” says Daum.

    How are Chinese tech companies involved in this system?
    Because the system is so low-tech, the involvement of Chinese tech companies has been peripheral. “Big tech companies and small tech companies … play very different roles, and they take very different strategies,” says Shazeda Ahmed, a postdoctoral researcher at Princeton University, who spent several years in China studying how tech companies are involved in the social credit system.

    Smaller companies, contracted by city or provincial governments, largely built the system’s tech infrastructure, like databases and data centers. On the other hand, large tech companies, particularly social platforms, have helped the system spread its message. Alibaba, for instance, helps the courts deliver judgment decisions through the delivery addresses it collects via its massive e-commerce platform. And Douyin, the Chinese version of TikTok, partnered with a local court in China to publicly shame individuals who defaulted on court judgments. But these tech behemoths aren’t really involved in core functions, like contributing data or compiling credit appraisals.

    “They saw it as almost like a civic responsibility or corporate social responsibility: if you broke the law in this way, we will take this data from the Supreme People’s Court, and we will punish you on our platform,” says Ahmed.

    There are also Chinese companies, like Alibaba’s fintech arm Ant Group, that have built private financial credit scoring products. But the result, like Alibaba’s Sesame Credit, is more like a loyalty rewards program, according to several scholars. Since the Sesame Credit score is mostly calculated on the basis of users’ purchase history and lending activities on Alibaba’s own platforms, the score is not reliable enough to be used by external financial institutions and has very limited effect on individuals.

    Given all this, should we still be concerned about the implications of building a social credit system in China?
    Yes. Even if there isn’t a scary algorithm that scores every citizen, the social credit system can still be problematic.

    The Chinese government did emphasize that all social-credit-related punishment has to adhere to existing laws, but laws themselves can be unjust in the first place. “Saying that the system is an extension of the law only means that it is no better or worse than the laws it enforces. As China turns its focus increasingly to people’s social and cultural lives, further regulating the content of entertainment, education, and speech, those rules will also become subject to credit enforcement,” Daum wrote in a 2021 article.

    Moreover, “this was always about making people honest to the government, and not necessarily to each other,” says Ahmed. When moral issues like honesty are turned into legal issues, the state ends up having the sole authority in deciding who’s trustworthy. One tactic Chinese courts have used in holding “discredited individuals” accountable is encouraging their friends and family to report their assets in exchange for rewards. “Are you making society more trustworthy by ratting out your neighbor? Or are you building distrust in your very local community?” she asks.

    But at the end of the day, the social credit system does not (yet) exemplify abuse of advanced technologies like artificial intelligence, and it’s important to evaluate it on the facts. The government is currently seeking public feedback on the November draft document for one month, though there is no expected date for when it will pass into law. It could still take years to see the final product of a nationwide social credit system.

    #Chine #Crédit_social

  • YouTube is launching Shorts videos for your TV | MIT Technology Review
    https://www.technologyreview.com/2022/11/07/1062868/youtube-wants-to-take-on-tiktok-with-shorts-videos-for-your-tv/?truid=a497ecb44646822921c70e7e051f7f1a

    YouTube Shorts, the video website’s TikTok-like feature, has become one of its latest obsessions, with more than 1.5 billion users watching short-form content on their devices every month.

    And now YouTube wants to expand that number by bringing full-screen, vertical videos to your TV, MIT Technology Review can reveal.

    From today, users worldwide will see a row of Shorts videos high up on the display in YouTube’s smart TV apps. The videos, which will be integrated into the standard homepage of YouTube’s TV app and will sit alongside longer, landscape videos, are presented on the basis of previous watch history, much as in the YouTube Shorts tab on cell phones and the YouTube website.

    “It is challenging taking a format that’s traditionally a mobile format and finding the right way to bring it to life on TV,” says Brynn Evans, UX director for the YouTube app on TV.

    The time spent developing the TV app integration is testament to the importance of Shorts to YouTube, says Melanie Fitzgerald, UX director at YouTube Community and Shorts. “Seeing the progression of short-form video over several years, from Vine to Musical.ly to TikTok to Instagram and to YouTube, it’s very clear this format is here to stay.”

    One major challenge the designers behind YouTube Shorts’ TV integration had to consider was the extent to which Shorts videos should be allowed to autoplay. At launch, viewers will have to scroll through Shorts manually once they’re playing, moving on to the next video by pressing the up and down arrows on their TV remote.

    “One piece we were playing with was how much do we want this to be a fully lean-back experience, where you turn it on and Shorts cycle through,” says Evans, whose team decided against that option at launch but does not rule it out for future iterations.

    The design presents a single Shorts video at a time in the center of the TV screen, surrounded by white space that changes color depending on the overall look of the video.

    One thing YouTube didn’t test—at least as of now? Filling the white space with ads. YouTube spokesperson Susan Cadrecha tells MIT Tech Review that the experience will initially be ad-free. The spokesperson did say that ads would likely be added at some point, but how those would be integrated into the Shorts on TV experience was not clear.

    Likewise, the YouTube Shorts team is investigating how to integrate comments into TV viewing for future iterations of the app. “For a mobile format like this, you’d be able to maybe use your phone as a companion and leave some comments and they can appear on TV,” says Evans.

    YouTube’s announcement follows TikTok’s own move into developing a TV app. First launched in February 2021 in France, Germany, and the UK and expanded into the United States and elsewhere in November that year, TikTok’s smart TV app hasn’t substantially altered how the main app works. (Nor, arguably, has it become an irreplaceable part of people’s living room habits.)

    However, the shift to fold Shorts into the YouTube experience on TV suggests how important YouTube feels the short-form model is to its future. “It’s very clearly a battle for attention across devices,” says Andrew A. Rosen, founder and principal at media analyst Parqor. “The arrival of Shorts and TikTok on connected TVs makes the competitive landscape that much more complex.” Having ceded a head start to TikTok, YouTube now seems determined to play catchup.

    The team behind the initiative still isn’t fully certain how adding short-form video to the YouTube on TV experience will be embraced. “It still remains to be seen how and when people will consume Shorts,” admits Evans—though she tells MIT Tech Review that informal polling and qualitative surveys, plus tests within the Google community, suggest “a very positive impression of Shorts from people who are watching YouTube on TV.” (YouTube declined to share its own data on how much time the average user currently spends watching YouTube content on TV but did point to Nielsen data showing that viewers worldwide spent 700 million hours a day on that activity.)

    “Will it be a game-changer in the living room? Yes and no,” says Rosen. “Yes in the sense that it will turn 15-second to 60-second clips into competition for every legacy media streaming service, and Netflix is betting billions on content to be consumed on those same TVs. No, because it’s not primed to become a new default of consumption.”
    by Chris Stokel-Walker

    #YouTube #Shorts #Télévision #Médias #Média_formats

  • Here’s how a Twitter engineer says it will break in the coming weeks | MIT Technology Review
    https://www.technologyreview.com/2022/11/08/1062886/heres-how-a-twitter-engineer-says-it-will-break-in-the-coming-weeks/?truid=a497ecb44646822921c70e7e051f7f1a

    One insider says the company’s current staffing isn’t able to sustain the platform.
    By Chris Stokel-Walker
    November 8, 2022

    On November 4, just hours after Elon Musk fired half of the 7,500 employees previously working at Twitter, some people began to see small signs that something was wrong with everyone’s favorite hellsite. And they saw it through retweets.

    Twitter introduced retweets in 2009, turning an organic thing people were already doing—pasting someone else’s username and tweet, preceded by the letters RT—into a software function. In the years since, the retweet and its distant cousin the quote tweet (which launched in April 2015) have become two of the most common mechanics on Twitter.
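
    For readers who never saw the old convention, here is a small illustrative sketch of the difference; the types and function below are hypothetical stand-ins, not Twitter’s actual data model.

    ```python
    # Illustrative only: a "manual retweet" duplicates the original text with an
    # "RT @user:" prefix, while a native retweet is a reference to the original
    # post that the platform can attribute, count, and render consistently.
    from dataclasses import dataclass

    @dataclass
    class Tweet:
        author: str
        text: str

    def manual_retweet(me: str, original: Tweet) -> Tweet:
        # The pre-2009 habit: a brand-new tweet that merely quotes the original.
        return Tweet(author=me, text=f"RT @{original.author}: {original.text}")

    @dataclass
    class NativeRetweet:
        # The 2009 feature: a pointer to the original rather than a copy of it.
        retweeter: str
        original: Tweet

    post = Tweet(author="someone", text="hello, world")
    print(manual_retweet("me", post).text)   # RT @someone: hello, world
    reshare = NativeRetweet(retweeter="me", original=post)
    ```

    When the native mechanism falters, the platform effectively falls back toward the copy-and-paste version, which is why the reappearance of manual retweets read to observers as a warning sign.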

    But on Friday, a few users who pressed the retweet button saw the years roll back to 2009. Manual retweets, as they were called, were back.

    The return of the manual retweet wasn’t Elon Musk’s latest attempt to appease users. Instead, it was the first public crack in the edifice of Twitter’s code base—a blip on the seismometer that warns of a bigger earthquake to come.

    A massive tech platform like Twitter is built on many interdependent parts. “The larger catastrophic failures are a little more titillating, but the biggest risk is the smaller things starting to degrade,” says Ben Krueger, a site reliability engineer who has more than two decades of experience in the tech industry. “These are very big, very complicated systems.” Krueger says one 2017 presentation from Twitter staff includes a statistic suggesting that more than half the back-end infrastructure was dedicated to storing data.

    While many of Musk’s detractors may hope the platform goes through the equivalent of thermonuclear destruction, the collapse of something like Twitter happens gradually. For those who know what to look for, gradual breakdowns are a warning that a larger crash could be imminent. And that’s what’s happening now.
    It’s the small things

    Whether it’s manual RTs appearing for a moment before retweets slowly morph into their standard form, ghostly follower counts that race ahead of the number of people actually following you, or replies that simply refuse to load, small bugs are appearing at Twitter’s periphery. Even Twitter’s rules, which Musk linked to on November 7, went offline temporarily under the load of millions of eyeballs. In short, it’s becoming unreliable.

    Estimates from Bot Sentinel suggest that more than 875,000 users deactivated their accounts between October 27 and November 1, while half a million more were suspended.

    “Sometimes you’ll get notifications that are a little off,” says one engineer currently working at Twitter, who’s concerned about the way the platform is reacting after vast swathes of his colleagues who were previously employed to keep the site running smoothly were fired. (That last sentence is why the engineer has been granted anonymity to talk for this story.) After struggling with downtime during its “Fail Whale” days, Twitter eventually became lauded for its team of site reliability engineers, or SREs. Yet this team has been decimated in the aftermath of Musk’s takeover. “It’s small things, at the moment, but they do really add up as far as the perception of stability,” says the engineer.

    The small suggestions of something wrong will amplify and multiply as time goes on, he predicts—in part because the skeleton staff remaining to handle these issues will quickly burn out. “Round-the-clock is detrimental to quality, and we’re already kind of seeing this,” he says.

    Twitter’s remaining engineers have largely been tasked with keeping the site stable over the last few days, since the new CEO decided to get rid of a significant chunk of the staff maintaining its code base. As the company tries to return to some semblance of normalcy, more of their time will be spent addressing Musk’s (often taxing) whims for new products and features, rather than keeping what’s already there running.

    This is particularly problematic, says Krueger, for a site like Twitter, which can have unforeseen spikes in user traffic and interest. Krueger contrasts Twitter with online retail sites, where companies can prepare for big traffic events like Black Friday with some predictability. “When it comes to Twitter, they have the possibility of having a Black Friday on any given day at any time of the day,” he says. “At any given day, some news event can happen that can have significant impact on the conversation.” Responding to that is harder to do when you lay off up to 80% of your SREs—a figure Krueger says has been bandied about within the industry but which MIT Technology Review has been unable to confirm. The Twitter engineer agreed that the percentage sounded “plausible.”

    That engineer doesn’t see a route out of the issue—other than reversing the layoffs (which the company has reportedly already attempted to roll back somewhat). “If we’re going to be pushing at a breakneck pace, then things will break,” he says. “There’s no way around that. We’re accumulating technical debt much faster than before—almost as fast as we’re accumulating financial debt.”
    The list grows longer

    He presents a dystopian future where issues pile up as the backlog of maintenance tasks and fixes grows longer and longer. “Things will be broken. Things will be broken more often. Things will be broken for longer periods of time. Things will be broken in more severe ways,” he says. “Everything will compound until, eventually, it’s not usable.”

    Twitter’s collapse into an unusable wreck is some time off, the engineer says, but the telltale signs of process rot are already there. It starts with the small things: “Bugs in whatever part of whatever client they’re using; whatever service in the back end they’re trying to use. They’ll be small annoyances to start, but as the back-end fixes are being delayed, things will accumulate until people will eventually just give up.”

    Krueger says that Twitter won’t blink out of life, but we’ll start to see a greater number of tweets not loading, and accounts coming into and out of existence seemingly at a whim. “I would expect anything that’s writing data on the back end to possibly have slowness, timeouts, and a lot more subtle types of failure conditions,” he says. “But they’re often more insidious. And they also generally take a lot more effort to track down and resolve. If you don’t have enough engineers, that’s going to be a significant problem.”

    The juddering manual retweets and faltering follower counts are indications that this is already happening. Twitter engineers have designed fail-safes that the platform can fall back on so that features don’t go totally offline; cut-down versions are served instead. That’s what we’re seeing, says Krueger.
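
    The general pattern being described looks something like the sketch below: when a primary back-end call fails or times out, serve a cached, cut-down answer rather than taking the feature offline. The follower-count service, cache, and timeout here are hypothetical stand-ins, not Twitter’s real infrastructure.

    ```python
    # Illustrative only: graceful degradation by falling back to last-known-good
    # data when the primary back end is slow or failing.
    import time

    CACHE: dict[str, int] = {}   # last-known-good values, e.g. follower counts
    TIMEOUT_SECONDS = 0.2

    def fetch_follower_count(user_id: str) -> int:
        """Stand-in for the primary back-end call; always fails here to
        simulate an overloaded service."""
        raise TimeoutError("back end overloaded")

    def follower_count_with_fallback(user_id: str) -> int:
        """Serve fresh data when possible, a cached (possibly stale) value otherwise."""
        try:
            start = time.monotonic()
            count = fetch_follower_count(user_id)
            if time.monotonic() - start > TIMEOUT_SECONDS:
                raise TimeoutError("too slow; treat as a failure")
            CACHE[user_id] = count       # refresh the last-known-good value
            return count
        except Exception:
            # Degraded mode: a stale or default value instead of an error page.
            return CACHE.get(user_id, 0)

    print(follower_count_with_fallback("someone"))   # 0: nothing cached yet
    ```

    Degraded answers like these are one reason follower counts can drift or lag rather than disappear outright.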

    Alongside the minor malfunctions, the Twitter engineer believes that there’ll be significant outages on the horizon, thanks in part to Musk’s drive to reduce Twitter’s cloud computing server load in an attempt to claw back up to $3 million a day in infrastructure costs. Reuters reports that this project, which came from Musk’s war room, is called the “Deep Cuts Plan.” One of Reuters’s sources called the idea “delusional,” while Alan Woodward, a cybersecurity professor at the University of Surrey, says that “unless they’ve massively overengineered the current system, the risk of poorer capacity and availability seems a logical conclusion.”
    Brain drain

    Meanwhile, when things do go kaput, there’s no longer the institutional knowledge to quickly fix issues as they arise. “A lot of the people I saw who were leaving after Friday have been there nine, 10, 11 years, which is just ridiculous for a tech company,” says the Twitter engineer. As those individuals walked out of Twitter offices, decades of knowledge about how its systems worked disappeared with them. (Those within Twitter, and those watching from the sidelines, have previously argued that Twitter’s knowledge base is overly concentrated in the minds of a handful of programmers, some of whom have been fired.)

    To be fair, much of that knowledge was already aging out of relevance before Musk took over.

    Unfortunately, teams stripped back to their bare bones (according to those remaining at Twitter) include the tech writers’ team. “We had good documentation because of [that team],” says the engineer. No longer. When things go wrong, it’ll be harder to find out what has happened.

    Getting answers will be harder externally as well. The communications team has been cut down from between 80 and 100 people to just two, according to one former team member whom MIT Technology Review spoke to. “There’s too much for them to do, and they don’t speak enough languages to deal with the press as they need to,” says the engineer.

    When MIT Technology Review reached out to Twitter for this story, the email went unanswered.

    Musk’s recent criticism of Mastodon, the open-source alternative to Twitter that has piled on users in the days since the entrepreneur took control of the platform, invites the suggestion that those in glass houses shouldn’t throw stones. The Twitter CEO tweeted, then quickly deleted, a post telling users, “If you don’t like Twitter anymore, there is awesome site [sic] called Masterbatedone [sic].” Accompanying the words was a physical picture of his laptop screen open on Paul Krugman’s Mastodon profile, showing the economics columnist trying multiple times to post. Despite Musk’s attempt to highlight Mastodon’s unreliability, its success has been remarkable: nearly half a million people have signed up since Musk took over Twitter.

    It’s happening at the same time that the first cracks in Twitter’s edifice are starting to show. It’s just the beginning, expects Krueger. “I would expect to start seeing significant public-facing problems with the technology within six months,” he says. “And I feel like that’s a generous estimate.”

    #Twitter #Equipe_technique