BuzzFeed News | Breaking News | Original Reporting

https://www.buzzfeednews.com

  • I Thought My Job Was To Report On Tech In India. Instead, I’ve Watched Democracy Decline.
    https://www.buzzfeednews.com/article/pranavdixit/indian-government-using-tech-destroy-democracy

    I love writing about tech. But covering how a Hindu nationalist government is using it to destroy a secular democracy isn’t what I signed up for. I was in a cavernous college auditorium on a frigid winter afternoon in New Delhi in 2015 when Sundar Pichai, the CEO of Google, was selling the promise of India, his home country and the company’s largest market, to 2,000 high school and college students. “Part of the reason we’re all very interested in India is that it’s an amazingly young (...)

    #Google #Facebook #Netflix #Twitter #WhatsApp #censure #manipulation #technologisme (...)

    ##religion

  • Clearview AI Offered Thousands Of Cops Free Trials
    https://www.buzzfeednews.com/article/ryanmac/clearview-ai-local-police-facial-recognition

    A BuzzFeed News investigation has found that employees at law enforcement agencies across the US ran thousands of Clearview AI facial recognition searches — often without the knowledge of the public or even their own departments. A controversial facial recognition tool designed for policing has been quietly deployed across the country with little to no public oversight. According to reporting and data reviewed by BuzzFeed News, more than 7,000 individuals from nearly 2,000 public agencies (...)

    #Clearview #FBI #algorithme #CCTV #biométrie #racisme #facial #reconnaissance #vidéo-surveillance #discrimination #scraping #surveillance #ACLU (...)

    ##police

  • Conspiracy Theory Books About COVID Are All Over Amazon
    https://www.buzzfeednews.com/article/craigsilverman/amazon-covid-conspiracy-books

    Amazon is promoting COVID conspiracy books around the world. Conspiracy theorist David Icke’s lies about COVID-19 caused Facebook, Twitter, YouTube, and Spotify to ban him. But on Amazon, Icke, who believes in the existence of lizard people, is recommended reading. Despite being filled with misinformation about the pandemic, Icke’s book The Answer at one point ranked 30th on Amazon.com’s bestseller list for Communication & Media Studies. Its popularity is partly thanks to the e-commerce (...)

    #Amazon #algorithme #manipulation #COVID-19 #santé

    ##santé

  • David Brooks Resigns From The Aspen Institute
    https://www.buzzfeednews.com/article/craigsilverman/times-columnist-david-brooks-told-people-to-join-nextdoor

    David Brooks has resigned from his position at the Aspen Institute following reporting by BuzzFeed News about conflicts of interest between the star #New_York_Times columnist and funders of a program he led for the think tank.

    Eileen Murphy, a spokesperson for the Times, said in a statement that editors approved Brooks’s involvement with Aspen in 2018, when he launched a project called Weave. But current editors weren’t aware he was receiving a salary for Weave.

    “The current Opinion editors were unaware of this arrangement and have concluded that holding a paid position at Weave presents a conflict of interest for David in writing about the work of the project, its donors or the broader issues it focuses on,” Murphy said.

    #conflit_d'intérêt

  • This Clearview AI Patent Proposal Describes Using Facial Recognition For Dating
    https://www.buzzfeednews.com/article/carolinehaskins1/facial-recognition-clearview-patent-dating

    A Clearview AI Patent Application Describes Facial Recognition For Dating, And Identifying Drug Users And Homeless People

    A patent unveiled on Thursday describes several potential uses for Clearview AI, such as creating networks for people in industries like real estate or retail to “share headshots of high-risk individuals.” Clearview AI, the facial recognition company that claims it’s scraped 3 billion images from the internet to power its face-matching system, has proposed applying its (...)

    #Clearview #algorithme #biométrie #facial #reconnaissance

  • Facebook Pauses Ads For Gun Accessories And Military Gear After Complaints From Lawmakers And Employees
    https://www.buzzfeednews.com/article/ryanmac/facebook-pauses-ads-for-gun-accessories-and-military-gear

    “Out of an abundance of caution, we are temporarily banning ads promoting weapons accessories and protective equipment in the U.S. until at least January 22nd.” Following complaints from Senators and employees, Facebook on Saturday said it was temporarily halting ads for gun accessories and military gear in the US through next week’s inauguration of President-elect Joe Biden. The move follows a BuzzFeed News story that revealed the world’s largest social network displayed ads for gun (...)

    #Facebook #arme #publicité #violence

    ##publicité

  • Rioters Taking Over The Capitol Planned It Online
    https://www.buzzfeednews.com/article/janelytvynenko/trump-rioters-planned-online?scrolla=5eb6d68b7fedc32c19ef33b4

    On pro-Trump social media website Parler, chat app Telegram, and other corners of the far-right internet, people discussed the Capitol Hill rally at which Trump spoke as the catalyst for a violent insurrection. They have been using those forums to plan an uprising in plain sight, one that they executed Wednesday afternoon, forcing Congress to flee its chambers as it met to certify the results of the election.

    “Extremists have for weeks repeatedly expressed their intentions to attend the January 6 protests, and unabashedly voiced their desire for chaos and violence online,” said Jared Holt, a visiting research fellow with DFRLab. “What we’ve witnessed is the manifestation of that violent online rhetoric into real-life danger.”

  • Facebook Employees Leaving As Hate Speech Festers
    https://www.buzzfeednews.com/article/ryanmac/facebook-rules-hate-speech-employees-leaving

    A departing Facebook employee said the social network’s failure to act on hate speech “makes it embarrassing to work here.” On Wednesday, a Facebook data scientist departed the social networking company after a two-year stint, leaving a farewell note for their colleagues to ponder. As part of a team focused on “Violence and Incitement,” they had dealt with some of the worst content on Facebook, and they were proud of their work at the company. Despite this, they said Facebook was simply not (...)

    #Facebook #modération #extrême-droite

  • China’s Camps Have Forced Labor And Growing US Market, by Alison Killing and Megha Rajagopalan (BuzzFeed)
    https://www.buzzfeednews.com/article/alison_killing/xinjiang-camps-china-factories-forced-labor

    China built its vast network of detention camps to do more than simply keep people behind bars.

    A BuzzFeed News investigation identified factories right inside many of Xinjiang’s internment compounds.

    These long, rectangular buildings with blue roofs are capable of putting thousands of Muslim detainees to work against their will.

    #satellite #narration_cartographique

  • Facebook Is Developing A Tool To Summarize News Articles
    https://www.buzzfeednews.com/article/ryanmac/facebook-news-article-summary-tools-brain-reader

    "We all get the privilege of seeing the future because we are making it.” Facebook told employees on Tuesday that it’s developing a tool to summarize news articles so users won’t have to read them. It also laid out early plans for a neural sensor to detect people’s thoughts and translate them into action. Those announcements and product demos were part of an end-of-year, companywide meeting at the social networking giant, whose year has been pockmarked by controversy, employee discontent, and (...)

    #Facebook #algorithme #manipulation #domination #modération

  • Facebook’s Metric For “Violence And Incitement Trends” Is Rising
    https://www.buzzfeednews.com/article/ryanmac/facebook-internal-metric-violence-incitement-rising-vote

    The metric, which assesses the potential for danger based on keywords, rose to 580 from 400 this week — a 45% increase. As votes are being tallied across the country to determine the next US president, internal Facebook data shows that the company has seen a significant increase in what it calls “violence and incitement trends.” In a post to a group on Facebook’s internal message board, one employee alerted their colleagues to a nearly 45% increase in the metric, which assesses the potential (...)

    (A quick check of that arithmetic appears in the sketch after this entry.)

    #Facebook #algorithme #élections #violence #profiling #surveillance
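
    The figures quoted above check out: going from 400 to 580 is a 45% increase. The sketch below shows that arithmetic, plus a purely hypothetical keyword score of the kind the article alludes to; the internals of Facebook’s real metric are not public, so the scoring function and term list are invented for illustration only.

      # Hypothetical sketch: Facebook's real "violence and incitement trends"
      # metric is not public; the scoring function and term list below are
      # placeholders for illustration only.
      def percent_increase(old: float, new: float) -> float:
          """Relative change between two readings, in percent."""
          return (new - old) / old * 100

      print(percent_increase(400, 580))  # 45.0 (the jump reported this week)

      FLAGGED_TERMS = {"armed", "take over", "storm"}  # placeholder terms

      def incitement_score(posts: list[str]) -> int:
          """Toy keyword-based score: count flagged-term hits across posts."""
          return sum(
              1
              for post in posts
              for term in FLAGGED_TERMS
              if term in post.lower()
          )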

  • Zoom Deleted Events Discussing Zoom “Censorship”
    https://www.buzzfeednews.com/article/janelytvynenko/zoom-deleted-events-censorship

    “Everyone working in higher education right now depends on Zoom and we cannot be in a position of allowing a corporate, third-party vendor to make these kinds of decisions,” Ross said. “It’s simply unsustainable.”

    #dictature

  • A Report on the Criminality and Corruption of the Global Banking System - World Socialist Web Site
    https://www.wsws.org/fr/articles/2020/09/23/bmcr-s23.html
    https://www.wsws.org/asset/b31011eb-0251-434e-a23f-d6f0674e4549/image.jpg

    An explosive report published Sunday by BuzzFeed News documents the role that major American and international banks knowingly play in laundering and circulating trillions of dollars of dirty money originating from terrorist organizations, drug cartels, and assorted international financial criminals.

    The report is a damning indictment not only of the banks but also of Western governments and regulators, which are fully aware of the banks’ illegal but highly lucrative activities and tacitly sanction them.

    BuzzFeed writes that its investigation demonstrates “an underlying truth of the modern era”: the networks through which dirty money flows around the world have become vital arteries of the global economy. They enable a shadow financial system so vast and so unchecked that it has become inextricable from the so-called legitimate economy, and prominent banks have helped make it so.

    “Money laundering is a crime that makes other crimes possible. It can accelerate economic inequality, drain public funds, undermine democracy, and destabilize nations, and the banks play a key role,” explains Martin Woods, a former suspicious-transactions investigator at Wachovia. “Some of these people, in crisp white shirts and sharp suits, feed off the tragedy of people dying all over the world.”

    The report goes on to explain that “even after being prosecuted or fined for financial misconduct, banks such as JPMorgan Chase, HSBC, Standard Chartered, Deutsche Bank, and Bank of New York Mellon continued to move money for criminals.”

    The report, titled “Dirty money pours into the world’s most powerful banks,” includes only a small, redacted sample of the mass of suspicious activity reports obtained by the outlet.

    The US government maintains a policy of total secrecy around suspicious activity reports, refusing to release them even in response to freedom of information requests. Earlier this year, the Treasury Department issued a statement declaring that unauthorized disclosure of suspicious activity reports is a crime.

  • Whistleblower Says Facebook Ignored Global Political Manipulation
    https://www.buzzfeednews.com/article/craigsilverman/facebook-ignore-political-manipulation-whistleblower-memo

    A 6,600-word internal memo from a fired Facebook data scientist details how the social network knew leaders of countries around the world were using their site to manipulate voters — and failed to act. Facebook ignored or was slow to act on evidence that fake accounts on its platform have been undermining elections and political affairs around the world, according to an explosive memo sent by a recently fired Facebook employee and obtained by BuzzFeed News. The 6,600-word memo, written by (...)

    #Facebook #manipulation #élections #modération #censure

  • Blanked-Out Spots On China’s Maps Helped Us Uncover Xinjiang’s Camps
    https://www.buzzfeednews.com/article/alison_killing/satellite-images-investigation-xinjiang-detention-camps

    China’s Baidu blanked out parts of its mapping platform. We used those locations to find a network of buildings bearing the hallmarks of prisons and internment camps in Xinjiang. Here’s how we did it. In the summer of 2018, as it became even harder for journalists to work effectively in Xinjiang, a far-western region of China, we started to look at how we could use satellite imagery to investigate the camps where Uighurs and other Muslim minorities were being detained. At the time we began, (...)

    (A rough sketch of the tile-scanning idea follows this entry.)

    #BaiduMaps #Islam #prison #surveillance #aérien
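
    A minimal sketch of the tile-scanning idea described above, under stated assumptions: the tile URL template, zoom level, and coordinate ranges are placeholders (the real Baidu tile scheme and the reporters’ actual workflow are not spelled out here), and a blanked-out tile is assumed to come back as a single flat colour.

      # Hypothetical sketch: scan a tile grid and flag tiles that render as a
      # single flat colour, taken here as the signature of a masked area.
      # The URL template and coordinates are placeholders, not Baidu's scheme.
      import io

      import requests
      from PIL import Image

      TILE_URL = "https://example.com/tiles/{z}/{x}/{y}.png"  # placeholder

      def is_blank_tile(png_bytes: bytes) -> bool:
          """True if the tile is one flat colour (a candidate masked area)."""
          img = Image.open(io.BytesIO(png_bytes)).convert("RGB")
          colors = img.getcolors(maxcolors=2)  # None if more than 2 colours
          return colors is not None and len(colors) == 1

      def scan(z: int, xs: range, ys: range) -> list[tuple[int, int]]:
          """Return (x, y) tile coordinates whose imagery is a flat colour."""
          hits = []
          for x in xs:
              for y in ys:
                  resp = requests.get(TILE_URL.format(z=z, x=x, y=y), timeout=10)
                  if resp.ok and is_blank_tile(resp.content):
                      hits.append((x, y))
          return hits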

  • Facebook’s Kenosha Guard Militia Event Was Reported 455 Times. Moderators Said It Was Fine.
    https://www.buzzfeednews.com/article/ryanmac/kenosha-militia-facebook-reported-455-times-moderators

    CEO Mark Zuckerberg said that the reason the militia page and an associated event remained online after a shooting that killed two people was due to “an operational mistake.” In a companywide meeting on Thursday, Facebook CEO Mark Zuckerberg said that a militia page advocating for followers to bring weapons to an upcoming protest in Kenosha, Wisconsin, remained on the platform because of “an operational mistake.” The page and an associated event inspired widespread criticism of the company (...)

    #Facebook #algorithme #milice #élections #modération #violence #extrême-droite #QAnon

  • A “Bug” In Instagram’s Hashtags Has Been Favoring Donald Trump
    https://www.buzzfeednews.com/article/ryanmac/instagram-related-hashtags-favoring-trump-over-biden

    “A technical error caused a number of hashtags to not show related hashtags. We’ve disabled this feature while we investigate.” For at least the last two months, a key Instagram feature, which algorithmically pushes users toward supposedly related content, has been treating hashtags associated with President Donald Trump and presumptive Democratic presidential nominee Joe Biden in very different ways. Searches for Biden also return a variety of pro-Trump messages, while searches for (...)

    #Facebook #Instagram #élections #biais #discrimination #bug

  • Trump Disputing Election A Worry For Facebook Employees
    https://www.buzzfeednews.com/article/craigsilverman/facebook-zuckerberg-what-if-trump-disputes-election-results

    After months of debate and disagreement over the handling of inflammatory or misleading posts from Donald Trump, Facebook employees want CEO Mark Zuckerberg to explain what the company would do if the leader of the free world uses the social network to undermine the results of the 2020 US presidential election.

    “I do think we’re headed for a problematic scenario where Facebook is going to be used to aggressively undermine the legitimacy of the US elections, in a way that has never been possible in history,” one Facebook employee wrote in a group on Workplace, the company’s internal communication platform, earlier this week.

    For the past week, this scenario has been a topic of heated discussion inside Facebook and was a top question for its leader. Some 2,900 employees asked Zuckerberg to address it publicly during a company-wide meeting on Thursday, which he partly did, calling it “an unprecedented position.”

    Zuckerberg’s remarks came amid growing internal concerns about the company’s competence in handling misinformation, and the precautions it is taking to ensure its platform isn’t used to disrupt or mislead ahead of the US presidential election. Though Facebook says it has committed more money and resources to avoid repeating its failures during the 2016 election, some employees believe it isn’t enough. President Trump has already spent months raising questions about the legitimacy of the upcoming 2020 election, spreading misinformation about mail-in ballots, and declining to say if he’d accept the possibility of losing to Democratic nominee Joe Biden in November.

    In July, Trump told Fox News he wasn’t sure if he’d concede to Biden, casting doubt on whether there would be a peaceful transition of power if the former vice president wins the election. “I have to see. I’m not just going to say yes. I’m not going to say no,” the president said.

    On Facebook’s internal message boards, discussion about the Trump election question remained civil prior to Thursday’s all-hands meeting. Employees debated the merits of censoring a sitting president’s potentially false statements about election results, with one person noting that “it would be a really troubling policy to apply globally.”

    “America can’t afford for Facebook to take a wait-and-see approach when it comes to the integrity of our democracy,” said Jesse Lehrich, a former foreign policy spokesperson to Hillary Clinton and cofounder of Accountable Tech, a nonprofit advocacy group. “Unless they proactively outline clear policies and enforcement mechanisms to safeguard the election, the platform will be weaponized to undermine it.”

    On Thursday, Zuckerberg told employees that the increased use of mail-in ballots due to the pandemic will likely lead to a situation where election results will not be available “for days” or “for weeks.” He noted political figures and commentators may try to call an election early, in which case the company may label a post explaining that results are not yet final.

    Zuckerberg did not have a clear answer for what the company would do should Trump declare the election results invalid.

    “This is where we’re in unprecedented territory with the president saying some of the things that he’s saying that I find quite troubling,” he said. “We’re thinking through what policy may be appropriate here. This is obviously going to be a sensitive thing to work through.”

    While there are signs Facebook will stand up to Trump in cases where he violates its rules — as on Wednesday when it removed a video post from the president in which he claimed that children are “almost immune” to COVID-19 — there are others who suggest the company is caving to critical voices on the right. In another recent Workplace post, a senior engineer collected internal evidence that showed Facebook was giving preferential treatment to prominent conservative accounts to help them remove fact-checks from their content.

    The company responded by removing his post and restricting internal access to the information he cited. On Wednesday the engineer was fired, according to internal posts seen by BuzzFeed News.

    “Intervening in Fact-Checks”

    With heightened internal tensions and morale at a low point, concerns about how the company handles fact-checked content have exploded in an internal Workplace group dedicated to misinformation policy.

    Last Friday, at another all-hands meeting, employees asked Zuckerberg how right-wing publication Breitbart News could remain a Facebook News partner after sharing a video that promoted unproven treatments and said masks were unnecessary to combat the novel coronavirus. The video racked up 14 million views in six hours before it was removed from Breitbart’s page, though other accounts continued to share it.

    Zuckerberg danced around the question but did note that Breitbart could be removed from the company’s news tab if it were to receive two strikes for publishing misinformation within 90 days of each other. (Facebook News partners, which include dozens of publications such as BuzzFeed News and the Washington Post, receive compensation and placement in a special news tab on the social network.)

    “This was certainly one strike against them for misinformation, but they don’t have others in the last 90 days,” Zuckerberg said. “So by the policies that we have, which by the way I think are generally pretty reasonable on this, it doesn’t make sense to remove them.”
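
    To make the rule Zuckerberg describes concrete, here is a toy model of the repeat-offender check, under the assumption that “two strikes within 90 days of each other” means any pair of strike dates at most 90 days apart. The function and variable names are illustrative, not Facebook’s.

      # Toy model of the described repeat-offender rule; names are invented.
      from datetime import date, timedelta

      STRIKE_WINDOW = timedelta(days=90)

      def is_repeat_offender(strike_dates: list[date]) -> bool:
          """True if any two misinformation strikes fall within 90 days."""
          ordered = sorted(strike_dates)
          return any(
              later - earlier <= STRIKE_WINDOW
              for earlier, later in zip(ordered, ordered[1:])
          )

      # One strike in isolation is not enough to trip the rule...
      print(is_repeat_offender([date(2020, 7, 28)]))                    # False
      # ...but a second strike 40 days later would be.
      print(is_repeat_offender([date(2020, 7, 28), date(2020, 9, 6)]))  # True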

    But some of Facebook’s own employees gathered evidence they say shows Breitbart — along with other right-wing outlets and figures including Turning Point USA founder Charlie Kirk, Trump supporters Diamond and Silk, and conservative video production nonprofit Prager University — has received special treatment that helped it avoid running afoul of company policy. They see it as part of a pattern of preferential treatment for right-wing publishers and pages, many of which have alleged that the social network is biased against conservatives.

    “We defer to third-party fact-checkers on the rating that a piece of content receives,” Facebook spokesperson Liz Bourgeois said in a statement. “When a fact checker applies a rating, we apply a label and demotion. But we are responsible for how we manage our internal systems for repeat offenders. We apply additional system-wide penalties for multiple false ratings, including demonetization and the inability to advertise, unless we determine that one or more of those ratings does not warrant additional consequences.”

    On July 22, a Facebook employee posted a message to the company’s internal misinformation policy group noting that some misinformation strikes against Breitbart had been cleared by someone at Facebook seemingly acting on the publication’s behalf.

    “A Breitbart escalation marked ‘urgent: end of day’ was resolved on the same day, with all misinformation strikes against Breitbart’s page and against their domain cleared without explanation,” the employee wrote.

    The same employee said a partly false rating applied to an Instagram post from Charlie Kirk was flagged for “priority” escalation by Joel Kaplan, the company’s vice president of global public policy. Kaplan once served in George W. Bush’s administration and drew criticism for publicly supporting Brett Kavanaugh’s controversial nomination to the Supreme Court.

    Aaron Sharockman, the executive director of PolitiFact, told BuzzFeed News a contact at Facebook did call to discuss Kirk’s post.

    “We had a call with them where they wanted to know how this post was aligned with the program,” Sharockman said. “Was this just a minor inaccuracy or was it something that we thought was something that had potential harmful effects?”

    PolitiFact did not change its rating on the post. “We stuck to our guns there,” Sharockman said.

    Past Facebook employees, including Yaël Eisenstat, Facebook’s former global election ads integrity lead, have expressed concerns with Kaplan’s influence over content enforcement decisions. She previously told BuzzFeed News a member of Kaplan’s Washington policy team attempted to influence ad enforcement decisions for an ad placed by a conservative organization.

    Facebook did not respond to questions about why Kaplan would personally intervene in matters like this.

    These and other interventions appear to be in violation of Facebook’s official policy, which requires publishers wishing to dispute a fact check rating to contact the Facebook fact-checking partner responsible.

    “It appears that policy people have been intervening in fact-checks on behalf of exclusively right-wing publishers, to avoid them getting repeat-offender status,” wrote another employee in the company’s internal “misinformation policy” discussion group.

    Individuals who spoke out about the apparent special treatment of right-wing pages have also faced consequences. In one case, a senior Facebook engineer collected multiple instances of conservative figures receiving unique help from Facebook employees, including those on the policy team, to remove fact-checks on their content. His July post was removed because it violated the company’s “respectful communication policy.”

    After the engineer’s post was removed, the related internal “tasks” he’d cited as examples of the alleged special treatment were made private and inaccessible to employees, according to a Workplace post from another employee.

    “Personally this makes me so angry and ashamed of this company,” wrote the employee in support of their colleague.

    The engineer joined the company in 2016 and most recently worked on Instagram. He left the company on Wednesday. One employee on an internal thread seen by BuzzFeed News said that they received permission from the engineer to say that the dismissal “was not voluntary.”

    In an internal post before his dismissal, the engineer said he was “told to expect to be contacted by legal and HR if my post is found to be violating other policies in addition to Respectful Comms.”

    Facebook denied the employee had been terminated for the post but said it was because “they broke the company’s rules.”

    “We have an open culture and encourage employees to speak out about concerns they have,” Bourgeois said.

    News of his firing caused some Facebook employees to say that they now fear speaking critically about the company in internal discussions. One person said they were deleting old posts and comments, while another said this was “hardly the first time the respectful workplace guidelines have been used to snipe a prominent critic of company policies/ethics.”

    “[He] was a conscience of this company, and a tireless voice for us doing the right thing,” said another employee.

    The terminated employee declined to comment and asked not to be named for fear of repercussions.

    “Partner Sensitivity”

    The internal evidence gathered by the engineer aligns with the experience of a journalist who works for one of Facebook’s US fact-checking partners. They told BuzzFeed News that conservative pages often complain directly to the company.

    “Of the publishers that don’t follow the procedure, it seems to be mostly ones on the right. Instead of appealing to the fact-checker they immediately call their rep at Facebook,” said the journalist, who declined to be named because they were not authorized to speak publicly. “They jump straight up and say ‘censorship, First Amendment, freedom.’”

    “I think Facebook is a bit afraid of them because of the Trump administration,” they added.

    Facebook typically assigns dedicated partner managers to pages with large followings or big ad budgets. They help their clients maximize their use of the platform. But in the cases identified in the engineer’s post, partner reps appear to have sought preferential treatment for right-wing publishers. This resulted in phone calls to fact-checking partners from people at Facebook, and instances where misinformation strikes appear to have been removed from content without a fact-checker’s knowledge or involvement.

    A Facebook employee’s July 22 post restating the engineer’s findings identified multiple cases in which a fact-check complaint from a right-wing page was escalated and in some cases resolved in the account’s favor the same day.

    According to the post, Joel Kaplan flagged a fact check of a Charlie Kirk Instagram post for resolution “ASAP/before 12 p.m. ET”. This same employee claimed PragerU’s partner manager was part of a “two weeks long effort” to prevent the site from being given Repeat Offender status, a designation that would have limited its reach and advertising privileges.

    Citing “partner sensitivity,” the rep noted that PragerU runs a lot of ads, and argued that the content in question qualified as opinion and was therefore exempt from being fact-checked.

    Facebook did not answer questions about why a partner manager would cite ad volume as a reason for not acting against a group of pages.

    Sharockman said PolitiFact’s contacts at Facebook have never asked them to change a rating. But Facebook reps do reach out to discuss whether a post is opinion or otherwise outside of the scope of the program.

    “We have had discussions where our partners have asked us, ‘Why did you fact check this? Why did you come to this conclusion? How do you think it fits within the scope of Facebook’s rules or regulations within fact-checking?’” he said.

    “We sometimes reach out to fact-checkers to clarify our guidelines and scope of the program,” Bourgeois said.

    Mark Provost manages multiple large progressive Facebook pages, including The Other 98%, one of the biggest on the platform. He said his Facebook partner manager is far less responsive than what the Facebook employee documented for Provost’s counterparts on the right. And he’s not aware of any case where Facebook contacted a fact-checker on his behalf.

    “We don’t get a message back for 10 days,” Provost said of Facebook. “I imagine the right wing is getting a way better deal.”

    His Other 98% page is currently at risk of deletion because it has three strikes. Provost said one strike is for a hate speech violation three months ago, though as of now no one at Facebook has told him what the offending post was. His partner representative said they would look into it, according to Provost. That was last week and he says he still hasn’t received an update.

    “I’m getting so frustrated with this,” Provost said. “The best solution would be if Facebook were as responsive as the fact-checking companies that they’ve assigned.”

    Provost said Facebook’s fact-checking partners are easy to deal with when he contacts them to dispute a rating. In some cases, he provided additional proof of his claim to get a rating changed; in other cases, he corrected the offending post and the false rating was removed.

    The fact-checker who spoke to BuzzFeed News said most page owners follow the policy and contact them to dispute a rating. But in some cases, they hear directly from the Facebook partner manager assigned to work with fact-checkers.

    “They will ask us, ‘Could you take a look at this again? Are you sure?’” the checker said.

    In other cases, Facebook itself will quietly remove a fact-check applied by one of its partners. That appears to be what happened with a March 25 post from Diamond and Silk. The duo wrote on Facebook, “How the hell is allocating 25 million dollars in order to give a raise to house members, that don’t give a damn about Americans, going to help stimulate America’s economy? Tell me how? #PutAmericansBackToWorkNow.”

    Lead Stories, a Facebook fact-checking partner, rated the post false. Diamond and Silk initially followed the proper procedure and appealed directly to Lead Stories.

    “They were detailed in their appeal and we replied promptly,” Alan Duke, the editor-in-chief of Lead Stories, told BuzzFeed News. “The result was that we decided that a false rating initially given their content should be revised downward to ‘partly false.’”

    But that partly false rating apparently didn’t satisfy Diamond and Silk, who reportedly lost their online Fox News show after spreading coronavirus misinformation. As detailed in the July 22 internal Facebook post, Diamond and Silk appealed to their partner manager, who opened an internal ticket for the issue. The Facebook rep argued the post was opinion and warned that the duo “is extremely sensitive and has not hesitated going public about their concerns around alleged conservative bias on Facebook.”

    While the partly false fact-check is still visible on their post, employees said Facebook removed the strike against their account internally.

    The Facebook employee said “someone described on the task as ‘Policy/Leadership’ made the call to not only completely remove this strike, but also the one from January.”

    PolitiFact’s Sharockman said he doesn’t know what goes on inside Facebook when it comes to applying or removing strikes for misinformation. All the Facebook fact-checking partners can do is try to be honest, even if the other parties involved aren’t.

    “We’re trying to be intellectually honest and thoughtful and deliberate and open and transparent. And no one else is. Everyone else is using the system to their personal benefit.”
    Correction (August 6, 2020, at 8:42 p.m.): The Lead Stories fact-check for a March post from Diamond and Silk is displayed in the “Related Articles” section of their post. This story incorrectly said it had been removed.