By Darshatha Gamage
Philosopher Bertrand Russell, in his seminal work On the Nature of Truth, posed two enduring questions:
“In any inquiry into the nature of truth, two questions meet us on the threshold: In what sense, if any, is truth dependent upon mind? Are there many different truths, or is there only the Truth?”
These questions have challenged humanity’s greatest thinkers, with countless theories emerging over time. Yet, the pursuit of truth has only grown more complex, especially in the digital age. The concept of truth in online spaces is particularly contentious. While the fundamental nature of truth online is no different from that offline, its dissemination — both as truth and falsehood — occurs at an extraordinary pace and scale.
Against this backdrop, Meta CEO Mark Zuckerberg’s recent policy announcements[1] have reignited debates about how truth is defined, managed, and protected in the digital age.
In a video statement, Zuckerberg emphasized the need to reduce errors, simplify policies, and restore free expression on Meta’s platforms. To that end, Meta plans to replace its third-party fact-checking process in the U.S. with a community-driven model similar to Community Notes on X (formerly Twitter). He also accused current fact-checking mechanisms of political bias. Further, Zuckerberg announced policy changes that would ease restrictions on discussions about immigration and gender.
While these changes are shaped by the political dynamics of the United States, their consequences will ripple across the globe. Millions of users, particularly in non-English speaking regions of the developing world, are likely to bear the greatest burden. These regions often lack robust digital protections, adequate fact-checking infrastructure, and representation in global tech policymaking.
The implications of these shifts are profound. Russell’s questions—whether truth depends on the mind and whether it is singular or plural—resonate in this modern context, where truth is increasingly shaped by algorithms, user biases, and corporate decisions. Balancing free expression, political neutrality, and accountability remains an ethical and practical challenge.
The urgency of this issue is underscored by recent events. The deluge of disinformation surrounding the Los Angeles wildfires is a stark reminder of the critical need for robust mechanisms to counter online falsehoods. In response to Zuckerberg’s announcement, the International Fact-Checking Network (IFCN) wrote an open letter[2] asserting that “access to truth fuels freedom of speech, empowering communities to align their choices with their values.” The letter countered Zuckerberg’s claims, noting that Meta itself, not the fact-checkers, determines how flagged content is downranked or labeled, disproving the assertion that fact-checkers are censoring speech.
While fact-checking systems are not flawless, research suggests they play a meaningful role in reducing the impact of false claims. Social psychologist Sander van der Linden, who advised Facebook’s fact-checking program in 2022, emphasized that studies consistently show fact-checking helps mitigate misinformation[3].
Meta’s decision to replace professional fact-checking with an unproven community-based system appears motivated either by a desire for political goodwill or to cut costs, potentially weakening efforts to combat disinformation globally.
Evidence from similar models raises concerns. A 2024 report by the Center for Countering Digital Hate found the Community Notes system on X ineffective in addressing misleading election claims[4]. Of 283 misleading posts with proposed notes, 209 had accurate notes that were not widely visible to users. Misleading posts also received 13 times more views than the corrective community notes.
Experts Alex Mahadevan and Alexios Mantzarlis concluded that Community Notes’ impact on election-related misinformation was “marginal at best.”[5] The IFCN has proposed an interoperable model in which community-driven notes complement professional fact-checking, a hybrid approach with greater potential for promoting accurate information.
The risks of relying solely on a community-based model are particularly acute in markets like Sri Lanka. Historically, Meta platforms have played a role in amplifying harmful disinformation in the country. Facebook apologized for its role in inciting violence during the 2018 riots in Digana, Kandy.
Similarly, after the 2019 Easter attacks, disinformation fueled unrest, including the infamous sterilization and Dr Shafi scandals. The local organization Hashtag Generation documented 288 instances of misinformation and disinformation during the 2024 presidential election, including the use of paid promotions to amplify falsehoods.
The Asian Network for Free Elections (ANFREL) highlighted the pervasive spread of disinformation during the presidential[6] and parliamentary[7] election period in Sri Lanka, underscoring the fragility of the information landscape in a country recovering from political and economic crises.
Content moderation policies on gender-based violence also warrant scrutiny. Reports from Hashtag Generation reveal that technology-facilitated gender-based violence is the most prevalent form of abuse on Meta platforms, targeting women, LGBTQI+ individuals, and female political candidates. During the 2024 elections, female candidates and their supporters faced significant online harassment.
The Sri Lankan government previously cited such issues to justify its controversial Online Safety Act, which critics viewed as a tool for suppressing freedoms. If Meta fails to address these challenges, governments may exploit such failures to impose restrictive controls on online spaces.
Meta’s policy shift rekindles the philosophical discourse on post-truth politics, which emerged during Donald Trump’s first term as President. Ironically, Zuckerberg’s advocacy for free speech now aligns with Trump, whom Meta banned in January 2021 for inciting violence. These contradictions highlight the challenges of reconciling corporate priorities with public accountability.
The broader implications are troubling. As Francis Fukuyama observed in 2017, the traditional remedy for bad information—providing better information—fails in a social media landscape dominated by trolls and bots[8]. The solution to misinformation requires a multifaceted approach.
As Angie Drobnic Holan noted in 2024, “Fact-checking is deeply embedded in the ideals of free speech and free expression.”[9] She goes on to state, “But misinformation isn’t a problem that can be solved with a single approach. Saying fact-checking doesn’t work is a bit like saying we should get rid of firefighters because buildings are still catching fire.”
Russell’s enduring questions about truth now intersect with the challenges of combating falsehoods in the digital age. The task is not to discard tools like fact-checking but to improve them, ensuring they evolve alongside the ever-changing landscape of misinformation. In the quest for truth, the digital age demands vigilance, innovation, and a commitment to both accuracy and equity.
The author can be reached at darshathagamage@gmail.com.
Factum is an Asia-Pacific focused think tank on International Relations, Tech Cooperation, Strategic Communications, and Climate Outreach accessible via www.factum.lk.
The views expressed here are the author’s own and do not necessarily reflect the organization’s.
[1] https://www.facebook.com/share/v/15cYnanUz1/
[2] https://www.poynter.org/ifcn/2025/an-open-letter-to-mark-zuckerberg-from-the-worlds-fact-checkers-nine-years-later/
[3] https://www.nature.com/articles/d41586-025-00027-0
[4] https://counterhate.com/research/rated-not-helpful-x-community-notes/
[5] https://www.poynter.org/commentary/2024/x-community-notes-role-2024-presidential-election/
[6] https://anfrel.org/interim-report-of-the-anfrel-ieeom-to-the-2024-sri-lankan-presidential-election/
[7] https://anfrel.org/interim-report-of-the-anfrel-international-election-observation-mission-ieom-to-the-2024-sri-lanka-parliamentary-elections/
[8] https://www.project-syndicate.org/magazine/the-emergence-of-a-post-fact-world-by-francis-fukuyama-2017-01
[9] https://www.poynter.org/commentary/2024/fact-checking-is-not-censorship