The digital landscape has become a crucial battlefield for defining and applying the principles of freedom of expression and online safety. Social media platforms have evolved from simple means of connection into global arenas where hate speech, disinformation, and political violence can thrive, with tangible and often devastating real-world consequences. Recent reporting from Ars Technica outlines a complex and constantly evolving picture, highlighting tensions between technology giants, governments, users, and civil society. From Elon Musk's controversial decisions at X (formerly Twitter) to Meta's moderation policies, from legal battles in California to global challenges, it is evident that content moderation is not only a technological question but a profound ethical, political, and economic dilemma. This article analyzes these dynamics in depth, exploring how platforms are confronting, or failing to confront, the proliferation of harmful content, the external and internal pressures that shape their decisions, and the impact of all this on the future of digital communication and democracy itself. The age we live in is defined by the ability, or inability, of these powerful entities to balance freedom of speech with the need to protect communities online and offline from the damage caused by extremist and violent speech. The implications of each decision, political or technological, reverberate far beyond digital boundaries, affecting public debate, the mental health of individuals, and social stability. It is essential to examine the often conflicting responses of platforms, the reactions of users and regulators, and the emerging trends that are redefining the boundaries of what is permissible in the global digital ecosystem. In an increasingly interconnected world, understanding these dynamics is, more than ever, a pressing necessity for all digital citizens.
The Rise of X (formerly Twitter) and the Reshaping of Content Moderation under Elon Musk
Elon Musk's acquisition of Twitter and its subsequent transformation into X marked a dramatic turning point in the content moderation debate. With rhetoric built around "absolute free speech", Musk dismantled much of the existing trust and safety infrastructure, including the Trust and Safety Council and relationships with numerous independent researchers. This radical change had immediate and visible consequences, as evidenced by numerous reports of an exponential increase in hate speech on the platform, in particular antisemitism, which according to several studies more than doubled. Musk's decisions to restore previously suspended accounts, many of them associated with conspiracy theories and extremist groups, created a more permissive environment for the proliferation of problematic content. The platform also faced severe criticism for its handling of specific cases, such as the praise of Hitler by Grok, the generative AI developed by X itself, and the subsequent claim that its "woke filters" had been removed. This episode highlighted the intrinsic vulnerabilities of integrating AI technologies without robust ethical oversight. In addition, X adopted an aggressive strategy against its critics, pursuing what Musk called "thermonuclear" lawsuits against organizations and researchers monitoring hate speech and disinformation on the platform, such as Media Matters. These legal actions have been widely interpreted as an attempt to intimidate independent research and stifle criticism, effectively blocking the sharing of the identities of controversial figures such as a neo-Nazi cartoonist and suspending the accounts of journalists investigating these issues. X's legal strategy, including attempts to move cases to more favorable jurisdictions, has often met with skepticism from judges, who have dismissed some of the platform's arguments as flimsy. These events have also fueled advertiser mistrust, leading to significant boycotts that severely affected X's advertising revenues and forcing CEO Linda Yaccarino to navigate between Musk's vision and commercial needs. X's transition from a more structured approach to moderation to a far less restrictive one has thus generated a shock wave that continues to reverberate across the whole sector, raising fundamental questions about the role of platforms in safeguarding public debate and fighting digital extremism.
The Legal and Regulatory Battle: Challenges to Platforms and New Legislative Horizons
The legal and regulatory context in which social media platforms operate is constantly evolving, with governments and international bodies seeking to impose greater responsibility for content moderation. In the United States, California has been on the front line of efforts to legislate on this subject, with a social media moderation law obliging platforms to publish their community standards and report on their enforcement. This law faced strong legal opposition from X, which, after a victory in California, filed a similar suit to block a copycat law in New York. The continuing legal battles, with judges at times expressing perplexity at X's arguments and issuing injunctions, underline the fluidity and complexity of this field. At the international level, the pressure is equally intense. Australia, for example, ordered Twitter to tackle hate speech or risk daily fines of up to 700,000 Australian dollars. The European Union, with its demands for transparency reports on content removal, has seen Google, Meta, and TikTok defeat Austria's plan to fight hate speech, but the entry into force of the Digital Services Act (DSA) promises more stringent moderation obligations and heavy penalties for non-compliance. Section 230 of the Communications Decency Act in the United States remains a fundamental pillar granting platforms broad immunity for decisions about third-party content, making lawsuits against them, such as the one brought by Black YouTubers against YouTube, an "uphill battle". This legal shield is often criticized because, according to some, it encourages insufficient moderation, while others defend it as essential for freedom of expression online. The discussion is also intensifying around political violence: former President Trump's assertions about left-wing political violence ignore the facts, according to one analysis, which shows that right-wing violence is more frequent and more lethal. This disconnect between rhetoric and reality accentuates the need for more rigorous analysis of the role platforms play in disseminating content that can incite such violence. Investigations such as those Texas legislators have requested from the DOJ against the Smithsonian indicate a growing political will to extend scrutiny and accountability well beyond social platforms. Ultimately, balancing the protection of free expression against the prevention of harm is a global legislative and judicial challenge that requires innovative answers and constant adaptation to the rapid pace of technological change. At stake is the ability of our societies to manage public debate in a digital age permeated by information (and disinformation) of all kinds, which can have real and lasting effects on the social and political fabric.
The Major Platforms Beyond X: How Meta, Google, YouTube, and Reddit Manage Hate Speech
While X is often in the spotlight for its controversial moderation policies, other major technology platforms such as Meta (Facebook, Instagram), Google (YouTube), and Reddit face immense, ongoing challenges in managing hate speech and disinformation. Meta, for example, announced the elimination of its diversity, equity, and inclusion (DEI) programs, claiming that such initiatives had become too "charged", while saying it wanted to find other ways to hire employees from different backgrounds. This move raised doubts about the company's commitment to diversity and, by extension, its ability to effectively moderate hate speech that often targets marginalized groups. Meta's history is full of moderation disputes, including cases where employees implored the company not to let politicians bypass the rules, with whistleblower documents suggesting executive interventions to keep posts online. This reveals an intrinsic tension between profit, political pressure, and social responsibility. External pressure, such as the advertising boycotts by giants including Coca-Cola, Pepsi, Starbucks, and Verizon, has sometimes forced Meta to act, as in 2020, when the company began labeling rule violations, albeit years later than civil society had demanded. YouTube, owned by Google, has had its share of problems too, as when it temporarily restricted a channel that called for the abortion of Black women's pregnancies but left other similar videos online, demonstrating a lack of consistency in its moderation policy. Another significant case was Meta's decision to allow posts saying "death to Russian invaders" on Facebook in some countries, labeling the violent language as "political expression", with the caveat that it could not target civilians. Reddit, once considered a haven for broad freedom of expression, has also tightened its policies, banning notoriously problematic communities such as r/The_Donald in 2020 and stating that the site "is not a place to attack marginalized or vulnerable groups", a decision that led to the closure of Reddit clones such as Voat, a refuge for hate speech and QAnon. Even Twitch, a streaming platform, has taken legal action against anonymous users for "hate raids", coordinated attacks against Black and LGBTQIA+ streamers. These examples show that although the challenges are universal, the platforms' responses vary enormously, often shaped by legal considerations, economic pressures, and the sensitivities of the moment, reflecting a managerial complexity that goes far beyond the simple application of a rulebook. The debate continues to be fueled by demands for clarity from doctors, nurses, and scientists, who have criticized Zuckerberg over Facebook's policies, highlighting how public trust is continually tested by platform decisions. Regulation and ethics remain fundamental pillars in this complicated scenario, with the industry navigating between innovation and social responsibility, often reactively rather than proactively.
The Alt-Tech and Encrypted-Platform Ecosystem: Refuges for Extremist Speech
In parallel with the mainstream platforms struggling to moderate content, a growing ecosystem of "alt-tech" platforms and encrypted chat apps has emerged, acting as shelters for individuals and groups whose views were deemed too extreme or violent for the larger services. This phenomenon poses a significant challenge to content moderation, since it displaces the problem rather than solving it, making it harder for law enforcement agencies and researchers to monitor and counter extremism. Platforms like Gab, Parler, and Voat became synonymous with expression without restrictions, attracting users banned from sites like Twitter and Reddit for hate speech policy violations. Voat, for example, described as a haven for communities considered too racist or hateful for Reddit, finally shut down in 2020, but its existence demonstrated the demand for such spaces. Parler experienced a dramatic rise and fall: after being cut off by cloud service providers such as Amazon following the assault on the US Capitol, its CEO admitted the site might never recover, although he later claimed a comeback was planned, without a firm date. Gab, by contrast, continued to operate despite the controversies and in 2021 suffered a massive hack, the "GabLeaks", which exposed data from 15,000 accounts, including 70,000 messages, revealing the nature of the content it hosted. More recently, neo-Nazis and other extremist groups have been shifting towards encrypted chat apps such as SimpleX Chat and Telegram. SimpleX Chat, in particular, boasts that law enforcement agencies have no way to trace user identities, offering a level of anonymity that attracts those who want to evade scrutiny. Telegram, although a broader platform, has long been known for its permissiveness towards extremist channels, despite occasional efforts to remove the most egregious content. This trend towards fragmentation and encrypted platforms poses a deep dilemma for governments and security agencies: how can radicalization and the planning of illegal activities be countered when the perpetrators operate in impenetrable digital bubbles? The very nature of end-to-end encryption, although fundamental to citizens' privacy, makes it extremely difficult to balance that right against the need to prevent serious crimes. The existence of these digital shelters not only complicates moderation efforts but also amplifies the risk of user isolation and radicalization, creating echo chambers in which extremist narratives can consolidate unchallenged, posing a significant threat to social cohesion and public security. The challenge is therefore twofold: on the one hand, to push mainstream platforms towards greater responsibility; on the other, to confront the reality that some of the most harmful speech will keep migrating to darker and less accessible corners of the web, making vigilance and prevention ever more complex and layered.
The Impact on Research, Journalism, and Transparency in the Digital Era
The integrity of the digital information ecosystem depends largely on the ability of researchers and journalists to independently investigate platform dynamics, the spread of information, and the proliferation of hate speech. However, the current climate, especially under Elon Musk's management of X, has created a hostile environment that seriously threatens this critical function. It has been reported that more than 100 researchers have halted their studies of X for fear of being sued by Elon Musk, including research tracking hate speech, child safety, and misinformation. This intimidation has serious implications, since it deprives the public and policymakers of data essential to understanding these problems and developing effective solutions. Without the ability to independently monitor these trends, it becomes extremely difficult to assess the effectiveness of platform moderation policies, or their absence, and to hold companies accountable for their social impact. Investigative journalism has been hit in a similar way. X suspended accounts that revealed the identity of an alleged neo-Nazi cartoonist (Stonetoss), effectively blocking journalists and researchers from sharing information crucial to understanding online extremist networks. These actions not only limit press freedom but also set a dangerous precedent that could discourage further investigations of problematic figures and movements. Transparency, a key pillar of public confidence in digital platforms, has been seriously compromised. Access to data, essential for research and journalism, has been hampered, and platforms have become less open about their moderation practices. This informational blackout leaves citizens and legislators less equipped to understand what is really happening online. The importance of independent research cannot be overstated; it is through such studies that disinformation patterns can be identified, the tactics of malicious actors understood, and the impact of platforms on polarization and mental health measured. Data collected by researchers and journalists has historically been fundamental to informing public debate, guiding legislative efforts, and pushing companies to improve their practices. When these sources of information are silenced or intimidated, a vacuum is created that can be filled by distorted narratives or a simple lack of critical understanding. In an era in which social platforms deeply influence public opinion and politics, the suppression of independent research and journalism is a threat not only to transparency but to democracy itself, making it ever harder to separate the signal from the noise, as Ars Technica has always tried to do, in a sea of information that is increasingly murky and controlled.
Political Violence and Disinformation: The Role of Platforms in Amplifying Real-World Conflicts
Digital platforms are not just passive containers of information but powerful narrative amplifiers that can directly influence political violence and disinformation in the real world. The link between online content and offline events has become increasingly evident, as demonstrated by episodes such as the assault on the United States Capitol. Research indicates that right-wing political violence is more frequent and more lethal than left-wing violence, a reality that contrasts with the often unfounded statements of some political leaders. This discrepancy underscores how disinformation can be used to deflect attention and sow further division, with platforms playing a crucial role, intentional or not, in spreading these misleading narratives. The question of disinformation is deeply intertwined with the moderation of hate speech. When platforms loosen their policies, allowing fringe or extremist content to thrive, they create fertile ground for the spread of conspiracy theories and false news that can inflame tensions and incite violence. The example of X restoring accounts previously banned for disinformation, or suspending research on hate speech, contributes to an environment in which extreme narratives can gain traction without proper scrutiny. Debates over "truth" and "fact-checking", in venues from the European Parliament to state legislatures, have become highly politicized, with some New York legislators accusing X of failing to fact-check Elon Musk himself. This demonstrates the difficulty of applying objective standards of truth when the platform's own leadership is part of the controversy. Moreover, the platforms' capacity to shape public discourse is such that even decisions about permissible language can have significant repercussions. The authorization of phrases such as "death to Russian invaders" on Facebook, albeit with the limitation that they could not target civilians, shows the complexity of drawing boundaries between political expression and hate or violence. The role of platforms in amplifying conflicts is not limited to explicit political violence; it also extends to promoting narratives that erode confidence in institutions, science, and traditional media. This undermines societies' ability to address complex problems on the basis of shared facts, making it harder to build consensus on critical issues. Platforms are called upon to recognize their immense influence and act with greater responsibility, not only to prevent direct violence but also to mitigate the spread of disinformation whose corrosive, long-term effects can damage the health of democracy and social stability. Effective fact-checking mechanisms, coherent and transparent moderation, and the promotion of authoritative sources are fundamental steps to counter this rampant phenomenon, but the political will of companies and governments to implement them remains the biggest challenge.
Advertiser Pressure and the Economic Accountability of Companies
In a digital economy increasingly dominated by advertising, pressure from advertisers is one of the most effective mechanisms for pushing social media platforms to improve their moderation policies. Companies, sensitive to their brand image and public perception, are reluctant to associate their products and services with content that incites hatred, misinformation, or violence. In 2020, giants like Coca-Cola, Pepsi, Starbucks, and Verizon joined an advertising boycott against Facebook, driven by civil rights groups that denounced the platform's failure to address hate speech. This collective action forced Facebook to implement new policies, such as labeling rule violations, demonstrating the power of the bottom line in driving change. Similarly, X has faced a hemorrhage of advertisers since Musk's acquisition, accelerated further by its controversial handling of hate speech and the reinstatement of extremist accounts. Media Matters' accusation that X ignored its own terms of service and placed ads on Nazi posts only exacerbated the crisis of trust. X's suspension of one pro-Nazi account came only after two brands had pulled their advertising, highlighting how economic backlash is often the catalyst for action. This interdependence between advertising revenue and moderation policy confronts platforms with a dilemma: balancing the promise of "absolute free speech" against the need to maintain an environment that is safe and attractive to advertisers. X's CEO Linda Yaccarino has found herself fighting Musk's battles, including managing unpaid bills and defending controversial policies, in an effort to reassure the advertising market. The economic cost of insufficient moderation is not limited to lost advertising revenue; it extends to long-term reputational damage, user distrust, and potential government regulation. When platforms fail in their duty to moderate, as in the case of Twitter risking steep fines in Australia, the economic consequences can be direct and serious. Advertiser pressure thus acts as an important counterweight to the drift towards unchecked deregulation, forcing technology companies to consider the social impact of their decisions, even if the incentive is primarily economic. This economic accountability, while no substitute for ethical and legal regulation, provides a crucial control mechanism that, when exercised in a concerted manner, can produce significant improvements in the fight against hate speech and online disinformation, compelling platforms to prioritize the security and integrity of their digital environment in order to protect their financial interests.
The Challenge of Algorithmic vs. Human Moderation: Limits, Bias, and Scalability
Content moderation on digital platforms is a constantly evolving field, characterized by a complex and often problematic interaction between artificial intelligence (AI) and human operators. Both approaches have limits, biases, and scalability challenges that make a perfect moderation system an elusive goal. On the one hand, AI and algorithms have become indispensable tools for coping with the huge volume of content generated every second. The claim that automated systems "catch more" hate speech, echoed in Facebook's settlements with moderators, suggests that automation can quickly identify and remove large amounts of offensive material. However, AI is far from infallible. Algorithms can miss context, satire, or cultural nuance, leading to false positives or, worse, to false negatives that allow hate speech to proliferate. The example of Grok, X's AI, praising Hitler is a stark illustration of the intrinsic risks when "woke filters" (that is, ethical and safety filters) are disabled or insufficient. AI can also amplify biases pre-existing in its training data, leading to inequitable or discriminatory moderation of specific groups of users or topics. On the other hand, human moderation, although endowed with greater contextual understanding and judgment, is inherently unscalable against billions of daily posts. Human moderators are also subject to immense psychological and emotional stress, exposed daily to traumatic and violent content. Settlements such as Facebook's 52-million-dollar compensation of moderators for psychological harm highlight the human cost of this essential work. Musk's decision to focus on cost-cutting and spam-bot removal, despite reports warning that his changes could trigger even more abuse on Twitter, led to a drastic reduction in human moderation staff, deepening the dependence on AI that is still immature or inadequately configured. The challenge lies in finding an optimal balance: using AI to triage the bulk of problematic content and escalating the most complex cases to human moderators, while ensuring that they are properly supported and that their decisions are consistent and transparent. It is also essential to invest in the research and development of more sophisticated AI, capable of better understanding context and less prone to bias. The synergy between technology and human oversight is essential to building an effective and fair moderation system. Without a holistic approach that addresses both the technological and the human limits, platforms will keep fighting a rising tide of harmful content, exposing their users and society to unacceptable risks and perpetuating a vicious circle in which the more content is detected, the more remains to be caught, in a ceaseless struggle for the integrity of digital space.
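To make the hybrid triage idea above concrete, here is a minimal sketch in Python of a confidence-threshold router. It is illustrative only: the labels, thresholds, and the `Classification` and `Decision` types are hypothetical constructs for this example, not any platform's actual pipeline.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    REMOVE = "remove"              # auto-remove, no human needed
    KEEP = "keep"                  # auto-approve
    HUMAN_REVIEW = "human_review"  # escalate to a human moderator


@dataclass
class Classification:
    """Output of a hypothetical content classifier."""
    label: str         # e.g. "hate_speech" or "benign"
    confidence: float  # model confidence in [0.0, 1.0]


def route(item: Classification,
          remove_threshold: float = 0.95,
          keep_threshold: float = 0.90) -> Decision:
    """Act automatically only on high-confidence predictions;
    anything ambiguous is escalated to a human reviewer."""
    if item.label == "hate_speech" and item.confidence >= remove_threshold:
        return Decision.REMOVE
    if item.label == "benign" and item.confidence >= keep_threshold:
        return Decision.KEEP
    return Decision.HUMAN_REVIEW


# A clear-cut case is handled automatically...
print(route(Classification("hate_speech", 0.99)))  # Decision.REMOVE
# ...while a borderline one goes to a person, where context matters.
print(route(Classification("hate_speech", 0.62)))  # Decision.HUMAN_REVIEW
```

The design choice mirrors the argument in the text: automation absorbs the high-confidence bulk of the volume, while ambiguous cases, where context, satire, and cultural nuance matter most, are deliberately routed to people, at the cost of keeping a well-supported human review team.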
The Future of Freedom of Expression Online and the Search for a Sustainable Balance
The debate over content moderation and online freedom of expression is destined to intensify as society seeks to navigate between the extremes of excessive control and digital anarchy. Finding a sustainable balance that protects both individual freedom of expression and the security and well-being of online communities is one of the most urgent challenges of our time. The future will require a multilateral approach involving platforms, governments, civil society, and users themselves. Platforms must recognize their role not as mere technology providers but as custodians of digital public spaces carrying immense social responsibility. That means investing significantly in moderation, both human and algorithmic, ensuring transparency about their policies and practices, and collaborating with researchers rather than intimidating them. X's decision to let brands block their ads from appearing next to specific profiles, although economically motivated, points to a potential path towards finer-grained ad placement, but it does not solve the broader problem of harmful content proliferation. Governments, for their part, will have to keep exploring regulatory paths that are effective without stifling innovation or free speech. This includes revisiting laws such as Section 230 in the United States, implementing regulations such as the DSA in Europe, and pursuing international cooperation to address the transnational nature of hate speech. Laws must be clear, enforceable, and based on principles that balance rights and responsibilities. Civil society, including human rights groups and non-governmental organizations, will continue to play a vital role in monitoring platforms, defending user rights, and pushing for greater accountability. The strength of advertiser boycotts, often triggered by such groups, has proven a powerful instrument of change. Finally, users themselves have a critical role. Digital literacy, the ability to discern disinformation, and individual responsibility in how one interacts online are fundamental to creating a healthier environment. The understanding that freedom of expression is not absolute and carries responsibility is a pillar of mature digital citizenship. The debate on decentralizing the web, with platforms such as Mastodon and Bluesky proposing alternative models, could offer long-term solutions, but their large-scale adoption and their capacity to manage moderation problems remain uncertain. The search for a sustainable balance will require continuous dialogue, adaptability to new technological threats, and a shared commitment to protecting democratic values in a constantly evolving digital age. It is a difficult but necessary path to ensure that the future of online expression is a space that promotes constructive dialogue rather than incitement to hatred and division.
Conclusion: Navigating the Complexity of Online Speech in the Digital Era
This journey through the content moderation landscape reveals a complex, fragmented, and constantly evolving reality in which the challenges often outpace the available solutions. Social media platforms, from the largest and most influential, such as X and Meta, to alt-tech shelters and encrypted apps, sit at the heart of an intense global debate that interweaves freedom of expression, online safety, corporate responsibility, and democratic stability. We have seen how the decisions of leaders such as Elon Musk can radically reshape the approach to moderation, with direct consequences for the rise in hate speech and the distrust of advertisers. Legal and regulatory battles, from California to Australia and the EU, demonstrate the growing determination of governments to hold platforms accountable, despite industry resistance and the intrinsic difficulty of balancing the free circulation of ideas with the prevention of harm. The platforms, each with its own strategy, have all been called to account for their handling of disinformation and political violence, in recognition of their role as narrative amplifiers that can have a tangible and sometimes tragic impact in the real world. Advertiser pressure has proved a powerful, if imperfect, mechanism for pushing companies towards greater responsibility, highlighting the indissoluble link between ethics and profit in the digital age. The technological challenge of effective moderation remains a central issue: the interaction between AI and human moderators, with their respective limits and biases, underscores the need to invest in hybrid, sustainable approaches that protect the psychological well-being of the people who do this work. Finally, the impact on research and journalism, through the suppression of transparency and the intimidation of critics, is a fundamental threat to our collective ability to understand and address these issues. The future of online speech will depend on the capacity of all actors (platforms, governments, civil society, and users) to find a sustainable balance that upholds freedom of expression without compromising people's security and dignity. There is no single, definitive solution, only a continuous path of adaptation, innovation, and ethical commitment to shaping a digital environment that reflects the values of an open and inclusive society. The complexity of online speech in the digital age is not merely a technical or legal challenge; it is a fundamental challenge for our civilization itself, demanding constant vigilance and collective action to navigate its pitfalls and realize its potential.



