By Anastasios Arampatzis

The European Union’s Digital Services Act (DSA) stands as a landmark effort to bring greater order to the often chaotic realm of the internet. This sweeping legislation aims to establish clear rules and responsibilities for online platforms, addressing concerns ranging from consumer protection to combating harmful content. Yet, within the DSA’s well-intentioned provisions lies a fundamental tension that continues to challenge democracies worldwide: how do we ensure a safer, more civil online environment without infringing on the essential liberties of free expression?

This blog delves into the complexities surrounding the DSA’s provisions on chat and content control. We’ll explore how the fight against online harms, including the spread of misinformation and deepfakes, must be carefully weighed against the dangers of censorship and the chilling of legitimate speech. It’s a balancing act with far-reaching consequences for the future of our digital society.

Online Harms and the DSA’s Response

The digital realm, for all its promise of connection and knowledge, has become a breeding ground for a wide range of online harms. Misinformation and disinformation campaigns erode trust and sow division, while hate speech fuels discrimination and violence against marginalized groups. Cyberbullying shatters lives, particularly those of vulnerable young people. The DSA acknowledges these dangers and seeks to address them head-on.

The DSA places new obligations on online platforms, particularly Very Large Online Platforms (VLOPs) with a significant reach. These requirements include:

  • Increased transparency: Platforms must explain how their algorithms work and the criteria they use for recommending and moderating content.
  • Accountability: Companies will face potential fines and sanctions for failing to properly tackle illegal and harmful content.
  • Content moderation: Platforms must outline clear policies for content removal and implement effective, user-friendly systems for reporting problematic content.

The goal of these DSA provisions is to create a more responsible digital ecosystem where harmful content is less likely to flourish and where users have greater tools to protect themselves.

The Censorship Concern

While the DSA’s intentions are admirable, its measures to combat online harms raise legitimate concerns about censorship and the potential suppression of free speech. History is riddled with instances where the fight against harmful content has served as a pretext to silence dissenting voices, critique those in power, or suppress marginalized groups.

Civil society organizations have stressed the need for the DSA to include clear safeguards to prevent its well-meaning provisions from becoming tools of censorship. It’s essential to have precise definitions of “illegal” and “harmful” content, limited to material that directly incites violence or breaks existing laws. Overly broad definitions risk encompassing satire, political dissent, and artistic expression, which are all protected forms of speech.

Suppressing these forms of speech under the guise of safety can have a chilling effect, discouraging creativity, innovation, and the open exchange of ideas vital to a healthy democracy. It’s important to remember that what offends one person might be deeply important to another. The DSA must tread carefully to avoid empowering governments or platforms to unilaterally decide what constitutes acceptable discourse.

Deepfakes and the Fight Against Misinformation

Deepfakes, synthetic media manipulated to misrepresent reality, pose a particularly insidious threat to the integrity of information. Their ability to make it appear as if someone said or did something they never did has the potential to ruin reputations, undermine trust in institutions, and even destabilize political processes.

The DSA rightfully recognizes the danger of deepfakes and places an obligation on platforms to make efforts to combat their harmful use. However, this is a complex area where the line between harmful manipulation and legitimate uses can become blurred. Deepfake technology can also be harnessed for satire, parody, or artistic purposes.

The challenge for the DSA lies in identifying deepfakes created with malicious intent while protecting those generated for legitimate forms of expression. Platforms will likely need to develop a combination of technological detection tools and human review mechanisms to make these distinctions effectively.
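Purely as an illustration of the layered approach described above, the triage logic might look something like the following sketch. The function name, score source, and thresholds are all hypothetical; they are not prescribed by the DSA or drawn from any real platform’s pipeline.

```python
# Hypothetical triage sketch combining an automated deepfake-detector
# score with escalation to human review. All names and thresholds are
# illustrative assumptions, not part of the DSA or any real system.

def triage(detector_score: float,
           auto_flag_threshold: float = 0.95,
           review_threshold: float = 0.60) -> str:
    """Route an item based on an automated detector's confidence score.

    High-confidence detections are flagged for removal; ambiguous cases
    (where satire or parody may be involved) are escalated to human
    reviewers; low-scoring content is left alone.
    """
    if detector_score >= auto_flag_threshold:
        return "flag-for-removal"
    if detector_score >= review_threshold:
        return "human-review"
    return "allow"

print(triage(0.98))  # flag-for-removal
print(triage(0.75))  # human-review
print(triage(0.10))  # allow
```

The middle band is the crucial design choice here: routing borderline scores to humans rather than auto-removing them is one way a platform could protect legitimate satire and parody while still acting quickly on clear-cut manipulation.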

The Responsibility of Tech Giants

When it comes to spreading harmful content and the potential for online censorship, a large portion of the responsibility falls squarely on the shoulders of major online platforms. These tech giants play a central role in shaping the content we see and how we interact online.

The DSA directly addresses this immense power by imposing stricter requirements on the largest platforms, those deemed Very Large Online Platforms. These requirements are designed to promote greater accountability and push these platforms to take a more active role in curbing harmful content.

A key element of the DSA is the push for transparency. Platforms will be required to provide detailed explanations of their content moderation practices, including the algorithms used to filter and recommend content. This increased visibility aims to prevent arbitrary or biased decision-making and offers users greater insight into the mechanisms governing their online experiences.

Protecting Free Speech – Where Do We Draw the Line?

The protection of free speech is a bedrock principle of any democratic society. It allows for the robust exchange of ideas, challenges to authority, and provides a voice for those on the margins. Yet, as the digital world has evolved, the boundaries of free speech have become increasingly contested.

The DSA represents an honest attempt to navigate this complex terrain, but it’s vital to recognize that there are no easy answers. The line between harmful content and protected forms of expression is often difficult to discern. The DSA’s implementation must include strong safeguards informed by fundamental human rights principles to ensure space for diverse opinions and critique.

In this effort, we should prioritize empowering users. Investing in media literacy education and promoting tools for critical thinking are essential in helping individuals become more discerning consumers of online information.

Conclusion

The Digital Services Act signals an important turning point in regulating the online world. The struggle to balance online safety and freedom of expression is far from over. The DSA provides a strong foundation but needs to be seen as a step in an ongoing process, not a final solution. To ensure a truly open, democratic, and safe internet, we need continuing vigilance, robust debate, and the active participation of both individuals and civil society.