We published an article on Nomiki Bibliothiki's website about CSAR

This is a crucial period for the proposed European Regulation on preventing and combating child sexual abuse online (Child Sexual Abuse Regulation – CSAR).

In the context of these important developments, our co-founding members Eleftherios Chelioudakis and Stefanos Vitoratos prepared a detailed article for the NB Daily website to highlight the relevant issues.

We would like to thank the editorial team of Nomiki Bibliothiki for their cooperation.

You can read the article here.


Digital Services Act: Striking a Balance Between Online Safety and Free Expression

By Anastasios Arampatzis

The European Union’s Digital Services Act (DSA) stands as a landmark effort to bring greater order to the often chaotic realm of the internet. This sweeping legislation aims to establish clear rules and responsibilities for online platforms, addressing a range of concerns from consumer protection to combating harmful content. Yet, within the DSA’s well-intentioned provisions lies a fundamental tension that continues to challenge democracies worldwide: how do we ensure a safer, more civil online environment without infringing on the essential liberties of free expression?

This blog delves into the complexities surrounding the DSA’s provisions on chat and content control. We’ll explore how the fight against online harms, including the spread of misinformation and deepfakes, must be carefully weighed against the dangers of censorship and the chilling of legitimate speech. It’s a balancing act with far-reaching consequences for the future of our digital society.

Online Harms and the DSA’s Response

The digital realm, for all its promise of connection and knowledge, has become a breeding ground for a wide range of online harms. Misinformation and disinformation campaigns erode trust and sow division, while hate speech fuels discrimination and violence against marginalized groups. Cyberbullying shatters lives, particularly those of vulnerable young people. The DSA acknowledges these dangers and seeks to address them head-on.

The DSA places new obligations on online platforms, particularly Very Large Online Platforms (VLOPs) with a significant reach. These requirements include:

  • Increased transparency: Platforms must explain how their algorithms work and the criteria they use for recommending and moderating content.
  • Accountability: Companies will face potential fines and sanctions for failing to properly tackle illegal and harmful content.
  • Content moderation: Platforms must outline clear policies for content removal and implement effective, user-friendly systems for reporting problematic content.

The goal of these DSA provisions is to create a more responsible digital ecosystem where harmful content is less likely to flourish and where users have greater tools to protect themselves.

The Censorship Concern

While the DSA’s intentions are admirable, its measures to combat online harms raise legitimate concerns about censorship and the potential suppression of free speech. History is riddled with instances where the fight against harmful content has served as a pretext to silence dissenting voices, critique those in power, or suppress marginalized groups.

Civil society organizations have stressed the need for the DSA to include clear safeguards to prevent its well-meaning provisions from becoming tools of censorship. It’s essential to define “illegal” and “harmful” content precisely, limiting these categories to material that directly incites violence or breaks existing laws. Overly broad definitions risk encompassing satire, political dissent, and artistic expression, all of which are protected forms of speech.

Suppressing these forms of speech under the guise of safety can have a chilling effect, discouraging creativity, innovation, and the open exchange of ideas vital to a healthy democracy. It’s important to remember that what offends one person might be deeply important to another. The DSA must tread carefully to avoid empowering governments or platforms to unilaterally decide what constitutes acceptable discourse.

Deepfakes and the Fight Against Misinformation

Deepfakes, synthetic media manipulated to misrepresent reality, pose a particularly insidious threat to the integrity of information. Their ability to make it appear as if someone said or did something they never did has the potential to ruin reputations, undermine trust in institutions, and even destabilize political processes.

The DSA rightfully recognizes the danger of deepfakes and places an obligation on platforms to make efforts to combat their harmful use. However, this is a complex area where the line between harmful manipulation and legitimate uses can become blurred. Deepfake technology can also be harnessed for satire, parody, or artistic purposes.

The challenge for the DSA lies in identifying deepfakes created with malicious intent while protecting those generated for legitimate forms of expression. Platforms will likely need to develop a combination of technological detection tools and human review mechanisms to make these distinctions effectively.
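To make the idea of combining automated detection with human review concrete, here is a minimal triage sketch. The thresholds and the three-way routing are illustrative assumptions, not any platform’s actual policy:

```python
from dataclasses import dataclass

# Hypothetical thresholds -- a real platform would tune these empirically
# and likely use many more signals than a single detector score.
AUTO_LABEL_THRESHOLD = 0.95    # very likely synthetic: label automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain: queue for a human moderator

@dataclass
class TriageResult:
    action: str   # "label", "review", or "allow"
    score: float

def triage(detector_score: float) -> TriageResult:
    """Route a piece of media based on a detector's synthetic-media score (0-1)."""
    if detector_score >= AUTO_LABEL_THRESHOLD:
        return TriageResult("label", detector_score)
    if detector_score >= HUMAN_REVIEW_THRESHOLD:
        return TriageResult("review", detector_score)
    return TriageResult("allow", detector_score)
```

The point of the middle band is exactly the distinction the DSA demands: content the machine cannot confidently classify – satire, parody, artistic uses – goes to a human rather than being removed automatically.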

The Responsibility of Tech Giants

When it comes to spreading harmful content and the potential for online censorship, a large portion of the responsibility falls squarely on the shoulders of major online platforms. These tech giants play a central role in shaping the content we see and how we interact online.

The DSA directly addresses this immense power by imposing stricter requirements on the largest platforms, those deemed Very Large Online Platforms. These requirements are designed to promote greater accountability and push these platforms to take a more active role in curbing harmful content.

A key element of the DSA is the push for transparency. Platforms will be required to provide detailed explanations of their content moderation practices, including the algorithms used to filter and recommend content. This increased visibility aims to prevent arbitrary or biased decision-making and offers users greater insight into the mechanisms governing their online experiences.

Protecting Free Speech – Where Do We Draw the Line?

The protection of free speech is a bedrock principle of any democratic society. It allows for the robust exchange of ideas, challenges to authority, and provides a voice for those on the margins. Yet, as the digital world has evolved, the boundaries of free speech have become increasingly contested.

The DSA represents an honest attempt to navigate this complex terrain, but it’s vital to recognize that there are no easy answers. The line between harmful content and protected forms of expression is often difficult to discern. The DSA’s implementation must include strong safeguards informed by fundamental human rights principles to ensure space for diverse opinions and critique.

In this effort, we should prioritize empowering users. Investing in media literacy education and promoting tools for critical thinking are essential in helping individuals become more discerning consumers of online information.

Conclusion

The Digital Services Act signals an important turning point in regulating the online world. The struggle to balance online safety and freedom of expression is far from over. The DSA provides a strong foundation but needs to be seen as a step in an ongoing process, not a final solution. To ensure a truly open, democratic, and safe internet, we need continuing vigilance, robust debate, and the active participation of both individuals and civil society.


The Looming Disinformation Crisis: How AI is Weaponizing Misinformation in the Age of Elections

By Anastasios Arampatzis

Misinformation is as old as politics itself. From forged pamphlets to biased newspapers, those seeking power have always manipulated information. Today, a technological revolution threatens to take disinformation to unprecedented levels. Generative AI tools, capable of producing deceptive text, images, and videos, give those who seek to mislead an unprecedented arsenal. In 2024, as a record number of nations hold elections, including the EU Parliamentary elections in June, the very foundations of our democracies tremble as deepfakes and tailored propaganda threaten to drown out truth.

Misinformation in the Digital Age

In the era of endless scrolling and instant updates, misinformation spreads like wildfire on social media. It’s not just about intentionally fabricated lies; it’s the half-truths, rumors, and misleading content that gain momentum, shaping our perceptions and sometimes leading to real-world consequences.

Think of misinformation as a distorted funhouse mirror: false or misleading information presented as fact, regardless of whether there’s an intent to deceive. It can be a catchy meme with a dubious source, a misquoted scientific finding, or a cleverly edited video that feeds a specific narrative. Unlike disinformation, which is the deliberate spread of falsehoods, misinformation can creep into our news feeds even when shared with good intentions.

How the Algorithms Push the Problem

Social media platforms are driven by algorithms designed to keep us engaged. They prioritize content that triggers strong emotions – outrage, fear, or clickbait-worthy sensationalism. Unfortunately, the truth is often less exciting than emotionally charged misinformation. These algorithms don’t discriminate based on accuracy; they fuel virality. With every thoughtless share or angry comment, we further amplify misleading content.

The Psychology of Persuasion

It’s easy to blame technology, but the truth is we humans are wired in ways that make us susceptible to misinformation. Here’s why:

  • Confirmation Bias: We tend to favor information that confirms what we already believe, even if it’s flimsy. If something aligns with our worldview, we’re less likely to question its validity.
  • Lack of Critical Thinking: In a fast-paced digital world, many of us lack the time or skills to fact-check every claim we encounter. Pausing to assess the credibility of a source or the logic of an argument is not always our default setting.

How Generative AI Changes the Game

Generative AI models learn from massive datasets, enabling them to produce content indistinguishable from human-created work. Here’s how this technology complicates the misinformation landscape:

  • Deepfakes: AI-generated videos can convincingly place people in situations they never were or make them say things they never did. This makes it easier to manufacture compromising or inflammatory “evidence” to manipulate public opinion.
  • Synthetic Text: AI tools can churn out large amounts of misleading text, like fake news articles or social media posts designed to sound authentic. This can overwhelm fact-checking efforts.
  • Cheap and Easy Misinformation: The barrier to creating convincing misinformation keeps getting lower. Bad actors don’t need sophisticated technical skills; simple AI tools can amplify their efforts.

The Dangers of Misinformation

The impact of misinformation goes well beyond hurt feelings. It can:

  • Pollute Public Discourse: Misinformation hinders informed debate. It leads to misunderstandings about important issues and makes finding consensus difficult.
  • Erode Trust: When we can’t agree on basic facts, trust in institutions, science, and even the democratic process breaks down.
  • Targeted Manipulation: AI tools can allow for highly personalized misinformation campaigns that prey on specific vulnerabilities or biases of individuals and groups.
  • Influence Decisions: Misinformation can sway personal decisions, from how people vote to whether they embrace radical agendas.

What Can Be Done?

There is no single, easy answer for combating the spread of misinformation. Disinformation thrives in a complicated web of human psychology, technological loopholes, and political agendas. However, recognizing these challenges is the first step toward building effective solutions. Here are some crucial areas to focus on:

  • Boosting Tech Literacy: In a digital world, the ability to distinguish reliable sources from questionable ones is paramount. Educational campaigns, workshops, and accessible online resources should aim to teach the public how to spot red flags for fake news: sensational headlines, unverified sources, poorly constructed websites, or emotionally charged language.
  • Investing in Fact-Checking: Supporting independent fact-checking organizations is key. These act as vital watchdogs, scrutinizing news, politicians’ claims, and viral content. Media outlets should consider prominently labeling content that has been verified or clearly marking potentially misleading information.
  • Balancing Responsibility & Freedom: Social media companies and search engines bear significant responsibility for curbing the flow of misinformation. The EU’s Digital Services Act (DSA) underscores this responsibility, placing requirements on platforms to tackle harmful content. However, this is a delicate area, as heavy-handed censorship can undermine free speech. Strategies such as demoting unreliable sources, partnering with fact-checkers, and providing context about suspicious content can help, but finding the right balance is an ongoing struggle, even in the context of evolving regulations like the DSA.
  • The Importance of Personal Accountability: Even with institutional changes, individuals play a vital role. It’s essential to be skeptical, ask questions about where information originates, and be mindful of the emotional reactions a piece of content stirs up. Before sharing anything, verify it with a reliable source. Pausing and thinking critically can break the cycle of disinformation.

The fight against misinformation is a marathon, not a sprint. As technology evolves, so too must our strategies. We must remain vigilant to protect free speech while safeguarding the truth.


From Clean Monday to Cyber Cleanliness: Bridging Traditions with Modern Cyber Hygiene Practices

By Anastasios Arampatzis and Ioannis Vassilakis

In the heart of Greek tradition lies Clean Monday, which marks the beginning of Lent leading to Easter and symbolizes a fresh start, encouraging cleanliness, renewal, and preparation for the season ahead. This day, celebrated with kite flying, outdoor activities, and cleansing of the soul, carries profound significance as a purification of all aspects of one’s life.

Just as Clean Monday invites us to declutter our homes and minds, there exists a parallel in the digital realm that often goes overlooked: cyber hygiene. Maintaining a clean and secure online presence is imperative in an era where our lives are intertwined with the digital world more than ever.

Understanding Cyber Hygiene

Cyber hygiene refers to the practices and steps that individuals take to maintain system health and improve online security. These practices are akin to personal hygiene routines; just as regular handwashing can prevent the spread of illness, everyday cyber hygiene practices can protect against cyber threats such as malware, phishing, and identity theft.

The importance of cyber hygiene cannot be overstated. In today’s interconnected world, a single vulnerability can lead to a cascade of negative consequences, affecting not just the individual but also organizations and even national security. The consequences of neglecting cyber hygiene can be severe:

  • Data breaches.
  • Identity theft.
  • Loss of privacy.

As we celebrate Clean Monday and its cleansing rituals, we should also adopt cyber hygiene practices to prepare for a secure and private digital future free from cyber threats.

Clean Desk and Desktop Policies – The Foundation of Cyber Cleanliness

Just as Clean Monday encourages us to purge our homes of unnecessary clutter, a clean desk and desktop policy is the cornerstone of maintaining a secure and efficient workspace, both physically and digitally. These policies are not just about keeping a tidy desk; they’re about safeguarding sensitive information from prying eyes and ensuring that critical data isn’t lost amidst digital clutter.

  • Clean Desk Policy ensures that sensitive documents, notes, and removable storage devices are secured when not in use or when an employee leaves their desk. It’s about minimizing the risk of sensitive information falling into the wrong hands, intentionally or accidentally.
  • Clean Desktop Policy focuses on the digital landscape, advocating for a well-organized computer desktop. This means regularly archiving or deleting unused files, managing icons, and ensuring that sensitive information is not exposed through screen savers or unattended open documents.

The benefits of these policies are profound:

  • Reduced risk of information theft.
  • Increased efficiency and enhanced productivity.
  • Enhanced professional image and competence.

The following simple tips can help you maintain cleanliness:

  1. Implement a Routine: Just as the rituals of Clean Monday are ingrained in our culture, incorporate regular clean-up routines for physical and digital workspaces.
  2. Secure Sensitive Information: Use locked cabinets for physical documents and password-protected folders for digital files.
  3. Adopt Minimalism: Keep only what you need on your desk and desktop. Archive or delete old files and dispose of unnecessary paperwork.

Navigating the Digital Landscape: Ad Blockers and Cookie Banners

Using ad blockers and understanding cookie banners are essential for maintaining a clean and secure online browsing experience. As we carefully select what to keep in our homes, we must also choose what to allow into our digital spaces.

  • Ad Blockers prevent advertisements from being displayed on websites. While ads can be a source of information and revenue for site owners, they can also be intrusive, slow down web browsing, and sometimes serve as a vector for malware.
  • Cookie Banners inform users about a website’s use of cookies. Understanding and managing these consents can significantly enhance your online privacy and security.

To achieve a cleaner browsing experience:

  • Choose reputable ad-blocking software that balances effectiveness with respect for websites’ revenue models. Some ad blockers allow non-intrusive ads to support websites while blocking harmful content.
  • Take the time to read and understand what you consent to when you agree to a website’s cookie policy. Opt for settings that minimize tracking and personal data collection where possible.
  • Regularly review and clean up your browser’s permissions and stored cookies to ensure your online environment remains clutter-free and secure.


Cultivating Caution in Digital Interactions

In the same way that Clean Monday prompts us to approach our physical and spiritual activities with mindfulness and care, we must also navigate our digital interactions with caution and deliberateness. While brimming with information and connectivity, the digital world also harbors risks such as phishing scams, malware, and data breaches.

  • Verify Before You Click: Ensure the authenticity of websites before entering sensitive information, and be skeptical of emails or messages from unknown sources.
  • Use BCC in Emails When Appropriate: Sending emails, especially to multiple recipients, should be handled carefully to protect everyone’s privacy. Using Blind Carbon Copy (BCC) ensures that recipients’ email addresses are not exposed to everyone on the list.
  • Recognize and Avoid Phishing Attempts: Phishing emails are the digital equivalent of wolves in sheep’s clothing, often masquerading as legitimate requests. Learning to recognize these attempts can protect you from giving away sensitive information to the wrong hands.
  • Embrace skepticism in your online interactions: Ask yourself whether information shared is necessary, whether links are safe to click, and whether personal data needs to be disclosed.
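Some of this skepticism can be turned into simple checks. The sketch below is a toy heuristic for spotting common phishing red flags – the phrase list and trusted-domain set are invented for illustration and are nowhere near a real spam filter:

```python
import re

# Illustrative red-flag phrases only -- real filters use far richer signals.
SUSPICIOUS_PHRASES = [
    r"verify your account",
    r"urgent action required",
    r"account (?:is )?suspended",
    r"click here immediately",
]

def phishing_red_flags(subject: str, body: str, sender_domain: str,
                       trusted_domains: set[str]) -> list[str]:
    """Return a list of heuristic warning signs found in an email."""
    flags = []
    if sender_domain.lower() not in trusted_domains:
        flags.append(f"unrecognized sender domain: {sender_domain}")
    text = f"{subject} {body}".lower()
    for pattern in SUSPICIOUS_PHRASES:
        if re.search(pattern, text):
            flags.append(f"suspicious phrase: {pattern}")
    return flags
```

A non-empty result is not proof of phishing, just a cue to pause before clicking – exactly the habit the bullet points above encourage.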

Implementing a Personal Cyber Cleanliness Routine

Drawing inspiration from the rituals of Clean Monday, establishing a personal routine for cyber cleanliness is beneficial and essential for maintaining digital well-being. The following steps can help you build a cleaner digital life.

  • Enable Multi-Factor Authentication (MFA) wherever possible to keep unauthorized users out of personal accounts.
  • Periodically review privacy settings on social media and other online platforms to ensure you only share what you intend to.
  • Unsubscribe from unused services, delete old emails, and remove unnecessary files to reduce cognitive load and make it easier to focus on what’s important.
  • Just as Clean Monday marks a time for physical and spiritual cleansing, set specific times throughout the year for digital clean-ups.
  • Keep abreast of the latest in cybersecurity to ensure your practices are up-to-date. Knowledge is power, particularly when it comes to protecting yourself online.
  • Share your knowledge and habits with friends, family, and colleagues. Just as traditions like Clean Monday are passed down, so too can habits of cyber cleanliness.
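As an aside on MFA: the six-digit codes produced by authenticator apps follow the TOTP standard (RFC 6238). For the curious, the computation fits in a few lines of standard-library Python – a sketch for understanding, not a substitute for a vetted authenticator app:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6, now=None) -> str:
    """Compute an RFC 6238 time-based one-time password from a base32 secret."""
    key = base64.b32decode(secret_b32.upper())
    # The moving factor is the number of whole time steps since the Unix epoch.
    counter = int((now if now is not None else time.time()) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # RFC 4226 dynamic truncation: take 4 bytes at an offset given by the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because the code depends on the current time step, it expires within seconds – which is exactly why MFA codes are so much harder to steal and reuse than passwords.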

Embracing a Future of Digital Cleanliness and Renewal

The principles of Clean Monday can also be applied to our digital lives. Maintaining a healthy, secure digital environment is a continuous commitment and requires regular maintenance. We take proactive steps toward securing our personal and professional data by implementing clean desk and desktop policies, navigating the digital landscape with caution, and cultivating a routine of personal cyber cleanliness. Let us embrace this opportunity for a digital clean-up and create a safer digital world for all.