Andrea Loreggia Giovanni Sartor

Artificial Intelligence and the moderation of digital platforms


Abstract

The amount of data generated and uploaded to the Internet is increasing exponentially. This makes manual moderation unfeasible: humans cannot review all the information published on online platforms. In this work, we analyze the available technologies for automating the moderation of content on social networks, blogs, and other platforms that allow users to publish and exchange data. We discuss the role of automation in moderation, focusing on automated filters meant to exclude or classify user-generated materials. We examine the significance of filtering techniques for maintaining safe digital environments, where infringements of rights are prevented and mitigated, while users can freely express their opinions and exercise their rights. We conclude by analyzing the present normative framework at the European level. We argue in favor of a normative system that encourages the adoption of automatic filtering techniques, while suggesting that the limitations of the technology and the possibility of abuse should be taken into account. These arguments support an approach in which providers are required to adopt effective state-of-the-art technologies but are not made liable for every failure of these technologies.

Keywords

  • filtering technologies
  • moderation
  • artificial intelligence
  • governance of AI
  • consumer protection

