Religions, traditions, cultures, beliefs, social norms, values, ideologies: these are all fictions created and lived by people. They enable co-operation between people who have no personal connection to one another. This predisposition to "storybelieving" distinguishes humans from the animal world and is one of the most decisive human evolutionary mechanisms.
Such fictions are characterised by the fact that they cannot be proven true or correct, or are even obviously fictitious; that they are believed and lived even though they are not legally binding; and that they can develop disruptive potential without necessarily violating legal norms as such. Insofar as such fictions are not unlawful, both belief in them and the questioning of them are in some cases even protected by (fundamental) rights.
The global networking of people on online platforms, together with AI-generated content, confronts us with a new dimension of possibilities for creating socially relevant narratives and calling existing ones into question. This harbours disruptive potential: it can lead to the collapse of co-operating communities or to the emergence of new (belief) communities.
Risk minimisation obligations
The EU legislator has taken a stance on this challenge: the Digital Services Act defines the systemic risks of so-called "very large online platforms" as including, in addition to the dissemination of illegal content, (foreseeable) adverse effects on fundamental rights, civic discourse, electoral processes and public security, as well as serious adverse effects on personal well-being. Providers are obliged to identify such risks and to mitigate them appropriately, with particular regard to fundamental rights. Where the interactions or generated content of an AI system are involved, the AI Act additionally provides for comprehensive transparency and disclosure obligations in certain cases. For users (though not for AI bots that cannot be attributed to a person), this means that a platform provider must, while respecting their fundamental rights, enable the pluralistic emergence of new narratives and the questioning of established ones within legal boundaries (including through the use of AI). Where this can give rise to systemic risks for society, public security or mental health, however, platform providers must mitigate those risks through appropriate measures, which can prove both significant and difficult.
This text first appeared in Austria Innovativ 3/24.