In view of the ever-increasing economic importance of AI systems on the one hand, and the threat of fines of up to EUR 35 million or 7 percent of a company's total worldwide annual turnover for the preceding financial year in the event of violations of the Regulation on the other, companies should use the lead time of up to two years to familiarise themselves with the AI Regulation. The responsible Austrian Regulatory Authority for Broadcasting and Telecommunications (RTR) has set up an AI service centre, which serves as a point of contact and information hub for the general public on the topic of AI. The AI Regulation applies to providers that place AI systems on the market or put them into service in the Union, regardless of whether those providers are established in the Union or in a third country; to users of AI systems located in the Union; and to providers and users of AI systems established or resident in a third country, if the output produced by the system is used in the Union.
The AI Regulation takes a risk-based approach, categorising AI systems into four risk categories: unacceptable risk (prohibited AI practices), high risk (high-risk AI systems), limited risk (AI systems intended to interact with natural persons) and minimal or no risk. Prohibited practices include social scoring and the use of AI systems that deploy subliminal techniques beyond a person's awareness. High-risk AI systems include those used in critical infrastructure (e.g. transport), as safety components of products (e.g. AI applications in robot-assisted surgery) and in education or vocational training, where they can determine access to education and the professional course of a person's life (e.g. the scoring of exams). AI systems that interact with natural persons are subject to special transparency obligations; for example, deepfakes must be labelled as such.
First published in Horizont on 05.04.2024