The breakthrough of generative artificial intelligence raises concerns about influence campaigns, as far fewer human and financial resources are needed to run large-scale disinformation operations. It has also become clear that AI systems themselves are capable of misinforming, and can thus significantly undermine public trust in democratic institutions.

According to a briefing published by the European Parliamentary Research Service (EPRS) in 2023, the breakthrough of generative artificial intelligence (hereafter: AI) is a cause for concern in relation to influence campaigns, as far fewer human and financial resources will be needed to carry out large-scale disinformation campaigns. The EPRS document cites several studies and research projects as examples. Among them, the Organisation for Economic Co-operation and Development (OECD) emphasises that the combination of AI language models (which receive, process and store information, and understand and generate language) and disinformation can already produce a high degree of deception and can significantly undermine public trust in democratic institutions. It has also become clear that AI systems themselves can misinform: one study found that Google's AI tool Bard produced convincingly false information in 78 of the 100 narratives tested (Bard is a generative AI chatbot that can produce human-sounding texts, articles and social media content in response to user prompts and questions).

The new version of ChatGPT, GPT-4, is even more prone to generating false information: it responded with false and misleading claims in 100 out of 100 tested narratives, compared with 80 out of 100 for its predecessor, GPT-3.5, making it the more dangerous of the two precisely because it is more convincing. The results show that a chatbot, or a similar tool built on the same underlying technology, can now be used effectively to spread disinformation at scale, with unpredictable political impact and significance. In light of NewsGuard's findings, the case also highlights that OpenAI, driven by market competition, released a far more powerful version of its AI technology before fixing the most critical flaws of the earlier one. Such companies promise that they are aware of the risks and are trying to manage them: OpenAI, the company behind ChatGPT, has announced that it will monitor use of the application in order to filter out attempts at political influence.

Big Tech platforms have means of curbing fake news, drawing in particular on the experience of the Brexit campaign and the 2016 and 2020 US presidential elections. There is, however, still no uniform regulation of these platforms, which is mainly explained by differences between European and American legal thinking. The campaign for the 2024 US presidential election faces ever-growing challenges from AI-based disinformation: a robocall impersonating US President Joe Biden, for example, has caused particular alarm as an audio deepfake. The phone message digitally "spoofed" the sitting president's voice while echoing one of Biden's signature lines, and urged New Hampshire residents not to vote in the January 2024 Democratic primary, prompting state authorities to launch an investigation into possible voter suppression.

Wasim Khaled, CEO of the software development company Blackbird.AI, said the ease with which fake audio can be created and distributed complicates an already hyper-polarised political landscape, undermines trust in the media and allows anyone to claim that fact-based evidence has been "fabricated". China's ByteDance, the owner of the TikTok platform, recently unveiled StreamVoice, an AI-based tool that transforms a user's voice into any desired alternative in real time.

The Digital Services Act (DSA), i.e. Regulation (EU) 2022/2065, updates the 2000 E-Commerce Directive with respect to illegal content, transparent advertising and disinformation, replacing the self-regulation typical until now with binding obligations at the most important points, including public political communication on social media platforms. The regulation was published in the Official Journal of the European Union on October 27, 2022; its basic rules have applied since November 16, 2022, and the vast majority of its provisions since February 17, 2024. The DSA requires the largest platforms to be proactive: they are obliged to analyse annually how their services, and especially their algorithmic (AI-based) recommender systems that rank content, affect electoral processes.

The European Commission presented its proposal on April 21, 2021, and on December 9, 2023 the Council presidency and the European Parliament's negotiators reached a provisional agreement on the proposal for harmonized rules on artificial intelligence (hereafter: AI Act). Its central idea is to regulate AI's capacity to cause harm to society along a risk-based approach.

In the weeks following the provisional agreement, work continues at expert level to finalize the details of the new regulation, after which the presidency will submit the compromise text to the member states' representatives for approval. Both institutions must confirm the full text, which must also undergo legal-linguistic revision before formal adoption by the co-legislators.

The purpose of the draft regulation is to ensure that AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values, and to stimulate investment and innovation in AI in Europe.

The draft is the first legislative proposal of its kind in the world, and thus, like the GDPR, it can become a global standard for AI regulation in other jurisdictions. By setting these standards, the EU aims to pave the way for a global approach that promotes artificial intelligence that is:

  • ethical
  • safe
  • trustworthy

According to recital (16) of the AI Act, AI-based manipulative techniques can be used to persuade or deceive people into unwanted behaviour by inducing them to make decisions that undermine and impair their autonomy, decision-making and free choice. The placing on the market, putting into service or use of AI systems intended to materially distort human behaviour, in a way likely to cause physical or psychological harm, is extremely dangerous and should therefore be prohibited. Such AI systems deploy subliminal components, such as audio, image and video stimuli that a person cannot perceive because they lie beyond human perception, or use other subliminal techniques that undermine or impair a person's autonomy, decision-making or free choice in ways the person is not aware of, or, even if aware, cannot control or resist, for example in the case of brain-machine interfaces or virtual reality.

An AI Office will be created within the European Commission to oversee the most advanced AI models, contribute to fostering standards and testing practices, and enforce the common rules in all member states. The AI Office will be assisted by a scientific panel of independent experts. The AI Board, composed of member state representatives, will serve as a coordination platform and advisory body to the Commission, and will give member states an important role in implementing the regulation, including the development of codes of practice for foundation models.

Finally, a few words about sanctions: fines for violations of the AI Act are set as a percentage of the offending company's global annual turnover in the previous financial year or a predetermined amount, whichever is higher:

  • EUR 35 million or 7% for violations involving prohibited AI applications,
  • EUR 15 million or 3% for violations of the Act's obligations,
  • and EUR 7.5 million or 1.5% for supplying incorrect information.

At the same time, the provisional agreement provides for more proportionate caps on the administrative fines that can be imposed on SMEs and start-ups in the event that they violate the provisions of the AI Act. Natural or legal persons may submit a complaint about non-compliance with the Act to the competent market surveillance authority, which will handle it in accordance with its dedicated procedures.
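As a quick numerical illustration, the "whichever is higher" rule described above can be sketched in a few lines of Python. The tier values come from the provisional agreement summarised here; the function name and data structure are purely illustrative and not taken from any official source.

```python
# Illustrative sketch (not legal advice): under the provisional agreement,
# each fine is the HIGHER of a fixed amount and a percentage of the
# company's global annual turnover in the previous financial year.

TIERS = {
    "prohibited_practice":   (35_000_000, 0.07),   # EUR 35M or 7%
    "obligation_breach":     (15_000_000, 0.03),   # EUR 15M or 3%
    "incorrect_information": (7_500_000,  0.015),  # EUR 7.5M or 1.5%
}

def max_fine(violation: str, global_turnover_eur: float) -> float:
    """Return the maximum administrative fine for a given violation tier."""
    fixed_amount, pct = TIERS[violation]
    return max(fixed_amount, pct * global_turnover_eur)

# For a company with EUR 1 billion turnover, a prohibited-practice
# violation is capped at 7% of turnover (70M), which exceeds the 35M floor:
print(max_fine("prohibited_practice", 1_000_000_000))  # 70000000.0
```

For small companies the fixed amount dominates: at EUR 100 million turnover, 1.5% is only EUR 1.5 million, so the EUR 7.5 million figure applies, which is exactly why the agreement adds separate, more proportionate caps for SMEs and start-ups.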

Source: alaptorvenyblog.hu

Cover image: Gerd Altmann / Pixabay