You may have come across these Facebook pages yourself: beautiful photos of a foreign city labeled as Moscow, Sevastopol, or another Russian city. In reality, the images show entirely different places, perhaps in Spain, Brazil, the USA, or the UK, and the labels and descriptions are completely fabricated. Similarly, photographs of stunning women in military uniforms suggest that they are members of the Russian army. Appearances are deceiving, though; it is all a hoax.
One of the defining features of the large language models behind generative artificial intelligence is their ability to converse with us in natural human language. This naturally leads us to personify these AI tools and attribute human traits to them: they become our virtual assistants, personas, friends, helpers, and in many cases even companions. Conversational AI tools compete effectively with living people, raising the question of whether they might largely replace human conversation in the near future. After all, AI does not argue with us, it apologizes, avoids conflict, does not erupt with emotion, motivates and praises us, is not aggressive, and we never have to apologize to it.
Recently, artificial intelligence, and especially tools such as ChatGPT, Gemini, and Copilot, has become not only a focus of interest for technology enthusiasts but also a target of scammers, who exploit its popularity to spread malware and phishing. This trend is worrying: as interest in AI grows, so does the number of people who fall for false promises of easy profit or effortless problem-solving with these advanced technologies.
In today's digital world, manipulative techniques are becoming an increasingly common part of public communication, particularly in pre-election campaigns and often fraudulent advertising. The E-Bezpečí team from the Faculty of Education at Palacký University in Olomouc offers an innovative solution: the FactNinja application. This unique tool, powered by the advanced GPT-4 Omni artificial intelligence model, enables fast and efficient analysis of graphical content to uncover manipulative techniques, argumentative fallacies, and other unethical practices aimed at influencing public opinion.
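To illustrate the general approach, the following is a minimal sketch of how a vision-capable model such as GPT-4 Omni can be asked to screen an image for manipulative techniques through the OpenAI API. It is not the FactNinja implementation; the prompt wording, the analyze_image() helper, and the poster.jpg input file are illustrative assumptions.

```python
# Minimal sketch (not FactNinja's actual code): send an image to a
# vision-capable model and ask it to flag manipulative techniques.
import base64
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def analyze_image(path: str) -> str:
    """Encode a local image and request a manipulation analysis."""
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # "GPT-4 Omni"
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        # Illustrative prompt; a real tool would use a more
                        # detailed, carefully tested instruction set.
                        "text": (
                            "Analyze this image, e.g. a campaign poster or an ad. "
                            "List any manipulative techniques, argumentative "
                            "fallacies, or unethical persuasion you can identify."
                        ),
                    },
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                    },
                ],
            }
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(analyze_image("poster.jpg"))  # hypothetical input file
```

The key design point is simply that a multimodal model receives the graphic together with a text instruction and returns a structured verbal assessment, which an application can then present to the user.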
Donald Trump, the former President of the United States, is once again facing criticism for using artificial intelligence in his election campaign. After the discovery of fake, AI-generated photos of Taylor Swift fans and Black voters allegedly supporting Trump, concerns about the spread of disinformation and the manipulation of public opinion have grown. This is an issue that will have to be addressed, as the misuse of artificial intelligence increasingly affects democratic processes.