RISKS OF DISINFORMATION GENERATED BY AI AND MITIGATION STRATEGIES

Authors

  • Rodrigo Thomé de Moura

DOI:

https://doi.org/10.63330/aurumpub.021-010

Keywords:

Artificial Intelligence, Disinformation, Deepfakes, Information security, Mitigation

Abstract

This study analyzed the risks of disinformation generated by Artificial Intelligence and presented mitigation strategies capable of reducing its social, political, and institutional impacts. The research aimed to investigate how AI technologies, especially generative models, have expanded the production and circulation of false, misleading, and manipulated content, as well as to assess the consequences of this phenomenon for public trust, democracy, science, and national security. The methodology adopted was bibliographic and qualitative, based on a review of scientific articles, institutional reports, and specialized works discussing Artificial Intelligence, digital disinformation, and informational integrity. The results showed that generative AI enabled the creation of synthetic texts, images, videos, and audio at high speed and scale, making disinformation more sophisticated and harder to detect. It was observed that the increasing realism of deepfakes, voice cloning, and automation through bot networks significantly enhanced the ability to manipulate public perceptions, favoring coordinated campaigns and interference in democratic processes. The analysis also identified that this scenario contributed to the erosion of trust in institutions, the discrediting of scientific evidence, and vulnerabilities in sensitive areas such as public health and national security. The study concluded that mitigating these risks depends on combining technical, political, and educational strategies, including tools for detecting synthetic media, regulations for algorithmic transparency, digital governance policies, and media literacy programs capable of strengthening citizens’ critical capacity in the contemporary informational environment.
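Among the mitigation strategies the abstract names is the detection of coordinated amplification through bot networks. As a purely illustrative sketch (not a method from the study), one simple signal is many distinct accounts posting identical text within a short time window; the function name, thresholds, and data shape below are all hypothetical:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def flag_coordinated_accounts(posts, min_accounts=3, window_minutes=10):
    """Flag accounts that post identical (case-insensitive) text within a
    short time window, a crude signal of coordinated, bot-like amplification.

    `posts` is a list of (account, text, timestamp) tuples. The thresholds
    are illustrative placeholders, not values taken from the study.
    """
    # Group postings by normalized text so near-duplicates cluster together.
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text.strip().lower()].append((account, ts))

    flagged = set()
    window = timedelta(minutes=window_minutes)
    for entries in by_text.values():
        entries.sort(key=lambda e: e[1])  # order by timestamp
        accounts = {a for a, _ in entries}
        # Enough distinct accounts, all posting within the time window?
        if len(accounts) >= min_accounts and entries[-1][1] - entries[0][1] <= window:
            flagged |= accounts
    return flagged
```

In practice, production systems combine many such signals (posting cadence, account age, content provenance metadata) rather than any single heuristic; this sketch only illustrates the general idea of behavioral detection.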



Published

2025-12-17

How to Cite

RISKS OF DISINFORMATION GENERATED BY AI AND MITIGATION STRATEGIES. (2025). Aurum Editora, 110-119. https://doi.org/10.63330/aurumpub.021-010