Over the past decade, social media platforms have become hotbeds for misinformation and propaganda. Various actors, including state and non-state entities, exploit these platforms to further their political, military, or economic agendas. The advent of generative AI has dramatically increased the scale, personalization, and targeting precision of these attacks.
In this session, we will confront the challenges posed by sophisticated generative AI applications that enable deception at scale through information attacks, fake personas, and large-scale tailored influence operations. For years, safety in obscurity shielded smaller language communities from the worst information attacks. The multilingual capabilities of large language models, however, have rendered this safety-in-obscurity strategy ineffective. By 2023, these technologies had slipped beyond regulatory oversight, making it possible to run sophisticated models on individual desktop computers without ethical or safety constraints. This unchecked development has enabled the generation of manipulative and unethical material on an unprecedented scale, leaving us ill-equipped to confront the burgeoning challenges. At the same time, generative AI also presents vast opportunities. We will explore its potential for tracking adversaries' actions online, summarizing extensive volumes of multimodal data, detecting anomalies, customizing communication strategies, and accelerating content production and dissemination.
More specifically, this session addresses:

1) the potential of the technology as it exists today;
2) the role of generative AI in creating and propagating fake content and manipulating public opinion;
3) the scope and impact of misinformation, propaganda, and fake personas on social media platforms;
4) the operational, privacy, and security concerns related to AI technologies in NATO communication strategies;
5) opportunities and potential applications for AI systems to counter misinformation and track adversaries' actions online.