Florida AG Investigates OpenAI, Maker of ChatGPT, Citing National Security Risks and FSU Shooting

Tallahassee, FL – In a move sending ripples through the burgeoning artificial intelligence sector, Florida Attorney General James Uthmeier has launched a formal investigation into OpenAI, the creator of the widely popular ChatGPT large language model. The probe, confirmed by Uthmeier's office, stems from serious national security concerns, explicitly linking the potential misuse of AI models and data to adversarial nations like China, and even drawing a connection to the recent tragic shooting at Florida State University (FSU).
This isn't merely a data privacy inquiry; it's a pointed examination into whether OpenAI's cutting-edge technology and the vast datasets it utilizes could inadvertently—or even directly—pose a threat to American interests. Attorney General Uthmeier's office expressed specific alarm that "OpenAI’s models or data could be used by adversaries of America, namely China," implying a potential avenue for intelligence gathering, technological exploitation, or even sophisticated disinformation campaigns facilitated by advanced AI.
The explicit mention of the FSU shooting adds a grim, urgent dimension to the investigation. While the AG's office has not yet detailed any direct link between ChatGPT and the incident, the implication is clear: Uthmeier is broadening the scope of "national security" to encompass domestic tragedies that could be influenced or exacerbated by AI's capabilities, whether through radicalization, misinformation, or other unforeseen pathways. This framing suggests a growing concern among state officials that the rapid advancement of AI is outpacing current understanding of its societal vulnerabilities.
"We're seeing an unprecedented acceleration in AI development, and with that comes an equally unprecedented need for scrutiny," a source close to the AG's office, who wished to remain anonymous to speak freely on the ongoing investigation, told BusinessJournal. "The potential for foreign adversaries like China to leverage these powerful tools, whether for espionage or to undermine our societal fabric, is a threat we simply cannot ignore. When you couple that with the potential for AI to be misused in ways that contribute to domestic harm, as we unfortunately saw at FSU, it demands immediate attention."
The formal inquiry by the Florida Attorney General marks a significant escalation in governmental oversight of AI, moving beyond theoretical discussions of bias and privacy to concrete national security implications. OpenAI, a company that has positioned itself at the forefront of AI innovation with a stated mission to ensure artificial general intelligence (AGI) benefits all of humanity, now finds itself under intense scrutiny from a major U.S. state.
While OpenAI has invested heavily in safety protocols and ethical AI development, the sheer scale and complexity of large language models make comprehensive risk mitigation a monumental challenge. The models are trained on vast swathes of internet data, and concerns about data provenance, potential backdoors, and the unintended leakage of sensitive information have long circulated within the tech community. Moreover, the dual-use nature of AI, with its capacity for both immense good and profound harm, is becoming a central theme in regulatory debates globally.
This investigation isn't happening in a vacuum. Federal lawmakers and agencies are also grappling with how to regulate AI, with discussions ranging from establishing a new federal agency to creating robust risk assessment frameworks. Meanwhile, other states are exploring their own legislative and enforcement actions to address concerns ranging from deepfakes in political campaigns to algorithmic bias in hiring practices. Florida's move, however, stands out for its direct invocation of national security and its link to a specific, tragic domestic event.
For OpenAI and the broader AI industry, this probe signals a new era of accountability. Companies developing and deploying powerful AI systems will likely face increased pressure to demonstrate not just the safety and fairness of their models, but also their resilience against state-sponsored exploitation and their potential role in broader societal risks. The stakes, as Attorney General Uthmeier's office has made clear, couldn't be higher.
