New course introduces students to the history of the science behind AI, its current theory and application in the intelligence community, and the ethical considerations around the uses of the technology.

MS in Intelligence Analysis lecturer Rob Johnston, a computational social scientist and the former deputy director of globalization and modernization at the Central Intelligence Agency, designed the new course AI and the U.S. Intelligence Community with the intelligence community in mind.
“I am fortunate enough to know a number of people in the artificial intelligence domain, including the inventor of Alexa and the developer of Siri,” said Johnston, who worked for seven years at the Laboratory for Analytic Sciences, a National Security Agency and North Carolina State University AI and machine learning lab. “I called my buddies up and talked to them about what they thought young people in national security should know about the kind of work that they do. Then I started shaping the course around the fact that Homo sapiens has been using tools to augment our capabilities for 300,000 years, as long as we have been a species within our genus. Before that, our genus had been developing tools for three million years.”
The course will introduce students to AI: the history of the science behind it, its current theory and different models of usage, how it is being implemented in the intelligence community for national defense, and the ethical considerations around the uses of the technology.
“The course will delve into the immense power and promise, and the serious pitfalls, of AI use,” Johnston said. “We will look at how it is currently being used in national policy, and we will also look to the future of human interfaces with AI.”
“This promises to be a rich course,” said Program Director Michael Ard, “not only because of its content but also because Dr. Johnston is leading it. Rob has expertise in the AI field and is also a former intelligence officer and a well-known social scientist, making him a triple threat in delivering the material.”
“The secret sauce with AI is that it is just fast math,” Johnston said. “It looks fancy, but it is just what the machine does very well. Humans have been trying to capture this ability for fast math for thousands of years. After centuries of work, in 2017 there was this discovery that we could take natural language and ‘math it up,’ to start making predictions about what the next sentence or paragraph or book might look like based on a corpus of training material. Now we have all the technology necessary to create large language models, which has eliminated the requirement that people speak math. Now they can just speak English or Mandarin or French and talk to the computer in a natural interaction through large language models like OpenAI’s ChatGPT, DeepSeek, and Anthropic’s Claude. I find it interesting that we now have this technology that can do seemingly miraculous things, but we don’t have a good model yet as to how we invest it with some level of trust.”
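Johnston’s “fast math” framing — predicting what comes next from statistics over a training corpus — can be illustrated in miniature. The sketch below is a toy bigram predictor, purely illustrative and nothing like a real large language model, which learns vastly richer statistics with neural networks; the tiny corpus and function names are invented for this example.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "training material" (hypothetical example data).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word is followed by each other word.
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    followers = transitions.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # "cat" follows "the" twice, more than any other word
```

Everything a model like this “knows” is frequency, not meaning — which is the seed of Johnston’s later point that the machine has no way to think about right or wrong.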
Johnston hopes to share numerous examples with his intelligence analysis students of why they need to proceed with caution when using AI.
“We need to keep in mind that the machine has been trained on human data but has no human core, with no way to think about right or wrong or good or bad,” he said. “People don’t process the limitations of the AI technology, and they don’t think about the very real ramifications of just believing the machine, including, from a national security standpoint, the very real espionage risks and faulty targeting risks. As we automate drones and other more independent weapon systems, it is a troubling ethical conundrum that we are outsourcing violence to a device, instead of making decisions ourselves. The decision to use violence is an inherently moral decision. Humans, not AI, should be deciding whether violence is appropriate or not appropriate.”
Students will examine the use of AI in the military and its global expansion, how U.S. adversaries are using it, and the ethical challenges and biases that currently exist in the technology, as well as the lack of gender and racial equity and socioeconomic diversity in the companies that produce it.
“I am legitimately concerned about the state of democracy globally and how AI can be used to thwart democratic goals, to manipulate people, to filter media, to amplify anger and division and resentment,” Johnston said. “However, used in a designed, specific way, this technology can be fantastic and can change the way people acquire information and think. I want this course to compel students to think critically and have a somewhat skeptical eye, recognizing that this technology is only an 80 percent solution. Yes, AI is cool and neat and flashy, but the intelligence community needs to make serious and complicated decisions, craft policies, and inform decision-makers about the critical intersection of this technology and its impact on human lives.”