The Defense Counterintelligence and Security Agency, which grants security clearances to millions of American workers, is using AI to streamline its work. “Black boxes” are not allowed, says its director.
Before allowing any of his more than 13,000 Pentagon employees to search for information about an American citizen, the director of the Defense Counterintelligence and Security Agency (DCSA), David Cattler, asks them to consider one question: Does my mother know the government can do this?
The “mother test,” as Cattler calls it, is a common-sense check on how the DCSA, a large agency that grants and denies security clearances to millions of workers, does its job. And it’s also the way Cattler thinks about his agency’s use of AI.
The DCSA is the agency in charge of investigating and approving 95% of federal government employee security clearances, requiring it to conduct millions of investigations a year. This gives the agency access to a huge trove of private information, and in 2024, the DCSA turned to AI tools to organize and interpret that data.
This is not ChatGPT, Bard, Claude or other flashy generative AI models. Instead, it’s about extracting and organizing data the way Silicon Valley tech companies have done for years, using systems that show their work more clearly than most large language models. According to Cattler, the most promising use of these tools is prioritizing existing threats.
If not used carefully, these tools could compromise data security and introduce bias into government systems. Still, Cattler was optimistic that some of AI’s less sexy features could be game-changers for the agency, as long as they’re not “black boxes.”
“We have to understand why it is credible and how it does what it does,” Cattler told Forbes. “We have to demonstrate, when we use these tools for the purposes I’m describing, that they do what they say they do, and that they do it objectively and in a highly compliant and consistent way.”
Many people may not even think of the tools Cattler describes as AI. He is excited about the idea of a heat map of the facilities DCSA protects, with risks displayed in real time and updated whenever other public agencies receive new information about a potential threat. Such a tool, he said, could help DCSA “determine where to put the (metaphorical) fire trucks.” It would not be about discovering new information, but about presenting existing information in a more useful way.
Matthew Scherer, senior policy advisor at the Center for Democracy and Technology, told Forbes that while AI can be useful for collating and organizing information that has already been collected and validated, the next step, making consequential decisions such as flagging concerns during a background check or scraping data from social media profiles, is where it can become dangerous. AI systems still have difficulty distinguishing between different people with the same name, for example, which can lead to misidentification.
“I would be concerned if the AI system made some kind of recommendation or put a thumb on the scale for certain applicants,” Scherer said. “Then you are moving into the realm of automated decision systems.”
Cattler said the agency has stayed away from using AI to identify new risks. Even in prioritization, however, privacy and bias issues can arise. When hiring AI companies (Cattler declined to name any partners), the DCSA has to take into account what private data it feeds into proprietary algorithms and what those algorithms can do with the data once they have it. Companies that offer AI products to the general public have inadvertently leaked private data that customers entrusted to them, a breach of trust that would be catastrophic if it occurred with data held by the Pentagon itself.
AI could also introduce bias into Department of Defense systems. Algorithms reflect the blind spots of the people who create them and the data they are trained on, and the DCSA relies on oversight from the White House, Congress, and other administrative bodies to guard against biases in its systems. A 2022 report from the RAND Corporation explicitly warned that AI could introduce bias into the security clearance vetting system “potentially as a result of programmer bias or historical racial differences.”
Cattler acknowledged that the social values that inform algorithms, including those at the Pentagon, change over time. Today, he said, the department is much less tolerant of extremist views than it used to be, but somewhat more tolerant of people who were once addicted to alcohol or drugs and are now in recovery. “Until not long ago, being gay was literally illegal in many places in the United States,” he said. “That was a bias that the system itself perhaps needed to eliminate.”
With reporting from Emily Baker-White and Rashi Shrivastava, Forbes staff.