By John Irvin
Those who have served in the military, especially at a time when World War II-era buildings were still widely in use, are probably familiar with the concept of “painting rocks.” The term was born of the common practice of lining either side of the walkway leading to a battalion headquarters (or some other office containing senior-ranking and career-conscious individuals, housed in one of these old, increasingly worn-out wooden buildings) with large and apparently superfluous rocks.
When an inspection from higher authority was anticipated, junior enlisted soldiers were dispatched with brushes and cans of white paint in hand to apply new coats of paint to the rocks, as well as wooden fences, adjacent street curbs, and any other items identified as in need of paint by those of more senior rank. Even novice soldiers, however, soon became aware that this was merely an exercise in making things appear better than might actually be the case. Leadership might be poor, the unit unprepared for combat, but please take note of those “damn fine-looking rocks.”
Eventually, “painting rocks” came to connote an activity that was pursued with the goal of making things appear (superficially) to be organized and efficient rather than ensuring they actually were organized and efficient.
The current counterintelligence focus on technologies designed to detect instances of insider espionage (or the “red flags” that would suggest such instances) runs the risk of being an expensive and largely ineffective exercise in “painting rocks.” Moreover, a detection mindset is inherently ill-suited to addressing the issue of prevention.
This is not to say that technologies that assist in detection are without merit – almost any effort to stop insider espionage is worth pursuing – but trusting too much in CI technology can lead to a dangerously complacent attitude. It assumes all will be well because the newest, most expensive, most impressive technology has been put in place.
Unfortunately, a termite-infested house is the better analogy: new locks, a new alarm system, or a new coat of paint won’t keep it from eventually falling down. The problem isn’t what’s on the outside. It’s what’s on the inside.
In his third and most recent paper on the psychology of the insider spy, Prevention: The Missing Link for Managing Insider Threat in the Intelligence Community,[i] Dr. David L. Charney discusses the two fundamental approaches to the insider threat. These are External Management of Insider Threat (EMIT) and Internal Management of Insider Threat (IMIT).
He states, “EMIT targets insider threat suspects with efforts that are externally focused, surveillance-based, intrusive, invasive, and even coercive. Detection is the principal discipline that epitomizes EMIT.”
This is in contrast to IMIT, which Dr. Charney says, “encourages IC employees to think about alternative ideas of how to manage their overwhelming life situations, which makes for healthier internal debate.
Employees on the brink [of engaging in espionage] are persuaded to rethink their dark, hopeless assessments and open up their minds to new, more positive ways of handling their problems. IMIT attempts to convince employees to reach for helping resources that are presented as free choices. Resources would be attractively packaged for their due consideration.”
In other words, while EMIT focuses on what’s visible from the outside, IMIT deals with what’s going on inside.
Most current efforts at countering the insider threat focus on EMIT for understandable, if misguided, reasons. Like the rocks leading up to battalion headquarters, they can be seen and evaluated.
Typical CI practices based on an EMIT approach include pre-employment biographic screening, polygraphs and periodic reinvestigation, compartmentalization of information (need-to-know), and technologies that offer continuous evaluation (CE), such as software that monitors employee online behavior. All of these efforts rely on detecting incongruous behaviors, such as false statements, attempts to access unauthorized systems or information, or polygraph results suggesting deception.
What makes EMIT-based practices desirable is also what makes them insufficient. To produce tangible results, someone has to have done, or be in the process of doing, something detectable. While these practices do offer tangible evidence of malfeasance or suspect behavior, it can only, by definition, be post hoc evidence. If the main goal of a security program is to punish wrongdoers, this is probably an adequate result. Detection finds real evidence that can be acted upon. This is why, as Dr. Charney writes, “Detection has always been law enforcement’s primary means of managing criminal threat.”
What EMIT misses, however, are the instances of malfeasance that were not detected. No detection-based practice or technology will ever be one-hundred-percent effective. People will always find a way around security practices and technology for the common-sense reason that people are the ones who came up with them in the first place.
The insider spy is especially well-placed to find workarounds because, as a trusted insider, he or she is already familiar with security measures. A modern CE technology that detected when an individual accessed unauthorized information would have been wasted on a spy like Ana Montes because she already had natural access to information of value to her Cuban handlers. Her exceptional memory also ensured she would not get caught walking out with classified documents; the information was already in her head.
Devising ever more intrusive detection methods eventually reaches a point of diminishing returns. Moreover, CE taken to an extreme can have a damaging effect on workplace morale and loyalty to the organization. Like the painted rocks leading to battalion headquarters, these methods may even foster a false sense that what’s going on inside is as well maintained as what’s visible outside.
EMIT-based efforts are certainly necessary, but can never be sufficient. Apart from an unquantifiable belief that their mere presence also serves as a deterrent, they play virtually no role in prevention. Nevertheless, they are virtually the only efforts in practice today.
Dr. Charney argues, however, that IMIT-based practices would go beyond “painting rocks” and actually address the fundamental causes of insider espionage. As he writes,
“Insider threat events originate within the minds of individuals. That is where it starts. Always.”
Unlike EMIT-based practices, which are only effective after a bad actor has physically done something that can be detected, IMIT-based practices seek to prevent the action from ever happening in the first place. They focus on where the problem really starts: in the mind of the individual. The only truly reliable way to prevent an action is for the would-be perpetrator never to translate the thought into action at all.
What Dr. Charney offers in Prevention: The Missing Link for Managing Insider Threat in the Intelligence Community also goes beyond simple deterrence. There is evidence that the threat of punishment is not a reliable deterrent to crime.[ii]
His recommendations are not threat-based, but instead address the psychological circumstances that can cause a trusted insider to even consider espionage in the first place. His approach focuses on IMIT-based practices, which are the only ones that can stop insider espionage where it really starts: not in a database or a security system, but in the mind of an individual.
In concert with EMIT, his IMIT approach provides genuinely comprehensive security. It offers an organization the peace of mind that comes from knowing that things have been taken care of on the inside as well as the outside, which is a lot more comforting than a bunch of painted rocks.