Snowden’s Reemergence Refocuses Spotlight on Insider Threat in IT


Edward Snowden’s reemergence in the news cycle last week, following his interview with NBC News and the ensuing debate over the extent to which he had worked within the system to convey his concerns about NSA’s intelligence-gathering activities, was valuable for IT professionals. The debate refocused the spotlight on the damage that internal IT personnel can cause in any organization.

I recently discussed this topic at length with two IT security experts: Kevin Johnson, CEO of Secure Ideas, an IT security consulting firm in Orange Park, Fla.; and Alex Moss, managing partner at Conventus, an IT security consulting firm in Chicago. I wanted to get a sense from these two experts of what CIOs can do to prevent what happened to NSA at the hands of Snowden from happening in their own organizations.

Johnson often speaks about “blind spots” in IT security, so I asked him to what extent internal system access is a blind spot, and how vulnerable companies are to losing mountains of sensitive data because of one individual, as NSA did with Snowden. Johnson said understanding access to internal data, data classification, and protecting internal systems constitute a huge blind spot:

It’s not a question of how vulnerable companies are to losing mountains of data; it’s a question of how vulnerable companies are to losing all their data. Every company out there is susceptible to that kind of thing. We have trusted people, especially people like Snowden, who had legitimate reason to have access to the majority of the data he stole. He had administrative rights, is what we’re being told. Yes, he reportedly used and abused other people’s accounts, other credentials that he gained access to.

The organizations you work with—how many IT people are there in the organization? How many of those IT people get made fun of, get treated badly, are underpaid, don’t get the benefits they want, hear about all the Silicon Valley crap where you make a billion dollars for coming up with a stupid app? And they’re sitting there making $30,000 a year—I’m not making fun of $30,000 a year, but it doesn’t compare to $1.5 billion. I’m not legitimizing any wrongdoing, but we treat people badly because we don’t understand what they do—we treat helpdesk personnel badly, and then we’re surprised when they get angry and they steal data. It doesn’t make the action OK, but it’s something we need to think about as we build our systems.

So what’s Johnson’s advice for CIOs on how to prevent any one person from having the keys to the kingdom? He said it’s all about knowing what’s normal, and being aware of how IT systems are interconnected:

The main thing that CIOs have to do to protect their systems from that one person having access to everything, is to look at things as interconnected, dependent systems, and say, “How does this impact that, how does this interact with that,” and really start to understand what’s happening on the network—getting the same IT visibility as we have business visibility, and knowing what is normal. We’re not going to be able to prevent attacks. We’re never going to be able to say, “I am 100 percent secure.” The CIO’s job is not to be 100 percent secure. The CIO’s job is to be able to say, “We’ve done the best we can, and the minute we got compromised, we were able to react and limit the damage.” It’s like fire. You can’t prevent your house or your building from catching on fire. But what you can do is build it in such a way that the minute the fire starts, something notices it and immediately reacts.

I asked Johnson what NSA CIO Lonny Anderson’s biggest blind spot was with respect to what Snowden was able to do. He said it was exactly what we had just talked about:

They didn’t know what was normal. It wasn’t normal for the accounts he took control of to grab that amount of data. It wasn’t normal for that data to be exfiltrated from the network. They didn’t know what was normal, hence they didn’t detect the abnormal behavior that was this attack.
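Johnson’s point about knowing what’s normal can be illustrated with a simple baseline check. This is a minimal, hypothetical sketch (the accounts, volumes, and threshold are all invented for illustration): it flags a per-account transfer volume that sits far above that account’s historical average.

```python
from statistics import mean, stdev

def is_abnormal_volume(history_mb, today_mb, sigmas=3.0):
    """Flag a transfer volume far above an account's historical norm."""
    if len(history_mb) < 2:
        return False  # not enough history to define "normal"
    mu, sd = mean(history_mb), stdev(history_mb)
    # Guard against a flat baseline where the standard deviation is ~0
    return today_mb > mu + sigmas * max(sd, 1.0)

# Hypothetical baseline: an account that normally moves ~50 MB a day
baseline = [48, 52, 47, 55, 50, 49, 53]
print(is_abnormal_volume(baseline, 51))      # an ordinary day
print(is_abnormal_volume(baseline, 40_000))  # a bulk grab of data
```

Real deployments would baseline many signals (volume, destinations, time of day) per account, but the principle is the same: you can’t detect the abnormal until you’ve measured the normal.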

I asked Moss, meanwhile, what he thinks was the biggest mistake NSA made, the one that gave Snowden the wherewithal to steal as much highly classified information as he did. Moss said it was a failure to manage the concept of least privilege:

What came out was that Snowden was not just using his account—he was using other folks’ accounts, or at least one other account, as well. So the volume of information that he took from the NSA eclipsed what he had direct access to. Snowden violated the trust of individuals around him—his peers—in order to gain their access.

One of the biggest problems we have in security now, and one of the things security professionals need to do a better job of, is managing the concept of least privilege—when you’re provisioning a user’s account, you give him the least privilege that he needs in order to function. Least privilege has always focused on system access, or application access. The industry, and companies, haven’t done a great job of moving that into data access—data loss prevention. Data loss prevention is not as widely adopted as it could be, or as it should be.

The Snowden case is a perfect example—we need to do a better job of focusing on the data that folks have access to. It goes back to the concept of “need to know,” but it’s not just what’s classified Secret or Top Secret anymore. There can be hundreds of categories, based on position, role, responsibility, etc. NSA’s problem is the same problem that most companies have—they don’t know where to draw the lines on who has access to what.
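Moss’s idea of extending least privilege from systems to data amounts to checking every read against a role-to-category map, denying by default. Here is a minimal sketch; the roles and categories are entirely hypothetical, and a real deployment would pull entitlements from an identity system rather than a hard-coded table.

```python
# Hypothetical role -> data-category entitlements. In practice this
# would come from an identity/entitlement system, not a dict literal.
ROLE_CATEGORIES = {
    "sysadmin":       {"system-logs"},            # system access, not data
    "db-admin":       {"db-metadata"},            # schema, not row contents
    "data-custodian": {"customer-pii", "finance"},
    "analyst":        {"finance"},
}

def may_read(roles, category):
    """Deny by default: a read is allowed only if some role grants it."""
    return any(category in ROLE_CATEGORIES.get(r, set()) for r in roles)

print(may_read(["sysadmin"], "customer-pii"))        # admin of the box,
print(may_read(["data-custodian"], "customer-pii"))  # not of the data
```

The point of the deny-by-default shape is Moss’s “need to know”: a systems administrator can hold root on the host and still have no entitlement to the data it stores.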

Moss said companies are doing a much better job of preventing any one systems administrator from having the keys to the kingdom. But he said the problem now lies not so much in the extent of system access, but rather in the extent of data access:

One of the common factors we’ve seen in a lot of targeted attacks—whether it was an individual in an Eastern Bloc country, or it was state-sponsored—is that one of the targets is the domain controller. The reason they want the domain controller is they want the user accounts, for a couple of reasons.

One is that we’ve done a better job of isolating privileged accounts. A lot of companies have realized that domain administrators, root users, root-equivalent users—all of those very senior accounts represent a significant amount of risk. They have the keys to the kingdom. So they focus first on locking those accounts down and restricting them: those accounts should be able to accomplish very privileged activity at the system level, but at the data level, they shouldn’t. System admins shouldn’t even see the data, because they don’t need to see the data. Then you go to the database admin, and that’s a whole different role. Does he need access to the data within the database? In most cases, probably not. So then there comes a data custodian. The industry has done a good job of beginning to confine those super accounts.

So the attackers go after the domain controllers, because they want the user accounts—it’s not the system that they want anymore. They want the data. The other benefit to getting those user accounts, aside from getting access to the data they want, is they’re much less likely to be monitored. Whereas if they take a domain or a privileged account, those accounts, typically, are monitored heavily. So while we have progressed in certain areas, the attackers have moved to softer spots. Now the users have become the soft spot.
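If ordinary user accounts are the new soft spot, one inexpensive response is to extend simple behavioral checks to every account, not just the privileged ones. The sketch below (with an invented, hypothetical log format) flags a credential that appears on a workstation it has never been seen on before in the event stream, the kind of account sharing reportedly involved in the Snowden case.

```python
from collections import defaultdict

def find_new_host_logins(events):
    """Return (user, host) pairs where a credential is used on a host
    it has never been seen on earlier in the event stream."""
    seen = defaultdict(set)   # user -> hosts observed so far
    alerts = []
    for user, host in events:
        if seen[user] and host not in seen[user]:
            alerts.append((user, host))
        seen[user].add(host)
    return alerts

# Hypothetical login events: (account, workstation)
events = [
    ("alice", "ws-01"), ("alice", "ws-01"),
    ("bob",   "ws-02"),
    ("alice", "ws-99"),  # alice's credential on an unfamiliar machine
]
print(find_new_host_logins(events))
```

A first sighting of a user produces no alert (there is no baseline yet); only a deviation from an established pattern does, which keeps the noise manageable when the check runs across every account rather than only the heavily monitored privileged ones.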