"Know thyself" is the inscription over the entry to the Oracle at Delphi.
I think "knowing thyself" from an IT service architecture and IT operations perspective is the primary indicator of whether an enterprise can defend themselves effectively from information security threats. It also is a very good predictor of how quickly they can identify, respond, contain and eventually recover from an infosec incident.
In my work as a security consultant, I find that enterprises that do the basics well are far ahead of the game in many respects, but especially when it comes to security. By basics, I mean specifically the ITIL disciplines like asset management, change management, and the like. Show me an enterprise with a 100% accurate CMDB (impossible?), and I'll show you an enterprise that will suffer less damage in an incident. I'd love to see a study of this particular issue and get some hard data on cost of breach versus ITIL practices. Maybe one of my friends in academia will contact me?
In fact, I might make the bold statement that how well you do ITIL is more important than how many different security technologies you have deployed. If you have an amazing security team, but that team has no idea how many hosts are on the network, how effective do you think their vulnerability management is going to be?
I think this has big implications for Small and Medium-sized Businesses (SMBs). Ostensibly, a smaller company with a less sweeping IT infrastructure should be able to manage its assets much more effectively on a smaller budget. Furthermore, it should be able to "baseline" its normal network behavior more accurately and, therefore, be better able to identify anomalies. Oftentimes when I respond to SMB environments, the ones with the more damaging breaches are the ones with outsourced IT vendors. The companies with in-house IT personnel (especially if they're good) tend to identify their incidents themselves. In the former case, they are often finding out from VISA, if you know what I mean...
This may all be pretty obvious up to this point, but I'd like to take this to its logical extreme.
Prescribed Interactions
By "prescribed interactions", I mean pre-defined ways in which systems, users and applications interact with one another on the network. This is similar to the idea of whitelisting, but taken to the next level.
Let me give an example. If a particular enterprise has sensitive data that needs to be accessed by some subset of its users, one can develop a matrix of "prescribed interactions" with that data.
Statement 1: The normal user base can interact with the sensitive data ONLY through an Intranet web application.
Statement 2: Administrators can interact with the host ONLY. I.e., they administer the operating system, not the data itself.
Statement 3: Database Administrators can interact with the database ONLY. I.e., they interact with the database directly using database management tools.
Then security controls can be applied to each "access pathway" and the statements can become more granular.
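To make that concrete, here is a minimal sketch of such a matrix in machine-readable form. Everything in it - the subnets, host names, and roles - is a hypothetical placeholder for illustration, not a prescription for any real environment:

```python
import ipaddress

# A hypothetical "prescribed interactions" matrix for one sensitive data store.
# Each rule states WHO may talk to WHAT, from WHERE, over WHICH service.
PRESCRIBED_INTERACTIONS = [
    {"role": "user",  "source": "10.1.0.0/16",   # userland segment ("Subnet X")
     "dest": "intranet-web", "service": "https"},
    {"role": "admin", "source": "10.9.8.10/32",  # the jumpbox
     "dest": "db-host",      "service": "ssh"},
    {"role": "dba",   "source": "10.9.8.10/32",  # same jumpbox, SSMS installed
     "dest": "db-host",      "service": "mssql"},
]

def is_prescribed(role, src_ip, dest, service):
    """Return True only if this interaction matches a prescribed rule."""
    for rule in PRESCRIBED_INTERACTIONS:
        if (rule["role"] == role
                and rule["dest"] == dest
                and rule["service"] == service
                and ipaddress.ip_address(src_ip)
                    in ipaddress.ip_network(rule["source"])):
            return True
    return False
```

The point is not the code itself but the discipline: every allowed pathway is written down, and anything that fails to match is, by definition, an anomaly worth investigating.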
Case 1: All users have browsers that can access the Intranet web application from Subnet X. This necessitates segmenting the network so that userland network segments can be readily identified and placed in ACLs on routers and firewalls to control Layer 3 access to the website. Furthermore, there will be authentication to the application (let's assume SSO for the enterprise). Last, let's ensure that the application developer and DBA on this app have enabled views and other controls to allow granular access to the data. (Each case is made concrete in the sketch following Case 3.)
Case 2: Administrators will interact with the host by logging into a jumpbox first and then using SSH (or RDP) into the host. Backing up the data is done by... (there are any number of options here, but you get the point: whatever tool does the backups, you set up controls that allow only that software, using that service account, from that server, to conduct backups).
Case 3: DBAs will interact with the data by logging into jump boxes on which SQL Server Management Studio is installed (let's assume a Microsoft environment for the moment) and using Windows authentication only.
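Continuing the sketch above (same hypothetical names), each case reduces to a concrete check against the matrix:

```python
# Case 1: a user browsing the Intranet app from Subnet X - prescribed.
assert is_prescribed("user", "10.1.44.7", "intranet-web", "https")

# Case 2: an admin SSHing to the database host from the jumpbox - prescribed.
assert is_prescribed("admin", "10.9.8.10", "db-host", "ssh")

# Case 3 violated: a DBA connecting with SSMS straight from a workstation,
# bypassing the jumpbox - NOT prescribed, and worth investigating.
assert not is_prescribed("dba", "10.1.44.7", "db-host", "mssql")
```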
Etc. So, what we've done is try to construct a network environment that allows our different types of users to access the information they need, but in a very prescribed way. If they need access remotely, then we have to factor VPN access into the environment, while recognizing that we've created an access pathway that is a little riskier. Do we want VPN users to be lumped into userland network segments? If they have access to sensitive data, maybe two-factor authentication is now a requirement.
Now, what I've just said is nothing new. This is the sort of analysis that most security departments conduct when working with IT to deploy new systems and technologies. What I'm advocating is taking it to the next level: a very thorough, very conscious mapping of all of these interactions. Anything other than those prescribed interactions is disallowed, and controls are instituted to prevent them from occurring. Policies are applied which explain the use cases and the prescribed interactions (especially for Admins!). Violations are immediately investigated and remediated. The implication here is that your incident detection capability is keyed on finding anomalies to the prescribed interactions, which allows you to identify attacks earlier in the "cyber kill chain."
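As a rough illustration of what "keyed on finding anomalies" might look like, the same hypothetical matrix can drive an alerting loop over network flow records. The record format here is invented for the example; in practice it would come from NetFlow, firewall logs, or similar sources:

```python
# Hypothetical flow records: (role, source IP, destination, service).
observed_flows = [
    ("user",  "10.1.44.7", "intranet-web", "https"),
    ("admin", "10.1.44.7", "db-host",      "ssh"),  # not from the jumpbox!
]

for role, src, dest, service in observed_flows:
    if not is_prescribed(role, src, dest, service):
        # Any deviation from the prescribed matrix is an investigative lead,
        # potentially catching an intruder early in the kill chain.
        print(f"ALERT: unprescribed interaction: {role} {src} -> {dest} ({service})")
```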
What I've seen, especially in most SMBs, is a continued adherence to the idea that the network perimeter is a security boundary. It is not, and most infosec professionals will tell you as much at this point. We're living in a world where the perimeter should be considered a semi-permeable membrane, and we should assume that we've been breached at any given moment in time. But many enterprises are still thinking of the "inside" of their perimeter as trusted, and thus are not very conscious of the network traffic and transactions occurring between endpoints on the network.
However, all is not lost. Using prescribed interactions is a lot like the military concept of Intelligence Preparation of the Battlefield (IPB - described here). YOU must prepare the battlefield to give you an advantage over your adversary. And this is where you theoretically have the advantage. Investigation of anomalous behavior is now possible and can be much more effective if you've done the background work to set up the internal environment and you know what the baseline network transactions should look like.
Of course, this can be a daunting task - especially in larger enterprises. However, you can start with the users, transactions and systems that touch sensitive data to make it more manageable.
As my tactics instructor at the FBI used to say, "Where do you want to have a gunfight? At home, at night" - meaning that you are in a place where you know the terrain, you know the angles, the hidden alcoves, the creaky step on the staircase. Without light, you have an even greater advantage. But that only works if you really know your own home. Sadly, most enterprises do not know their own home very well. So, I've come full circle.
Gnothi Seauton!
Tuesday, May 21, 2013
Threat Intelligence?
The term threat intelligence reminds me of the age-old joke about Military Intelligence: that it's an oxymoron.
Truthfully, I don't think that is the case for either MI or for computer security threat intelligence. I think the term "threat intelligence" is ambiguous, however, and a number of security vendors have jumped into this space. It might be good to tease it out a little bit and try to figure out what it might mean and whether it adds any value when trying to defend your network.
Here's how I think about it. First, as background, I like to use the terms threat actors and threat vectors. The actor (or "agent," in some circles) is the person or organization with the motive, means, and desire to attack you. The vector is the exact means by which they accomplish that.
Threat Intelligence in this context can really come from two sources:
1) an analysis of the attack vectors from a review of incident artifacts
2) penetration of threat actor groups to identify their motives, capabilities and imminent targets
For the first source, the threat intelligence is ostensibly valuable because an understanding of historical attack patterns should give an enterprise some sense of which defenses are most effective. In my practice as a consultant, I find that this can be a good approach. It certainly has benefits in terms of managing a security program, where decisions about where to invest resources are better driven by a true understanding of the threat. One can never be 100% secure, and if you can't be strong everywhere, you want to be strong where it counts.
Clearly, this approach is historical and hinges on the theory that past events can be good predictors of future events. This is somewhat true, in that the basic model of how intrusions are carried out is still pretty consistent with the pattern described in the seminal Hacking Exposed books. However, given the dynamism of today's IT landscape (BYOD and Cloud immediately come to mind), new vectors are arising every day. Furthermore, it's important to note that this intelligence is primarily sourced from an analysis of data (logs, forensics, malware reversing) from attacked systems.
In short, this approach is valuable, but not the whole story. I find the value is mainly in understanding the Tactics, Techniques and Procedures (TTPs) of the various criminal and APT groups, versus getting lists of bad domains, IP addresses, and the like. To use the clichéd phrase, it's all about actionable intelligence. I don't think those IPs and domains are actionable - by the time you know about them, it's too late in most cases. (That's not to say you shouldn't use security gateways! I'm simply saying that, as a security professional, you are better served by understanding the approach of the attackers.)
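For contrast, here is roughly what consuming that kind of feed amounts to - matching proxy logs against a list of bad domains (all names here are invented for the example). It is mechanical and cheap, but a hit usually means the compromise has already occurred:

```python
# A hypothetical feed of known-bad domains and a day's proxy log entries.
bad_domains = {"evil-c2.example.net", "dropzone.example.org"}

proxy_log = [
    ("10.1.44.7", "intranet.corp.example"),
    ("10.1.44.9", "evil-c2.example.net"),
]

for src_ip, domain in proxy_log:
    if domain in bad_domains:
        # The indicator fires only after the attacker's infrastructure is
        # already known and contacted - late in the game, hence my point
        # about limited "actionability."
        print(f"Known-bad domain contacted by {src_ip}: {domain}")
```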
As an example of what I am talking about, there is a great paper written by Jon Espenschied at Microsoft, which can be found here. I think threat feeds that emphasize exact details of new vectors and how they might be deployed against your enterprise are worth looking into. There's also a very good exploration of this topic in the context of the "Cyber Kill Chain" in a paper written by the folks at Lockheed Martin, found here. I think we can be assured that these fellows have a great deal of experience dealing with APT. I may devote an entire post to that paper and some other related work.
The second approach involves penetrating the circles of likely attackers and trying to determine what they are planning. This is obviously more proactive (vice historical) and has the added advantage of potentially helping you determine if YOUR enterprise is being targeted, before or during the act. An example of this would be when Zeus or SpyEye botnet operators add certain financial institutions to their config files for credential stealing. However, I think the number of times that attacks are caught before they occur is probably small, if for no other reason than that one can attack your enterprise without attacking your infrastructure (in the example given, the bank can't keep its customers from getting Zeus; it can mitigate the problem a bit but can't stop it entirely).
Furthermore, I think penetrating these groups is non-trivial. Guys like Brian Krebs have done a good job of penetrating some of these groups, but one can assume that his credentials as a journalist may be helping. I don't know, but maybe he'll comment on that here on this blog. Brian made a good reference available on his site: the indictment of Bx1, the SpyEye creator/botherder (interesting that they charged wire fraud and not CFAA, but that's for another post).
I call your attention to paragraph 24 in the "Overt Acts" section of the indictment. It gives one a good understanding of how these criminals come together to plot these schemes.
Another good discussion can be found here at TechRepublic, which then links to an academic paper found here.
On a final note, penetrating criminal groups is one thing - but what about APT? Aside from some of the open-source intelligence gathered and presented in Mandiant's APT1 report, one suspects that this is the kind of information only the NSA and CIA could provide. It seems like we've got a long way to go before the USG figures out how to declassify material and provide actionable intelligence on these groups without burning its sources.
So which is more valuable? I think one needs to use both, without expecting too much out of either. But, this is a blog, so I don't have to solve the issue. Comments are welcome.