
Saturday, February 14, 2015

Violence of Action

One of the things I've learned as an incident responder over the years is that we defenders have a very short time period in which to react to suspected attacks.

Mandiant's seminal report on APT1 noted that these sophisticated attackers had, on average, penetrated the target a year prior to being detected.  In one of the investigations they conducted, the attacker had persisted in the network for 4 years and 10 months!

This is interesting.  First off, it is really amazing that Mandiant was able to find evidence five years old.  This contradicts what I have been taught and what I have experienced.  My guess is that it was a stroke of good luck, or that they had a very definitive indicator of compromise (IOC) and were able to get backups going back far enough to establish "patient zero."  That is some excellent investigation, and a client that was willing to pay enough to let them dig that far back.

As pointed out in Mandiant's report, digital evidence is ephemeral, which means that responding quickly to an event is paramount.

We can increase our chances of conducting a successful investigation if the local security team has done one or more of the following (or managed to get their enterprise to do the following):

  1. Enabled verbose logging - everywhere, ideally, but at least on critical infrastructure, servers, and other likely targets at the host level, and on all network devices and security technologies.
  2. Created a secure and centralized logging infrastructure (a minimal sketch of log forwarding follows this list).
  3. Deployed good packet capture or other network surveillance technologies (netflow, etc.)
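
To make point 2 concrete, here is a minimal sketch (my own illustration - the collector address and file path are placeholders, not a real deployment) of a host forwarding verbose application logs to central syslog infrastructure using Python's standard library:

```python
import logging
import logging.handlers

# Hypothetical central collector - replace with your own log host.
CENTRAL_SYSLOG = ("logs.example.internal", 514)

logger = logging.getLogger("app-security")
logger.setLevel(logging.DEBUG)  # verbose logging, per point 1

# Forward every event to the central syslog infrastructure (point 2),
# in addition to keeping a local copy on disk (illustrative path).
syslog_handler = logging.handlers.SysLogHandler(address=CENTRAL_SYSLOG)
local_handler = logging.FileHandler("app-security.log")

formatter = logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
for handler in (syslog_handler, local_handler):
    handler.setFormatter(formatter)
    logger.addHandler(handler)

logger.info("authentication failure for user=%s from src=%s", "jdoe", "203.0.113.7")
```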

However, the point of this post is not about doing a root cause analysis and being able to go "back in time" to identify the initial penetration.    The point is about reacting quickly and "violently"  to initial indicators that something is wrong in your environment.

This is something I have been thinking about a lot recently.  Anyone who does security operations knows that there is a deluge of events that must be processed effectively and efficiently by security analysts.  Distinguishing signal from noise is THE main issue for security operations.

[Note: I am using the term "security operations" rather than "incident responders" - in some organizations, these may not be the same people.  For the purposes of this blog, security operations refers to the analysts who are engaged in incident detection.]

I don't know that I have any definitive rules for knowing when an event is an incident.  But I think this is a topic that we security professionals should start discussing more, and one where knowledge transfer should occur.  I will be posting more about this topic in the future.

However, here are some final thoughts.  In each of the APT intrusions investigated by Mandiant, I would bet my bottom dollar that there was some indication that things were amiss.  Yes, some enterprises don't even have the detective controls to look at.  I wonder, though - in how many cases did an alert fire and no one looked at it? (The age-old question - if a tree falls in the forest...)  We know for a fact that this happened in the Target breach.

So - when an alert fires, one certainly needs to respond quickly.  My contention is that even in instances where merely suspicious events arise, they should be quickly and fully investigated to the satisfaction of the entire security team.  This may seem like an obvious point, but we should be mindful of the quality of our investigations to ensure that what we think we are seeing is in fact what we are seeing.  More on this later (before I get "TL;DR'd").


Thursday, January 1, 2015

Horizontal versus Vertical Network Segregation

Originally, we considered looking at security from a "ring" perspective.  This shouldn't be surprising, given that since the beginning of time, humans have created rings for safety.  The moat and the castle, especially on a hill, meant safety.

So, we have the same situation in infosec.   If we create a perimeter, then ostensibly, we have security. 

However, in today's networked world - we have numerous ways in which our perimeter gets distended.   AWS instances,  mobile devices,  federated identity,  BYOD - you name it.

Thinking of your network as permeable requires that you construct defenses (preventive controls) WITHIN the network - and segregating the internal network is one of those approaches.  I've written about this elsewhere, but utilizing subnets for mission critical infrastructure, sensitive data and other important assets and protecting those "enclaves" is a valid strategy.

This is akin to the "tiering" approach for web applications, where web servers reside in the DMZ and are separated from middleware/business logic layers and data stores.  I think of this as "horizontal segregation," where the function of the asset dictates its placement in an enclave.  So a data store layer that has your databases in a secure, segregated enclave would be a good example.
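
To make the horizontal picture concrete, here is a rough sketch, purely illustrative, of carving an internal address range into per-tier enclaves with Python's ipaddress module; the 10.10.0.0/16 range and the tier names are assumptions for the example:

```python
import ipaddress

# Hypothetical internal range - substitute your own allocation.
internal = ipaddress.ip_network("10.10.0.0/16")

# Carve the space into /24 enclaves, one per functional tier, so that
# firewall/ACL policy can be written against enclave boundaries.
enclaves = dict(zip(
    ["dmz_web", "middleware", "data_store", "management"],
    internal.subnets(new_prefix=24),
))

for name, net in enclaves.items():
    print(f"{name:12s} {net}")

# Example policy check: is a given host inside the data-store enclave?
host = ipaddress.ip_address("10.10.2.15")
print(host in enclaves["data_store"])
```

Policy then follows the enclave boundary - the data store enclave, for instance, would only accept traffic from the middleware enclave, not from the DMZ or the Internet.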

But - how about vertical segregation?  If we assume that applications have multiple tiers internally, does layer 3/4 segregation mean anything anymore?  We all know that firewalls and network layer controls are ineffective against web application attacks.  The application allows us to interact with it over port 80/443 and allows us to touch the data store in the "back end".

From my perspective, in providing security for hosting/data center solutions, vertical segregation is how we keep multiple tenants from touching each other's data.  It also has the effect of isolating a particular service, website, or web application from OTHER services, websites, or web applications.

So how do we enable communication between hosts in the back end then?  Do we provide a management backplane of connectivity that allows data to flow between endpoints?  Or can we force all data transfers out the front door to our edge router, which then sends it to application B's "front end"?

Food for thought?


Threat intelligence, Part 2

This year has been very interesting in terms of massive, critical vulnerabilities in some of the fundamental technologies and protocols that underpin the Internet (Heartbleed and POODLE), and in some of the foundational software packages that comprise our computing platforms (Bash - Shellshock).

As a practitioner who has had to scramble around to patch these systems, I noticed some interesting artifacts that got me thinking about threat intelligence.  In addition to patching machines, we were aware that some of these vulnerabilities had been around for some time before public disclosure.  Even after public disclosure, there was a period of time before patches were available for all of the affected platforms that an enterprise might have in its environment.

Which leads me to this: these types of vulnerabilities can potentially serve as a way to identify your adversaries.  Think of it - if you have an adversary who wants to get into your network and a vulnerability like Heartbleed or Shellshock becomes known, don't you think they will want to probe you immediately to see if they can use the window between disclosure and patch to compromise your systems?

From a vulnerability management perspective, this is why compensating controls are so important.   You must protect yourself while you are waiting for that patch.

However, I would make the point that it is also extremely important to log any attempts to take advantage of the vulnerability in as verbose a fashion as possible.  If you have the choice of two different compensating controls and one gives you a better view/more intelligence about the attacking party, choose that one (actually, choose both!).
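
As a hedged sketch of what that might look like (the pattern, log file name, and field choices here are my own, not any particular product's), a compensating control in front of a vulnerable service could both refuse a Shellshock-style request and record everything useful about it:

```python
import json
import logging
import re
from datetime import datetime, timezone

# The classic Shellshock marker: a Bash function definition in a header value.
SHELLSHOCK_RE = re.compile(r"\(\)\s*\{")

logging.basicConfig(filename="exploit_attempts.log", level=logging.INFO)
security_log = logging.getLogger("compensating-control")


class BlockAndRecord:
    """WSGI middleware sketch: block suspicious requests, but log them verbosely
    so the attempt itself becomes intelligence rather than just dropped traffic."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        suspicious = {
            k: v for k, v in environ.items()
            if isinstance(v, str) and SHELLSHOCK_RE.search(v)
        }
        if suspicious:
            security_log.info(json.dumps({
                "time": datetime.now(timezone.utc).isoformat(),
                "src": environ.get("REMOTE_ADDR"),
                "path": environ.get("PATH_INFO"),
                "user_agent": environ.get("HTTP_USER_AGENT"),
                "matched_fields": suspicious,  # keep the full attack strings
            }))
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"blocked\n"]
        return self.app(environ, start_response)
```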

Different types of attacks can provide more detailed intelligence on your attackers.

Heartbleed didn't really give us much in the way of telemetry about our attackers, mostly because in the early period after the disclosure, there wasn't a good way of detecting probes.  Later, there were some IDS signatures.  At best, you'd get some idea of the IP addresses that were probing you.  And as we all know, an IP address does not provide strong attribution.  Using it to identify your adversaries is not a winning strategy.

Shellshock gave us a lot more to go on.  Attackers had to craft "attack strings" to take advantage of the vulnerability, and these strings appeared in web logs.  Reviewing those logs could give us some idea of the attacker's approach:

Tool repositories
Preferred commands
Desired targets - both the hosts and the files on them

Now, these are the kinds of tactics, techniques, and procedures (TTPs) that provide better identification of your attackers.
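
As a closing illustration, here is a rough sketch of that kind of log review; the log path, regexes, and field assumptions are mine and would need to be adapted to your own web server's format:

```python
import re
from collections import Counter

# Shellshock attack strings typically show up in User-Agent or Referer fields
# of web logs in the form: () { :;}; <command>
ATTACK_RE = re.compile(r"\(\)\s*\{[^}]*\}\s*;(?P<command>[^\"]+)")
URL_RE = re.compile(r"(?:https?|ftp)://\S+")
IP_RE = re.compile(r"^(\d{1,3}(?:\.\d{1,3}){3})")

sources, commands, tool_urls = Counter(), Counter(), Counter()

# Hypothetical combined-format access log; adjust the path for your server.
with open("access.log", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        attack = ATTACK_RE.search(line)
        if not attack:
            continue
        ip = IP_RE.match(line)
        if ip:
            sources[ip.group(1)] += 1                   # who is probing us
        commands[attack.group("command").strip()] += 1  # preferred commands
        for url in URL_RE.findall(attack.group("command")):
            tool_urls[url] += 1                         # tool repositories

for title, counter in [("Sources", sources), ("Commands", commands), ("Tool URLs", tool_urls)]:
    print(title)
    for item, count in counter.most_common(5):
        print(f"  {count:5d}  {item}")
```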