Thursday, January 1, 2015

Horizontal versus Vertical Network Segregation

Originally we considered looking at security from a "ring" perspective.  This shouldn't be surprising, given that since the beginning of time, humans have created rings for safety.  The moat and the castle, especially on a hill, meant safety.

So, we have the same situation in infosec: if we create a perimeter, then ostensibly, we have security.

However, in today's networked world, we have numerous ways in which our perimeter gets stretched: AWS instances, mobile devices, federated identity, BYOD - you name it.

Thinking of your network as permeable requires that you construct defenses (preventive controls) WITHIN the network - and segregating the internal network is one of those approaches.  I've written about this elsewhere, but utilizing subnets for mission-critical infrastructure, sensitive data, and other important assets, and protecting those "enclaves," is a valid strategy.
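As a rough sketch of the enclave idea, here's how you might carve an internal range into role-based subnets using Python's `ipaddress` module.  The address ranges, enclave names, and sizes are illustrative assumptions on my part, not a prescription:

```python
import ipaddress

# Hypothetical internal range; the enclave names and /24 sizing are illustrative.
internal = ipaddress.ip_network("10.0.0.0/16")

# Carve the range into /24 subnets and assign roles to a few of them.
subnets = list(internal.subnets(new_prefix=24))
enclaves = {
    "general": subnets[0],           # 10.0.0.0/24  - ordinary workstations
    "mission_critical": subnets[10], # 10.0.10.0/24 - critical infrastructure
    "sensitive_data": subnets[20],   # 10.0.20.0/24 - databases, regulated data
}

def enclave_of(addr: str) -> str:
    """Return which enclave an address belongs to, or 'unassigned'."""
    ip = ipaddress.ip_address(addr)
    for name, net in enclaves.items():
        if ip in net:
            return name
    return "unassigned"

print(enclave_of("10.0.20.7"))  # sensitive_data
print(enclave_of("10.0.5.1"))   # unassigned
```

In practice the enclave map would live in your firewall/ACL policy rather than application code; the point is that placement in an enclave is a deliberate, auditable decision.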

This is akin to the "tiering" approach for web applications, where web servers reside in the DMZ and are separated from middleware/business logic layers and data stores.  I think of this as "horizontal segregation," where the function of the asset dictates its placement in an enclave.  A data store layer that keeps your databases in a secure, segregated enclave is a good example.
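One way to picture horizontal segregation is as an allow-list of flows between tiers, with everything else denied by default.  The tier names and port numbers below are illustrative assumptions, not a recommendation:

```python
# Allowed flows between tiers: (source_tier, dest_tier) -> permitted ports.
# Tier names and ports here are hypothetical examples.
ALLOWED_FLOWS = {
    ("internet", "web"): {80, 443},       # public traffic stops at the DMZ
    ("web", "middleware"): {8080},        # web tier may call business logic
    ("middleware", "datastore"): {5432},  # only middleware reaches the DB
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Check whether a flow is permitted by the tier policy (default deny)."""
    return port in ALLOWED_FLOWS.get((src, dst), set())

print(is_allowed("web", "middleware", 8080))      # True
print(is_allowed("internet", "datastore", 5432))  # False - no direct path
```

Note what the default-deny matrix captures: the internet never reaches the data store directly, even though each intermediate hop is permitted.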

But - how about vertical segregation?  If we assume that applications have multiple tiers internally, does layer 3/4 segregation mean anything anymore?  We all know that firewalls and network layer controls are ineffective against web application attacks.  The application allows us to interact with it over port 80/443 and allows us to touch the data store in the "back end."

From my perspective - providing security for hosting/data center solutions - vertical segregation is how we keep multiple tenants from touching each other's data.  It also has the effect of isolating a particular service, website, or web application from OTHER services, websites, or web applications.

So how do we enable communication between hosts in the back end then?  Do we provide a management backplane of connectivity that allows data to flow between endpoints?  Or can we force all data transfers out the front door to our edge router, which then sends it to application B's "front end?"
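The two options above can be sketched as a toy routing decision.  The tenant names and the "backplane" flag are hypothetical, just to make the trade-off concrete:

```python
# Toy model of the two back-end connectivity choices discussed above.
# Tenant names and the 'backplane' flag are illustrative assumptions.

def next_hop(src_tenant: str, dst_tenant: str, backplane: bool) -> str:
    """Decide how one application's back end reaches another.

    With a management backplane, hosts belonging to the same tenant talk
    directly; everything else (and all traffic, when no backplane exists)
    hairpins out through the edge router and re-enters via the destination
    application's front end.
    """
    if backplane and src_tenant == dst_tenant:
        return "direct-backplane"
    return "edge-router"

print(next_hop("app_a", "app_a", backplane=True))  # direct-backplane
print(next_hop("app_a", "app_b", backplane=True))  # edge-router
```

The backplane is faster but widens the internal attack surface; forcing traffic out the front door keeps every cross-application flow subject to the edge controls, at the cost of latency and edge capacity.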

Food for thought?

