Clearing the Cloud Part III || How Do You Solve A Problem Like “A Cloud”? || Cloud Computing Security

This entry is part of the "Clearing the Cloud" series. Entries in this series:
  1. Cloud Security Article - 1st in a Series
  2. Cloud Security: Danger (and Opportunity Ahead)
  3. Clearing the Cloud Part II | A Ray of Sunshine On A Cloudy Day || Cloud Computing Security
  4. Clearing the Cloud Part III || How Do You Solve A Problem Like “A Cloud”? || Cloud Computing Security

In the first in this series of “Clearing the Cloud” columns, I explored the dangers of jumping too soon into Cloud Computing. In the second in the series, I defined relevant risks that we must consider when implementing Cloud Computing and promised to show some solutions. In this article, the third in the series, I continue sharing my vision on how to manage and secure cloud-computing solutions.

Clearing the Cloud Part III – How Do You Solve A Problem Like “A Cloud”?

By Ariel Silverstone, CISSP

 

As I promised, this is not an article meant to restate the problems. In the sections that follow, I will put forward some ideas on how to resolve the issues defined in my two previous articles. I will also attempt to show some of the security-related benefits we can garner from the use of Cloud Computing, especially benefits we could not realize before, or could not realize easily.

The Approach

Part I – A Cloud OS:

In the early days of such companies as NetApp and EMC, one of the largest challenges faced by hosting providers was how to allocate, measure, and control bit/strip/block assignment to a specific user, and how to protect such elements from unauthorized access, modification, erasure, and disclosure. Sound familiar? That concern ultimately led to elaborate control systems, and to the concept of the filer. Today, every large enterprise uses those tools and concepts, usually seamlessly, and provides online and near-line service to its users and customers. When I was a wee lad at the fantastic organization called “Global Integrity”, one of my mentors was Anish Bhimani. Anish has since gone on to greater things but, back at Global, he asked me a question about one of my inventions. I remember driving (being lost) in Reston, VA, looking for a sushi place, and having an “aha!” moment because of his question: “The solution should be out of band!” So, the solution we are seeking should be:

“A globally synchronous operating system, operating over TCP/IP, that has the capability to handle user management, LDAP integration, and out-of-band bucket control.”

Caption 1: Ariel’s Cloud Law Number 8: The Need for a Cloud OS

What is bucket control?

Let’s do the following (a minimal sketch follows the list):

  1. Create a bucket numbering and identification system where:
    1. Such identification is created on the fly
    2. Such identification has a lifespan that terminates when the utility of the bucket terminates
    3. Such identification is inherited by backup media (tapes or other identically copied buckets)
    4. Such identification takes into account the ownership (process, user, organization, etc.) of the data in that bucket
    5. Such identification is based on a federated model, where different physical locations, and even different Cloud service providers, can understand, accept, and act upon each other’s schemes
    6. Optionally, such identification is tied to a digital certificate scheme
  2. Implement a tethering scheme, à la DRM but much more user friendly, to monitor, pull, identify, and allow/disallow access to such buckets
  3. Implement an in-bucket modular encryption ability
  4. Apply a monitoring, auditing, measuring and reporting mechanism
  5. And finally … allow relationships and some property inheritance between buckets.
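To make this concrete, here is a minimal sketch in Python of what such a bucket identity could look like. Every name, field, and helper in it (BucketID, inherit(), the provider strings) is a hypothetical illustration of the list above, not a specification:

```python
import uuid
from dataclasses import dataclass, field, replace
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass(frozen=True)
class BucketID:
    """Hypothetical federated bucket identity (illustrative only)."""
    provider: str                       # federated namespace: which provider issued it
    owner: str                          # process, user, or organization owning the data
    uid: str = field(default_factory=lambda: uuid.uuid4().hex)  # created on the fly
    expires: Optional[datetime] = None  # lifespan ends when the bucket's utility ends
    cert_fingerprint: Optional[str] = None  # optional digital-certificate tie-in

    def is_valid(self, now: Optional[datetime] = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return self.expires is None or now < self.expires

    def inherit(self, medium: str) -> "BucketID":
        """Copy the identity onto a backup medium (tape or replica bucket)."""
        return replace(self, provider=f"{self.provider}/{medium}")

# Example: a bucket that lives for 30 days and is backed up to tape
bid = BucketID(provider="cloudco-eu", owner="user:asilverstone",
               expires=datetime.now(timezone.utc) + timedelta(days=30))
tape_copy = bid.inherit("tape-vault-7")
print(bid.uid == tape_copy.uid, bid.is_valid())   # True True
```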

Part II – A Reference Model

For ease of use, let’s adopt the model we all know, and some of us love – the ISO Networking Reference Model… but with a twist. What would happen if we took the tower and leaned it on its side?

Image 2: The ISO OSI Model, On Its Side

Whether we will need a Presentation model is not something I can settle here – Cloud Computing is too young a concept to divine whether one will be needed. So let’s start with the others:

Part III – Nifty Things

Ok, so we have a Cloud. What can we do with that? The sections that follow contain some ideas I am thinking of. If the response to this article shows interest in them, I will elaborate a lot more in a separate tome.

1.  Logging (or how to out-Google Google):

  • What if we are the Cloud service provider (CLaSP) and
  • What if we have logs, for example about intrusion attempts, against a piece of the infrastructure and
  • What if we have the ability to collect such logs, which to us potentially represent a great many customers, and
  • What if we have the smarts to correlate such logs for time, source, destination and other criteria

So far, nothing new, right?

And what if, because of the sheer volume of any specific type of log, and our correlation, we were able to predict the probability of future events, or even the time of day of a predicted occurrence?
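As a toy illustration of the idea – not any provider’s real pipeline – the sketch below pools intrusion-attempt logs across customers and uses a simple hour-of-day histogram to guess when the next attempt is most likely. The log format and all values are invented:

```python
from collections import Counter
from datetime import datetime

# Invented log format: (timestamp, source_ip, destination, event_type)
logs = [
    ("2009-06-01T03:14:00", "198.51.100.7", "custA-web", "ssh-bruteforce"),
    ("2009-06-02T03:40:00", "198.51.100.7", "custB-web", "ssh-bruteforce"),
    ("2009-06-03T04:02:00", "203.0.113.9",  "custC-web", "ssh-bruteforce"),
    ("2009-06-04T15:30:00", "203.0.113.9",  "custA-db",  "port-scan"),
]

def predict_peak_hour(logs, event_type):
    """Histogram the hours at which an event type occurs across ALL customers,
    and return the most common hour - a crude time-of-day prediction."""
    hours = Counter(
        datetime.fromisoformat(ts).hour
        for ts, _src, _dst, etype in logs
        if etype == event_type
    )
    hour, count = hours.most_common(1)[0]
    return hour, count / sum(hours.values())   # predicted hour + share of past events

hour, share = predict_peak_hour(logs, "ssh-bruteforce")
print(f"ssh-bruteforce most likely around {hour:02d}:00 ({share:.0%} of past events)")
```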

Image 3: Predicting the future. (The Tonight Show Starring Johnny Carson)

2. Where Social Media and Clouds Meet

Don’t roll your eyes at me!

Imagine: what if we could enable a PKI-like social trust not only between users, but between resources as well? What if we could say: user ASilverstone has secure access to his bits wherever they are in the world, whoever hosts them, whenever he needs them? Just how powerful would secure, ubiquitous information access be?

In other words:

“Do we need to know where the data is, as long as we can access, handle, process, control, and remove it?”

Caption 4: Ariel’s Cloud Law Number 9: Caring is Not Sharing
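Here is a minimal sketch of that notion, with every principal and structure invented for illustration: access is decided by walking a chain of trust from the user to the resource, and the data’s physical location never enters the decision:

```python
# Hypothetical federated trust map: who vouches for whom (PKI-like, but social).
# Keys are principals (users, identity providers, resources); values are
# the principals they vouch for.
trust_graph = {
    "user:asilverstone": {"idp:cloudco"},
    "idp:cloudco": {"idp:othercloud"},          # cross-provider federation
    "idp:othercloud": {"bucket:asilverstone-docs"},
}

def has_trust_path(graph, subject, resource, max_hops=4):
    """Breadth-first search for a chain of trust from subject to resource.
    Where the resource physically lives never enters the decision."""
    frontier, seen = {subject}, set()
    for _ in range(max_hops):
        frontier = {t for p in frontier for t in graph.get(p, ())} - seen
        if resource in frontier:
            return True
        seen |= frontier
    return False

# The bucket may live in any data centre, under any host - access depends
# only on the trust chain:
print(has_trust_path(trust_graph, "user:asilverstone", "bucket:asilverstone-docs"))  # True
```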

3. OS? We Don’t Need No Stinking OS!

If end users, aka “desktops”, have access to a BOOTP/DHCP/Terminal Server-like secure access mechanism and an ever-expanding (or contracting) storage and processing element… do they need a desktop OS? What if all our desktops were unlimited? What if we paid for what we used, instead of for an over-bloated OS that includes kitchen sinks most of us will never use?

Now your turn: What do you think?

Evolution of Defense in Depth

As security professionals will tell you, one of the basic principles of a good security program is the concept of Defense in Depth. Defense in Depth is arguably the most time-tested principle in security, and applies to physical security as well as information security. Defense in Depth builds on the concept of a hardened “core”, where one places the “crown jewels”. This core is then surrounded by castle walls and moats, with ever-increasing generality of defense.

Defense in Depth is a great concept, but it comes at a price. Just as the area covered widens from layer to layer, so does the cost of protecting against ever more plentiful and ever less specific threats. A firewall, for example, which typically acts as the first line of defense at the enterprise perimeter, has to protect against a great many varieties of threats, while a server-room door has “only” to be concerned with physical access.
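To put that cost asymmetry in code form, here is a toy model (entirely invented, not a real control catalogue): the outer layers must account for many threat classes, while the inner ones handle very few:

```python
# Toy model of Defense in Depth (illustrative only). Each layer names the
# threat classes it must handle; outer layers face many, inner layers few -
# which is exactly where the cost asymmetry comes from.
layers = [   # outermost first
    ("perimeter firewall", {"port-scan", "malware", "dos", "spoofing", "worm"}),
    ("host hardening",     {"remote-exploit", "privilege-escalation"}),
    ("app authentication", {"credential-theft"}),
    ("server-room door",   {"physical-access"}),
]

def first_stop(threat_class):
    """Return the outermost layer responsible for a given threat class."""
    for name, handles in layers:
        if threat_class in handles:
            return name
    return "unhandled - layer 8 (the user) is the last resort"

for name, handles in layers:
    print(f"{name}: must handle {len(handles)} threat classes")
print(first_stop("credential-theft"))   # app authentication
```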

The Server Room in The Center of The Castle

Another flaw in the Defense in Depth design is the inherent difficulty of implementing it vis-à-vis the three basic tenets of security: Confidentiality, Integrity, and Availability. Why? Because most forms of defense increase Confidentiality but make Integrity more difficult to implement and manage. And any increase in defense, of course, makes Availability that much harder to provide to the users.

A difficulty that I myself have encountered many times is the applicability of Defense in Depth to my “layer 8” problem – the users. If users are not trained properly, if they are not aware of information protection needs, methods, and the “why?” of it all, they become a liability, rather than an asset, to data security. If you are like me, you find the need to increase our moat-to-user ratio on an ongoing basis ever harder to design, implement, manage, and pay for. Many of us resign ourselves to the proverbial “this is reality” and define our demarcation line as a physical device, such as a router, an access point, a firewall, or a web server. There are potentially two things “wrong” with doing so:

  1. We are basically saying  “we are a target just waiting to be attacked” and
  2. We allow most barbarians (in the form of rogue traffic, networks and devices) to hit our gates

If we continue to do so, we approach a mathematical certainty of being hacked, or at least DDoS’ed off the Net. I really prefer NOT to draw analogies here to the real world, and we all know which those are.

Not only is the problem above big enough to cause some to lose sleep, but imagine what happens when we move to a Cloud topology… there we have nothing but moats and walls and front doors.   These front doors can be any browser, on any device, anywhere in the world.   How do you protect yourself against that?   Speaking of losing sleep – I love coffee, but this is ridiculous.

Clouds, Doors, and Windows. Source: desktopnexus.com (heavily edited)

 

Because it might involve our entire user set – which may include Internet users rather than purely corporate users – any solution must be:

  1. Easy to teach (i.e. close-to-zero learning curve)
  2. Easy to implement
  3. Applicable to the widest range of platforms possible
  4. Small in delivery and storage footprint
  5. Easy to manage and maintain

Not asking for much, am I? 

Knowing how rapidly threats evolve “in the wild”, I also want a tool that does not go the normal route of “blacklisting”. I am more and more convinced that, in our world of security, we need tools that no longer compare bad signatures or behavior to a database (which is how most antivirus products and firewalls, for example, act); we need to go the “whitelist” route. I will write about that in the future. Yes, I want a tool that is controlled by me and allows me to choose which domains can be accessed, and under what (time or other) conditions such access can occur. Let’s add those to my “dream list”:

  6. White list based
  7. Conditional access

To make matters even more interesting, I want control over certain user functions. (We want, after all, to reduce the number of barbarians and the number of roads leading to our castle, don’t we?) We want to make sure that the people who request a resource are authorized even to request it.

For example, I would like some files to be readable and writable, but not printable. Or I would like to be able to control the launching of certain tools, such as IM clients or browsers, from within the session. And finally (?) I want a bulletproof audit trail. Why? SOX, GLBA, and HIPAA, to name but a few. That adds two more items to the dream list (a sketch pulling the whole list together follows below):

  8. Selective access to file functions
  9. Audit trail
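To make the dream list concrete, here is a minimal sketch of such a policy check in Python. It is purely illustrative – the policy fields, domain names, and file operations are all invented, and it is not any product’s API:

```python
from datetime import datetime

# Invented dream-list policy: whitelist + conditions + per-file-function rights.
POLICY = {
    "allowed_domains": {"portal.example.com", "mail.example.com"},   # 6. whitelist
    "allowed_hours": range(7, 19),                                   # 7. conditional access
    "file_functions": {"read": True, "write": True, "print": False}, # 8. selective functions
}
AUDIT_LOG = []                                                       # 9. audit trail

def authorize(user, domain, operation, when=None):
    when = when or datetime.now()
    allowed = (
        domain in POLICY["allowed_domains"]
        and when.hour in POLICY["allowed_hours"]
        and POLICY["file_functions"].get(operation, False)
    )
    # Every decision - allow or deny - lands in the audit trail.
    AUDIT_LOG.append((when.isoformat(), user, domain, operation, allowed))
    return allowed

print(authorize("asilverstone", "portal.example.com", "read"))   # True if run 07:00-18:59
print(authorize("asilverstone", "portal.example.com", "print"))  # False: printing denied
print(authorize("asilverstone", "evil.example.net", "read"))     # False: not whitelisted
for entry in AUDIT_LOG:
    print(entry)
```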

 

What should we do?

Until now, I had not seen any solution to this quandary. Other than awareness and training, there was not a whole lot that could be done. Even MSSPs will tell you – they are there for a reason, and that reason is “people will attack you”.

Thanks to my friend Andreas Wuchner, the CISO of Novartis, I ran (head first, mind you!) into a newly launched company called Quaresso. Launched by a group of smart people with backgrounds in networking and security, they created, in “Protect On-Q”, both a new product and a service. Working together, these allow us to do a few things:

  • Firstly, they allow us, the people responsible for the data’s protection, to select who will be allowed to knock on our doors, and with what. Simply put, if you so choose, people without the proper tool will not even be allowed access to your castle. “Not allowed on the island”, if you will. And this permission is manageable in real time.
  • Then, you can select not only which browser is allowed to knock at your door, but also what (and what NOT) that browser is allowed to contain: add-ins, plug-ins, encryption settings, printing ability (or not), security zone settings, and the list goes on. This effectively extends defense in depth to the actual browser session!
  • If that were not enough, you are able to control THE ROUTE that your users take to reach you. While it may seem unimportant, or even impossible, controlling a browser’s allowed connections protects against DNS hijacking and man-in-the-middle attacks, to name just two examples; knowledge and selection of the route is critical.
  • Zero-day (zero-minute, really) malware protection – if it is not known, it does not get transported. Simple and neat!
  • And the final cherry on top? Remember all those viruses, trojans, key loggers, and co.? Due to the implementation of the “armored” browser, data can no longer leak from it to the rest of the operating system. All passwords and personal information typed into a protected browser session remain confidential and unrecordable. I know I will sleep better.

I tested the tool in several scenarios. The only drawback seems to be the need to install another icon on the user’s screen. I particularly loved it when, running the tool with a sniffer on, I could detect no data passing from the browser unencrypted. So much for data leakage via this route!

So… let’s compare this tool to my wish list.

| # | Wish List Item | Protect On-Q Delivers |
|---|----------------|-----------------------|
| 1 | Easy to teach (close-to-zero learning curve) | Yes. Being browser based, it basically requires a click. |
| 2 | Easy to implement | Yes. The user downloads an add-on or a link to their desktop and allows it to run; the tool does NOT require admin rights on the installing system. For web applications, a simple check-if-present mechanism allows the application to be On-Q aware. |
| 3 | Applicable to the widest range of platforms possible | Yes. Being browser and Java/ActiveX based, the tool works with most publicly available browsers – and since browsers ship with virtually every computer nowadays, the prerequisites are built in. |
| 4 | Small delivery and storage footprint | Yes. The package I tested was less than 450 KB in size. |
| 5 | Easy to manage and maintain | Yes. They offer a partnering console to monitor, manage, and update the remote pieces. |
| 6 | White list based | Yes. It is not only a design philosophy: an administrator at The Bank of Atlantis, for example, can restrict the tool to selected systems within a selected domain. Nifty. Imagine allowing remote users to access a certain system, but not payroll. |
| 7 | Conditional access | Almost. Domain selectivity is in place and working; time/location conditions are not yet implemented and may be, depending on industry demand. For now, that variable is relegated to the accessed system. |
| 8 | Selective access to file functions | Yes, by two separate mechanisms: control over which browser add-ons are present (so tools like PDF viewers and key-loggers can be excluded), and control over the browser’s file operations – you can, for example, allow or deny printing remotely (that is, at the user’s site). |
| 9 | Audit trail | Yes. Extensive auditing is available and, because what I saw was an early product, new reports are being developed continuously. |

The tool does all of this while requiring a zero learning curve of the users: they use the same browser they are used to and click as they normally would. No new software, no new directions, nothing. We have now protected another layer of Defense in Depth and greatly increased our control of who comes knocking at our doors.

Try it and let me know what you think.