Thoughts on Intel’s “Global Digital Infrastructure Policy” document



On July 16, 2010, Intel released a thought-leadership and policy document titled "Global Digital Infrastructure Policy".   In it, Intel shares what it has been doing to drive elements of security into the global infrastructure (which it terms the GDI); how it is working with certain governmental organizations; and what it hopes other entities, commercial or governmental, will agree to do in order to facilitate the stated vision.

In this entry, I put forth my thoughts on their ideas.   Some I criticize, but most I applaud, and I am glad that a company of Intel's stature has the understanding to design such views into its own plan.   I am no Kool-Aid drinker, however, and one must remember that Intel, just like other organizations, always has and always will put forth ideas that help it first and foremost.   This is a lens that must be acknowledged before any policy document can be understood and appreciated.



Intel proposes "GDI" as shorthand for the global digital infrastructure.   I have not encountered that definition before, but I do value the comparison of the GDI to the central nervous system in the human body.   In fact, those of us who favor the "Terminator" movies would liken it to 'Skynet', albeit a positive, benevolent one.

As the GDI concept is integral to this entire paper, it is important to note that Intel is focusing on a global element, and not on any particular political subdivision.   This is the strength of the paper, but also its weakness: including information assets in countries as diverse as the United States and North Korea assumes connectivity and the ability to 'synchronize' across those resources, an ability that, alas, is less than realistic in today's world.


The Global Conundrum

The paper makes a very important statement, which bears repeating here: "As reliance by individual and businesses on the GDI increases, there is a corresponding increase in the value users place upon the security of the network and…data traversing the network".

While I agree that such is a desirable state, I disagree that users, by and large, see the situation as such.   It is true that privacy, in particular, has seen a spike in importance in the US over the last few years.   What might not be correct is that all users, or even most, understand the degree to which their information is accessible, by not-previously-authorized eyes and ears, on the network.    The framework of laws, rules and regulations on which all users rely is simply not suited, in general, to the level of sophisticated threats that security professionals see day in and day out.  I do not disagree with Intel's intent, just with the reality of what I see in the marketplace.

Similarly, the call for the development of a global GDI policy strikes me as desirable, yet probably unattainable.  For example, as I discussed in my article "Time for a Cyber NonProliferation Treaty?", the US has had years to cooperate with Russia or the EU on electronic crime policies.  Even those policies, which have more direct and measurable results, have failed to be embraced worldwide.


A Tricky Subject

The paper continues by calling for "an end to import, export and use restrictions on cryptography for COTS and public research".   While such a call may sound liberating, it is a hopeless desire.   Imagine what could happen if commercial entities, some of which have invested billions in the development of ciphers, were free to export those to countries that may at one time or another be in a state of war with the origin country.   Two issues come to mind: first, the job of intelligence agencies would become much more difficult (think Nokia Siemens in Iran, in the service of the Iranian government), and second, those cipher methodologies could enhance the development of even more robust ciphers for military use worldwide.   Consider al-Qaeda using US-made high-grade encryption: would YOU want to be responsible for the free export of such tools to Afghanistan?



In my past, I was responsible for standards adoption at Symantec.    Doing so enabled me to see, in a limited fashion, 'around corners', and to better understand what would come down the pike next.   The paper's call for global adoption of a security framework or standard is applaud-worthy.  I would suggest starting with the ISO's adoption of the British Standards as ISO 27001 and 27002 (and 27003, 27004, 27005…) and only then considering a jump into the Common Criteria.   The Common Criteria, which I discussed in my paper on "The Strategy to Secure the Federal CyberSpace", is an important element in certifying systems and processes.   But the world must learn to walk before it can compete in marathons, and CC might be just such a marathon.



I applaud the paper's call for deepening government and private sector partnerships, especially on cybersecurity research.  Again, in my "The Strategy to Secure the Federal CyberSpace", I called for such an effort, which has also now been made a priority for Howard Schmidt, the National Cyber Security Coordinator for the US.  This is an essential element:  industry would bring innovation, and the government could bring intelligence and forewarning.



Intel's statement that "..a siloed, country-specific regulatory approach may…disrupt (the GDI)" is correct.   However, reality dictates, as Intel notes later in the paper, that we are not one 'species' only; we have cultural, religious and other differences that suggest, nay, require, such regulatory differences.   I could give examples here, but they should be self-evident to the reader.  We are simply too different, perhaps even more than 20%…

A bit later in the document, the statement is made that "governments around the globe should apply..principles such as technology neutrality..".  I once more agree with the intent, but think there are reasons this is not the case today.  Some countries protect their own manufacturers; others, such as Russia, require seeing the source code of every vendor entering the country.   I am afraid the distance to make Intel's vision a reality is quite substantial.


The Triangle of Trust

On the sixth page of the document, Intel introduces a concept that is new to me.   The Triangle of Trust itself is not, but as it is represented here, the sides are Industry, NGOs and Government:

Triangle of Trust

I applaud Intel once more for making the case for sharing and working together so clearly.   No one of these sides alone can further our security.   We must share knowledge and responsibility more freely to assure our success.




All in all, a good paper.   I would have liked to see more suggestions on practically approaching the global synchronization.


So, share with me and other readers your thoughts of Intel's "Global Digital Infrastructure Policy" paper!



This entry is part of the series "Clearing the Cloud".
In the first of these "Clearing the Cloud" columns, I explored the dangers of jumping too soon into cloud computing. In this article, the second in the series, I continue laying out my vision for how to manage and secure cloud-computing solutions.

Clearing the Cloud Part II – A Ray of Sunshine On A Cloudy Day

In the weeks since the publication of my first article, it has seemed that new information and new "Cloud Solutions" were popping up every day, and sometimes every hour. I am gratified to see that NIST, the National Institute of Standards and Technology, has published its (15th) draft definition of cloud computing, and with it, agreed with much of the definition I proposed in part I of this article – "Service based data processing and storage capability which is flexible, extensible and virtual".
NIST suggested that cloud computing has the following salient characteristics:
  • On-demand self-service
  • Ubiquitous network access
  • Location-independent resource pooling
  • Rapid elasticity
  • Measured service
It is interesting to note that NIST specifically called out the piece about the service having to be measured.   I wholeheartedly agree and take this to be a step in the maturity of Cloud Computing.

Security Models

The Jericho Forum proposed an interesting approach to cloud computing security. Starting with a description of Cloud Layers below allows us to envision the problem:

Figure 1: Jericho Forum's Cloud Layers


Here, the Forum proposed that Security (and Identity Management) are elements that cross all layers, and in effect provide a design they call Collaboration Oriented Architecture (COA). Once this foundation has been laid, they defined Cloud Security via a proposed cube-shaped model that highlights various architectural possibilities; the one addressed here is, of course, the outsourced / external / de-perimeterized option.
Figure 2: Jericho Forum's Cloud Security Model
At about the same time, the Cloud Security Alliance, of which I am a member, designed a not-too-different view.
The CSA broke down Cloud computing into three delivery types:
  1. Infrastructure as a Service (IaaS)
  2. Platform as a Service (PaaS)
  3. Software as a Service (SaaS)
And then proceeded to define the Cloud consumption models:
  1. Private
  2. Public
  3. Managed
  4. Hybrid
If we arrange those elements in a matrix, we get a cube similar to Jericho's work:
Figure 3: Cloud Computing Cube
The CSA's model of service delivery stacks, however, is another matter.    While I do not disagree with their reference model, I find it exceedingly complex.
So, allow me here to define the problem statement a bit differently. Because these are the early, formative days of the cloud discussion, let's expand the basic three tenets of security, which are:
  1. Confidentiality
  2. Availability and
  3. Integrity
And add additional controls.   We will borrow from Donn Parker’s Hexad, and add:
  1. Possession (or Control)
  2. Authenticity and
  3. Utility
Clearly, in the case of Cloud computing, and especially in the Public/External case, we no longer have any control. Once the bits “leave our network,” control passes elsewhere.   And then there were five.
Losing one control typically mandates an increase in the other controls. Here, we have another set of problems.    Let us explore the remaining controls:
  • Confidentiality: Typically, we handle confidentiality through the use of technologies such as encryption and access control.   We can still encrypt, but imagine what happens to a large data set.   It has to be sent, or assembled, in the Cloud, remain there in an encrypted form, and be transferred to us for processing.   Once the data is at our location, we have to decrypt it, perform the operations needed, then re-encrypt and resend it to the Cloud.   Doable – yes. The performance tax here is huge. While today's routers and servers no longer have their performance cut to one-sixth by encryption (a loss of more than 80%), we still pay a heavy price.
Figure 4: Lifecycle of Encrypted Data
Let us state once more: having the data unencrypted at any point in the storage or transfer process exposes it to unauthorized disclosure. Unauthorized exposure, of course, runs counter to any good security or compliance requirement, such as PCI or HIPAA. Even Amazon, with an inherent interest in providing such services, announced that its Cloud is neither PCI compliant nor intended for such work:
Hi, Thank you for contacting Amazon Web Services. Our payment system is PCI compliant and it is an “alternative payment processing service” meaning your users re-direct to our platform to conduct the payment event using their credit cards or bank accounts. The benefit for you is that we handle all the sensitive customer data so you don’t have to. If you haven’t looked at it, I highly suggest you check out the features and functions of our Flexible Payment Service and our Payment Widgets (

As for PCI level 2 compliance, that requires external scanning via a 3rd party, PCI-approved vendor. It is possible for you to build a PCI level 2 compliant app in our AWS cloud using EC2 and S3, but you cannot achieve level 1 compliance. And you have to provide the appropriate encryption mechanisms and key management processes. If you have a data breach, you automatically need to become level 1 compliant which requires on-site auditing; that is something we cannot extend to our customers. This seems like a risk that could challenge your business; as a best practice, I recommend businesses always plan for level 1 compliance. So, from a compliance and risk management perspective, we recommend that you do not store sensitive credit card payment information in our EC2/S3 system because it is not inherently PCI level 1 compliant. It is quite feasible for you to run your entire app in our cloud but keep the credit card data stored on your own local servers which are available for auditing, scanning, and on-site review at any time.


Cindy S.
Amazon Web Services

Figure 5: From here
Figure 6: Encryption in Cloud Case Study
Try the following as an example: Suppose you have a volume of credit-card bearing transactions that you must preserve for a period of one year.   And let’s assume that the data is in SQL form.   If so, the steps needed would include:


  1. Exporting the relevant tables
  2. Encrypting these files with suitable encryption
  3. Uploading the encrypted files to your cloud “bucket”
  4. Storing the data in the cloud, in an encrypted form
  5. Downloading it, while encrypted
  6. Decrypting the data
  7. Importing the data, and finally
  8. Processing it
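The eight steps above can be sketched end to end in Python. This is a toy illustration: the XOR "cipher" merely stands in for real encryption such as AES, the in-memory dict stands in for the cloud bucket, and sqlite3 stands in for the production database:

```python
import sqlite3
from itertools import cycle

KEY = b"demo-key"      # illustrative only; a real pipeline would use AES with managed keys
cloud_bucket = {}      # stands in for remote cloud storage

def toy_cipher(data: bytes) -> bytes:
    """XOR keystream: the same call encrypts and decrypts. NOT real cryptography."""
    return bytes(b ^ k for b, k in zip(data, cycle(KEY)))

# Steps 1-2: export the relevant table, then encrypt the export.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tx (id INTEGER, card TEXT)")
db.execute("INSERT INTO tx VALUES (1, '4111-1111-1111-1111')")
export = "\n".join(f"{i},{c}" for i, c in db.execute("SELECT * FROM tx")).encode()

# Steps 3-4: upload, and let the data rest in the cloud in encrypted form only.
cloud_bucket["tx.csv.enc"] = toy_cipher(export)

# Steps 5-7: download it still encrypted, decrypt locally, re-import.
restored = toy_cipher(cloud_bucket["tx.csv.enc"]).decode()

# Step 8: process locally (here we just confirm the round trip succeeded).
print(restored)  # 1,4111-1111-1111-1111
```

Every decrypt/re-encrypt cycle in steps 5-7 is the "performance tax" described above; it scales with the full size of the data set, not with the portion actually needed.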



One other element within Confidentiality is the ability to destroy data.  In a cloud that we do not own, and on storage media that we do not control, there is a high probability that the same media will be used for other purposes.   These storage buckets are dynamic, and the service/platform/application provider might allocate them to other users.

This sharing, and in many cases repeated sharing, of storage media leads to the need for assured destruction. We must follow a strict regime that states how long data is to be kept, when and by whom it is destroyed, and how such destruction is verified.  Since degaussing tapes and shredding CDs is out of the question, we must employ more agile software- (or, dare we say, hardware?) based methods to assure that destruction.




Figure 7: Chercher les …données?

This question becomes infinitely more complicated when we consider that data at rest does not necessarily "rest" on a certain platter of a certain hard drive. The data can, and usually does, move between storage locations on the drives.   The onus is still on us to assure confidentiality, but… we don't manage the drives.   The only practical solution here is to demand regular scouring of storage media from the service providers.   Do we think that such a requirement is feasible?



Figure 8: Attention to Storage Media
Finally, lest someone think I am only talking about the storage aspect of Cloud Computing, the above discussion is easily applicable to processing in a Cloud as well.
  • The next control we will deal with is Availability.   When dealing with a Cloud-computing resource, we are at the mercy of the network, the remote server, and whatever controls are applicable along the way, be they host- or network-related.

    Yes, we always were at the mercy of such risks, but we owned them before.   When we multiply one 99.5% availability figure by another, and another, the composite SLA quickly falls below 99%, and even further.   At what point does the enterprise take notice?    As we can see from recent, published outages at Google and elsewhere, users are very sensitive to the information they require, and rightly so.

    Figure 9:  Rapidly Decreasing SLA's
    (98.5% availability equals roughly five and a half days of downtime a year)
    Even when taking steps to "assure" access, which in reality translates into reducing exposure to this particular risk, we have typically resorted to building redundancy into the system.   Here, that would presumably mean adding lines, servers, networking equipment and personnel.   Doable, but at what cost? What does the complexity of redundancy mean to an organization? What is the true cost of operations?
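The compounding effect described above is easy to verify with a few lines of Python (the 99.5% figures are illustrative):

```python
# Composite availability of serially dependent components is the product of their SLAs.
def composite_sla(*slas: float) -> float:
    result = 1.0
    for s in slas:
        result *= s
    return result

# Three chained 99.5% components already fall to roughly 98.5%.
combined = composite_sla(0.995, 0.995, 0.995)
downtime_days = (1 - combined) * 365

print(f"combined SLA: {combined:.4%}")             # 98.5075%
print(f"downtime: {downtime_days:.1f} days/year")  # 5.4 days/year
```

Each additional dependency in the chain, whether network path, provider, or intermediary, multiplies in another factor below 1, which is why the composite figure degrades so quickly.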
Figure 10: Availability in Cloud Case Study
Let's look at an example: we have a volume of data that at times swells by a factor of ten, so cloud computing seems like the perfect solution.   Here is what may happen:
  1. We ask the Cloud service provider for data-storage bursting capability. We will estimate this payment at 10% of our regular Cloud computing cost.
  2. We ask our network services provider to create another redundant, highly available path to the Cloud service provider.   We will estimate that cost at 25% of our regular data communications cost.
  3. And now we must consider what we are to do if such a data burst occurs when we have no ability to send it to the cloud. Are we going to dispose of it?  Cease operations? No and no.   So here we must plan for (at least) the storage of such data regardless of whether we use cloud computing services.


  • As for Integrity, we can detect changes after they are made. From hashing to redundancy checks, from digital signatures to trip-wiring, we are able to ascertain that a change occurred.   But… we can no longer prevent changes.

    The bastion of defense in depth has crumbled when we talk about Cloud computing. We do not own the moats, the walls, or the doors (see my paper about the Evolution of Defense in Depth).   Accepting data without verification should be unthinkable, yet verifying all inbound data will be complex and costly, adding yet another layer to the mix of technologies and methodologies that we must wrangle.

    Indeed, the Cloud unchecked could lead to a wave of new attacks aimed directly at data whose guardians (by virtue of possession) are not incentivized to protect it from change, but mostly to speed it on its way.
    Cloud computing could be a gold rush for people designing man-in-the-middle attacks, too. While most hosting companies will boast of their monitoring and security, few, if any, can assure you that they have never been compromised.   In fact, a provider of Cloud data, with its already-built-in doorway (or tunnel) to you, makes an attacker's life easier. The attacker can now both alter the data AND assure that it, and associated payloads, make their way to the intended destination.
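Hashing, as noted above, detects changes after the fact but cannot prevent them. A minimal sketch of such verification keeps a keyed HMAC digest on our side before the data leaves our network (the key and record contents here are illustrative):

```python
import hashlib
import hmac

SECRET_KEY = b"local-signing-key"  # illustrative; kept on our side, never sent to the cloud

def sign(data: bytes) -> str:
    """Record an HMAC digest locally before the data leaves our network."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, recorded_digest: str) -> bool:
    """Detect (not prevent) tampering when the data comes back from the cloud."""
    return hmac.compare_digest(sign(data), recorded_digest)

record = b"student: J. Doe, grade: A-"
digest = sign(record)            # stored locally, not alongside the data

tampered = b"student: J. Doe, grade: A+"
print(verify(record, digest))    # True
print(verify(tampered, digest))  # False
```

The choice of a keyed HMAC over a plain hash matters: a provider or attacker holding both the data and a plain SHA-256 digest could simply recompute the digest after altering the data, while the keyed digest cannot be forged without the local key.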


Figure 11: Integrity in Cloud Case Study
Here is one scenario.   We have a university system that stores grade data in the cloud.   "Pranksters" attack the datacenter and access student data, changing some grades and deleting some students.
Unless we monitor very carefully, either by logging changes as they occur or by comparing the data to a "master" copy, such modified data runs a high probability of being recognized as the authoritative copy.
Worse yet, the data that was modified here has a known ownership.   These malefactors can well insert, install, and even custom-design a payload to affect the specific environment and systems found at that university – while the data itself is treated by the university as owned, and therefore "blessed", data.   This is a whole new definition of insider threat, isn't it?



Figure 12: Know What Data You Are Getting


  • Perhaps moving away a tad from the pure-security elements of C, I and A to the more "business" ones, the first we will discuss in the Hexad is Possession (or Control).   As recent developments in the realm of data-breach notification laws have shown us (see my article about the "new" version of Mass.' 201 CMR), the United States, albeit one state at a time, is moving closer to the European, and indeed worldly, model of "Data Ownership".


Figure 13: Definition of Data Ownership
Data ownership is a time-tested term and function that has been used, typically in the military realm, for over fifty years.   That term has slowly been filtering into the corporate world, and now into realpolitik.   The concept that every element of data has an owner is a simple one, really.    Do allow me to explain here, for the sake of completeness, that most elements of data have at least a few owners, and perhaps many, many more.
The data, for example, can be owned by a person designated by the Enterprise, by the system administrator, and/or by the individual it is about (in the case of PII, or personally identifiable data, for example).
Data ownership can specify who is responsible for the data, who can sell the data, whom the data is about, or what is the legal status of the data.   And frequently more than just one of these items.
Most professionals would agree that data ownership is far easier to define and maintain when the data is at a known location.    Even a relatively well-understood event, such as the data being transmitted from a server to a data center, can affect our concept of data ownership.   As we can see from breach-notification loopholes, selling the data, as if it were a fungible item, sometimes releases a company from responsibility for it, leading some companies to create wholly-owned subsidiaries, often with very different names, whose sole raison d'être is to shield the mother organization from disclosure rules.
Now comes the Cloud.   Even if we admit to owning the data, do we know where it is? Allow me to make an observation:
Figure 14: Ariel’s 7th Law of Cloud Computing
“Most laws are geo-political, and therefore lose their efficacy and meaning when involving trans-border clouds”.
So even if we are the best-meaning of CIOs, and the furthest thing from our mind is flouting the law, we are faced with a few obstacles in our way.   Let's state some, in no particular order:
  1. How do we comply with Breach notification laws?
  2. What happens if we have data regarding an EU national?
  3. What must we do when we disclose risk information to Auditors? To the SEC?
  4. How do we comply with rules relating to CALEA? E-Discovery? Data Forensics?
and the list goes on and on.
Lastly, we do remember that data has a lifecycle. Such a DPLC mandates, ultimately, that the data be disposed of in a secure manner.    Remember those Cloud buckets? Well, these must be certifiably erased when we are done with their utility.   How do we do that in a Cloud?
  • If we remember the example we used above, in the university case, Authenticity of data is a problem that must be addressed.   Sometimes seen as a combination of non-repudiation, integrity and accountability, Authenticity is a super-set that defines the reliability we assign to, and the trust we place in, our data.

Should data in/from a Cloud be seen as less-trusted data? If so, is there any worth to it?   Would the Cloud end up being used only for data we care least about? Only time will tell.   The following article will offer some solution ideas.



Figure 15: Authenticity in Cloud Case Study
Would you, for example, require a CRC-type check before you accept data?   Would that make your life easier or harder? And what about the following, admittedly oversimplified, scenario:
  1. A man walks into a bar (really);
  2. The man orders some drinks;
  3. One of the drinks ordered is Tsing Tao – a brand of Chinese beer;
  4. The drinks’ prices are tracked, together with the bar’s inventory in a Cloud;
  5. The Cloud’s buckets are – one in China, one in Nepal, and One in …. Oman;
  6. The bartender wants to give the man his bill…..
Just how many problems do you see here?   I see at least three:
  1. What if the data buckets arrive back at their controlling application in a confused manner, in the wrong order, or with super-delayed timing?
  2. What if data about alcohol is being processed in Oman?
  3. And finally (and most related to our point), what if the Tsing Tao family of brewers wants to create an artificial, and large, order from this distributor, and as such bribes/hacks the Cloud provider to modify a piece of the data bucket?
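Two of those three problems, out-of-order arrival and a tampered bucket, can at least be detected by tagging each bucket with a sequence number and a keyed integrity tag before it is shipped. A toy sketch (names, key, and record formats are all illustrative):

```python
import hashlib
import hmac
import json

KEY = b"shared-bucket-key"  # illustrative; real deployments need proper key management

def tag_bucket(seq: int, payload: dict) -> dict:
    """Attach a sequence number and an HMAC tag before shipping a bucket out."""
    body = json.dumps({"seq": seq, "payload": payload}, sort_keys=True).encode()
    return {"seq": seq, "payload": payload,
            "tag": hmac.new(KEY, body, hashlib.sha256).hexdigest()}

def reassemble(buckets: list) -> list:
    """Drop buckets whose tags no longer verify, then restore original order."""
    valid = []
    for b in buckets:
        body = json.dumps({"seq": b["seq"], "payload": b["payload"]},
                          sort_keys=True).encode()
        expected = hmac.new(KEY, body, hashlib.sha256).hexdigest()
        if hmac.compare_digest(b["tag"], expected):
            valid.append(b)
    return [b["payload"] for b in sorted(valid, key=lambda b: b["seq"])]

orders = [{"item": "Tsing Tao", "qty": 1}, {"item": "whisky", "qty": 2}]
shipped = [tag_bucket(i, o) for i, o in enumerate(orders)]
shipped.reverse()                   # buckets arrive out of order
shipped[0]["payload"]["qty"] = 500  # a bribed provider inflates one order
print(reassemble(shipped))          # [{'item': 'Tsing Tao', 'qty': 1}]
```

Note that this detects the ordering and tampering problems but cannot help with the Oman-jurisdiction issue, which is regulatory rather than technical.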
  • The sixth element of our Hexad is Utility.   Utility is where Cloud Computing excels.   If we can figure out the other five elements, we can be the Bruce Willis of this story.   Cloud Computing is clearly an idea whose time is near.   We cannot argue against the flexibility, MIPS-saving, just-in-time, CapEx-efficient model of a Cloud.   The elasticity and low(er) cost attached to such an incredible advance in utility mandate that we solve the issues stated above.   Not solving them will put data at dramatically higher risk.   Companies will put data in the Cloud, use the Cloud, and expand the Cloud at a tremendously accelerating rate, regardless of data security and privacy.

In the next article, I will put forward some ideas on how to resolve the issues defined in this and the previous article. I will also attempt to show some of the security-related benefits we can garner from the usage of Cloud Computing, especially those we could not achieve, or could not achieve easily, before the advent of the Cloud.