In the first of this series of “Clearing the Cloud” columns, I explored the dangers of jumping too soon into cloud computing. In this article, the second in the series, I continue presenting my vision of how to manage and secure cloud-computing solutions.

Clearing the Cloud Part II – A Ray of Sunshine On A Cloudy Day

In the weeks that have passed since the publication of my first article, it seems that new information and new “Cloud Solutions” have been popping up every day, and sometimes every hour. I am gratified to see that NIST, the National Institute of Standards and Technology, has published its (15th) draft definition of Cloud computing, and with it agreed with much of the definition I proposed in part I of this article – “Service based data processing and storage capability which is flexible, extensible and virtual”.
NIST suggested that cloud computing has the following salient characteristics:
  • On-demand self-service
  • Ubiquitous network access
  • Location-independent resource pooling
  • Rapid elasticity
  • Measured service
It is interesting to note that NIST specifically called out that the service must be measured. I wholeheartedly agree, and take this to be a step in the maturity of Cloud Computing.

Security Models

The Jericho Forum proposed an interesting approach to cloud computing security. Starting with its description of Cloud Layers, below, allows us to envision the problem:

Figure 1: Jericho Forum’s Cloud Layers

Here, the Forum proposed that Security (and Identity Management) are elements that cross all layers, in effect providing a design they call Collaboration Oriented Architecture (COA). Once this foundation had been laid, they defined Cloud Security via a proposed cube-shaped model that highlights the various architectural possibilities; the one addressed here is, of course, the outsourced / external / de-perimeterised option.
Figure 2: Jericho Forum’s Cloud Security Model
At about the same time, the Cloud Security Alliance, of which I am a member, designed a not-too-different view.
The CSA broke down Cloud computing into three delivery types:
  1. Infrastructure as a Service (IaaS)
  2. Platform as a Service (PaaS)
  3. Software as a Service (SaaS)
And then proceeded to define the Cloud consumption models:
  1. Private
  2. Public
  3. Managed
  4. Hybrid
If we arrange those elements in a matrix, we get a cube similar to Jericho’s work:
Figure 3: Cloud Computing Cube
The CSA’s model of service delivery stacks, however, is another matter. While I do not disagree with their reference model, I find it exceedingly complex.
So allow me to define the problem statement a bit differently. Because these are the early – and therefore formative – days of any cloud discussion, let’s expand the basic three tenets of security, which are:
  1. Confidentiality
  2. Availability and
  3. Integrity
And add further controls, borrowed from Donn Parker’s Hexad:
  1. Possession (or Control)
  2. Authenticity and
  3. Utility
Clearly, in the case of Cloud computing, and especially in the Public/External case, we no longer have any control. Once the bits “leave our network,” control passes elsewhere.   And then there were five.
Losing one control typically mandates an increase in the other controls. Here, we have another set of problems.    Let us explore the remaining controls:
  • Confidentiality: Typically, we handle confidentiality through technologies such as encryption and access control. We can still encrypt, but imagine what happens to a large data set. It has to be sent, or assembled, in the Cloud, remain there in an encrypted form, and be transferred back to us for processing. Once the data is at our location, we have to decrypt it, perform the operations needed, then re-encrypt and resend it to the Cloud. Doable – yes. But the performance tax here is huge. While today’s routers and servers no longer have their performance cut to 1/6th by encryption (a loss of roughly 83%), we still pay a heavy price.
Figure 4: Lifecycle of Encrypted Data
Let us state it once more: having the data unencrypted at any point in the storage or transfer process exposes it to unauthorized disclosure. Unauthorized exposure, of course, runs counter to any good security or compliance requirement, such as PCI or HIPAA. Even Amazon, with an inherent interest in providing such services, announced that their Cloud is neither PCI compliant nor intended for such work:
Hi, Thank you for contacting Amazon Web Services. Our payment system is PCI compliant and it is an “alternative payment processing service” meaning your users re-direct to our platform to conduct the payment event using their credit cards or bank accounts. The benefit for you is that we handle all the sensitive customer data so you don’t have to. If you haven’t looked at it, I highly suggest you check out the features and functions of our Flexible Payment Service and our Payment Widgets ( http://aws.amazon.com/fps).

As for PCI level 2 compliance, that requires external scanning via a 3rd party, PCI-approved vendor. It is possible for you to build a PCI level 2 compliant app in our AWS cloud using EC2 and S3, but you cannot achieve level 1 compliance. And you have to provide the appropriate encryption mechanisms and key management processes. If you have a data breach, you automatically need to become level 1 compliant which requires on-site auditing; that is something we cannot extend to our customers. This seems like a risk that could challenge your business; as a best practice, I recommend businesses always plan for level 1 compliance. So, from a compliance and risk management perspective, we recommend that you do not store sensitive credit card payment information in our EC2/S3 system because it is not inherently PCI level 1 compliant. It is quite feasible for you to run your entire app in our cloud but keep the credit card data stored on your own local servers which are available for auditing, scanning, and on-site review at any time.

Regards,

Cindy S.
Amazon Web Services

Figure 5: From here
Figure 6: Encryption in Cloud Case Study
Try the following as an example: suppose you have a volume of credit-card-bearing transactions that you must preserve for a period of one year, and let’s assume that the data is in SQL form. If so, the steps needed would include:


  1. Exporting the relevant tables
  2. Encrypting these files with suitable encryption
  3. Uploading the encrypted files to your cloud “bucket”
  4. Storing the data in the cloud, in an encrypted form
  5. Downloading it, while encrypted
  6. Decrypting the data
  7. Importing the data, and finally
  8. Processing it
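The eight steps above can be sketched in Python. The cipher below is a toy XOR keystream derived from SHA-256, for illustration only – a real deployment would use a vetted AES implementation from an established library – and the key handling, table dump and “bucket” are invented stand-ins:

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    # Derive a pseudo-random keystream by hashing key || counter.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR stream cipher -- illustration only, NOT secure for real use.
    ks = keystream(key, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))

toy_decrypt = toy_encrypt  # XOR is its own inverse

# 1-2. Export the relevant tables and encrypt the dump locally.
key = secrets.token_bytes(32)
table_dump = b"card_number,amount\n4111...,19.99\n"  # invented sample row
ciphertext = toy_encrypt(key, table_dump)

# 3-5. Upload, store, and later download -- the cloud only ever sees ciphertext.
cloud_bucket = {"transactions.enc": ciphertext}
retrieved = cloud_bucket["transactions.enc"]

# 6-8. Decrypt back at our location, then import and process.
assert toy_decrypt(key, retrieved) == table_dump
```

The point to notice is the performance tax: every round trip pays for an export, an encryption pass, two transfers, and a decryption pass before any processing can begin.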
One other element within Confidentiality is the ability to destroy data. In a cloud that we do not own, and on storage media that we do not control, there is a high probability that the same media will be used for other purposes. These storage buckets are dynamic, and the service/platform/application provider might allocate them to other users.

This sharing, and in many cases repeated sharing, of storage media leads to the need for assured destruction. We must follow a strict regime that states how long data is to be kept, when and by whom it is destroyed, and how such destruction is verified. Since degaussing tapes and shredding CDs is out of the question, we must employ more agile software- (or, dare we say, hardware?) based methods to assure that destruction.
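One such software-based method is often called crypto-shredding: if data only ever reaches the provider’s media in encrypted form, destroying the locally held key renders every remote copy unreadable, no matter how many times the bucket is later re-shared. A sketch of the idea, with a toy XOR keystream standing in for a real cipher:

```python
import hashlib
import secrets

class ShreddableObject:
    """Store only ciphertext remotely; 'destroy' the data by discarding the key."""

    def __init__(self, plaintext: bytes):
        self._key = secrets.token_bytes(32)
        self.ciphertext = self._xor(plaintext)  # this is all the cloud ever holds

    def _xor(self, data: bytes) -> bytes:
        # Toy keystream cipher for illustration -- not real cryptography.
        ks = b""
        counter = 0
        while len(ks) < len(data):
            ks += hashlib.sha256(self._key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return bytes(a ^ b for a, b in zip(data, ks))

    def read(self) -> bytes:
        if self._key is None:
            raise PermissionError("data has been crypto-shredded")
        return self._xor(self.ciphertext)

    def shred(self) -> None:
        # Assured destruction: without the key, the ciphertext left behind
        # on the provider's (re-shared) media is unreadable.
        self._key = None

obj = ShreddableObject(b"patient record")
assert obj.read() == b"patient record"
obj.shred()  # from here on, the remote copy is just noise
```

The design choice worth noting: destruction becomes a local key-management act we control, rather than a physical-media act we must beg the provider to perform and verify.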
Figure 7: Chercher les …données?

This question becomes infinitely more complicated when we consider that data at rest does not necessarily “rest” on a certain platter of a certain hard drive. The data can, and usually does, move between storage locations on the drives. The onus is still on us to assure confidentiality, but… we don’t manage the drives. The only practical solution here is to demand regular scouring of storage media from the service providers. Do we think that such a requirement is feasible?
Figure 8: Attention to Storage Media
Finally, lest someone think I am only talking about the storage aspect of Cloud Computing, the above discussion is easily applicable to processing in a Cloud as well.
  • The next control we will deal with is Availability.   When dealing with a Cloud-computing resource, we are at the mercy of the network, the remote server, and whatever controls are applicable along the way, be they host- or network-related.

    Yes, we were always at the mercy of such risks, but we owned them before. When we multiply one provider’s 99.5% availability by another’s, our composite SLA quickly falls below 99%, and even further. At what point does the enterprise take notice? As we can see from recent, published outages at Google and elsewhere, users are very sensitive to the information they require, and rightly so.

    Figure 9: Rapidly Decreasing SLA’s
    (98.5% availability equals roughly five and a half days of downtime per year)
    Even when taking steps to “assure” access – which in reality translates into reducing exposure to this particular risk – we have typically resorted to building redundancy into the system. Here, that would presumably add lines, servers, networking equipment and personnel. Doable, but at what cost? What does the complexity of redundancy mean to an organization? What is the true cost of operations?
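The SLA arithmetic behind this is worth sketching: serial dependencies multiply, so each additional 99.5% link drags the composite figure down. The percentages below are illustrative, not any particular provider’s actual SLA:

```python
def composite_sla(*slas: float) -> float:
    """Serial dependencies multiply: the chain is only as available as the product."""
    p = 1.0
    for s in slas:
        p *= s
    return p

def downtime_days_per_year(sla: float) -> float:
    # Fraction of the year the chain is expected to be down.
    return (1.0 - sla) * 365.0

# Our network provider, the transit path, and the cloud host each promise 99.5%.
chain = composite_sla(0.995, 0.995, 0.995)
print(round(chain, 4))                           # ~0.9851 -- already below 99%
print(round(downtime_days_per_year(chain), 1))   # ~5.4 days of outage a year
```

Three honest 99.5% promises in series already yield the roughly-five-and-a-half-days-down figure noted above.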
Figure 10: Availability in Cloud Case Study
Let’s look at an example: we have a volume of data which stretches at times by a factor of ten, so cloud computing seems like the perfect solution. Here is what may happen:
  1. We ask the Cloud service provider for burst capacity in data storage. We will estimate this payment at 10% of our regular Cloud computing cost.
  2. We ask our network services provider to create another redundant, and highly-available path to the Cloud service provider.   We will estimate that cost at 25% of our regular data communications cost.
  3. And now we must consider what we are to do if such a data burst occurs when we have no connectivity to send it to the cloud. Are we going to dispose of it? Cease operations? No and no. So here we must plan for (at least) the storage of such data regardless of whether we use cloud computing services.
  • As for Integrity, we can detect changes after they were made. From hashing to redundancy checks, from digital signatures to trip-wiring, we are able to ascertain that a change occurred. But… we can no longer prevent changes.

    The bastion of defense in depth has crumbled when we talk about Cloud computing. We do not own the moats, the walls, or the doors (see my paper about the Evolution of Defense in Depth). Accepting data without verification should be unthinkable, yet verifying all inbound data will be complex and costly, adding yet another layer to the mix of technologies and methodologies that we must wrangle.

    Indeed, the Cloud unchecked could lead to a wave of new attacks aimed directly at data whose guardians (by virtue of possession) are not incentivized to protect it from change, but merely to speed it on its way.
    Cloud computing could be a gold rush for people designing man-in-the-middle attacks, too. While most hosting companies will boast of their monitoring and security, few, if any, can assure you that they have never been compromised. In fact, a Cloud data provider, with its already built-in doorway (or tunnel) to you, makes an attacker’s life easier. The attacker can now both alter the data AND assure that it, and associated payloads, make their way to the intended destination.
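While we may no longer be able to prevent such alteration, we can at least detect it with a keyed hash (HMAC) computed before the data ever leaves our network; a compromised host cannot forge a valid tag without the key. A minimal sketch, with invented record data:

```python
import hashlib
import hmac

# Key negotiated out-of-band and never stored with the cloud provider.
SHARED_KEY = b"example-key-kept-on-our-side-only"

def tag(data: bytes) -> bytes:
    # Keyed digest computed before the data leaves our network.
    return hmac.new(SHARED_KEY, data, hashlib.sha256).digest()

def verify(data: bytes, mac: bytes) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(tag(data), mac)

record = b"student=1138;grade=C"   # invented sample record
mac = tag(record)

tampered = b"student=1138;grade=A"  # altered while in the provider's hands
assert verify(record, mac)
assert not verify(tampered, mac)    # the host cannot forge a tag without the key
```

This does not stop the change, but it turns a silent modification into a detectable one, which is the best the remaining controls can offer once possession is gone.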
Figure 11: Integrity in Cloud Case Study
Here is one scenario. We have a university system that stores grade data in the cloud. “Pranksters” attack the datacenter and access student data, changing some grades and deleting some students.
Unless monitoring is very careful – either by auditing as-occurred instances or by comparing data to a “master” copy – such modified data runs a high probability of being accepted as the authoritative copy.
Worse yet, the data that was modified here has a known ownership. These malefactors can well insert, install, and even custom-design a payload to affect the specific environment and systems found at that university – while the data itself is treated by the university as owned, and therefore “blessed,” data. This is a whole new definition of insider threat, isn’t it?
Know What Data You Are Getting

Figure 12: Know What Data You Are Getting
  • Perhaps moving a tad away from the pure-security elements of C.I.A. to the more “business” ones, the first we will discuss from the Hexad is Possession (or Control). As recent developments in the realm of data-breach notification laws have shown us (see my article about the “new” version of Mass’ 201 CMR), the United States, albeit one state at a time, is moving closer to the European, and indeed worldwide, model of “Data Ownership”.
Figure 13: Definition of Data Ownership
Data ownership is a time-tested term and function, used in the military realm for over fifty years. The term has slowly been filtering into the corporate world, and now into Realpolitik. The concept that every element of data has an owner is a simple one, really. Do allow me to explain here, for the sake of completeness, that most elements of data have at least a few owners, and perhaps many, many more.
The data, for example, can be owned by a person designated by the Enterprise, by the system administrator, and/or by the individual it is about (in the case of PII, or personally identifiable data, for example).
Data ownership can specify who is responsible for the data, who can sell the data, whom the data is about, or what is the legal status of the data.   And frequently more than just one of these items.
Most professionals would agree that data ownership is far easier to define and maintain when the data is at a known location. Even a relatively known event, such as the data being transmitted from a server to a data center, can affect our concept of the data ownership. As we can see from breach-notification loopholes, selling the data, as if it were a fungible item, sometimes releases a company from responsibility for it, leading some companies to create wholly-owned subsidiaries, often with very different names, whose sole raison d’être is to shield the mother organization from disclosure rules.
Now comes the Cloud.   Even if we admit to owning the data, do we know where it is? Allow me to make an observation:
Figure 14: Ariel’s 7th Law of Cloud Computing
“Most laws are geo-political, and therefore lose their efficacy and meaning when involving trans-border clouds”.
So even if we are the best-meaning of CIOs and the furthest thing from our mind is flouting the law, we are faced with a few obstacles. Let’s state some, in no particular order:
  1. How do we comply with Breach notification laws?
  2. What happens if we have data regarding an EU national?
  3. What must we do when we disclose risk information to Auditors? To the SEC?
  4. How do we comply with rules relating to CALEA? E-Discovery? Data Forensics?
and the list goes on and on.
Lastly, we remember that data has a lifecycle. Such a data lifecycle (DPLC) mandates, ultimately, that the data be disposed of in a secure manner. Remember those Cloud buckets? Well, they must be certifiably erased when we are done with their utility. How do we do that in a Cloud?
  • If we remember the example we used above, in the University case, Authenticity of data is a problem that must be addressed. Sometimes seen as a combination of non-repudiation, integrity and accountability, Authenticity is a super-set that defines the reliability we assign to, and the trust we place in, our data.

Should data in/from a Cloud be seen as less-trusted data? If so, is there any worth to it? Would the Cloud end up being used only for data we couldn’t care less about? Only time will tell. But the following article will offer some solution ideas.
Figure 15: Authenticity in Cloud Case Study
Would you, for example, require a CRC-type check before you accept data?   Would that make your life easier or harder? And what about the following, admittedly oversimplified, scenario:
  1. A man walks into a bar (really);
  2. The man orders some drinks;
  3. One of the drinks ordered is Tsing Tao – a brand of Chinese beer;
  4. The drinks’ prices are tracked, together with the bar’s inventory in a Cloud;
  5. The Cloud’s buckets are – one in China, one in Nepal, and One in …. Oman;
  6. The bartender wants to give the man his bill…..
Just how many problems do you see here?   I see at least three:
  1. What if the data buckets arrive back at their controlling application in a confused manner, in the wrong order, or with super-delayed timing?
  2. What if data about alcohol is being processed in Oman?
  3. And finally (and most related to our point), what if the Tsing Tao family of brewers wants to create an artificial, and large, order from this distributor, and as such bribes/hacks the Cloud provider to modify a piece of the data bucket?
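Problems 1 and 3 above – confused ordering and a bribed or hacked provider altering a bucket – can at least be detected if the controlling application seals each bucket with a sequence number and an HMAC before it leaves our control. A sketch, where the key, shard layout and contents are all invented for illustration:

```python
import hashlib
import hmac

# Key held only by the bar's controlling application, never by the buckets.
KEY = b"example-key-held-by-controlling-app"

def seal(seq: int, payload: bytes) -> dict:
    # Bind the sequence number into the MAC so reordering is also detectable.
    body = seq.to_bytes(4, "big") + payload
    return {"seq": seq, "payload": payload,
            "mac": hmac.new(KEY, body, hashlib.sha256).digest()}

def reassemble(shards) -> bytes:
    # Drop forged shards, then restore the original order by sequence number.
    ok = []
    for s in shards:
        body = s["seq"].to_bytes(4, "big") + s["payload"]
        if hmac.compare_digest(s["mac"], hmac.new(KEY, body, hashlib.sha256).digest()):
            ok.append(s)
    return b"".join(s["payload"] for s in sorted(ok, key=lambda s: s["seq"]))

shards = [seal(0, b"Tsing Tao x1;"), seal(1, b"whisky x2;"), seal(2, b"total=21.50")]
# Buckets in China, Nepal and Oman may reply late and out of order...
shuffled = [shards[2], shards[0], shards[1]]
# ...and one may have been altered to inflate the Tsing Tao order.
forged = dict(shards[0], payload=b"Tsing Tao x900;")
assert reassemble(shuffled) == b"Tsing Tao x1;whisky x2;total=21.50"
assert b"x900" not in reassemble([shards[2], forged])
```

This does nothing, of course, for problem 2 – no amount of cryptography fixes a jurisdictional conflict over where alcohol data may be processed.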
  • The sixth element of our Hexad is Utility. Utility is where Cloud Computing excels. If we can figure out the other five elements, we can be the Bruce Willis of this story. Cloud Computing is clearly an idea whose time is near. We cannot argue against the flexible, MIPS-saving, just-in-time, CapEx-efficient model of a Cloud. The elasticity and low(er) cost attached to such an incredible advance in utility mandate that we solve the issues stated above. Not solving them will put data at amazingly higher risk. Companies will put data in the Cloud, use the Cloud, and expand the Cloud at a tremendously accelerating rate, regardless of data security and privacy.

In the next article I will put forward some ideas on how to resolve issues defined in this and the previous articles. I will also attempt to show some of the security related benefits that we can garner from the usage of Cloud Computing, especially those that we could not, or could not easily, do before the advent of the Cloud.


201 CMR 17:00 A New Dawn

Back in early September, I shared with my readers that I sent a letter (you can see it here) to the Massachusetts Office of Consumer Affairs and Business Regulation, OCABR, with suggestions to improve 201 CMR 17:00.

On Friday, October 30th, 2009, I received a letter from the OCABR informing me of the new version of the regulation. Let’s examine below whether they accepted my suggestions:
Comment Number One:

Starting with section 17.01(1), the change to remove the emboldened words (… by persons who own, license, store or maintain personal information…) represents a major shift in policy. In the real world, the protection sought by this entire effort is needed at least as much from mid-sized and small companies as from large ones. While smaller companies are more likely to use a third-party provider for their storage and hosting needs, there is no reason why these hosting environments should not be compliant with the demands of this regulation. Conversely, these hosting companies ought to partake in the responsibility for protecting personally identifiable information (PII).

Further, the current revision actually encourages data-set holders to use third parties to host the data. What is to stop a company from gaming the system and creating a fictitious sub-company, a separate legal entity, whose entire raison d’être would be to hold the data? Doing so would clearly circumvent any protection intended by this regulation. This situation occurs again in the following sections: 17.01(2),

 

What did OCABR do here:

Nothing here… but… they added it below, in a new section, 17.03(2)(f):

1. Taking reasonable steps to select and retain third-party service providers that are capable of maintaining appropriate security measures to protect such personal information consistent with these regulations and any applicable federal regulations; and

 

Comment Number Two:

Section 17.03 (definitions), in its previous incarnation contained the following verbiage:

"Encrypted," transformation of data through the use of a 128-bit or higher algorithmic process, or other means or process approved by the office of consumer affairs and business regulation that is at least as secure as such algorithmic process, into a form in which there is a low probability of assigning meaning without use of a confidential process or key.

In its current form, the verbiage was changed to:

Encrypted, the transformation of data into a form in which meaning cannot be assigned without the use of a confidential process or key.

My comments on these changes are several. The first is applause for taking OCABR out of the business of approving or disapproving encryption. You are right not to be in that particular storm. However, there is a need for encryption, nearly always. There are two choices here – the first is to require encryption no matter what; the second is to define certain data elements that will require encryption. While 128-bit is no longer “good enough” and 256-bit is the current “gold standard”, there are certain standards of encryption that are accepted more-or-less universally.

 

What did OCABR do here:

Nothing.

 

Comment Number Three:

Please specify that encryption is required.

 

What did OCABR do here:

Nothing.

 

Comment Number Four:

Please specify that “a minimum of 256-bit” is needed AND select from certain industry standards. For example, AES is appropriate for today’s day-and-age, and perhaps for the next four or five years.  

 

What did OCABR do here:

Nothing.

 

Comment Number Five:

Please do not use the term “a low probability”. While the term is understood by practitioners in the art, here it serves to confuse rather than clarify.

 

What did OCABR do here:

They fixed that term.

 

Comment Number Six:

Still within this section, the following appears:

Owns or licenses, receives, maintains, processes, or otherwise has access to personal information in connection with the provision of goods or services or in connection with employment.

This section, and especially the highlighted text, seems to conflict with the first change noted above. Let me request a clarification of this text, as it would seem to include third-parties.  

 

What did OCABR do here:

They clarified it!  See the notes to Comment Number One.

 

Comment Number Seven:

Similarly to Comment Number Six, above, the text stating:

Service provider, any person that receives, maintains, processes, or otherwise is permitted access to personal information through its provision of services directly to a person that is subject to this regulation; provided, however, that “Service provider” shall not include the U.S. Postal Service.

This text, especially the emboldened portion, seems to conflict with the changes in the first item. What is your true intent?

 

What did OCABR do here:

They clarified it!  See the notes to Comment Number One.

 

Comment Number Eight:

In addition to Comment Number Seven above, and in full understanding of the need to carve federal agencies out of compliance requirements, I question the naming of the USPS and its subsequent removal. Let me play devil’s advocate here by giving two examples, for two different scenarios that are likely to butt against this regulation:

  1. Since any state law attempting to regulate a Federal agency is automatically void, is there a need to call out the USPS? Imagine that the US Department of Health and Human Services discovers that it is not excluded from this regulation du jour. What would happen then?
  2. Imagine that UPS or Federal Express, both of which are common carriers de facto, chose to contest this regulation based on the concept of “like-service”. How would you defend against such action?

My suggestion: Remove the reference to any Federal Agencies. Then, decide what you want the data owners to do. It would be perfectly ok, in a vein similar to requiring encryption, to require that sensitive data be transported physically only while encrypted (as one example only). This responsibility should belong to the data owners. I expect many questions regarding this point, and I will be thrilled to assist with all I can.  

 

What did OCABR do here:

OCABR listened here too, and removed the reference to the USPS.

 

Comment Number Nine:

Section 17.03 (remainder) drops the requirement (in a manner similar to my Comment Number One) related to third parties. It then proceeds to make a bad situation worse by removing the need for organizations to monitor compliance.

The situation becomes much worse still if the intent is to drop the requirement to monitor the performance of such a compliance program.

In the real world, there is no substitute for ongoing monitoring of the performance of compliance programs. This should not present an undue burden to small businesses, as long as pragmatism prevails. I recommend clearly demanding a written review and monitoring of such compliance programs.

 

What did OCABR do here:

Nothing.

 

Comment Number Ten:

The changes to section 17.03(2) are very significant and leave the regulation completely without purpose or merit. They seem to go in a contrary direction to the National policy of tightening controls, as exemplified in changing HIPAA by the addition of the HITECH amendment.

By removing the need for a formal risk assessment, this regulation just became a best-intent regulation. Risk assessments are the only mechanism, short of guessing, to understand any data privacy, security, and compliance issues. Nowhere in the original document is there a call for a risk assessment to be done just for the sake of compliance with CMR 17:00 – so do not call for one now, but please DO demand that the very important issues brought forth within this regulation be included in such a formal risk assessment.

Even small businesses can conduct (inexpensive) formal risk assessments. Size of the business does not need to affect this requirement.

 

What did OCABR do here:

Here I get excited!  They took my comments to heart and combined the previous version of the section, beefed it up, and improved on it.   Read this new section of 201 CMR 17:00 here, below:

The safeguards contained in such program must be consistent with the safeguards for protection of personal information and information of a similar character set forth in any state or federal regulations by which the person who owns or licenses such information may be regulated.

(2) Without limiting the generality of the foregoing, every comprehensive information security program shall include, but shall not be limited to:

(a) Designating one or more employees to maintain the comprehensive information security program;

(b) Identifying and assessing reasonably foreseeable internal and external risks to the security, confidentiality, and/or integrity of any electronic, paper or other records containing personal information, and evaluating and improving, where necessary, the effectiveness of the current safeguards for limiting such risks, including but not limited to:

1. ongoing employee (including temporary and contract employee) training;

2. employee compliance with policies and procedures; and

3. means for detecting and preventing security system failures.

(c) Developing security policies for employees relating to the storage, access and transportation of records containing personal information outside of business premises.

(d) Imposing disciplinary measures for violations of the comprehensive information security program rules.

(e) Preventing terminated employees from accessing records containing personal information.

(f) Oversee service providers, by:

1. Taking reasonable steps to select and retain third-party service providers that are capable of maintaining appropriate security measures to protect such personal information consistent with these regulations and any applicable federal regulations; and

2. Requiring such third-party service providers by contract to implement and maintain such appropriate security measures for personal information; provided, however, that until March 1, 2012, a contract a person has entered into with a third party service provider to perform services for said person or functions on said person’s behalf satisfies the provisions of 17.03(2)(f)(2) even if the contract does not include a requirement that the third party service provider maintain such appropriate safeguards, as long as said person entered into the contract no later than March 1, 2010.

(g) Reasonable restrictions upon physical access to records containing personal information, and storage of such records and data in locked facilities, storage areas or containers.

(h) Regular monitoring to ensure that the comprehensive information security program is operating in a manner reasonably calculated to prevent unauthorized access to or unauthorized use of personal information; and upgrading information safeguards as necessary to limit risks.

(i) Reviewing the scope of the security measures at least annually or whenever there is a material change in business practices that may reasonably implicate the security or integrity of records containing personal information.

(j) Documenting responsive actions taken in connection with any incident involving a breach of security, and mandatory post-incident review of events and actions taken, if any, to make changes in business practices relating to protection of personal information.       

 

Comment Number Eleven:

By not requiring a person to have overarching responsibility for the performance of such regulation, the OCABR is risking the “failure has no father” syndrome. In line with most European and US laws and regulations, assigning ownership to the data and to the privacy protection efforts is of major import. Please reconsider.

 

What did OCABR do here:

They fixed it!  See above at Comment Number Ten.  They added

(a) Designating one or more employees to maintain the comprehensive information security program;

 

Comment Number Twelve:

The removal of the call to handle terminated employees’ access, while seemingly in line with the high-level intent of this regulation, is not a good choice. As can be seen from many occurrences this year alone, organizations oft “forget” to do so, resulting in major security breaches and incidents – a subject I would be glad to discuss further, if you wish. Please call out that specific requirement in the next version.

 

What did OCABR do here:

They fixed it!  See above at Comment Number Ten.  They added:

(e) Preventing terminated employees from accessing records containing personal information.

 

Comment Number Thirteen:

The movement of the compliance date to March 2012 is not a good move. In essence, a “hunting season” was just declared, allowing organizations to sign contracts moving the data (as suggested by my Comment Number One) to third parties. Doing so will leave no one as the responsible party.

 

What did OCABR do here:

They fixed it!  See above at Comment Number Ten.  They added:

2. Requiring such third-party service providers by contract to implement and maintain such appropriate security measures for personal information; provided, however, that until March 1, 2012, a contract a person has entered into with a third party service provider to perform services for said person or functions on said person’s behalf satisfies the provisions of 17.03(2)(f)(2) even if the contract does not include a requirement that the third party service provider maintain such appropriate safeguards, as long as said person entered into the contract no later than March 1, 2010.
Comment Number Fourteen:

Further, the language today is, in effect, a license in perpetuity, as far as I can tell, to never be compliant, as long as the contract is signed in time. This wide loophole must be addressed.
What did OCABR do here:

They fixed it!  See above at Comment Number Thirteen.
Comment Number Fifteen:

The dual dating of March 1, 2010 and March 1, 2012 here is quite confusing. I asked several legal professionals to read this paragraph, and they all found it confusing as well. Can this be clarified?
What did OCABR do here:

They fixed it!  See above at Comment Number Thirteen.
Comment Number Sixteen:

Another change emerges from this version. Briefly, the data owner no longer has to take reasonable steps.  As long as the owner has a contract, he or she appears to be in the clear. Surely, that is not the intent here.  
What did OCABR do here:

They fixed it!  See above at Comment Number Thirteen.
Comment Number Seventeen:

The removal of the need for a formal comprehensive information security program (CISP) leaves the regulation open to the interpretation of “I had one, in my mind; we all knew of it”. CISPs are a requirement of today’s reality. Please consider adding the requirement for such a written program.

Again, even small businesses can create a formal CISP. The terms “formal”, “comprehensive”, and “written” should not be seen as equivalent to “expensive”. A sample guideline, one that is shared with the small business owner, can easily be developed.

What did OCABR do here:

They fixed it!  A comprehensive information security program (CISP) is required once again!
Comment Number Eighteen:

With regard to item (g) herein, the removal of the requirements to keep the least records, for the least amount of time, with the least access, goes against the grain of any good privacy program. As can be seen in the AICPA’s Generally Accepted Privacy Principles, and elsewhere, these requirements are essential to the program. They must be called out, explained, and demanded.
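To illustrate why the “least amount of time” requirement matters operationally, here is a minimal sketch of a retention purge over records containing personal information. The record shape and the 365-day retention period are assumptions chosen for illustration; the regulation specifies no such period.

```python
# Hypothetical sketch of "least records for the least time": drop records
# containing personal information once they exceed a retention period.
from datetime import date, timedelta

RETENTION = timedelta(days=365)  # assumed policy value, not from the regulation

def purge_expired(records, today):
    """Keep only records still within the retention window."""
    return [r for r in records if today - r["collected"] <= RETENTION]

records = [
    {"id": 1, "collected": date(2009, 1, 15)},   # 410 days old on 2010-03-01
    {"id": 2, "collected": date(2009, 11, 2)},   # 119 days old on 2010-03-01
]
kept = purge_expired(records, today=date(2010, 3, 1))
print([r["id"] for r in kept])  # → [2]
```

A record that is never retained past its purpose is a record that cannot be breached; that is the entire logic of the least-data principle the comment defends.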
What did OCABR do here:

They did not change a thing.  🙁
Comment Number Nineteen:

In my view, the changes called out in Comment Number Eighteen, taken together with the changes no longer requiring a Risk Assessment, make this entire regulation pointless.

No risk assessment + no knowledge + no need for knowledge + no rules of least access = no protection
What did OCABR do here:

They did fix the first part (the CISP), but not the principle of least access.
Comment Number Twenty:
The new language, still in section (g):

(g) Reasonable restrictions upon physical access to records containing personal information, and storage of such records and data in locked facilities, storage areas or containers.

virtually guarantees that this Rule will be visited repeatedly by the courts. The call for “reasonableness” has proven unwieldy in every instance it was used. This must be clarified. Other organizations have defined and validated such access rules, and I suggest we use some of their language to define this paragraph more meaningfully.
What did OCABR do here:

They did not change it.
Comment Number Twenty-One:
The removal of section (j)’s (old section (k)) words “or the potential therefore” has a chilling effect on the regulation. Near-breaches, together with known vulnerabilities, no longer have to be investigated. This WILL result in substantially less protection for the data. I respectfully request that these words be re-introduced, and perhaps expanded upon.

What did OCABR do here:

They did not change it.
Comment Number Twenty-Two:
Section 17.04’s header has two changes in it. The first was referenced in my Comment Number One. The second change, “…and to the extent technically feasible”, appears redundant and, frankly, nonsensical to me. If it is not feasible, it cannot be done, right? What is risked here is that a business will say: “My experts believed it was not technically feasible.” This is another loophole which must be sealed.
What did OCABR do here:

They did not change it.
Comment Number Twenty-Three:

Section 17.04(1) again re-introduces the word “reasonable”. Please see my Comment Number Twenty for a discussion why this should be changed and clarified.
What did OCABR do here:

They did not change it.
Comment Number Twenty-Four:

In section 17.04(1)(b), I applaud the change from “seven….” to “or use of unique identifier technologies, such as biometrics or token devices;”. OCABR has no need to define secure vs. non-secure.
What did OCABR do here:

Nothing needed to be done 🙂
Comment Number Twenty-Five:
In section 17.04(1)(c), the new language, “…and/or format that does not compromise the security of the data they protect;”, has two difficulties. Firstly, the previous version was correct to demand that passwords not be kept with the data they protect. While that might seem obvious to some, it is not obvious to all.
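As an aside on what a “format that does not compromise the security of the data” can mean in practice, here is a minimal Python sketch using the standard library’s PBKDF2: keep only a salted, iterated hash of each password, stored apart from the data it protects, and never the password itself. The iteration count and example passwords are illustrative assumptions, not values from the regulation.

```python
# Hypothetical sketch: store salted, iterated password hashes (PBKDF2),
# never plaintext passwords, and never alongside the protected data.
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Return (salt, digest) for a password; the password itself is discarded."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, digest, iterations=100_000):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse")
print(verify_password("correct horse", salt, digest))  # → True
print(verify_password("wrong guess", salt, digest))    # → False
```

Stored this way, a breach of the credential store yields no working passwords, which is precisely the property the old language tried to guarantee by keeping passwords away from the data.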
What did OCABR do here:

Nothing needed to be done 🙂
Comment Number Twenty-Six:
Continuing from the above, in that self-same section, the term “format” seems not to include other-than-password methods, such as physical keys or tokens. Let us revise this section to enhance its usability.
What did OCABR do here:

They did not change this.  And I know why….
Comment Number Twenty-Seven:

In section 17.04(2)(b), the newly introduced “that are reasonably designed to maintain the integrity of the security of the access controls;” suffers from the same malady as I described in Comment Number Twenty and others. There are well-ratified standards for this. Let us use one of them.
What did OCABR do here:

Nothing.
Comment Number Twenty-Eight:

Section 17.04(4) has several changes. Again, the introduction of “reasonable” is objectionable as detailed several times above.  
What did OCABR do here:

Nothing.
Comments Number Twenty-Nine and Thirty:

Continuing, and adding 17.04(5), the removal of one specific word, “recording”, negates anything good that this Regulation intended. From the trenches: the requirements of recording and regular review are two of the keystones of any security and privacy program. There can be no protection without an audit trail. There can be no protection without a regular audit.
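To make the “recording” point concrete, here is a minimal sketch of an audit trail of access to records containing personal information; the event fields, user names, and record identifiers are all hypothetical.

```python
# Hypothetical sketch of the "recording" requirement: an append-only audit
# trail noting who touched which record, when, and how.
from datetime import datetime, timezone

audit_trail = []  # in practice: an append-only store, not an in-memory list

def record_access(user, record_id, action):
    """Append one auditable event; entries are never modified or removed."""
    audit_trail.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "record": record_id,
        "action": action,
    })

record_access("jdoe", "SSN-0042", "read")
record_access("asmith", "SSN-0042", "update")

# The "regular review" half of the requirement then walks the trail,
# e.g. listing everyone who accessed one record:
hits = [e["user"] for e in audit_trail if e["record"] == "SSN-0042"]
print(hits)  # → ['jdoe', 'asmith']
```

Without the recording step there is nothing for the review step to review, which is why removing that one word undercuts both requirements at once.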
What did OCABR do here:

Nothing.  This is one of my two remaining issues with this regulation.
Comment Number Thirty-One:

I applaud the addition of the demand for encryption on mobile devices. I suggest a further, in-depth review of what this change means for the usage of extremely popular devices, such as the iPhone®.
Comment Number Thirty-Two:

In section 17.04(6), the word “reasonably” has appeared again. Twice. This is not an actionable standard.  
What did OCABR do here:

Nothing needed.
Comment Number Thirty-Three:

Reading on, the phrase “reasonably designed to maintain the integrity of the personal information” is non-implementable, in the sense that only secure operating systems, such as some employed in unique instances by the Department of Defense, can truly be called “designed to…”.

What did OCABR do here:

Nothing.
Comment Number Thirty-Three (continued):

I suggest that the plug-and-patch cycle, which unfortunately is a part of life for system administrators, be handled via a written change control procedure, included in the CISP I call for in Comment Number Seventeen.
What did OCABR do here:

Nothing.
Comment Number Thirty-Four:

In section 17.04(7), the word “reasonably” has appeared again.  
What did OCABR do here:

Nothing.
Comment Number Thirty-Five:

The removal of all reference to physical security in section 17.04(9) is wrong. There can be no information protection without physical protection. I request that this be added back into the Regulation and enhanced. There are quite a few guidelines available from well-tested regulations.

What did OCABR do here:

Nothing.  I was wrong here – this was stated before, in section 17.04(2) above.

All in all, great changes!   Thank you, OCABR!