Friday, December 10, 2010

WikiLeaks – How to Fix a Leak with Better Plumbing

The 9/11 Commission Report cited "pervasive problems of managing and sharing information across a large and unwieldy government that had been built in a different era to confront different dangers". Since 9/11, governments around the world have considerably adjusted their stance on information sharing to allow more adequate and timely exchange of information. Unfortunately, in many situations the need to share information quickly took priority over the need to protect it, and this left security policies, certification and accreditation practices, and existing security controls behind.

WikiLeaks may jeopardize everything we've worked toward to enhance information sharing, and impede efforts to make information sharing more effective. Or it may serve as a wake-up call that our current policies, processes and solutions are not adequate in today's world, where information must be collected, fused, discovered, shared and protected at network speed.

Here at Layer 7, we've been working with government agencies worldwide to support their needs for sharing information more quickly, while introducing a more robust set of access and security controls to allow only those with the appropriate clearance and need to know to access privileged information. In the following paragraphs, I'm going to discuss how Layer 7 Technologies helps break down information-sharing silos while maintaining a high degree of information protection, control and tracking.

There are multiple efforts underway across government agencies to use digital policy to control who gets access to what information and when, as opposed to relying on written policy alone. Layer 7's policy-oriented controls allow digital policy to be defined and enforced across distributed information silos. Whether inside an enterprise or in the cloud, government agencies and commercial entities can use Layer 7 to define and enforce rules for information discovery, retrieval and dissemination across a variety of security realms and boundaries. With the right kind of policy controls, companies can avoid a WikiLeak of their own.

Layer 7 provides information plumbing for the new IT reality. Using Layer 7 products, organizations can address:

Data Exfiltration - The WikiLeaks scandal broke because of a single user's ability to discover, collect and exfiltrate massive quantities of information, much of which was not needed for the user's day-to-day activities. With Layer 7, digital policies can be defined and enforced that put limits on the number of times a single user can retrieve a single type of data, or multiple types of data whose retrieval, when aggregated together, could be interpreted as showing malicious intent. If the user goes beyond his administratively imposed limit, Layer 7 can either allow the operation while notifying administrative or security personnel of the potential issue, or disallow access altogether while awaiting remediation.
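To make this concrete, here is a minimal sketch, assuming hypothetical data types, limits, and a rolling 24-hour window. This is illustrative Python, not Layer 7's actual policy language, and a real deployment would load the thresholds from centrally managed digital policy rather than hard-coding them.

import time
from collections import defaultdict

# Hypothetical per-user retrieval limits per data type.
LIMITS = {"cable": 200, "report": 500}
ALERT_RATIO = 0.8           # notify security personnel at 80% of the limit
WINDOW_SECONDS = 24 * 3600  # evaluate counts over a rolling 24-hour window

class ExfiltrationPolicy:
    def __init__(self):
        # (user, data_type) -> list of retrieval timestamps
        self._retrievals = defaultdict(list)

    def check(self, user, data_type):
        """Return 'allow', 'allow-and-alert', or 'deny' for one retrieval."""
        now = time.time()
        history = self._retrievals[(user, data_type)]
        # Drop retrievals that fall outside the rolling window.
        history[:] = [t for t in history if now - t < WINDOW_SECONDS]
        limit = LIMITS.get(data_type, 100)   # default limit for unlisted types
        if len(history) >= limit:
            return "deny"                    # block and await remediation
        history.append(now)
        if len(history) >= ALERT_RATIO * limit:
            return "allow-and-alert"         # allow, but notify security staff
        return "allow"

policy = ExfiltrationPolicy()
print(policy.check("jdoe", "cable"))         # 'allow' until thresholds are hit

Keeping the limits in policy rather than in code means they can be tuned per community or per mission without touching the enforcement point.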

Access Control - The heart of any information system is its ability to grant access to people who meet the "need to know" requirement for the information contained within. The reality in government organizations is that many information systems rely on the user's level of clearance, the network he is using, or coarse-grained information like the branch of service he belongs to, in order to grant or deny access to an information-sharing system in its entirety. For those going beyond the norm with Role Based Access Control (RBAC), the burden of administering hundreds or thousands of users, based on groups, is formidable and limits the effectiveness of the system; it increases the likelihood that the system has authorized users who no longer have a "need to know" for the information.

Layer 7 policy enforcement and decision making allows for user authorization through either Attribute Based Access Control (ABAC) or Policy Based Access Control (PBAC). These authorization models use policy to correlate attributes about the user, the resource and the environment in order to allow or deny access. Attributes can be collected from local identity repositories or from enterprise attribute services.

In addition, enterprise attribute services can be federated so attributes can be shared across organizations, minimizing the need to manage attributes for users from other organizations. An often-overlooked aspect of authorization is the need to tie typical authorization policy languages like XACML (is user X allowed to access resource Y?) to policies around data exfiltration, data sanitization and transformation, and audit. This is the area where Layer 7 stands out: not only do we have the ability to authorize the user, but we can also enforce a wide variety of policy controls that are integrated with access control.
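As a rough illustration of the ABAC/PBAC approach described above, the sketch below correlates subject, resource, and environment attributes against a single rule. The attribute names and the rule itself are invented for this example and are not drawn from any particular XACML policy.

# A minimal attribute-based access control (ABAC) check. In practice a policy
# decision point would evaluate XACML or a comparable policy language.
def authorize(subject, resource, environment):
    """Allow access only when subject, resource, and environment attributes
    all satisfy the (hypothetical) policy rule."""
    clearance_ok = subject["clearance"] >= resource["classification"]
    community_ok = resource["community"] in subject["communities"]
    network_ok = environment["network"] in resource["releasable_to"]
    return clearance_ok and community_ok and network_ok

subject = {"clearance": 3, "communities": ["logistics"]}
resource = {"classification": 2, "community": "logistics",
            "releasable_to": ["siprnet"]}
environment = {"network": "siprnet"}
print(authorize(subject, resource, environment))  # True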

The following blog posts by Anil John, a colleague who specializes in the identity space, provide good information about the benefits to the community of moving from roles to policy and attributes: Policy Based Access Control (PBAC) and Federated Attribute Services.


Monitoring, Visibility & Tracking - Even when controls are in place that help mitigate the "need to know" issue, there will always be a risk of authorized users collecting information within the norms of their current job and role. To address this, visibility into usage, both for the individual IT system owner and across enterprise systems, is key to limiting this type of event in the future. Layer 7 allows for federation of monitoring data so information about data accesses can be shared with the organizations monitoring the network or enterprise. This allows authentication attempts and valid authorizations to be tracked, and distributed data retrieval trends to be analyzed on a per-user basis across the extended enterprise.
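As a hedged sketch of that last point, suppose each system federates simple audit records; the record format and the review threshold below are made up for illustration. Per-user retrieval totals can then be aggregated across the extended enterprise and outliers flagged for review.

from collections import Counter

# Hypothetical audit records federated from three systems:
# (system, user, records_retrieved)
audit_feed = [
    ("system-a", "jdoe", 40),
    ("system-b", "jdoe", 350),
    ("system-a", "asmith", 25),
    ("system-c", "jdoe", 600),
]

totals = Counter()
for system, user, count in audit_feed:
    totals[user] += count   # aggregate retrievals per user across systems

THRESHOLD = 500             # illustrative enterprise-wide review threshold
flagged = [user for user, total in totals.items() if total > THRESHOLD]
print(flagged)              # ['jdoe']: a volume no single system would notice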

Leakage of privileged information to unauthorized users can never be prevented with 100% certainty. However, with the straightforward implementation of a policy-based information control like Layer 7, access to confidential information can be restricted and tracked.


Tuesday, November 2, 2010

Creating Robust Net-Centric Services through Policy


Next Tuesday at the TMForum Management World Americas conference in Orlando, I'll be delivering a presentation titled Policy Enabled Net-Centric Information Sharing along with Sriram Chakrapani (Chief, Integration Engineering Division, DISA). Between that and a whitepaper I'm putting the final touches on titled "Robust Net-Centric Services", I thought it would be an opportune time to write a post discussing the value of policy in defining robust net-centric services.

As integration frameworks, Web Services and RESTful applications adequately address how applications get exposed and communicate via SOAP/XML to exchange information with one another in a platform-agnostic way. In real-world applications, however, security, reliability, routing, bandwidth conservation, versioning and other requirements still have to be dealt with, and these in turn severely impact the loosely coupled nature of net-centric services.

For tactical edge deployments, as well as enterprise deployments that are disadvantaged in one way or another, these requirements are vital, because web services and consumers face challenges and need to operate in a constantly changing environment. Bandwidth and connection state, among other things, require web services to have situational awareness so they can adapt as the scenario changes. A simple example of such a change: a consumer and service are operating in a connected state to DISA Net-Centric Enterprise Services (NCES) and then become disconnected due to a kinetic or cyber attack. In this disconnected state the information exchange must continue to operate seamlessly by moving to a fall-back set of requirements (security, transport, reliability, etc.), locally deployed core enterprise services (machine-to-machine messaging), and potentially a cached business service, all without impacting the user.

The presentation and paper propose the concept of "Robust Net-Centric Services", or "net-centric services with a high degree of resilience even when faced with a comprehensive array of faults and/or challenges, inherently capable of reacting gracefully to both internal application changes and external environmental changes, all without impacting information exchange".

Given the distributed and federated nature of robust net-centric services, especially those supporting tactical edge communications, the ability to define robustness requirements using policies that are understandable and interoperable across a variety of implementations, enforced in a distributed fashion, and easily changed is key to achieving complete information superiority.

The paper and presentation will highlight the four primary challenges to creating robustness. For the sake of brevity, I'm only going to list the four categories in this blog post; each will be detailed in the paper when it is released.

  1. The availability and robustness of a network
  2. The availability of resources to execute a particular function
  3. Information Assurance (IA)
  4. User Interface (UI)

To accommodate the challenges above, we must look back to a fundamental principle of software engineering: flexible systems are achieved by decoupling the variable parts of an implementation from the invariant parts. The variable layer can then be managed without affecting the system invariants, and within it conflicting constraints and capabilities can be reconciled, managed and constantly monitored. For example, performance and response-time requirements can be weighed against security, confidentiality and privacy requirements.

Robust net-centric services employ a deployed, policy-driven, intelligent run-time capability to provide a Policy Layer, so that applications can be built based on their respective business requirements and deployed without advance knowledge of the requirements they might face during certification, deployment, or operation.

The Policy Layer provides a lightweight, federated on-ramp to the enterprise and to the particular enterprise services on which the application depends, and facilitates a policy-oriented approach to connectivity and integration with locally deployed resources as well as those available on the enterprise network. Architecturally, this layer is made up of two fundamental concepts: a Policy Enforcement Point (PEP) and a Policy Application Point (PAP). The following diagram illustrates how policy and a run-time policy enforcement and application capability could be deployed to allow for robustness in the face of a comprehensive array of requirements and/or situational challenges.
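As a rough code sketch of the same idea (the endpoint URLs, reachability probe, and policy structure below are invented for illustration and are not actual NCES or Layer 7 interfaces), a PEP can walk a policy-ordered list of endpoints and fail over to locally deployed or cached services without the consuming application changing.

import socket
from urllib.parse import urlparse

# Hypothetical policy: ordered fallback endpoints for one business service.
POLICY = {
    "messaging": [
        "https://nces.example.mil/messaging",        # enterprise core service
        "https://local-core.example.mil/messaging",  # locally deployed fallback
        "cache://messaging",                         # last resort: cached responses
    ]
}

def reachable(endpoint, timeout=1.0):
    """Crude reachability probe; a real PEP would rely on health checks and policy."""
    if endpoint.startswith("cache://"):
        return True
    host = urlparse(endpoint).hostname
    try:
        socket.create_connection((host, 443), timeout=timeout).close()
        return True
    except OSError:
        return False

def select_endpoint(service):
    """Walk the policy-ordered endpoint list and return the first reachable one."""
    for endpoint in POLICY[service]:
        if reachable(endpoint):
            return endpoint
    raise RuntimeError("no endpoint available for " + service)

print(select_endpoint("messaging"))

Because the fallback order lives in policy, operators can change it (for example, preferring a cached service during a degraded period) without redeploying the consuming application.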

Through policy enablement, operators can create and modify integration, caching, access control, privacy, confidentiality, audit logging and other such policies around the business services without interfering with the development of the services themselves. This is the first step towards real-world implementation of loosely coupled SOA and a necessary step in preparation for robustness.

Email me if you would like to receive the paper on robust net-centric services when it is completed, or if you have unique challenges or situations that you would like to see addressed in the paper. If you would like to learn more about how Layer 7 products support the vision of robust net-centric services today, contact your local government sales representative. I hope to see some of you in Orlando!

Wednesday, September 15, 2010

Hacking as a Service (HaaS)

On Monday this week there was a very interesting post by Andy Greenberg, a blog writer for Forbes.com, which introduced a botnet herd standing by for payment and targeting instructions to launch a powerful Distributed Denial of Service (DDoS) attack. Based on his research, the botherd, called "I'm DDOS" and available at "IMDDOS.org", is supposedly intended for testing purposes; however, it is not clear how the company running the service would or could validate that a target actually belongs to the customer requesting the attack. You can see from the User Interface (UI) that the service looks fairly easy to use, making it a likely attack tool for anyone with minimal computer skills and a grudge.

As with pioneers in computer infrastructure as a service, such as Salesforce and Amazon’s EC2 cloud, cyber arms dealers have begun asking customers, “Why buy when you can rent?” Renting cyber attack capabilities allows a political activist, terrorist group, or nation state to launch an attack on an online application - on demand. Those familiar with Cloud Computing and Software as a Service should recognize this as being the malicious equivalent - "hacking as a service".
It is clear that the "?? as a Service" model is going to be popular both for people wanting to bring their products to market quickly and for those who want to see results with minimal up-front capital costs.




Wednesday, July 7, 2010

Letter to the President on Cyber Security

The United States Senate sent President Obama a letter on July 1st. The letter spoke of the criticality of securing our nation's information systems, communications networks, and critical infrastructure, and stated that there is an urgent need for action to address the vulnerabilities. The action called for in the text consists largely of policy and coordination; however, the letter does state the need to improve and expand the U.S. cyber workforce and increase cyber threat awareness throughout the country.

This letter is a prelude to a number of highly sensitive pieces of legislation that the President will need to comment on in the coming weeks and months, and likely means that the Senate thinks there may be opposition to its legislation.

Here is the link:
http://www.nationaljournal.com/congressdaily/issues/documents/Letter_President_on_Cyber_Security_Legislation_070110.pdf





Friday, June 11, 2010

Federated Service Monitoring

What is Federated Monitoring?

A wise man once told me that there is a big difference between reachability and availability. Ever since, I have been fascinated by the challenges we face with net-centric information sharing and service dependencies that cross all forms of organizational, network, and even classification boundaries. The reality is that with net-centric approaches and the need to re-use services, we will have massive dependencies on services outside of our control.

The Federal Government has emphasized, and in some cases even mandated, the use of XML, Web Services, and SOA concepts and standards to align IT assets with business processes and employ the concept of netcentricity. Simply put, netcentricity means making the right information available at the right time to the right people.

By exposing applications as reusable and dynamically composable services, new business processes can be defined on demand to allow for business agility. This is especially important as Government organizations are constantly defining and building solutions for an evolving set of requirements, many of which are based on near-term objectives to offer a set of capabilities to the war-fighter or analyst supporting an immediate threat.

The reality here is that these services will be stood up and offered throughout the government enterprise and will cross organizational, network, and potentially even classification boundaries. These newly formed IT Communities of Interest (CoI) will require a shared knowledge of their individual and collective purpose, mission objectives, service level agreements, security postures, and availability and reachability characteristics.

Existing monitoring approaches and products are based on the perspective of internal monitoring, portraying network, application, and service visibility. Within the DoD and IC the definition of enterprise is often not clear, and visibility and monitoring are segmented by project, department, organization, branch of service, and so on. In Government, we are integrating our services across these different mini-enterprises yet lack the ability to monitor services in a federated fashion. Since netcentricity is all about services, I assert that we don't care that much about the health and availability of a server or an application unless it impacts the service that we are using, and therefore my focus is on Federated Service Monitoring.

Federated service monitoring portrays service availability information as it relates to usage of the service external to the enterprise. Availability in this case is measured not only by the internal service's status, but also by aspects of the service provider's network. This end-to-end reachability information must be portrayed outside the enterprise in a secure fashion and made available to those wishing to use the service. With federated monitoring, service implementers can extend their internal monitoring outside the organization to allow business partners to accurately measure service availability, reachability, and performance on an ongoing basis.

The Department of Defense (DoD) and Intelligence Community (IC) have developed the Joint DoD/IC Enterprise Service Monitoring (JESM) Specification, which in time will be used across the government as a way of doing secure federated monitoring. The JESM specification is based on a subset of WSDM relevant to DoD/IC use cases, together with WS-Eventing.

The SecureSpan and CloudSpan product lines from Layer 7 Technologies (www.layer7tech.com) fully support the Joint DoD/IC ESM specification. For every service within Layer 7, JESM monitoring can be enabled for external consumption of service metrics. The JESM service supports request/response or publish/subscribe, and for each JESM-enabled service (Mission App A-C, etc.) policy can be enforced to ensure access control, confidentiality, integrity, and audit of JESM data. For example, Mission Application A metrics can be made available, but access limited by the attributes of the authenticated subject who is requesting them.
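To give a feel for the idea without reproducing the JESM message formats (the metric names, community-of-interest labels, and filtering rule below are purely illustrative), a monitoring endpoint can release only the metrics that the authenticated subject's attributes entitle it to see.

# Illustrative metrics store and attribute-based filter; not the JESM format.
SERVICE_METRICS = {
    "mission-app-a": {"available": True, "avg_response_ms": 120, "requests": 10432},
    "mission-app-b": {"available": False, "avg_response_ms": None, "requests": 0},
}

# Which communities of interest may see metrics for which service (hypothetical).
RELEASABLE_TO = {
    "mission-app-a": {"coi-logistics", "coi-intel"},
    "mission-app-b": {"coi-intel"},
}

def get_metrics(subject_attributes):
    """Return only the service metrics the authenticated subject may see."""
    subject_cois = set(subject_attributes.get("communities", []))
    return {
        service: metrics
        for service, metrics in SERVICE_METRICS.items()
        if RELEASABLE_TO[service] & subject_cois
    }

print(get_metrics({"communities": ["coi-logistics"]}))
# Only the mission-app-a metrics are released to this subject.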

In my time working with government, I have seen numerous occasions where a service went down and no one knew for several days, all while users believed the data coming from the service was still current.

Mission IT visibility (past, current, and future) and operational flexibility (in the face of attack or even power failure) are critical. Federated monitoring isn't a silver bullet, but I believe it will help communities of interest come together quickly and integrate their IT, while providing visibility and the ability to react in the case of failure.


Wednesday, March 3, 2010

CNCI Partially Unclassified

In the wake of CNN's airing of the two-hour special We Were Warned: Cyber Shockwave, which stunned much of the public and the current administration, Mr. Howard Schmidt, the Executive Branch Cybersecurity Coordinator, or Cyber Czar, gave a keynote speech at the RSA conference yesterday.

Although I was stuck in DC, I read online that he spoke of Partnership and Transparency as critical components of the President's Cyberspace Policy Review. Mr. Schmidt also announced that the Obama Administration had revised the classification guidance for the Comprehensive National Cybersecurity Initiative (CNCI) and that the unclassified portion would be made available by end of day Tuesday on the whitehouse.gov website. CNCI was launched by President Bush in National Security Presidential Directive 54/Homeland Security Presidential Directive 23 (NSPD-54/HSPD-23) in January 2008. Although I'm excited about the potential of Partnership and Transparency, as I too believe we cannot be successful in cyberspace without them, I was more interested in the public release of CNCI, which illustrates some direction for our government in the shorter term.

Announcement of a funded strategy, even if it's not the Administration's promised national U.S. cybersecurity strategy, couldn't have come at a better time. With the recent CNN broadcast, and the successful attacks in the last few months on Google and Twitter, the consensus is that the US is not ready for a sophisticated cyber attack that crosses Government, critical infrastructure, and private domains.

If you watched the CNN special, you would have seen that the question wasn't whether we could defend ourselves from a nation state or hacker group, or launch an attack across cyberspace, but rather whether we could react quickly enough to an attack while being impeded by questions of law, policy, politics, and jurisdictional boundaries, issues not shared by our fast-moving, highly sophisticated adversaries.

CNCI has the following major goals:

  • To establish a front line of defense against today's immediate threats
  • To defend against the full spectrum of threats
  • To strengthen the future cybersecurity environment

Layer 7 Technologies, a vendor of dynamic cyber defense products, provides our customers with the ability to protect applications, monitor applications for situational awareness, and adapt in the face of attack. These capabilities are critical to providing solutions across all three of the major goals of CNCI.

For more information on CNCI, please see www.whitehouse.gov

For more information on Layer 7 Technologies, please see www.layer7tech.com

Thursday, February 11, 2010

Identity and Access Management in Cloud Computing: Part 2

Cloud Computing Implementation Options and Challenges

Like any traditional IT project, a project leveraging cloud computing must first look to its requirements. Most IT projects have some requirement for identity, whether all accesses to the cloud or only administrative accesses require authentication and authorization. This second blog post in the series "Identity and Access Management in Cloud Computing" focuses on the implementation challenges of identity and access control architectures as they relate to cloud computing.

Identities for cloud computing can be broken down into the following categories:

  • Enterprise - Enterprise Users, and applications that will access cloud applications
  • Internet - Customers, Partners, and Unanticipated Users that will access cloud applications
  • Cloud - Cloud applications that will access cloud, enterprise, and partner applications

Whether we are talking about cloud usage or cloud administration, identities can be binned into one of these three categories. The following paragraphs focus on the options and challenges in implementing an identity and access control architecture for cloud computing.

Identity Management - Identities may be associated with human resources hiring and firing, new or changing partner and contractor relationships, or new servers or applications being set up. Processes may include identity creation and role/group addition, credential issuance, audit and compliance, and on-going management and eventual deletion. Most companies leverage products which govern the creation of identities within their enterprise in accordance with their particular compliance regulations.

There are two approaches to identity management in cloud computing:

  • Leverage existing enterprise identity management system for cloud identities
  • Utilize a new cloud based identity management system and process for cloud identities
Identity management in the cloud, whether through an integral cloud-provided identity system or a cloud-deployed identity management system, fails in a number of ways. Below are the issues that come to mind:

User Experience

  • Separate systems increase user frustration

  • Users having more than a single credential can be problematic

  • Users have to deal with two separate processes for identity creation

  • Users may potentially become confused with enterprise vs. cloud issues and/or policies

Manageability

  • Identities must be administered in two places, doubling the administrative effort

  • User attributes are not automatically populated in cloud-based systems

Compliance and Risk

  • Cloud-based systems must adhere to regulatory requirements for identity provisioning

  • Cloud-based systems can easily be overlooked when changes are made to enterprise users' identities and privileges

  • Cloud-based systems may be susceptible to internet breach

Cost

  • Double the amount of work required to administer users
  • Purchasing and fielding identity products to the cloud may be costly
  • Separate audit and compliance may require significant investment

Therefore, we must look to our existing enterprise identity management capabilities for managing identities for cloud usage and administration.

Authentication Services - Principals are authenticated based on the principal making a claim regarding its identity, and then providing proof that the claim is true. For example, in computer systems, the username claims the principal's identity while the password, a shared secret between the user and the system with which they are authenticating, is the proof.

Authentication Services are responsible for authenticating principals by validating that the claim a principal makes about its identity is true. An Authentication Service provides a single logical component of an IT architecture where authentication may be accomplished. LDAP is a typical Authentication Service in that it provides a single point where users can be validated against their claims, whether the claim is in the form of a password, a certificate, or a stronger form of credential.
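For example, a simple password check against an LDAP directory might look roughly like the sketch below, which uses the third-party ldap3 library; the server address and user DN are placeholders.

# A minimal authentication check against an LDAP directory using ldap3.
from ldap3 import Server, Connection
from ldap3.core.exceptions import LDAPException

def authenticate(username, password):
    """Return True if the directory accepts a bind with the user's credentials."""
    server = Server("ldaps://directory.example.com")
    user_dn = "uid={},ou=people,dc=example,dc=com".format(username)
    try:
        # A successful bind proves the principal's claim (the password).
        conn = Connection(server, user=user_dn, password=password)
        return conn.bind()
    except LDAPException:
        return False

print(authenticate("jdoe", "not-the-real-password"))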

Identities and claims are managed and stored within the enterprise today and investments have already been made in this area. Authentication in the cloud requires user identities and claims to be available to the cloud applications. There are four approaches to this that will be discussed:

New cloud-based solution
  • Many of the same issues seen in moving identity management to the cloud apply to this approach
  • Possible breach and release of identities to the internet
  • Administrative burden in managing two systems
Connectivity to the enterprise
  • For security reasons, LDAP and other enterprise identity repositories are not accessible from the internet and thus would not be available to cloud applications
  • Even if they were available, the latency of authentication queries may be a significant issue
Identity replication from enterprise to cloud
  • Storing all enterprise users' information in the cloud poses a security and privacy problem should the cloud-based identity repository be breached from the internet
Federation of enterprise identity system
  • This approach carries the most opportunity for success, as identity repositories can remain within the protected interior of the enterprise. An externally available Secure Token Service (STS) could authenticate users and issue a federated authentication token to be used for authenticating to the cloud.

Federation of enterprise identity systems will be described in a future blog posting. This is the basis for allowing Identity Management Systems and Authentication Services to remain within the enterprise.
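As a hedged sketch of that token flow: a production STS would typically issue signed SAML assertions, but here a signed JWT built with the PyJWT library stands in, and the signing key and claims are invented. The enterprise authenticates the user internally and issues a short-lived token that cloud applications can verify without reaching the internal identity store.

import time
import jwt  # PyJWT

SIGNING_KEY = "replace-with-enterprise-signing-key"  # placeholder

def issue_token(username, attributes):
    """Stand-in for an enterprise STS: issue a short-lived signed token."""
    claims = {
        "sub": username,
        "attrs": attributes,
        "iss": "https://sts.enterprise.example.com",
        "exp": int(time.time()) + 300,   # short-lived: five minutes
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

def verify_token(token):
    """Cloud application verifies the token locally using the shared key."""
    return jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])

token = issue_token("jdoe", {"clearance": "secret", "org": "agency-x"})
print(verify_token(token)["sub"])        # 'jdoe'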

Authorization Services - Authorization is the means of ensuring that only properly authorized principals are able to access resources within a system. A principal can be a human, a machine, or an application. To carry out authorization, the first step is to authenticate the principal, the second is to obtain information about the principal and the resource with which the principal is interacting, and the final step is to allow or deny access based on the applicable policies for that resource.

An Authorization Service is responsible for evaluating an authorization query, collecting necessary information about the principal and the resource, potentially from an Attribute Service and/or identity directory, and evaluating a policy to determine if access should be granted or denied. There are three approaches to where authorization policy may be enforced in cloud computing.

Enterprise Authorization - The Cloud application asks the enterprise to make an authorization decision to grant or deny access.

  • Policies are created, managed, and stored within the enterprise
  • Authorization Services must be exposed to the internet, which raises potential security issues such as man-in-the-middle and denial-of-service attacks impacting cloud application usage
  • Latency may be an issue, as cloud resources depend on network calls to the enterprise for access decisions

Stand-Alone Cloud Authorization - Use of cloud-provided or custom authorization services to grant or deny access

  • Policies are created, managed, and stored in the cloud
  • Requires separate administration of cloud-based system
  • Coarse-grained capabilities of cloud-provided solutions may not suffice
  • Compliance and regulatory requirements may not be met by cloud provided systems

Cloud Authorization with Enterprise Governance - The cloud makes an authorization decision but policies are governed by the enterprise

  • Policies are created, managed, and stored in the enterprise but cached in the cloud
  • Allows policies to be created and managed in accordance with enterprise processes
  • Allows faster response times as authorization services are available local to the cloud applications

For these reasons, the most robust mechanism for cloud authorization is to deploy an authorization service in the cloud that can retrieve authorization policies from the enterprise. This will be the topic of a future blog posting. Specifically, standards will be discussed that make it possible for cloud-based authorization services to retrieve policies from the enterprise in a secure fashion.
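As a rough sketch of that governance pattern (the policy fetch, refresh interval, and policy format below are hypothetical, not a specific Layer 7 or XACML interface), a cloud-side authorization service can periodically pull enterprise-governed policies and evaluate them locally, avoiding a network call back to the enterprise for every authorization check.

import time

def fetch_policies_from_enterprise():
    # Placeholder for a secured call to the enterprise policy store
    # (for example, retrieving XACML policies); returns a simple rule set here.
    return {"billing-report": {"required_role": "finance", "min_clearance": 2}}

class CachedAuthorizer:
    def __init__(self, refresh_seconds=300):
        self.refresh_seconds = refresh_seconds
        self._policies = {}
        self._fetched_at = 0.0

    def _refresh_if_stale(self):
        # Pull fresh policies from the enterprise when the cache is stale.
        if time.time() - self._fetched_at > self.refresh_seconds:
            self._policies = fetch_policies_from_enterprise()
            self._fetched_at = time.time()

    def is_authorized(self, subject, resource):
        self._refresh_if_stale()
        rule = self._policies.get(resource)
        if rule is None:
            return False                 # default deny when no policy is cached
        return (rule["required_role"] in subject.get("roles", [])
                and subject.get("clearance", 0) >= rule["min_clearance"])

authz = CachedAuthorizer()
print(authz.is_authorized({"roles": ["finance"], "clearance": 3}, "billing-report"))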

Conclusions

Organizations must extend their existing identity and access management strategies into the cloud. New, cloud-only solutions simply will not scale; rather, the cloud must be seen as part of the "extended" enterprise, where existing privacy concerns, compliance issues, and processes and controls are addressed in the cloud using strategies and solutions already built and utilized within the enterprise. In future blog postings, I plan to discuss ways that the enterprise can extend its existing solutions for Authentication and Authorization Services to the cloud.

Friday, January 15, 2010

Iranian Cyber Army Strikes Again

At the end of my post in December titled "Iranian Cyber Army Hacks Twitter", I asked "what's next for the Iranian Cyber Army". It appears that, although eclipsed in media coverage this week by the Chinese attack on Google and others, the Iranian Cyber Army has struck again; this time they have successfully defaced Baidu, a search engine based in China. Interestingly enough, Baidu is Google's chief competitor in China and stands to benefit should Google decide to close operations in China. For more information on the recent Google attack you can read my blog post on the topic (Cyber Attack on Google and Others).

It appears that Baidu was compromised on Tuesday, January 12th in a manner similar to Twitter: their DNS records were changed to point to a server hosted by the ISP ThePlanet.com in Houston, TX, which displayed an Iranian flag and a message stating that the site was hacked by the Iranian Cyber Army to any user who tried to visit the Baidu website, www.baidu.com. Baidu corrected the issue approximately two hours later, according to sources on the internet.

As with Twitter, this appears to have been only a defacement attack; however, the attacker could just as easily have created a fake Baidu page and pharmed or phished information from users.

The frequency at which these attacks are happening of late is startling, and it shows the dire need we have for a global cyber coalition.

Cyber Attack on Google and Others

On Tuesday, Google reported in their official blog that in mid-December they detected a "highly sophisticated and targeted" attack on their corporate infrastructure originating from China that resulted in the theft of intellectual property from Google. Additionally, Google stated in this blog that 20 other large companies were similarly targeted. Google went on to state that they have evidence to suggest that a primary goal of the attackers was to access the Gmail accounts of Chinese human rights activists. This incident, as well as the limitation on free speech imposed on Google by the Chinese government, is forcing Google to review the feasibility of their business operations in China.

In follow-up, a number of security firms supporting the investigation have concluded that the number of attacked companies is not 20 but between 30 and 34, most of them large Fortune 500 companies. The attack, code-named "Aurora" by the attackers, was made up of dozens of pieces of malware and used several levels of encryption to hide itself in the targeted companies' networks and to obscure its activity.

The U.S. Government has been under this type of attack for many years; this is the first time such a highly organized and sophisticated attack has been so publicly launched on private industry. Who knows what the impact of this will be on the global economy? The mind can only fathom what would happen if each of the companies attacked lost intellectual property that resulted in them being "second to market" for a product they had been planning and building for months or even years.

What we know about Aurora

There is currently some debate on whether Aurora leveraged vulnerabilities in both Internet Explorer and Adobe's Reader and Acrobat applications, or only in Internet Explorer. Either way, Aurora installation began on the targeted system when a user viewed a malicious website or, potentially, opened a PDF document sent in an email, though as I mentioned the latter has not been substantiated by Adobe. Once executed in the browser, an encrypted shell script would run. The shell script downloaded a binary from an external machine which, once executed, would open a backdoor to the attackers' Command and Control servers. These servers were purportedly running in hosted facilities in the US. This gave the attacker some level of access into the user's machine and the network to which the machine was connected.

Microsoft Versions Affected:

According to Microsoft, Internet Explorer 6 Service Pack 1 on Microsoft Windows 2000 Service Pack 4, and Internet Explorer 6, Internet Explorer 7 and Internet Explorer 8 on supported editions of Windows XP, Windows Server 2003, Windows Vista, Windows Server 2008, Windows 7, and Windows Server 2008 R2 are affected.

Let's review the timeline of events. The following dates/times were derived from various sources on the internet.

Mid-December - Google detects a "highly sophisticated and targeted" cyber attack

January 2nd - Adobe becomes aware of "sophisticated, and coordinated" cyber attack

January 4th - Attack seems to have stopped as Command & Control Servers are shut down

January 12th/3pm - Google announces the Cyber Attack via blog

January 12th/3:16pm - Adobe announces the Cyber Attack via blog

January 12th/Evening - U.S. Government asks China for an Explanation

January 14th - Microsoft issues a security advisory


Looking at the timeline, the scary thing is that the attack seems to have been underway from mid-December (let's say the 15th). If Google detected it at its start, which may not be the case, and it was not shut down until January 4th, the attackers had roughly 21 days of access. It's scary to think how much information could have been stolen, and how much damage the attackers could have done in those 21 days had that been their goal.

As stated in the U.S. Government Cyberspace Policy Review, information and communication networks are largely owned and operated by the private sector, both nationally and internationally. The report goes on to state that cyber security requires a public-private partnership as well as international cooperation. Unfortunately, we are sorely lacking in the ability to ensure a coordinated response and recovery to a significant incident should one occur, and this timeline only proves the point. It appears as though private/public communication did not effectively start until January 12th; during this time companies were being infiltrated yet may not have known it. Even if Google had notified all the companies it determined were under attack from the information it had available, there is nothing to say that another attack was not going on simultaneously by the same attackers, disconnected from the one affiliated with Google.

With worldwide cyber attacks becoming more focused, we must accelerate our ability to deal with them rapidly and in a coordinated fashion. This particular instance seems to have been about stealing information, monetary gain, or political issues. We need to remember that it could just as easily have been about disrupting critical national infrastructure in pursuit of national disorganization and loss of life.

Wednesday, January 6, 2010

Identity and Access Management in Cloud Computing #1

The new United States Federal Chief Information Officer (CIO), Vivek Kundra, is serious about embracing cloud computing as a vehicle for rationalizing government IT assets, costs, and budgets. Aneesh Chopra, the Federal CTO, follows suit and has gone on the record to say that the federal government should be exploring greater use of cloud computing where appropriate. Government storefronts like Apps.gov, which offer cloud-based applications, are being stood up in support of this goal. As stated by Vivek Kundra, the major challenge they face in making cloud computing a reality is security and privacy.

With this, and an influx of government customers approaching Layer 7 for advice on dealing with their cloud computing security and privacy challenges, I have been reading any cloud computing literature I can get my hands on. Although there is some good information coming out of the Cloud Security Alliance, NIST, and industry sources, there is still a lack of sufficient detail on the topic of security and privacy to allow government customers to move forward smartly with cloud computing.

The fundamental shift from traditional IT to cloud-based IT is that enterprises are moving away from a model where they control all aspects of application delivery to a model where a large portion of the governance associated with an application's deployment and run-time characteristics is controlled by the cloud provider. This is a significant move for the government, which has traditionally kept its IT close and its data even closer. One of the biggest questions is "How do I do identity and access control and management in the cloud?", and that is a very good question.

There are a number of challenges associated with cloud computing and identity, access control and management, none of which have simple solutions. Provisioning identities for the cloud, storing identities so that the cloud has access to them, and enforcing fine-grained or even coarse-grained access control in the cloud are all issues that have been resolved in the enterprise but require a new way of thinking when addressed in cloud computing.

In the coming weeks, I will write a series of blog posts to flesh out the concept of identity and access management in cloud computing, beginning next week with a description of cloud computing integration patterns.