What started out as a way to slice and dice the physical resources of a compute engine (so-called “virtualization”) evolved over the last few years into an elastic, on-demand server-side infrastructure called “cloud.” Some pundits compare cloud computing with the cumbersome mainframe machines of the seventies, with the speed-freak “supercomputers,” or with the isolated grid computing of the late 1990s. But what we call “cloud computing” today delivers performance and flexibility beyond any of those past paradigms, and it has disrupted all of them.
While the earliest “cloud” concepts may have their genes in the time-sharing ideas of the late 1950s, the high-performance computers of the 1960s and the networked computers of the 1970s, the “cloud” infrastructure as a service (IaaS) available today emerged with Amazon’s EC2/S3 release in 2006. Early adopters embraced the new zero-overhead computing and saw an instant opportunity to solve huge problems. The power, flexibility, on-demand nature and pay-as-you-use pricing of the services offered in 2007 and 2008 allowed large-scale computing to tackle problems that weren’t accessible with the old “build your own server farm or computing infrastructure” approach. By 2009, other big-name application builders like Google, Microsoft and Salesforce began making “killer” apps available via the cloud, while hardware/OS vendors such as IBM and VMware began offering their own versions of clouds around the same time. Many consumer, retail and online businesses started migrating their applications and services to Amazon’s and other providers’ cloud infrastructure, and startups found their perfect platform for innovating the next generation of web applications.
In this second decade of the new millennium, cloud has become a mainstream focus for consumer as well as enterprise applications. Flexible, pay-as-you-use, large-scale computing in various forms and shapes started cropping up across the server and data center industry. All major web, software and hardware vendors now take cloud very seriously, and several enterprises have made cloud part of their core IT strategy. Today, cloud is well past its inflection point, into a tremendous growth phase, and possibly experiencing its most innovative cycle since inception. Cloud is now ubiquitous.
Recent reports and surveys indicate that cloud adoption is growing at roughly 50% year over year: rocket-fuel growth. Original cloud architectures are gradually becoming commonplace, with newer architectures and models emerging. Rightfully so: the early days of public cloud were followed by the heyday of private cloud, then the year of “hybrid cloud” in 2013! So, what’s new in 2014? Certainly, “hybrid” cloud has yet to see its real glory days, as the adoption cycle for truly hybrid use cases is quite long. Hybrid clouds make the most sense for enterprise applications, yet enterprises are in no hurry to move from their decades-old, beloved data centers and hosted environments. And the risks associated with the perceived “cost savings” and “flexibility” of public cloud inhibit quick decisions, or even consideration, of hybrid models for any complex production workload. This means the best days of cloud are probably still ahead of us.
The work is not done! Ahead we see continued obstacles around security, reliability, performance and market penetration. As usual, I get excited when writing about “cloud,” but the main mission of this post is more modest: to clarify some of the popular qualifiers of cloud (the so-called “types” or “architectures” of cloud). Here we go…
Public Cloud comprises services offered over the public Internet, available to anyone who wants to purchase them. This is the most popular and widely used form of cloud. There are a few hundred public Cloud Service Providers (CSPs) around the world, offering infrastructure, platform or software/application services: IaaS, PaaS or SaaS. AWS (the clear leader), Microsoft Azure, IBM SoftLayer and Google Cloud are among the most popular public clouds in the world today. Well-abstracted applications with few hardware dependencies, modest security and performance requirements, and the ability to scale horizontally are well suited for public clouds.
Private Cloud is a virtualized cloud data center inside your company’s firewall. It may also consist of private servers and racks dedicated to a company within a cloud provider’s data center. The company’s IT department provides software and hardware as a service to its users: employees, contractors, vendors and other stakeholders. VMware, HP and Dell are among the most popular private cloud infrastructure providers in the world. With its OpenWorld 2014 announcements a couple of weeks back, Oracle entered the private cloud market as well. Applications with stringent security and privacy requirements, high performance requirements, and some specialized industry-vertical workloads are well suited for private clouds.
Hybrid Cloud combines aspects of both public and private clouds for various workloads and optimizes the infrastructure, platform and software services for those workloads. Some also call a traditional data center or hosted environment integrated with public cloud “hybrid” cloud. Some vendors seamlessly extend traditional data center or private cloud infrastructure to public clouds, such as VMware vCloud, as long as the underlying infrastructure is homogeneous; in that case, both private and public infrastructure are based on VMware technologies such as vSphere. Others require software agents to be installed on the private cloud/data center and/or public cloud ends to implement a seamless “hybrid” environment. Selecting which services run on each end is one of the most critical decisions for a successful “hybrid” deployment. Workloads with mixed security and scaling requirements can take advantage of hybrid clouds by creating a sophisticated division of duties, e.g., running authentication and security services on the private cloud while running web services, and bursting excess capacity, on public clouds.
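The division-of-duties idea above can be sketched as a tiny placement routine. This is a hypothetical illustration, not any vendor’s API: the service attributes and placement rules are assumptions made up for the example.

```python
# Hypothetical sketch: routing a workload's services between private and
# public clouds based on each service's security and scaling needs.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    sensitive: bool   # handles credentials, PII, etc.
    bursty: bool      # needs elastic capacity under load

def place(service: Service) -> str:
    """Division of duties: sensitive services stay on the private cloud;
    bursty, less sensitive services burst to the public cloud."""
    if service.sensitive:
        return "private"
    if service.bursty:
        return "public"
    return "private"  # default to the controlled environment

workload = [
    Service("auth", sensitive=True, bursty=False),
    Service("web-frontend", sensitive=False, bursty=True),
    Service("reporting", sensitive=False, bursty=False),
]

placement = {s.name: place(s) for s in workload}
print(placement)
```

A real placement decision would also weigh latency between the two ends, data residency rules and per-service cost, but the shape of the decision is the same.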
Multi Cloud is an architecture where more than one public cloud is utilized for a workload, in either a “hybrid” or public-cloud-only environment. That means a workload may run in a private cloud or data center environment as well as multiple public clouds, or it may run on multiple public clouds only. Note that multi cloud workloads do not necessarily require any cloud-wide integration between the clouds they utilize, beyond connectivity and whatever integration the specific workload needs. Such workloads are mostly theoretical at the time of writing but may become more commonplace over the next few years. When innovative application architectures emerge that can break workloads down and distribute them across multiple clouds seamlessly, multi cloud architectures are likely to become popular.
Inter Cloud is an interconnected global “cloud of clouds”: a term first used in the context of cloud computing in 2007, when Kevin Kelly opined that “eventually we’ll have the intercloud, the cloud of clouds.” It became popular in early 2009, and Cisco probably deserves the credit for popularizing the term. Most recently, Cisco launched a product offering by the same name, InterCloud, which offers solutions based on data centers and multiple public clouds. Rightfully, it can be viewed as a “cloud of clouds,” as it combines data centers with multiple public clouds. Cisco has made a series of recent announcements and is investing billions to build infrastructure that can support seamless “workload mobility”: workloads that can run on more than one cloud without significant modification. Any application that requires portability across geographies, locations or other criteria to run on different clouds should be a good candidate for inter cloud architectures.
Cross Cloud is a scenario where a workload running in one cloud environment has dependencies on services running in another cloud. Such scenarios can arise in all of the hybrid, multi and inter cloud architectures and may require dedicated, high-bandwidth network connectivity between the clouds in play. Cross cloud scenarios are likely to yield high levels of service optimization in these architectures, allowing a workload to take advantage of better service offerings or SLAs provided by a cloud other than the one where it is running. Cross cloud scenarios are starting to appear in hybrid clouds from a security and privacy standpoint, and are likely to appear in multi and inter cloud settings over the next few years. Future application architectures will take advantage of one cloud service on one platform, e.g., compute, and another cloud service on another platform, e.g., content distribution, creating highly distributed applications that run across clouds and utilize different services on each.
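One way to picture a cross cloud binding is a workload that binds each dependency to whichever cloud advertises the better SLA for that service. The cloud names, service names and SLA figures below are invented for illustration; a real selection would also weigh cost, latency and data gravity.

```python
# Hypothetical sketch: bind each service a workload depends on to the
# cloud advertising the highest SLA for it. All names and numbers are
# illustrative, not real provider data.
catalog = {
    "compute": {"cloud-a": 99.95, "cloud-b": 99.90},
    "cdn":     {"cloud-a": 99.50, "cloud-b": 99.99},
    "storage": {"cloud-a": 99.99, "cloud-b": 99.95},
}

def bind(service: str) -> str:
    """Return the cloud with the highest advertised SLA for a service."""
    offers = catalog[service]
    return max(offers, key=offers.get)

# A workload might run its compute in one cloud yet pull content
# distribution from another: a cross-cloud dependency.
bindings = {svc: bind(svc) for svc in catalog}
print(bindings)
```

The point of the sketch is that the binding is per service, not per workload: each dependency can land on a different cloud, which is exactly what creates the cross-cloud connectivity requirement noted above.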
Pure Cloud architectures stress very high levels of adoption of truly “cloud”-oriented services and concepts for an application or application service. Such architectures require that the underlying infrastructure be cloud-only, not mixed with pre-cloud servers or data centers. On top of that cloud infrastructure, the platform and software services should be utilized fully by an application in order to qualify as pure cloud. For example, an application running on cloud but using a traditional database rather than a cloud database cannot be called a “pure cloud” application. Most greenfield cloud applications can utilize all or most services available on a platform and thereby reduce the operational burden of running highly scalable and complex workloads.
Mixed Cloud architectures have a much higher tolerance for non-cloud or pre-cloud constructs, components and services. The foundational infrastructure can be a combination of cloud and data center services, and platforms and applications running in mixed cloud environments may include both cloud and non-cloud services, but the workload itself should be running in a cloud environment. Most legacy applications migrated from on-premise and traditional environments end up “mixed cloud”: they can take advantage of some cloud services, but not many, due to the architectural and implementation constraints of such legacy applications.
These simple concepts can be helpful in choosing approaches to solving large data/computing problems. We’re working on some really cool technologies to manage migrations between these models for large data/computing needs. They will surface the need for some of the concepts I’ve used in this post and demonstrate how our technologies can make an impact on the future of cloud and on very widespread cloud adoption, especially in enterprise IT and computing (not just Silicon Valley or other tech hubs).