
Cloud Computing

Introduction:

The concept of clouds is not new; it has proven a major commercial success in recent years and will play a large part in the coming years in the field of IT. According to Wikipedia, the underlying concept of cloud computing can be dated back to a public speech given by John McCarthy in 1961, in which he predicted that computer time-sharing might lead to the provisioning of computing resources and applications as a utility.

Cloud systems can be understood as another form of resource-provisioning infrastructure. By utilizing this technology, organizations can reduce their development costs and time. In this model, a cloud provider offers the platform and other infrastructure as a service over the Internet. The cloud provider owns all the necessary hardware and software and hosts them in a datacenter. Organizations can utilize cloud services on a pay-as-you-go basis.

The following list identifies the main types of clouds: 1) Infrastructure as a Service (IaaS): Also referred to as a resource cloud. By utilizing IaaS, end users can benefit from a large pool of databases and storage capabilities as a service. They can make use of the storage capabilities according to their needs and requirements. IaaS is also utilized for providing low-level services, such as CPU cycles for computation, which can make up a typical virtual environment.

2) Platform as a Service (PaaS): PaaS provides computational resources via a platform through which applications and services can be developed and hosted. 3) Software as a Service (SaaS): Also referred to as a service and application cloud. Through SaaS, end users can benefit from application software and business processes without hosting them at their own site, which reduces their development costs and time. They utilize these services whenever required, or when they face peak seasonal load. Google Docs and Salesforce CRM are good examples of this model.
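The division of responsibility among the three service models can be sketched as a simple data structure. This is an illustrative summary only; the layer names and examples are informal and not tied to any particular provider's terminology:

```python
# Illustrative sketch: the three service models differ mainly in which
# layers of the stack the provider manages and which the customer manages.
SERVICE_MODELS = {
    "IaaS": {
        "provider_manages": ["datacenter", "network", "storage", "virtualization"],
        "customer_manages": ["operating system", "runtime", "application", "data"],
        "example": "raw virtual machines and block storage",
    },
    "PaaS": {
        "provider_manages": ["datacenter", "network", "storage", "virtualization",
                             "operating system", "runtime"],
        "customer_manages": ["application", "data"],
        "example": "hosted application development platform",
    },
    "SaaS": {
        "provider_manages": ["datacenter", "network", "storage", "virtualization",
                             "operating system", "runtime", "application"],
        "customer_manages": ["data"],
        "example": "Google Docs, Salesforce CRM",
    },
}

for model, layers in SERVICE_MODELS.items():
    print(f"{model}: customer manages {', '.join(layers['customer_manages'])}")
```

Reading down the table, the provider takes over more of the stack at each step, until with SaaS the customer is responsible only for their own data.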

These three service models are commonly illustrated as a layered stack, with IaaS at the bottom, PaaS in the middle, and SaaS at the top.

Key Issues:
Besides its many benefits, the cloud computing methodology introduces many concerns about privacy, security, data integrity, and other issues. These issues can become worse when multiple clouds are combined. Moreover, there is a need to define clear policies for both cloud providers and consumers. There is also a need for a high level of trust between the two stakeholders, because on utilizing a cloud service, all of an organization's sensitive data is placed in the hands of the cloud provider. Issues may also arise when an organization moves its applications from a private cloud to a public cloud.

Historical Background:
The underlying concept of cloud computing dates back to 1960, when John McCarthy opined that "computation may someday be organized as a public utility"; indeed, it shares characteristics with the service bureaus of the 1960s. In 1997, the first academic definition was provided by Ramnath K. Chellappa, who called it a computing paradigm.

The term cloud had already come into commercial use in the early 1990s to refer to large Asynchronous Transfer Mode (ATM) networks. By the turn of the 21st century, the term "cloud computing" began to appear more widely, although most of the focus at that time was limited to SaaS. In 1999, Salesforce.com was established by Marc Benioff, Parker Harris, and their associates. They applied many technologies developed by companies such as Google and Yahoo! to business applications. They also provided the concepts of "on demand" and SaaS with their real business and successful customers. The key for SaaS is that it is customizable by customers with limited technical support required. In the early 2000s, Microsoft extended the concept of SaaS through the development of web services. IBM detailed these concepts in 2001 in the Autonomic Computing Manifesto, which described advanced automation techniques such as self-monitoring, self-healing, self-configuring, and self-optimizing in the management of complex IT systems with heterogeneous storage, servers, applications, networks, security mechanisms, and other system elements that can be virtualized across an enterprise.

Description:
Cloud architecture:
A cloud computing architecture comprises the hardware and software designed by a cloud architect, who typically works for a cloud integrator. It typically involves multiple cloud components communicating with each other over application programming interfaces, usually web services. This closely resembles the UNIX philosophy of having multiple programs each doing one thing well and working together over universal interfaces. Complexity is controlled, and the resulting systems are more manageable than their monolithic counterparts. Cloud storage architecture is loosely coupled: metadata operations are centralized, enabling the data nodes to scale into the hundreds, each independently delivering data to applications or users. The whole system is administered through a central server that is also used for monitoring client demand and traffic and for ensuring smooth functioning of the system. Middleware is used to allow computers that are connected on the network to communicate with each other. A cloud computing system must also keep a copy of all of its clients' data so that service can be restored if a breakdown happens. This practice of copying data is called redundancy, and it is the responsibility of the cloud computing provider.
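The loosely coupled storage design and the redundancy described above can be sketched in a few lines. This is a deliberately simplified toy, with invented class names: a central metadata server records where each object lives, while interchangeable data nodes hold the replicated bytes.

```python
import random

class DataNode:
    """One storage node; holds object data keyed by object id."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}           # object_id -> bytes

class MetadataServer:
    """Centralized metadata: tracks which nodes hold which objects."""
    def __init__(self, nodes, replicas=3):
        self.nodes = nodes
        self.replicas = replicas
        self.locations = {}        # object_id -> list of node names

    def put(self, object_id, data):
        # Redundancy: copy each object onto `replicas` distinct nodes so
        # the service can be restored if one node breaks down.
        chosen = random.sample(self.nodes, self.replicas)
        for node in chosen:
            node.blocks[object_id] = data
        self.locations[object_id] = [n.name for n in chosen]

    def get(self, object_id):
        # Only metadata is centralized; any replica can serve the read.
        for node in self.nodes:
            if node.name in self.locations[object_id]:
                return node.blocks[object_id]
        raise KeyError(object_id)

nodes = [DataNode(f"node{i}") for i in range(5)]
meta = MetadataServer(nodes, replicas=3)
meta.put("report.doc", b"quarterly numbers")
assert meta.get("report.doc") == b"quarterly numbers"
```

Because only the small metadata map is centralized, data nodes can be added independently, which is what lets real systems scale "into the hundreds" of nodes.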

Cloud Deployment Models: There are four different deployment models of cloud computing: 1) Public Cloud 2) Community Cloud 3) Hybrid Cloud 4) Private Cloud

Public Cloud: A public or external cloud is traditional cloud computing, in which resources are provisioned on a fine-grained, self-service basis over the Internet. Community Cloud: If several organizations have similar requirements and want to share infrastructure, a community cloud is established. This is a more expensive option than a public cloud, since the costs are distributed among fewer users, but a community cloud offers a higher level of privacy and security.

Hybrid Cloud: A hybrid cloud means either two separate clouds joined together, or a combination of virtualized cloud server instances used together with real physical hardware. The most accurate definition of the term is probably the use of physical hardware and virtualized cloud server instances together to provide a single common service. Private Cloud: A private cloud consists of applications or virtual machines running in a company's own infrastructure. It provides the benefits of utility computing: shared hardware cost, the ability to recover from failure, and the ability to scale up or down depending on demand.

Major Obstacles for cloud computing:


Some of the major obstacles in adopting cloud computing are as follows.

Availability of the service: Organizations worry about whether utility computing services will have adequate availability, and this makes some wary of cloud computing. One solution to this problem is the use of multiple network providers, so that failure by a single company will not take them off the air. Even so, large companies are still reluctant to migrate to cloud computing, because even if a provider has multiple datacenters in different geographic regions using different network providers, it may still have common software infrastructure and accounting systems, or the provider may even go out of business.
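The "multiple network providers" mitigation amounts to a failover loop: try each provider in turn so that failure of a single company does not take the service off the air. The sketch below is illustrative; the provider functions are invented stand-ins, with one simulating an outage:

```python
# Hypothetical provider endpoints; in reality these would be requests
# routed over independent networks. Provider A simulates an outage.
def fetch_via_provider_a(path):
    raise ConnectionError("provider A is down")

def fetch_via_provider_b(path):
    return f"response for {path}"

def fetch_with_failover(path, providers):
    """Try each provider in order; raise only if every one fails."""
    last_error = None
    for provider in providers:
        try:
            return provider(path)
        except ConnectionError as exc:
            last_error = exc       # fall through to the next provider
    raise RuntimeError("all providers failed") from last_error

result = fetch_with_failover("/status", [fetch_via_provider_a, fetch_via_provider_b])
print(result)                      # served by provider B despite A's outage
```

As the surrounding text notes, this only protects against network failures; it does not help if the providers share software infrastructure or accounting systems.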

Data Lock-In: Customers cannot easily extract their data and programs from one site to run on another. Concern about the difficulty of extracting data from the cloud is preventing some organizations from adopting cloud computing. Customer lock-in may be attractive to cloud computing providers, but cloud computing users are vulnerable to price increases (as Stallman warned), to reliability problems, or even to providers going out of business.

Data Confidentiality and Auditability:


"My sensitive corporate data will never be in the cloud." Anecdotally, we have heard this repeated multiple times.

We believe that there are no fundamental obstacles to making a cloud computing environment as secure as the vast majority of in-house IT environments, and that many of the obstacles can be overcome immediately with well-understood technologies such as encrypted storage and Virtual Local Area Networks. For example, encrypting data before placing it in a cloud may be even more secure than keeping unencrypted data in a local datacenter; this approach was successfully used by TC3, a healthcare company with access to sensitive patient records and healthcare claims, when moving their HIPAA-compliant application to AWS [2]. One more concern is that many nations have laws requiring SaaS providers to keep customer data and copyrighted material within national boundaries. Similarly, some businesses may not like the ability of a country to get access to their data via the court system; for example, a European customer might be concerned about using SaaS in the United States given the USA PATRIOT Act.
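The "encrypt before upload" idea is simple to sketch: the key stays on premises, and the cloud stores only ciphertext it cannot read. The toy below uses a one-time pad purely for illustration; it is NOT production cryptography, and a real deployment would use a vetted cipher such as AES-GCM from an audited library instead:

```python
import os

def encrypt(plaintext: bytes):
    # Toy one-time pad: the key is generated and kept on the client side.
    key = os.urandom(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

cloud_bucket = {}                         # stand-in for remote cloud storage
record = b"patient #4711: claim approved"
key, ct = encrypt(record)
cloud_bucket["records/4711"] = ct         # only ciphertext leaves the premises
assert cloud_bucket["records/4711"] != record   # the provider sees gibberish
assert decrypt(key, cloud_bucket["records/4711"]) == record
```

The point of the sketch is the trust boundary: even a compromised or subpoenaed provider holds only ciphertext, while decryption requires the key that never left the customer's site.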

Data Transfer Bottlenecks:


Applications continue to become more data-intensive. If we assume applications may be pulled apart across the boundaries of clouds, this may complicate data placement and transport. At $100 to $150 per terabyte transferred, these costs can quickly add up, making data transfer costs an important issue. Cloud users and cloud providers have to think about the implications of placement and traffic at every level of the system if they want to minimize costs. One opportunity to overcome the high cost of Internet transfers is to ship disks. Jim Gray found that the cheapest way to send a lot of data is to physically send disks or even whole computers via overnight delivery services. Although there are no guarantees from the manufacturers of disks or computers that you can reliably ship data that way, he experienced only one failure in about 400 attempts. A second opportunity is to find other reasons to make it attractive to keep data in the cloud, for once data is in the cloud for any reason it may no longer be a bottleneck and may enable new services that could drive the purchase of Cloud Computing cycles.
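Jim Gray's argument can be checked with back-of-the-envelope arithmetic: compare sending 10 TB over a WAN link with shipping disks overnight. The link speed below is an assumption chosen for illustration; the $100 to $150 per terabyte figure comes from the text above:

```python
TERABYTE = 10**12                         # bytes

def wan_transfer_days(data_bytes, megabits_per_sec):
    """Days needed to push `data_bytes` through a link of the given speed."""
    seconds = data_bytes * 8 / (megabits_per_sec * 10**6)
    return seconds / 86_400               # seconds per day

data = 10 * TERABYTE
days = wan_transfer_days(data, 20)        # assumed 20 Mbit/s WAN link
print(f"Over a 20 Mbit/s link: {days:.1f} days vs. ~1 day for overnight shipping")

# Network transfer cost at $100-$150 per terabyte for the same 10 TB:
print(f"Transfer cost: ${100 * 10} to ${150 * 10}")
```

At this assumed link speed the network transfer takes well over a month, which is why physically shipping disks can be both cheaper and faster for bulk data.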

Performance Unpredictability:
Our experience is that multiple virtual machines can share CPUs and main memory surprisingly well in cloud computing, but that I/O sharing is more problematic. One opportunity is to improve architectures and operating systems to efficiently virtualize interrupts and I/O channels. Technologies such as PCI Express are difficult to virtualize, but they are critical to the cloud. One reason to be hopeful is that IBM mainframes and operating systems largely overcame these problems in the 1980s, so we have successful examples from which to learn.

Another possibility is that flash memory will decrease I/O interference. Flash is semiconductor memory that preserves information when powered off, as mechanical hard disks do, but since it has no moving parts, it is much faster to access (microseconds vs. milliseconds) and uses less energy.

Bugs in Large-Scale Distributed Systems:


One of the difficult challenges in cloud computing is removing errors in these very large-scale distributed systems. A common occurrence is that these bugs cannot be reproduced in smaller configurations, so the debugging must occur at scale in the production datacenters.

Scaling Quickly:
Pay-as-you-go certainly applies to storage and to network bandwidth, as both count bytes used. Computation is slightly different, depending on the virtualization level. For example, Google AppEngine automatically scales in response to load increases and decreases, and users are charged by the cycles used. Similarly, AWS charges by the hour for the number of instances you occupy, even if your machine is idle. The reason for scaling is to conserve money as well as resources. Since an idle computer uses about two-thirds of the power of a busy computer, careful use of resources could reduce the impact of datacenters on the environment, which is currently receiving a great deal of negative attention. Cloud computing providers already perform careful and low-overhead accounting of resource consumption. By imposing per-hour and per-byte costs, utility computing encourages programmers to pay attention to efficiency.
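The billing difference noted above can be made concrete with a small cost sketch. The price and load figures are made-up numbers for illustration, not actual provider rates:

```python
HOURS_PER_MONTH = 730
PRICE_PER_INSTANCE_HOUR = 0.10            # assumed price, in dollars

def per_hour_bill(instances):
    # AWS-style billing: charged for every hour an instance is occupied,
    # whether it is busy or idle.
    return instances * HOURS_PER_MONTH * PRICE_PER_INSTANCE_HOUR

def per_use_bill(busy_fraction, instances):
    # AppEngine-style billing: charged only for the cycles actually used,
    # because the platform scales down automatically when load drops.
    return instances * HOURS_PER_MONTH * busy_fraction * PRICE_PER_INSTANCE_HOUR

flat = per_hour_bill(instances=10)
metered = per_use_bill(busy_fraction=0.3, instances=10)
print(f"per-hour billing: ${flat:.2f}/month, per-use billing: ${metered:.2f}/month")
```

Under these assumptions a fleet that is busy only 30% of the time costs less than a third as much when charged by actual use, which is exactly the incentive that pushes programmers toward efficiency and quick scale-down.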

Software Licensing:
Current software licenses commonly restrict the computers on which the software can run. Users pay for the software and then pay an annual maintenance fee. Indeed, SAP announced that it would increase its annual maintenance fee to at least 22% of the purchase price of the software, which is comparable to Oracle's pricing. Hence, many cloud computing providers originally relied on open source software, in part because the licensing model for commercial software is not a good match for utility computing. The primary opportunity is either for open source to remain popular or for commercial software companies to change their licensing structure to better fit cloud computing. For example, Microsoft and Amazon now offer pay-as-you-go software licensing for Windows Server and Windows SQL Server on EC2.
