
Abstract

The purpose of this research project is to clarify the term Cloud Computing and its related models using the main characteristics typically associated with this paradigm in the literature, and to identify the top technical and non-technical obstacles to, and opportunities of, Cloud Computing.

It is argued that the construction and operation of extremely large-scale, commodity-computer data centres at low-cost locations was the key enabler of Cloud Computing, because electricity, network bandwidth, operations, software, and hardware all become cheaper at these very large economies of scale. These factors, combined with statistical multiplexing to increase utilization compared to a private cloud, meant that cloud computing could offer services below the costs of a medium-sized data centre and yet still make a good profit.

Any application needs a model of computation, a model of storage, and a model of

communication. The statistical multiplexing necessary to achieve elasticity and the

illusion of infinite capacity requires each of these resources to be virtualized to hide

the implementation of how they are multiplexed and shared. Our view is that different

utility computing offerings will be distinguished based on the level of abstraction

presented to the programmer and the level of management of the resources.

Moreover, this research project aims to give a high-level technical overview of Cloud Computing, related concepts, and Cloud Service Models.


INTRODUCTION

As a metaphor for the Internet, "the cloud" is a familiar cliché, but when combined

with "computing," the meaning gets bigger and fuzzier. Some analysts and vendors

define cloud computing narrowly as an updated version of utility computing: basically

virtual servers available over the Internet. Others, on the other hand, go very broad, arguing that anything you consume outside the firewall is "in the cloud," including

conventional outsourcing.

The most common analogy to explain cloud computing is that of public utilities such

as electricity, gas, and water. Just as centralized and standardized utilities free

individuals from the vagaries of generating their own electricity or pumping their own

water, cloud computing frees the user from having to deal with the physical, hardware

aspects of a computer or the more mundane software maintenance tasks of possessing

a physical computer in their home or office. Instead they use a share of a vast network

of computers, reaping economies of scale.

As mentioned above, the term “cloud computing” originated from the cloud symbol that is usually used in flow charts and diagrams to symbolize the Internet.

“Comes from the early days of the Internet where we drew the network as a cloud…
we didn’t care where the messages went… the cloud hid it from us” – Kevin Marks,
Google

The principle behind the cloud is that any computer connected to the internet is

connected to the same pool of computing power, applications, and files. Users can

store and access their own personal files such as music, pictures, videos, and

bookmarks or play games or use productivity applications on a remote server rather

than physically carrying around storage media such as a DVD or thumb drive. Almost all users of the internet may be using a form of cloud computing, though few realize it.

Taking these features into account, an encompassing definition of cloud computing could be provided. Obviously, the cloud computing concept is still changing, and these

definitions show how the cloud is conceived today. “I have not heard two people say

the same thing about it [cloud]. There are multiple definitions out there of ‘the cloud’“

- Andy Isherwood, HP’s Vice President of European Software Sales

However, the National Institute of Standards and Technology (NIST) provides a

concise and specific definition: “Cloud computing is a model for enabling convenient,

on-demand network access to a shared pool of configurable computing resources

(e.g., networks, servers, storage, applications, and services) that can be rapidly

provisioned and released with minimal management effort or service provider

interaction.” [1]
How Cloud Computing Works

To understand the concept of cloud computing, that is, computing as a utility rather than a product, one must compare it to other utilities. Without public utilities

such as electricity, water, and sewers, a homeowner would be responsible for all

aspects of these concerns. The homeowner would have to own and maintain a

generator, keeping it fuelled and operational, and its failure would mean a power

outage. They would have to pump water from a well, purify it, and store it on their

property. And they would have to collect their sewage in a tank and personally

transport it to a place where it could be disposed of, or they would have to run their

own personal sewage treatment plant.

Since the above scenario not only represents a great deal of work for the homeowner

but completely lacks economies of scale, public utilities are by far the more common

solution. Public utilities allow the homeowner to simply connect their fuse box to a

power grid, and connect their home's plumbing to both a water main and a sewage

line. The power plant deals with the complexities of power generation and transport

and the homeowner simply uses whatever share of the public utility's vast resources they need, being billed for the amount used.

By way of comparison, a typical home or work computer is an extraordinarily

complex device and its inner workings are out of the reach of most users. A computer

owner is responsible for keeping their machine functional, organizing their data, and

keeping out viruses and hackers. When computing power is contained at a specialized

data centre, or "in the cloud", the responsibility for performing these complicated
maintenance tasks is lifted from the user. The user becomes responsible only for

maintaining a very simple computer whose sole purpose is to connect to the internet and allow these cloud services to take care of the rest.

When a user accesses the cloud for a popular website, many things can happen. The

user's IP address, for example, can be used to establish where the user is located

(geolocation). DNS services can then direct the user to a cluster of servers that are

close to the user so the site can be accessed rapidly and in the user's local language.

Users do not log in to a particular server; rather, they log in to the service they are using by obtaining a session id or a cookie, which is stored in their browser.
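As a minimal sketch of this session-based flow, the Python fragment below uses the requests library against a hypothetical service; the base URL, endpoints, and form fields are assumptions made purely for illustration.

    import requests

    # Hypothetical service address; real cloud services use their own URLs.
    BASE_URL = "https://example-cloud-service.test"

    # A Session object keeps the cookies the server returns, so the session id
    # obtained at login is automatically sent with every later request.
    session = requests.Session()

    # The user logs in to the *service*, not to a specific physical server;
    # DNS and load balancing decide which machine actually answers.
    login = session.post(f"{BASE_URL}/login",
                         data={"username": "alice", "password": "secret"})
    login.raise_for_status()

    print("Cookies now held by the client:", session.cookies.get_dict())

    # The stored cookie identifies the session on subsequent requests.
    files = session.get(f"{BASE_URL}/my/files")
    print(files.status_code)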

What the user sees in their browser usually comes from a cluster of web servers. The

web servers run user interface software which collects commands from the user

(mouse clicks, key presses, uploads, etc.) and interprets them. Information is then

stored on or retrieved from the database servers or file servers and an updated page is

displayed to the user. The data across the multiple servers is synchronised around the

world for rapid global access.

Companies can use cloud computing to effectively request and use time-distributed

computing resources on the fly. For example, if a company has unanticipated usage

spikes above the usual workload, cloud computing can allow the company to meet the

overload requirements without needing to pay for hosting a traditional infrastructure

for the rest of the year [1]. The benefits of cloud computing include minimized infrastructure costs, energy savings, a reduced necessity and frequency of upgrades, and lower maintenance costs. Some heavy users of cloud computing have seen storage costs fall by 20% and networking costs fall by 50% [2].

Cloud Computing Architecture

Cloud computing architecture, the systems architecture of the software systems

involved in the delivery of cloud computing, typically involves multiple cloud

components communicating with each other over application programming interfaces,

usually web services and 3-tier architecture. This resembles the Unix philosophy of

having multiple programs each doing one thing well and working together over

universal interfaces. Complexity is controlled and the resulting systems are more

manageable than their monolithic counterparts.

The two most significant components of cloud computing architecture are known as

the front end and the back end. The front end is the part seen by the client, i.e. the

computer user. This includes the client’s network (or computer) and the applications

used to access the cloud via a user interface such as a web browser. The back end of

the cloud computing architecture is the ‘cloud’ itself, comprising various computers,

servers and data storage devices.

Key characteristics of Cloud Computing

Agility

Agility improves with users' ability to rapidly and inexpensively re-provision

technological infrastructure resources.

Application Programming Interface


An Application Programming Interface (API) makes cloud software accessible to machines, enabling them to interact with it in much the same way that a user interface facilitates interaction between humans and computers. Cloud computing systems typically use REST-based APIs.
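As a hedged illustration of what a REST-style cloud API interaction can look like, the sketch below uses Python's requests library against an entirely hypothetical provider endpoint and token; real providers define their own URLs, resource names, and authentication schemes.

    import requests

    # Hypothetical API endpoint and credentials, for illustration only.
    API = "https://api.example-cloud.test/v1"
    HEADERS = {"Authorization": "Bearer EXAMPLE_TOKEN"}

    # Create a virtual server by POSTing a JSON description of the resource.
    resp = requests.post(f"{API}/servers", headers=HEADERS,
                         json={"name": "web-01", "flavor": "small",
                               "image": "ubuntu-22.04"})
    resp.raise_for_status()
    server = resp.json()

    # The same uniform interface reads and deletes the resource.
    requests.get(f"{API}/servers/{server['id']}", headers=HEADERS)
    requests.delete(f"{API}/servers/{server['id']}", headers=HEADERS)

The point is the uniform, machine-readable interface: the same HTTP verbs manipulate resources regardless of which provider or data centre serves them.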

Cost

Cost is claimed to be greatly reduced and in a public cloud delivery model capital

expenditure is converted to operational expenditure. This ostensibly lowers barriers to

entry, as infrastructure is typically provided by a third-party and does not need to be

purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained, with usage-based options, and fewer in-house IT skills are required for implementation.

Device and location independence

Device and location independence aims to enable users to access systems using a web

browser regardless of their location or what device they are using (e.g., PC, mobile

phone). As infrastructure is off-site (typically provided by a third-party) and accessed

via the Internet, users can connect from anywhere.

Multi-tenancy

Multi-tenancy enables sharing of resources and costs across a large pool of users thus

allowing for:

- Centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)

- Peak-load capacity increases (users need not engineer for highest possible load-levels)

- Utilization and efficiency improvements for systems that are often only 10–20% utilized (illustrated in the sketch below).
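The arithmetic below is a small, purely illustrative sketch of why this matters; the fleet size and the multi-tenant utilization figure are assumptions, while the 10–20% range comes from the text above.

    # Illustrative only: 40 dedicated servers, each about 15% utilized on average.
    dedicated_servers = 40
    avg_utilization = 0.15

    # Useful work expressed in "fully busy server" equivalents.
    busy_equivalents = dedicated_servers * avg_utilization      # 6.0

    # If the same tenants share a multi-tenant pool run at roughly 60% utilization,
    # far fewer machines carry the same aggregate workload.
    shared_servers = busy_equivalents / 0.60                    # 10.0
    print(f"{dedicated_servers} dedicated servers -> ~{shared_servers:.0f} shared servers")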
Reliability

Reliability is improved if multiple redundant sites are used, which makes well-designed cloud computing suitable for business continuity and disaster recovery.

Scalability

Scalability is achieved via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis in near real time, without users having to engineer for peak loads.

Performance

Performance is monitored, and consistent and loosely coupled architectures are

constructed using web services as the system interface.

Security

Security could improve due to centralization of data, increased security-focused

resources, etc., but concerns can persist about loss of control over certain sensitive

data, and the lack of security for stored kernels. Security is often as good as or better

than under traditional systems, in part because providers are able to devote resources

to solving security issues that many customers cannot afford. However, the

complexity of security is greatly increased when data is distributed over a wider area

or greater number of devices and in multi-tenant systems which are being shared by

unrelated users. In addition, user access to security audit logs may be difficult or

impossible. Private cloud installations are in part motivated by users' desire to retain

control over the infrastructure and avoid losing control of information security.

Maintenance

Maintenance of cloud computing applications is easier, because they do not need to be

installed on each user's computer. They are easier to support and to improve, as the

changes reach the clients instantly.


Cloud Computing Deployment Models

There are many considerations for cloud computing architects to make when moving

from a standard enterprise application deployment model to one based on cloud

computing. There are three basic deployment models to consider, which differ in aspects such as whether they expose open or proprietary APIs: the public, private, and hybrid clouds.

IT organizations can choose to deploy applications on public, private, or hybrid

clouds, each of which has its trade-offs. The terms public, private, and hybrid do not

dictate location. While public clouds are typically “out there” on the Internet and

private clouds are typically located on premises, a private cloud might be hosted at a

collocation facility as well.

Companies may make a number of considerations with regard to which cloud

computing model they choose to employ, and they might use more than one model to

solve different problems. An application needed on a temporary basis might be best

suited for deployment in a public cloud because it helps to avoid the need to purchase

additional equipment to solve a temporary need. Likewise, a permanent application,

or one that has specific requirements on quality of service or location of data, might

best be deployed in a private or hybrid cloud.

Public Clouds

Public clouds are run by third parties, and applications from different customers are

likely to be mixed together on the cloud’s servers, storage systems, and networks.
Public clouds are most often hosted away from customer premises, and they provide a

way to reduce customer risk and cost by providing a flexible, even temporary

extension to enterprise infrastructure. If a public cloud is implemented with

performance, security, and data locality in mind, the existence of other applications

running in the cloud should be transparent to both cloud architects and end users.

Indeed, one of the benefits of public clouds is that they can be much larger than a

company’s private cloud might be, offering the ability to scale up and down on

demand, and shifting infrastructure risks from the enterprise to the cloud provider, if

even just temporarily.

Figure. A public cloud provides services to multiple customers, and is typically deployed at a collocation facility.

Private clouds

Private clouds are built for the exclusive use of one client, providing the utmost

control over data, security, and quality of service. The company owns the

infrastructure and has control over how applications are deployed on it. Private clouds

may be deployed in an enterprise datacenter, and they also may be deployed at a


collocation facility. Private clouds can be built and managed by a company’s own IT

organization or by a cloud provider. This model gives companies a high level of

control over the use of cloud resources while bringing in the expertise needed to

establish and operate the environment.

Figure 4. Private clouds may be hosted at a collocation facility or in an enterprise datacenter. They may be supported by the company, by a cloud provider, or by a third party such as an outsourcing firm.

Hybrid clouds

Hybrid clouds combine both public and private cloud models (Figure 5). They can

help to provide on-demand, externally provisioned scale. The ability to augment a

private cloud with the resources of a public cloud can be used to maintain service

levels in the face of rapid workload fluctuations. A hybrid cloud also can be used to

handle planned workload spikes. Hybrid clouds introduce the complexity of

determining how to distribute applications across both a public and private cloud.

Among the issues that need to be considered is the relationship between data and

processing resources. If the data is small, or the application is stateless, a hybrid cloud
can be much more successful than if large amounts of data must be transferred into a

public cloud for a small amount of processing.

Figure 5. Hybrid clouds combine both public and private cloud models, and they can
be particularly effective when both types of cloud are located in the same facility.
Cloud Computing Service Models and Actors

In practice, cloud service providers tend to offer services that can be grouped into

three categories: software as a service (SaaS), platform as a service (PaaS), and

infrastructure as a service (IaaS). Figure 3.1 shows the Cloud Computing service architecture as an SPI model.

Software as a service (SaaS)

SaaS is software that is developed and hosted by the SaaS vendor and which the end

user accesses over the Internet. Unlike traditional applications that users install on

their computers or servers, SaaS software is owned by the vendor and runs on

computers in the vendor’s data center (or a collocation facility). A single instance of

the software runs on the cloud and services multiple end users or client organizations.

Broadly speaking, all customers of a SaaS vendor use the same software: these are

one-size-fits-all solutions. Well known examples are Salesforce.com, Google’s Gmail

and Apps, instant messaging from AOL, Yahoo and Google, and Voice-over Internet

Protocol (VoIP) from Vonage and Skype.

SaaS has four maturity levels, as follows:

- Level 1: Ad-Hoc/Custom

- Level 2: Configurable

- Level 3: Configurable, Multi-Tenant-Efficient

- Level 4: Scalable, Configurable, Multi-Tenant-Efficient


Platform as a service (PaaS)

PaaS provides virtualized servers on which users can run applications, or develop new

ones, without having to worry about maintaining the operating systems, server

hardware, load balancing or computing capacity. A PaaS environment provides

compute power by providing a runtime environment for application code. Therefore

the unit of deployment is a package that contains application code or some compiled

version of the application code. Another capability of PaaS environments is that scale

can be specified via configuration and provided automatically by the environment.

For example, if you need three instances of a web user interface in order to deal with

anticipated load then this could be specified in a configuration file and the

environment would deploy your three instances automatically. For example,


Microsoft’s Azure Services Platform supports the .NET Framework and PHP. As

another example, Google’s App Engine supports Java and Python.
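To make the idea of declaring scale through configuration concrete, here is a hedged sketch: a hypothetical deployment description written as a plain Python dictionary. The keys and values are inventions for illustration and do not correspond to any particular PaaS product's configuration format.

    # Hypothetical scaling configuration for a PaaS deployment package.
    # Real platforms (Azure, App Engine, etc.) use their own file formats;
    # the point is that scale is declared in configuration, not coded by hand.
    deployment_config = {
        "application": "storefront-web",
        "runtime": "python",
        "roles": {
            # Three instances of the web user interface, as in the example above.
            "web-ui": {"instances": 3, "instance_size": "small"},
            # A single background worker role.
            "order-worker": {"instances": 1, "instance_size": "medium"},
        },
    }

    def total_instances(config: dict) -> int:
        """Count how many instances the platform would provision."""
        return sum(role["instances"] for role in config["roles"].values())

    print(total_instances(deployment_config))  # 4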

Infrastructure as a service (IaaS)

In its purest incarnation Infrastructure as a Service (IaaS) offers compute power,

storage, and networking infrastructure (such as firewalls and load balancers) as a

service via the public internet. An IaaS customer is a software owner that is in need of

a hosting environment to run their software. Originally the term for this type of

offering was Hardware as a Service (HaaS). IaaS vendors use virtualization

technologies to provide compute power. Therefore the unit of deployment is a virtual

machine which is built by the software owner. The best known examples are Amazon’s

Elastic Compute Cloud (EC2) and Simple Storage Service (S3). Commercial

examples of IaaS include Joyent, whose main product is a line of virtualized servers

that provide a highly available on-demand infrastructure. Services are typically

charged by usage and can be scaled dynamically, i.e. capacity can be increased or

decreased more or less on demand.
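As a hedged sketch of how such capacity is requested programmatically, the fragment below uses the boto3 SDK for Amazon EC2; the AMI id is a placeholder, and it assumes AWS credentials and a region are already configured in the environment.

    import boto3

    # Assumes AWS credentials are configured (environment, config file, or role).
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch a single small virtual machine; the image id is a placeholder.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI id
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("Started instance:", instance_id)

    # Capacity is released just as easily, which is the elastic, pay-per-use part.
    ec2.terminate_instances(InstanceIds=[instance_id])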

Actors

Many activities use software services as their business basis. These Service Providers

(SPs) make services accessible to the Service Users through Internet-based interfaces.

Clouds aim to outsource the provision of the computing infrastructure required to host

services. This infrastructure is offered ‘as a service’ by Infrastructure Providers (IPs),

moving computing resources from the SPs to the IPs, so the SPs can gain in flexibility

and reduce costs.


The most important Cloud entity, and the principal quality driver and constraining influence, is, of course, the user. The value of a solution depends very much on the view it has of its end-user requirements and user categories.

Figure 2. Cloud user hierarchy.

The figure above illustrates four broad, non-exclusive user categories: system or cyber infrastructure (CI) developers; developers (authors) of different component services and underlying applications; technology and domain personnel who integrate basic services into composite services and their orchestrations (workflows) and deliver those to end-users; and, finally, users of simple and composite services. User categories also include domain-specific groups, and indirect users such as stakeholders, policy makers, and so on. Functional and usability requirements derive, for the most part, directly from the user profiles.

Business Value of Cloud Computing

In this chapter, the business value of cloud computing will be discussed. In deciding whether hosting a service in the cloud makes sense over the long term, it is argued that
the fine-grained economic models enabled by Cloud Computing make trade-off

decisions more fluid, and in particular the elasticity offered by clouds serves to

transfer risk.

In addition, although hardware resource costs continue to decline, they do so at variable

rates; for example, computing and storage costs are falling faster than WAN costs.

Cloud computing can track these changes and potentially pass them through to the

customer more effectively than building one’s own datacenter, resulting in a closer

match of expenditure to actual resource usage.

In making the decision about whether to move an existing service to the cloud, one

must additionally examine the expected average and peak resource utilization,

especially if the application may have highly variable spikes in resource demand; the

practical limits on real-world utilization of purchased equipment, and various

operational costs that vary depending on the type of cloud environment being

considered.

Although the economic appeal of Cloud Computing is often described as “converting

capital expenses to operating expenses” (CapEx to OpEx), we believe the phrase “pay

as you go” more directly captures the economic benefit to the buyer. Hours purchased

via Cloud Computing can be distributed non-uniformly in time (e.g., use 100 server-

hours today and no server-hours tomorrow, and still pay only for what you use); in the

networking community, this way of selling bandwidth is already known as usage-

based pricing. In addition, the absence of up-front capital expense allows capital to be

redirected to core business investment. Therefore, even though Amazon’s pay-as-you-


go pricing (for example) could be more expensive than buying and depreciating a

comparable server over the same period, we argue that the cost is outweighed by the

extremely important Cloud Computing economic benefits of elasticity and

transference of risk, especially the risks of over-provisioning (underutilization) and under-provisioning (saturation).

The key observation is that Cloud Computing's ability to add or remove resources at a

fine grain (one server at a time with EC2) and with a lead time of minutes rather than

weeks allows matching resources to workload much more closely. Real world

estimates of server utilization in datacenters range from 5% to 20%. This may sound

shockingly low, but it is consistent with the observation that for many services the

peak workload exceeds the average by factors of 2 to 10. Few users deliberately

provision for less than the expected peak, and therefore they must provision for the

peak and allow the resources to remain idle at nonpeak times. The more pronounced

the variation, the more the waste.

Figure. (a) Provisioning for peak load; (b) under-provisioning, case 1; (c) under-provisioning, case 2.

In the figure above:

(a) Even if peak load can be correctly anticipated, without elasticity we waste resources (shaded area) during non-peak times.

(b) Under-provisioning, case 1: potential revenue from users not served (shaded area) is sacrificed.

(c) Under-provisioning, case 2: some users desert the site permanently after experiencing poor service; this attrition and possible negative press result in a permanent loss of a portion of the revenue stream.


Even less-dramatic cases suffice to illustrate this key benefit of Cloud Computing: the

risk of mis-estimating workload is shifted from the service operator to the cloud

vendor. The cloud vendor may charge a premium (reflected as a higher use cost per

server-hour compared to the 3-year purchase cost) for assuming this risk. We propose

the following simple equation that generalizes all of the above cases. We assume the

Cloud Computing vendor employs usage-based pricing, in which customers pay

proportionally to the amount of time and the amount of resources they use. While

some argue for more sophisticated pricing models for infrastructure services, we

believe usage based pricing will persist because it is simpler and more transparent, as

demonstrated by its wide use by “real” utilities such as electricity and gas companies.

Similarly, it is assumed that the customer’s revenue is directly proportional to the total

number of user-hours. This assumption is consistent with the ad-supported revenue

model in which the number of ads served is roughly proportional to the total visit time

spent by end users on the service.
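Following the Berkeley report cited as [2], on which this discussion is based, the equation can be written as:

\[
\mathrm{UserHours}_{\text{cloud}} \times \left(\text{revenue} - \mathrm{Cost}_{\text{cloud}}\right)
\;\ge\;
\mathrm{UserHours}_{\text{datacenter}} \times \left(\text{revenue} - \frac{\mathrm{Cost}_{\text{datacenter}}}{\text{Utilization}}\right)
\]

where costs are expressed per server-hour and Utilization is the average utilization of the fixed-capacity datacenter.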

In the above formula, the left-hand side multiplies the net revenue per user-hour (revenue

realized per user-hour minus cost of paying Cloud Computing per user-hour) by the

number of user-hours, giving the expected profit from using Cloud Computing. The

right-hand side performs the same calculation for a fixed-capacity datacenter by

factoring in the average utilization, including nonpeak workloads. Whichever side is

greater represents the opportunity for higher profit.
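A small worked example, with entirely hypothetical numbers, may make the comparison concrete. Suppose each user-hour yields $1 of revenue, a cloud server-hour costs $0.30, an owned server costs the equivalent of $0.10 per hour over its depreciation period, and the fixed datacenter averages 20% utilization:

    # All figures below are hypothetical, chosen only to illustrate the trade-off.
    revenue_per_user_hour = 1.00    # dollars earned per user-hour served
    cost_cloud_per_hour   = 0.30    # pay-as-you-go price per server-hour
    cost_owned_per_hour   = 0.10    # amortized cost of an owned server-hour
    utilization           = 0.20    # average utilization of the fixed datacenter
    user_hours            = 100_000 # user-hours served in the period

    # Left-hand side: profit when renting exactly what is used.
    profit_cloud = user_hours * (revenue_per_user_hour - cost_cloud_per_hour)

    # Right-hand side: a fixed-capacity datacenter is paid for whether busy or
    # idle, so the effective cost per *useful* hour is cost / utilization.
    profit_datacenter = user_hours * (revenue_per_user_hour
                                      - cost_owned_per_hour / utilization)

    print(f"Cloud:      ${profit_cloud:,.0f}")       # $70,000
    print(f"Datacenter: ${profit_datacenter:,.0f}")  # $50,000

With these made-up numbers, the higher per-hour price of the cloud is more than offset by the cost of idle capacity in the fixed datacenter; with higher average utilization the comparison could easily go the other way.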


Obstacles and Opportunities for Cloud Computing

In this section, a ranked list of obstacles to the growth of Cloud Computing is offered. Each obstacle is paired with an opportunity for overcoming it, ranging from straightforward product development to major research projects. These obstacles and opportunities are grouped into three categories: adoption challenges, growth challenges, and policy and business challenges.

Adoption Challenges

- Challenge: Availability. Opportunity: Multiple providers.
- Challenge: Data lock-in. Opportunity: Standardization.
- Challenge: Data confidentiality and auditability. Opportunity: Encryption, VLANs, firewalls.

Growth Challenges

- Challenge: Data transfer bottlenecks. Opportunity: FedEx-ing disks; reuse data multiple times (see the sketch below).
- Challenge: Performance unpredictability. Opportunity: Improved VM support, flash memory.
- Challenge: Scalable storage. Opportunity: Invent scalable storage.
- Challenge: Bugs in large distributed systems. Opportunity: Invent a debugger using distributed VMs.
- Challenge: Scaling quickly. Opportunity: Invent an auto-scaler.
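As a back-of-the-envelope illustration of the data-transfer bottleneck listed above, the figures below are assumptions chosen only to show the order of magnitude involved in moving large datasets over a WAN.

    # Hypothetical transfer: 10 TB of data over a sustained 100 Mbit/s WAN link.
    data_bits            = 10 * 10**12 * 8      # 10 TB expressed in bits
    link_bits_per_second = 100 * 10**6          # 100 Mbit/s sustained throughput

    seconds = data_bits / link_bits_per_second
    days = seconds / 86_400
    print(f"{days:.1f} days")                   # roughly 9.3 days

    # Shipping physical disks ("FedEx-ing disks") or keeping data in the cloud
    # and reusing it can therefore beat the network by a wide margin.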

Policy and Business Challenges

- Challenge: Reputation fate sharing. Opportunity: Offer reputation-guarding services like those for email.
- Challenge: Software licensing. Opportunity: Pay-for-use licenses; bulk use sales.
CONCLUSION

The long-dreamed vision of computing as a utility is finally emerging. The elasticity of a

utility matches the need of businesses providing services directly to customers over the

Internet, as workloads can grow (and shrink) far faster than 20 years ago. It used to take years

to grow a business to several million customers – now it can happen in months.

From the cloud provider’s view, the construction of very large datacenters at low cost sites

using commodity computing, storage, and networking uncovered the possibility of selling

those resources on a pay-as-you-go model below the costs of many medium-sized datacenters,

while making a profit by statistically multiplexing among a large group of customers. From

the cloud user’s view, it would be as startling for a new software startup to build its own

datacenter as it would for a hardware startup to build its own fabrication line. In addition to

startups, many other established organizations take advantage of the elasticity of Cloud

Computing regularly, including newspapers like the Washington Post, movie companies like

Pixar, and universities like ours. Our lab has benefited substantially from the ability to

complete research by conference deadlines and adjust resources over the semester to

accommodate course deadlines. As Cloud Computing users, we were relieved of dealing with

the twin dangers of over-provisioning and under-provisioning our internal datacenters.

Some question whether companies accustomed to high-margin businesses, such as ad revenue

from search engines and traditional packaged software, can compete in Cloud Computing.

First, the question presumes that Cloud Computing is a small margin business based on its

low cost. Given the typical utilization of medium-sized datacenters, the potential factors of 5

to 7 in economies of scale, and the further savings in selection of cloud datacenter locations,

the apparently low costs offered to cloud users may still be highly profitable to cloud

providers. Second, these companies may already have the datacenter, networking, and

software infrastructure in place for their mainline businesses, so Cloud Computing represents

the opportunity for more income at little extra cost.


Although Cloud Computing providers may run afoul of the obstacles summarized in the tables above,

we believe that over the long run providers will successfully navigate these challenges and set

an example for others to follow, perhaps by successfully exploiting the opportunities that

correspond to those obstacles.


References

[1] "NIST.gov – Computer Security Division – Computer Security Resource Center", csrc.nist.gov.

[2] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. H. Katz, A. Konwinski, G. Lee, D. A. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, "Above the Clouds: A Berkeley View of Cloud Computing", University of California at Berkeley. Retrieved 10 April 2011.

[3] Sun Microsystems, "Introduction to Cloud Computing Architecture", June 2009. [Online]. Available: www.sun.com/featured-articles/CloudComputing.pdf

[4] M. Armbrust et al., "Above the Clouds: A Berkeley View of Cloud Computing", February 2009 (white paper and presentation).

[5] Google App Engine: http://code.google.com/appengine/

[6] Amazon EC2: http://aws.amazon.com/ec2/
