
http://www.exforsys.com/tutorials/cloud-computing/the-future-of-cloud-computing.html

Table of Contents

1. Cloud Computing Basics
2. Cloud Computing Architecture
3. Cloud Computing Basic Components
4. Cloud Computing and Web 2.0
5. Cloud Computing Behavior
6. Cloud Computing Platforms
7. Cloud Computing User's Perspective
8. Cloud Computing in Enterprise
9. Cloud Computing Security
10. Software as a Service (SaaS) Model
11. Everything as a Service (EaaS) Model
12. Selecting a Cloud Computing Vendor
13. Migrating to Cloud Computing
14. Moving Beyond the Desktop Experience
15. The Future of Cloud Computing

Cloud Computing Basics
Author: Exforsys Inc. Published on: 3rd Apr 2009 | Last Updated on: 23rd Nov 2010

The internet, or online connectivity, started out as a simple information exchange. Almost anything a user wants to learn is possible because of the internet: go online, make a few searches, and in a minute or two you have the information you need. Personal communication became a lot easier as email developed into one of the great innovations of the century. Instead of sending snail mail, which could take weeks, a single email can be read in a matter of seconds. Even with a simple connection, information can be exchanged through chat, and updates on new data can also be received through the internet.

Although the same things could be done without the internet, the experience the internet offers has become so much more. Through improvements in communications infrastructure, the internet was able to move away from the regular phone line to dedicated connections.

Dial-up connection is almost a thing of the past as more households adopt dedicated lines with ever-increasing connection speeds. Wireless connectivity at almost the same speed is now possible: you do not need to drag a cable around, you just need a decent internet connection.

Because of the increasing capability of the internet, developers have looked beyond information sharing. Certain desktop functions can now be done online. Office documents can be uploaded and retrieved, or even worked on simultaneously online. Data processing is no longer limited to your desktop, as the increasing capacity of online connectivity has made it possible to emulate or even surpass local data processing.

Cloud Computing is Born


In a nutshell, cloud computing is all about executing processes online instead of on your local gadget. Data and processes can be handled online without the need for any local software or client. As long as users know the process and have the right security credentials, they can access the system and make the necessary changes.


Cloud Computing Basic Components


Author: Exforsys Inc. Published on: 5th Apr 2009

Successful cloud computing requires the proper implementation of certain components. Without any of these components, cloud computing will not be possible. These components can't easily be implemented by one person alone.


Cloud computing requires people with different expertise, experiences and backgrounds. Because it needs so many people, it's no wonder cloud computing is a very expensive venture. But even with the expenses a company often has to bear, the advantages provided by cloud computing far outweigh the initial spending.

Some resort to a cloud computing vendor because they lack resources, while others have the resources to build their own cloud computing applications, platforms and hardware. Either way, the components have to be implemented with the expectation of optimal performance.

The Client: The End User


Everything ends with the client. The hardware components, the applications and everything else developed for cloud computing will be used by the client. Without the client, nothing would be possible.

The client can come in two forms: a hardware component alone, or a combination of software and hardware components. Although it's a common perception that cloud computing relies solely on the cloud (the internet), certain systems require pre-installed applications to ensure smooth operation. The hardware, on the other hand, is the platform from which everything is launched.

Optimization rests on two fronts: local hardware capacity and software security. With optimized, secure hardware, the application will launch seamlessly.

The Service: The Functions in Cloud Computing


Cloud computing always has a purpose. One of the main reasons cloud computing became popular is its adoption by businesses as an easier way to implement business processes. Cloud computing is all about processes, and the services launched through it always deal with processes that have an expected output.


The optimization of services rests on two things: proper development of the application, and the end user. Sometimes the user's experience of the service is greatly affected by their gadget.



SOD - The Secure Online Desktop on Cloud Computing

SOD stands for Secure Online Desktop. It is a system that unites thin client technology with cloud computing technology and public key cryptography to provide a secure desktop computer on the Internet.

With SOD, you'll have a data center to run your software using the computing resources of many computers (Cloud Computing) through your thin client. The users of your company will no longer need a computer to perform their work; with SOD they will have a thin client providing access to a data center in order to use their software with high performance.

SOD provides an alternative to desktop computers and has no impact on servers and network devices of your company.

SOD offers a Cloud Computing service that allows you to run software applications and store images and documents so that they are always at hand, in the company or out of it. SOD is a solution that allows you to work with the latest desktop applications without buying or installing them. It is enough to subscribe, and then you may use the SOD solution during the period of your subscription without any limits. The software becomes a service (SaaS - Software as a Service): not a must-have possession but a service paid for according to usage.

One of the benefits of SOD solution is that companies will no longer have to deal with problems related to software licenses and there will be no need to hire skilled personnel in order to manage the software and data.

Cloud computing is an advanced tool; all you need for working is within a cloud on the Internet where hundreds of servers are working together.

There will be no need in purchasing PCs and software for your employees.

SOD Introduction
SOD O.S. (The Cloud Computing for Desktop PC)
Integration (The Cloud Computing for mobile devices)

Cloud computing - who uses it?

SOD is a solution that appeals to all companies that want to reduce the expenses related to desktop computers and give their users greater security. Any company, regardless of its size and number of workstations, may adopt a SOD solution.

SOD is also ideal for study rooms, libraries and university environments where it is important to manage a number of computers and hold costs down. In fact, SOD offers a solution to the educational world.

Cloud computing accessible from anywhere

SOD is an Internet service that can be used by any company on the network regardless of geographical location.
Cloud Computing Architecture

The success of cloud computing is largely based on the effective implementation of its architecture. In cloud computing, architecture is not just about how the application will work with its intended users. Cloud computing requires an intricate interaction with the hardware, which is essential to ensure uptime of the application. These two components (hardware and application) have to work together seamlessly, or cloud computing will not be possible. If the application fails, the hardware will not be able to push the data and execute the required processes.

On the other hand, hardware failure means a stoppage of operations. For that reason, precautions have to be taken so that these components keep working as expected, and necessary fixes have to be implemented immediately, for prevention as well as quick resolution.

Data Centers

One of the most distinguishing characteristics of cloud computing architecture is its close dependency on hardware components. An ordinary online application can be launched on various servers, but a cloud computing application requires massive data centers that ensure processes are completed correctly and on time.

Data centers for cloud computing architecture are not your run-of-the-mill data processing centers. They are composed of different servers with optimal storage capacity and processing speed, working together to ensure the application operates as expected. The facility usually sits in a highly controlled environment where it is constantly monitored through various applications and manually checked for physical problems. The data center can be considered the backbone of cloud computing architecture: its destruction could easily mean millions of dollars in additional spending for companies. For that reason, the data centers of large companies are often kept secret to avoid infiltration, whether by hacking or by physical damage.
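To make the "constantly monitored" part concrete, here is a minimal health-check sketch in Python: poll each server and flag the ones that stop responding. The hostnames, port and alert hook are hypothetical stand-ins, not a real monitoring product.

    import socket

    SERVERS = ["db-01.example.internal", "app-01.example.internal"]  # hypothetical hosts

    def is_reachable(host, port=80, timeout=2.0):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def check_all():
        for host in SERVERS:
            if not is_reachable(host):
                print(f"ALERT: {host} is not responding")  # stand-in for a real alerting system

Real data centers layer many such probes (hardware sensors, application metrics) on top of this basic reachability test.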

Cloud Computing and Web 2.0



Cloud computing has become the byword for different businesses today. It is a type of process that relies on the internet, or online connectivity, for data processing instead of the local gadget. The internet has come a long way in providing different types of services to users.

Years ago, the internet was used merely for information gathering and email. Today, the internet can host different types of applications that do not require any local installation on the user's end.

Cloud computing promises portability: users just need a strong internet connection to get the process done. Cloud computing can even emulate the desktop experience.

Cloud computing is usually focused on the enterprise. As businesses today require more real-time collaboration without geographical constraints, cloud computing became a viable option, as it provides real-time interaction in business processes. It offers portability to different users without constantly requiring installations on the local gadget.

Considering Web 2.0


The term Web 2.0, on the other hand, has been used for many years, long before cloud computing became popular in the software development industry. But instead of being defined by the components it requires, Web 2.0 is more about the interaction the user will receive.

In Web 2.0, the focus of online applications is the user, who is given the freedom to connect and make changes to their online environment. Web 2.0 is more about the interaction between users.

Classic examples of Web 2.0 are the different online social networking websites. Users of these websites will be able to connect to one another while trying to make some changes in their online environment depending on their preferences.

Although Web 2.0 still requires hardware support, its focus is more on the actual interaction of the online application with the user. The applications seen on different Web 2.0 websites are always geared towards the user's desire to shape their online experience.

Cloud Computing Behavior

The behavior of cloud computing is highly dynamic: the process is only possible through proper interaction of the application and hardware. If one component does not work, or executes below par, cloud computing will not work. Developers and business managers have to make sure everything goes according to plan and does not falter on any occasion. Certain support measures have to be implemented to prevent any form of downtime. Infrastructure and extensive monitoring are usually requirements to properly implement cloud computing and stay true to its behavior.

From Desktop to Browser


One of the best characteristics of cloud computing is its ability to remove the need for desktop or local applications. Everything can be retrieved online, and everything can be performed in a secured browser. Major browsers have the capability to handle different types of cloud computing, and they work seamlessly as long as the supporting components execute properly.

Obviously, cloud computing is all about doing everything online. This means cloud computing requires different components to work together, and to work consistently, to ensure continuous business operation.

Ability to be Dynamic
Getting the application, data and processes into the cloud is not just making the data available on the server to be retrieved on request. The processing power of the data center and application should exhibit dynamism at all times. Dynamism in cloud computing is the ability to redistribute the processing power of the cloud at will. If only a few users are using the cloud, its resources should be distributed among those users; when the number of users increases, the resources should be redistributed accordingly.
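A toy sketch of this dynamism, assuming a fixed pool of capacity that is re-divided as users join and leave (the numbers and unit are invented for illustration):

    class ElasticPool:
        """A fixed capacity pool whose per-user share shrinks and grows with demand."""

        def __init__(self, total_capacity_units):
            self.total = total_capacity_units
            self.users = set()

        def join(self, user):
            self.users.add(user)

        def leave(self, user):
            self.users.discard(user)

        def share_per_user(self):
            # Each active user gets an equal slice of the pool.
            return self.total / len(self.users) if self.users else self.total

    pool = ElasticPool(total_capacity_units=1000)
    pool.join("alice")
    pool.join("bob")
    print(pool.share_per_user())  # 500.0; the share shrinks as more users join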

Distribution of Resources
Aside from the ability to adapt to the number of users and data requests, cloud computing should be able to work with different forms of resources. Most well-known service providers do not rely on one service center alone. They usually run two or more server farms with multiple, massive servers.

These are not built just to store large amounts of data; server farms are distributed across different locations so that they can provide ample support to the other data centers. If one server farm goes down, the others temporarily take over its load.
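The failover behaviour described here can be sketched as an ordered list of farms with a health probe; the farm names are hypothetical and the probe is hard-coded for illustration:

    FARMS = ["farm-us-east", "farm-eu-west", "farm-ap-south"]  # ordered by preference

    def farm_is_up(farm):
        # A real deployment would run a health probe; here the primary is
        # pretended to be down so the failover path is visible.
        return farm != "farm-us-east"

    def pick_farm():
        """Route to the first healthy farm, or fail loudly if none is left."""
        for farm in FARMS:
            if farm_is_up(farm):
                return farm
        raise RuntimeError("all server farms are down")

    print(pick_farm())  # -> "farm-eu-west": the next farm takes over the load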

Operations as Abstraction
Cloud computing is never a simple concept. Each application that runs on the server and eventually goes online requires a different set-up. This is the essence of abstraction, which enables the application to be highly dynamic as well as adaptable. Abstraction should be observed as a behavior in cloud computing, since it provides the operations cloud computing needs. Through abstraction, the application becomes adaptable to different scenarios, which is especially useful for businesses that request adaptable processes.

Platform Development
Applications in cloud computing are launched through a platform. By using a platform, the local gadget only needs a minimal application installed. The classic example is the browser, in which online applications can be launched easily.

This is very challenging for developers, since launching an application online requires greater adaptability. Browser incompatibility is still one of the biggest challenges a developer must face. Internet Explorer, Firefox, Opera, Safari and other major browsers implement certain functions differently, which means developers might not be able to implement a particular feature in a specific browser.

Physical Requirements
Getting everything done online requires powerful hardware. As already indicated, the online application in cloud computing requires a different type of data center. Everything will be for nothing if the application does not have powerful support. The physical requirement is actually the most challenging part of cloud computing: it demands considerable spending just to make sure everything works seamlessly. Once everything has been set up, ongoing support for software as well as hardware components should be in place. The security of these components is always a requirement, for disaster prevention and optimum performance.

Cloud Computing Platforms

Implementing cloud computing through a platform is one of the most popular options for businesses handling online transactions today. Largely different from software-based cloud computing, platforms are basically programming languages or applications that can be customized based on the needs of the enterprise. Because the platforms are geared towards the functionalities needed by the enterprise, they are dubbed Platform as a Service, or PaaS. Before going further, it is important to differentiate PaaS from SaaS (Software as a Service). SaaS applications can be used in the cloud by different enterprises; they have predefined functions, and the enterprise only needs to adapt to those functions. PaaS, on the other hand, provides the basic platform on which developers and the enterprise design from scratch or from the preloaded functions.

Characteristics of Platform Cloud Computing


Full Application Development Cycle - PaaS is not just launching an application online. It requires planning, coding and testing before the application is fully implemented for proper use. This takes time and resources, which should be expected, since PaaS is about implementing customized functions and services for a specific enterprise.

Use of Online Programming Languages - PaaS produces online applications, so it naturally requires programming languages made for online interaction. From simple HTML to highly complicated JavaScript and Java, these languages can be used to build the online applications that serve cloud computing.

Powerful Integration - The online application built by developers should never be the final version. Updates should be available, and different forms of integration should be possible for the application. This is necessary for PaaS, as the application has to integrate closely with data hosted on the server. Through integration, mash-ups of different applications become possible (see the sketch after this list).

Collaboration and Instrumentation - The development of PaaS should not be limited to one team of developers. Adaptability of the application is very important, since it ensures ease of development for other developers as well as ease of maintenance when a new developer takes over.

Through collaboration, instrumentation of functions becomes possible. This even gives developers and PaaS providers the opportunity to sell certain functions.
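As a hedged sketch of the mash-up idea mentioned under Powerful Integration, the snippet below joins records from two hypothetical REST services into one view; the URLs and field names are invented for illustration:

    import json
    from urllib.request import urlopen

    def fetch_json(url):
        with urlopen(url) as resp:
            return json.load(resp)

    def mash_up():
        # Two independent services, combined into a single result set.
        customers = fetch_json("https://crm.example.com/api/customers")  # hypothetical
        orders = fetch_json("https://shop.example.com/api/orders")       # hypothetical
        orders_by_customer = {}
        for order in orders:
            orders_by_customer.setdefault(order["customer_id"], []).append(order)
        return [{**c, "orders": orders_by_customer.get(c["id"], [])} for c in customers]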

Forms of Cloud Computing Platforms


The forms of PaaS can be distinguished by how they are developed and by the requirements a vendor provides to the enterprise.

Development through Preloaded Functions

Some Platform as a Service providers concentrate on providing developers with preloaded functions. These are often proprietary functions, and the developers' work is limited to putting the functions together. The vendor provides everything; the developers just have to study the functions and how they relate to the needs of the enterprise.

Development through Web Hosting

This is the most basic form of PaaS. Developers have to find a programming language that can be launched on the server, build the different functions and integrate the data located on the server. The role of the PaaS provider is to host the functions and protect them from downtime. This type of service concentrates on ensuring the data is available 24/7 and the functions work as expected.

Development through Frameworks


Some developers opt to use frameworks to build an application. These frameworks are tools for building applications in which coding is done in the developer's native programming language. With the help of the framework, the native code is translated into another programming language fit for PaaS. Good examples of this type of development are frameworks for Ajax-based applications and Ruby on Rails.

Advantages/Disadvantages of Cloud Computing Platforms


The main advantage of platform development through cloud computing is the cost of development and deployment. Since applications that come from PaaS are launched online, little to nothing is required on the client side.

On the other hand, a cloud computing platform might not be able to adapt to changes in the demands of the enterprise. This is especially true when the application is built on proprietary functions from the vendor. Developers have to carefully inspect the vendor's update history to ensure compatibility with the enterprise's evolving needs.

Cloud Computing User's Perspective


Author: Exforsys Inc. Published on: 9th Apr 2009

The entire buzz about cloud computing and its effect on businesses is all geared towards improving the customer experience. Businesses want to improve their processes to increase the number of customers through better service and increased availability.


The user's experience of cloud computing can be direct, where they use the business's online application themselves, or indirect, where improved cloud computing tools enhance the customer's interaction with the company. Either way, businesses should always focus on the customer first, as the customer is the end point and the ultimate gauge of whether cloud computing is the right move for the enterprise.

It can even be said that the success of cloud computing is assured when businesses think about how it will ultimately affect their clients or users. As much as possible, businesses have to identify concretely how the end user will benefit from cloud computing. The more concrete and precise the explanation, the more likely cloud computing will be a success.

Not every user is aware of the term cloud computing, but users will eventually notice the difference in operation and appreciate the update. Businesses have to deliver a visible improvement in service quality to ensure the success of a cloud computing implementation.

Adaptability and Learning


The biggest challenge for users dealing with cloud computing is learning the application presented by the enterprise. This is especially true when the cloud computing application directly shapes the user experience. Like any new application, users have to become acquainted with it before they can fully utilize it.

This actually presents some danger for users. Cloud computing is being adopted by business leaders, and most cloud computing processes handle sensitive business transactions such as online purchases of products or acquisition of services. If users do not get the online process right the first time, the wrong product may be purchased, or they could even be exposed to different types of security attacks.


Businesses have to make sure the online application is user-friendly even on first use. This is a must for start-up companies: they are still slowly gaining users, and frustration on first use will not lead to success in cloud computing.

Cloud Computing in Enterprise


Author: Exforsys Inc. Published on: 10th Apr 2009

One of the main reasons cloud computing is being aggressively developed is the enterprise, or business, setting. Many businesses, large and small, have come to realize the potential of cloud computing for easing business transactions without spending too much on additional infrastructure, manpower or time. The mere fact that transactions of almost any form can be done online has made cloud computing a good answer to different business problems.


Most businesses just resort to local installation of applications on their gadgets. Some resort to simple data transfer channels such as email or online messaging (chat). But oftentimes these are not enough, especially for a business system that requires extensive interaction with a specific application. Such an application could be installed on a local gadget, but that can easily cause complications.

For example, a salesman is on the road trying to seal a business deal. Before everything can be agreed on, the salesman has to use certain applications. This will not be possible if the applications do not work on the local gadget. But if the salesman uses an online application through cloud computing, they can not only show considerable data but also interact with upper management in real time.

Number One Challenge: Data Manipulation


One of the biggest concerns of businesses when they opt to migrate to cloud computing is how to transfer the massive data they have accumulated to the cloud. Although the services available in the industry today can handle the type and volume of data involved, getting the transfer right the first time is very difficult.

Cloud computing is not just data transferred online to be retrieved at any time. The data has to go through certain processes, access has to be managed, and proper disposal of data on request has to be defined. All of this has to be considered with optimal security in mind.

These things cannot be executed at will by the business alone. Businesses are left at the mercy of the cloud computing provider to ensure everything meets expectations, especially security. For that reason, businesses have to work with a reputable cloud computing provider to ensure everything follows proper processes.

Balancing and Scaling


Developers have to make sure the application they build, especially one launched online, can be easily managed and has the capacity to handle massive loads. This is a must for business applications, since massive bursts of data requests will happen. If they are not controlled, certain functions of the application will not work.

Although the server may be able to handle the data requests, the application itself may not be able to control the data. This situation can lead to security concerns such as data leaks.


Businesses have to choose a provider that gives them load-balancing capability. Some providers even have an auto-scaling function, where capacity is automatically adjusted whenever the load gets heavier due to massive requests.
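The scaling decision itself can be as simple as the sketch below: compute how many servers the current request rate needs, within fixed bounds. The thresholds and capacities are invented for illustration.

    def desired_servers(requests_per_sec, capacity_per_server=100,
                        min_servers=2, max_servers=20):
        """Return how many servers the current request rate calls for."""
        needed = -(-requests_per_sec // capacity_per_server)  # ceiling division
        return max(min_servers, min(max_servers, needed))

    print(desired_servers(150))  # -> 2 (light load, stay at the floor)
    print(desired_servers(950))  # -> 10 (scaled out under heavy load)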

Cloud Computing Security


Author: Exforsys Inc. Published on: 11th Apr 2009

Security is one of the biggest concerns of businesses of any form. Whether a business is a small brick-and-mortar shop or a multi-million online venture, security should be implemented. Exposing the company to security flaws invites elements with malicious intent. A single security breach could cost a business millions of dollars and might single-handedly close it down.


Proper implementation of security measures is highly recommended for cloud computing. The mere fact that the application is launched over the internet makes it vulnerable to any type of attack. Even an application available only on a LAN (Local Area Network) can be infiltrated from the outside, so placing an application on the internet is always a security risk. This is the unique situation of cloud computing: implementing it can require millions of dollars in infrastructure and application development, yet it still remains at risk from different types of attacks.

Protecting the Users


Above everything else, cloud computing, or any type of online application, should protect its users. Developers should make sure that data related to the user is not mishandled and cannot be extracted by just anyone.

There are two ways to ensure cloud computing security: restrictive user access and certifications.

Restrictive access can range from a simple username/password challenge to complicated CAPTCHA login forms. But applications in cloud computing should not rely on these challenges alone. IP-specific access and user time-outs are just some of the additional security measures that should be implemented.

The challenge in restrictive user access is limiting each user's access privileges. Each user has to be manually assigned a security clearance to limit their access to different files.
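A minimal sketch combining the controls mentioned above: credential-scoped clearance, an IP allow-list and a session time-out. All names and values are illustrative, not a production design.

    import time

    ALLOWED_IPS = {"203.0.113.10"}     # IP-specific access (documentation address)
    SESSION_TIMEOUT = 15 * 60          # seconds of inactivity before time-out
    CLEARANCE = {"alice": {"reports", "billing"}, "bob": {"reports"}}  # manual assignments

    def may_access(user, source_ip, last_activity, resource):
        if source_ip not in ALLOWED_IPS:
            return False               # request from an unapproved network
        if time.time() - last_activity > SESSION_TIMEOUT:
            return False               # idle session timed out
        return resource in CLEARANCE.get(user, set())

    print(may_access("bob", "203.0.113.10", time.time(), "billing"))  # False: no clearance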

Certifications are also important for user assurance. Developers should open their application to security specialists or companies that provide security certifications. This is one way of assuring users that the application has been fully tested against different types of attacks. This is often a dilemma for cloud computing, as external security checks might expose company secrets; but this has to be accepted to ensure the security of users.

Data Security
Aside from user protection against different types of attacks, the data itself should be protected. In this aspect, the hardware and software linked to cloud computing should be scrutinized. Again, a certification is highly desired in this part of cloud computing.

The hardware component of cloud computing requires a different type of security consideration. The location of the data center should be selected not only for its proximity to controllers and intended users, but also for its security (and even secrecy) against external threats. The data center should be protected against weather conditions, fire and even physical attacks that might destroy it.

With regard to the hardware in relation to the application, certain manual controls have to be available for increased security. Among them is manual shutdown, to prevent further access to the information. Even though the data can be controlled through another application, it could still be infiltrated unless the application is shut down immediately.

Recovery and Investigation


Cloud computing security should not focus on prevention alone. Ample resources should also be devoted to recovery if the unfortunate event really strikes. Even before disaster happens, plans have to be in place so that everyone works in unison towards recovery. The plans should not cover software attacks alone; external disasters such as severe weather should have separate recovery plans.

When everything has been recovered, the developers and the company handling the application should have the means to investigate the cause of the problem. Through investigation, the conditions that led to the event can be understood and vulnerabilities discovered. Legal action can even be taken if security was breached on purpose.


Security is one of the most difficult tasks to implement in cloud computing. It requires constant vigilance against different forms of attack, not only on the application side but also in the hardware components. An attack with catastrophic effects needs only one security flaw, so it is always a challenge for everyone involved to keep things secure.

Software as a Service (SaaS) Model


Author: Exforsys Inc. Published on: 12th Apr 2009

Cloud computing can come in many forms. It can be launched as a pure platform service, where the cloud computing vendor acts as a web host; as a vendor that provides functions to be developed by the enterprise; or as a framework for developing powerful RIAs (Rich Internet Applications).


Each form offers advantages and limitations to the enterprise, which dictate the different versions of cloud computing applications and forms of development. Businesses usually take considerable time and resources in choosing the right vendor. If they choose to localize cloud computing, even more resources and preparation are needed to ensure its correct use.

But the development of cloud computing would not have been possible without one particular form: Software as a Service, or SaaS. This form could practically define what cloud computing generally is: SaaS launches software in the cloud (the internet) to be used as a service.

Cloud computing was formed to cater to the demands of businesses, which are essentially the needs of the enterprise. Although data storage is needed in cloud computing, the ability to process that data without local installation is what only cloud computing can provide. Software as a Service caters to these demands by launching applications with business-specific purposes online.

Characteristics in Software as a Service

Although the basic definition of cloud computing also applies to Software as a Service, SaaS has some basic differences compared to other forms of cloud computing.

Network or Online Access - SaaS is an online, or at least network-based, application. Users never need any installation on their local gadgets, which connect to the local network or the internet. Usually the application is launched through a browser, which provides access not only to the application but also to additional services from the vendor.

Centralized Management - Control, monitoring and updates can be done from a single location. The business maintaining the application never needs to manually make changes on the local gadget; improvements are made to the online application instead.

Powerful Communication Features - Software as a Service does not only provide functions for online processing; it also has powerful communication features. The mere fact that SaaS is used online provides a strong backbone for instant messaging (chat) or even voice calls (VoIP).

Advantages/Disadvantages of SaaS
Software as a Service is geared towards specific types of businesses. Although it can work in most enterprise settings, SaaS has certain requirements that make it undesirable for some businesses.

Powerful Internet Connection Required - Although online connectivity is available almost everywhere, connection speeds vary widely. Some areas can't provide a strong internet connection, and SaaS, as an online application, has to load everything in the browser. The expected function might not even run without strong internet connectivity.

Increased Security Risk - Attacks are highly likely when everything is launched online. This is probably the most challenging part of SaaS and of the cloud computing industry. SaaS has increased security concerns compared to other platforms because of its constant interaction with different users.

Load Balancing Feature - One of the challenges a business faces in cloud computing, and in all SaaS applications, is load balancing. Although industry giants offer load balancing, it still requires consistent monitoring by businesses.

API and Mash-ups in SaaS


SaaS is getting better and better as new trends in the industry are implemented. Among these trends is the powerful integration of APIs, or Application Programming Interfaces. Although SaaS provides the functionality a business needs, upgrades are important to keep up with demand. Instead of changing the application, businesses can just add an API to it. The integration is easy, and maximum efficiency of the additional function can be expected.
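As an illustration of bolting a new function onto an application as an API, here is a minimal HTTP endpoint using Flask purely as an example web framework; the route and payload are hypothetical:

    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/api/v1/tax", methods=["GET"])
    def tax_rate():
        # A new function exposed as an API instead of rewriting the application.
        return jsonify({"region": "default", "rate": 0.07})

    if __name__ == "__main__":
        app.run(port=5000)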


Mash-ups are another trend seen in SaaS: a technique for combining two powerful applications. Although mash-ups were first used in Platform as a Service, new development techniques, especially in RIAs (Rich Internet Applications), have made them possible here as well. A mash-up might increase insecurity a little, but it ultimately provides more functions.

Migrating to Cloud Computing


Author: Exforsys Inc. Published on: 13th Apr 2009

Cloud computing is seriously considered by most businesses today. Small and large-scale businesses alike have seen its advantages. Because every data process can be run in the cloud (online), businesses enjoy mobility without being tied down to a single application.


The real-time interaction of cloud computing is not limited to a single person on a local gadget. Everyone involved has real-time access to the data and can even propose changes.

But transferring from local data storage and processing to cloud computing is easier said than done. For well-established businesses, transferring the entire database to the cloud takes precious time and resources.

Local vs. Vendor Offered Cloud Computing


One of the biggest considerations for businesses that want to implement cloud computing is the choice between local and vendor-provided cloud computing.

In local cloud computing, businesses have to spend a considerable amount on hardware as well as applications to properly implement cloud computing. The disadvantage, of course, is the funding required. On the other hand, businesses gain the ability to manage the data centers to their own advantage.

Vendor-assisted cloud computing is probably the choice of most businesses today. It does not require any expensive hardware components, as everything is hosted on a separate server offered by the provider. Depending on the provider, building an application from available functions is also possible. The downside of vendor-assisted cloud computing is the vendor themselves: if the business fails to select the right vendor, more harm than good will be done to the business process.

Proper Selection of Cloud Computing Provider


Not every cloud computing provider in the industry can fulfill the basic needs of every business. As every business has unique needs, those needs may only be met by a handful of providers. Quality may also vary greatly because of disparities in hardware and its location.

The following are the steps recommended in selecting a provider:

Research - Before you start shopping for a provider, understand what you need from cloud computing and what you can get from a provider. With the popularity of cloud computing, almost every company that offers this type of service has been reviewed somewhere. Carefully research each provider's advantages and disadvantages for your business process and compare them.

Physical Visit - As much as possible, pay a visit to the company's actual data center and/or server farm. This will give you a good idea of how they will support your services and what measures they take to ensure your processes are never interrupted. Notice the general environment of the data center and the outside conditions. This is highly recommended if you are dealing with a local provider.

On the other hand, if you can't physically visit the company's data center, research their capabilities as much as you can. Companies such as Google, Amazon and other internet giants will most likely deny a request to visit any of their data centers.

The Worst-Case Scenario - Always look for information, or ask the vendor, about how they deal with situations that will affect your business process. More often than not, the vendor will give you a guaranteed uptime, or SLA (Service Level Agreement). But don't settle for this figure alone; the short calculation after this list shows what an SLA figure means in hours of downtime. Insist on knowing what they do when downtime happens. They will always have processes in place, and you should know about them so you have an idea of how they work in case something goes wrong.

Ability to Upgrade - It is imperative for a vendor to expect upgrades from their clients. Some vendors miss this feature, which could jeopardize the operations of the business. Most vendors do allow upgrades, but for a certain fee; the additional payment is usually determined by the degree of upgrade a business implements in its operations.
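Following up on the SLA point above, here is the plain arithmetic (no vendor specifics) that translates an uptime percentage into allowed downtime:

    def allowed_downtime_hours_per_year(uptime_percent):
        """Hours of downtime per year permitted by an uptime guarantee."""
        return (1 - uptime_percent / 100) * 365 * 24

    print(allowed_downtime_hours_per_year(99.9))   # ~8.76 hours per year
    print(allowed_downtime_hours_per_year(99.99))  # ~0.88 hours per year

A "99.9%" SLA therefore still allows nearly nine hours of outage a year, which is why the recovery process matters as much as the headline number.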


Whether local or vendor-provided, businesses should carefully weigh the advantages and disadvantages of these options. In selecting the right provider, it is important to research carefully and consider other data to ensure the vendor will work as expected.

Selecting a Cloud Computing Vendor


Author: Exforsys Inc. Published on: 14th Apr 2009

Most, if not all, small businesses today do not have the capability to build an infrastructure that will support cloud computing. The funds required to build data centers that can support each other, not to mention the manpower support they need, are just too much.


Application development for cloud computing also takes time and considerable resources. Developers have to be hired, testers are needed, and the projected users (the employees) have to be taken out of their regular operations just to test the cloud computing application. Even with all this spending, the success of cloud computing is not assured.

For that reason, small and even some large online businesses seek the assistance of a 3rd party provider. There are cloud computing providers that have the capability to handle any data processing needs of large and small businesses alike.

Internet giants such as Google and Amazon have the capability to offer highly extensive cloud computing support. Smaller companies based in the same area as the client also exist. Although they provide more limited services, their capacity is more than enough for most small business needs.

But selecting the right cloud computing provider for small businesses is easier said than done. There are certain considerations every business would have to remember to ensure the selected provider would work as expected.

A Personal Checklist

Every business has different needs, and those needs may or may not be met by a cloud computing company. Before anything is discussed or agreed with a provider, a business should have a checklist of what it seeks from that provider.

The list should cover whether the business needs to emphasize data processing rather than storage and other related functions. Although everything should be optimized, there are certain areas where a given provider excels.

The list should not be compromised if at all possible. Providers will often offset a disadvantage with an advantage that might not be helpful to the client at all, and small changes in the services provided may jeopardize the entire operation.

Financial Stability
Businesses that opt to work with a vendor for their cloud computing needs should first consider the financial stability of the provider. Although money is almost irrelevant to the basic services themselves, money provides the backbone for everything a provider does, from the security of the data centers to the manpower needed to handle updates and problems with the system. The worst-case scenario with a financially unstable company is that it closes down without notice.


Knowing the financial stability of the company requires research into the company's history, its investors and its general popularity. Although popularity doesn't necessarily mean financial stability, increased attention usually means services that are either very good or very bad.

Everything as a Service (EaaS) Model


Author: Exforsys Inc. Published on: 15th Apr 2009

Cloud computing is often used by businesses for limited processes. The enterprise can contact a specific vendor to implement particular cloud computing processes in its business setting. As long as the business knows what it specifically needs from the vendor, the related cloud computing services can be set up and launched in no time.


Even if the cloud computing process is very small, the business could immediately see the changes in their processing. The rest of the processes in the enterprise will have to be handled without the assistance of cloud computing.

But like most things that constantly aim to improve, cloud computing aims to be more than just a small chunk of business operations. Some renowned internet companies are not content with being a small part of those operations.

With their hardware capabilities, they can become the provider of entire business processes. Internet giants such as Google, Microsoft and Salesforce have begun offering EaaS, or Everything as a Service, to different businesses.

In this setting, the EaaS vendor becomes the point of entry for different types of business transactions. From simple office documents to extensive customer service management, EaaS vendors can provide the tools through the cloud (the internet).

Characteristics of EaaS
Decreasing Dependency on Hardware - As more and more applications are used in the cloud, it has become important for EaaS providers to keep everything accessible. Any service in the cloud can be accessed online without relying on one gadget that stores the native application. Even without any application stored on the desktop, online services can be used; most cloud computing services are accessible through the major browsers.

No Specific Location - EaaS is not an application limited to certain locations and gadgets. Providers allow access from any type of gadget, from any location, as long as the user has the right credentials (username/password) to use the system. Because of EaaS, outsourcing became possible for some companies without overly expensive set-up costs.

Improved Tenancy - Access to the system is not limited to a set number of users, and users can get everything done on time through collaboration.

Extension to Consumers - This type of service is not limited to businesses. Cloud computing through EaaS is now available to consumers.


Some services could actually be used for personal use without having to deal with different business processes.

Moving Beyond the Desktop Experience


Author: Exforsys Inc. Published on: 16th Apr 2009

Cloud computing signifies a slow change in how consumers and businesses experience their local gadgets. Combined with a strong internet connection, local applications and gadgets have become a platform, as everything is launched online.


The data processes and back-end operations of a business are now available in the cloud. Small businesses don't have to spend considerable amounts on local installations. The desktop experience in large and small companies is slowly being replaced by applications that can be launched at any time, on almost any gadget.

The evolution of cloud computing is even going beyond simple data processing in the cloud. Different types of services, such as real-time communications through voice and instant messaging, have been integrated by different companies. The improvement of services in cloud computing has pushed beyond the simple Software as a Service (SaaS) scenario: different forms of development in cloud computing are now possible.

Developers can build an application from scratch, use a framework, or have the vendor provide functions and build an efficient application from there. Almost any type of development is now available through cloud computing, although its thrust is the use of RIA programming languages.

The Client Side


Moving beyond the desktop experience is all about dealing with the client side. Although everything is loaded from a remote server without any required installation on the user's side, there are many things businesses have to consider to effectively launch their cloud computing application. Among these considerations is the effect of the application on the client side, the user's end.

To effectively launch a highly interactive application on the user's end, Web 2.0 programming techniques are used. In platform-based development of cloud computing, techniques such as Ajax and frameworks such as Ruby on Rails are extensively used by developers.

At this point, the first challenge on the client side appears. These programming techniques may offer the best interface for users, but they do not provide optimal security. Client-side code can be exploited through attacks such as Cross-Site Scripting (XSS).
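One standard mitigation for the XSS risk is to escape user-supplied text before echoing it into a page. A minimal sketch using Python's standard library; real applications rely on their framework's templating auto-escape:

    from html import escape

    def render_comment(user_input):
        # "<script>..." becomes inert text instead of executable markup.
        return f"<p>{escape(user_input)}</p>"

    print(render_comment('<script>alert("xss")</script>'))
    # -> <p>&lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;</p>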

Aside from security challenges, the client side also has to make adjustments on the local gadget. Although it is true that no application installation is required, the load on local hardware increases, as data processing now depends on how fast the local gadget receives the transmitted data.

Local Applications
When cloud computing and client-side applications started to become popular, the capability of cloud computing was pushed to its highest level. Some developers proposed that the actual desktop experience could be emulated in a browser. With the right development tools, such as Ajax, the complete desktop experience could be reproduced. But the result was less than desirable: the online desktop experience was too slow compared to the actual desktop experience.

This just means the actual desktop experience is here to stay. The basic applications that everyone needs will never leave the desktop. Although the internet is becoming available even in remote areas, the possibility of having no application at all on a device while relying entirely on cloud computing is very remote. There are still applications better left on the local gadget than moved online.

Cloud Computing Beyond the Hype


But the persistence of local applications does not spell doom for cloud computing. There are processes that work better when launched through cloud computing: real-time customer management, online tools and even simple communication all need an efficient cloud computing implementation.

Companies don't just go out and seek vendors to run their online processes. They have to weigh the advantages and disadvantages of launching cloud computing. These considerations protect them from the harmful, possibly business-ending, effects of a wrong implementation.


Cloud computing is currently ushering in a big leap towards gadget-independent applications. Although it has serious challenges to be reckoned with, it presents great advantages to the current and future business setting in any industry.

The Future of Cloud Computing



Author: Exforsys Inc. Published on: 17th Apr 2009

Cloud computing may be a relatively new concept for some businesses and consumers. But even though some businesses are only starting to adopt cloud computing and realize its advantages, industry giants are already looking forward to its next big step.


For now, cloud computing can easily be identified with grid computing, wherein the cloud becomes the application platform for business purposes. Although grid computing is more focused on the server capabilities of the application, the two share a focus on providing online, on-time services to the enterprise.

But cloud computing is so much more than simplified cloud processing. The business aim of getting things done anywhere, without the need for local or desktop software, is realized. The ease of data processing, with real-time interaction and company-wide availability of data in an instant, can be achieved through proper implementation of cloud computing. Best of all, these processes aim to be available with very little to no downtime.

The future of cloud computing should be seriously considered by businesses in any industry. Full adoption of cloud computing by almost any industry is slowly starting to happen. If a business does not consider its future in cloud computing, the challenges as well as the advantages of cloud computing may not be addressed and fully harnessed.

Level of Competition in Cloud Computing Industry


Competition is always good in any industry. Through competition, the best services as well as the most competitive prices come out. The cloud computing industry is no exception to this rule. Companies such as Amazon, Google, Sun Microsystems and SalesForce.com are only some of the highly recognized companies in the cloud computing industry. These companies offer advantages to fit the needs of almost any business.

But the level of competition, as some industry experts predict, could soon be gone. The previously mentioned companies are aggressively promoting their services in a bid to lead the industry, spending millions of dollars on hardware upgrades, human resources and even advertising. Unfortunately, not every company will come out strong. Some industry experts predict that one company will come out on top and might even become synonymous with cloud computing.


On the other hand, smaller companies providing personalized cloud computing services are slowly coming out into the open. Their personalized services are limited to a few clients, which gives them the ability to optimize those services for their clients.

Cloud Computing Design and Simulation

Cloud computing is a promising solution for the automotive industry that could help industry players transform some of their IT infrastructure expenditures into a more flexible on-demand resource provisioning cost model. Exceed onDemand enables cloud providers to deliver the most complex PLM and CAD-CAM applications, while providing a reliable and high-quality user experience to engineers for design or simulation operations.

Cloud Computing Challenges: getting the execution right to deliver on the promises
There is no doubt that cloud computing is a promising new solution for reducing infrastructure costs and increasing business agility. That said, it is like most new technologies that look good on paper: the devil is in the execution. Among the challenges that await companies as they investigate and adopt cloud computing models, the following stand out:

To deliver on its promises, cloud computing has to become the heavy-lifting infrastructure of line-of-business applications. But are all applications ready to be moved to the cloud? At what cost?

New technologies create new challenges, and when it comes to cloud computing, a key issue remains unsolved: what will data governance look like in a cloud-enabled world?

Like all other technologies before it, cloud computing's success will strongly depend on user adoption. How can organizations guarantee that their users will quickly and willingly move to a cloud-based business application infrastructure?

Exceed onDemand: fast and easy access to cloud-enabled business applications


OpenText Exceed onDemand is the enterprise solution for dependable managed application access. It allows organizations to securely and reliably deliver the most complex line-of-business applications to thousands of enterprise users, regardless of their location. Every day, millions of enterprise users depend on Exceed technologies to power their business applications. Backed by 25 years of experience and a team of industry solution experts, OpenText Exceed onDemand is what organizations count on for faster time-to-market and compliance with corporate policies and government regulations while reducing operational costs. Exceed onDemand plays a critical role in the success of cloud computing projects by offering the application delivery backbone that creates the link between the cloud and the users. It is an ideal application delivery solution for organizations looking to switch their infrastructure to the cloud or offer their own cloud-enabled solutions:

- Ensure user adoption by offering a desktop-like experience without the hassle of desktop infrastructure management
- Promote a more effective cost structure for offering or consuming business services
- Scale resources to match all business requirements
- Reduce capital expenditures and provision IT resources (licenses, storage, computing, bandwidth) on a per-need basis
- Provide faster service to business lines and improve customer satisfaction
- Convert existing applications to the cloud with no effort

Exceed onDemand Cloud Computing Value Chain



CloudSim offers Cloud Computing Simulation

CloudSim is proposed as a framework for modelling and simulating Cloud Computing environments, supporting performance evaluation of resource provisioning policies, application scheduling, and federation of Clouds in a repeatable and controllable manner. For details, please check out their Tech Report (PDF).

CloudSim: A Novel Framework for Modeling and Simulation of Cloud Computing Infrastructures and Services
Rodrigo N. Calheiros (1,2), Rajiv Ranjan (1), César A. F. De Rose (2), and Rajkumar Buyya (1)
(1) Grid Computing and Distributed Systems (GRIDS) Laboratory, Department of Computer Science and Software Engineering, The University of Melbourne, Australia
(2) Pontifical Catholic University of Rio Grande do Sul, Porto Alegre, Brazil
{rodrigoc, rranjan, raj}@csse.unimelb.edu.au, cesar.derose@pucrs.br

Abstract
Cloud computing focuses on the delivery of reliable, secure, fault-tolerant, sustainable, and scalable infrastructures for hosting Internet-based application services. These applications have different composition, configuration, and deployment requirements. Quantifying the performance of scheduling and allocation policies on a Cloud infrastructure (hardware, software, services) for different application and service models under varying load, energy performance (power consumption, heat dissipation), and system size is an extremely challenging problem. To simplify this process, in this paper we propose CloudSim: a new, generalized, and extensible simulation framework that enables seamless modelling, simulation, and experimentation of emerging Cloud computing infrastructures and management services. The simulation framework has the following novel features: (i) support for modelling and instantiation of large-scale Cloud computing infrastructure, including data centers, on a single physical computing node and Java virtual machine; (ii) a self-contained platform for modelling data centers, service brokers, and scheduling and allocation policies; (iii) availability of a virtualization engine, which aids in the creation and management of multiple, independent, and co-hosted virtualized services on a data center node; and (iv) flexibility to switch between space-shared and time-shared allocation of processing cores to virtualized services.

1. Introduction

Cloud computing delivers infrastructure, platform, and software (applications) as services, which are made available to consumers as subscription-based services in a pay-as-you-go model. In industry these services are referred to as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), respectively. In a February 2009 Berkeley report [11], Prof. Patterson et al. stated: "Cloud computing, the long-held dream of computing as a utility, has the potential to transform a large part of the IT industry, making software even more attractive as a service." Clouds [10] aim to power the next-generation data centers by architecting them as networks of virtual services (hardware, database, user interface, application logic) so that users can access and deploy applications from anywhere in the world, on demand, at competitive costs, depending on their QoS (Quality of Service) requirements [1]. Developers with innovative ideas for new Internet services are no longer required to make large capital outlays in the hardware and software infrastructures needed to deploy their services, or in the human expense to operate them [11]. This offers significant benefits to IT companies by freeing them from the low-level task of setting up basic hardware (servers) and software infrastructures, thus enabling more focus on innovation and the creation of business value.

Some of the traditional and emerging Cloud-based applications include social networking, web hosting, content delivery, and real-time instrumented data processing. Each of these application types has different composition, configuration, and deployment requirements. Quantifying the performance of scheduling and allocation policies on Cloud infrastructures (hardware, software, services) for different application and service models under varying load, energy performance (power consumption, heat dissipation), and system size is an extremely challenging problem. The use of real test beds such as Amazon EC2 limits experiments to the scale of the testbed and makes the reproduction of results an extremely difficult undertaking, as the conditions prevailing in Internet-based environments are beyond the control of the tester.

An alternative is the use of simulation tools, which open the possibility of evaluating a hypothesis prior to software development, in an environment where tests can be reproduced. Specifically in the case of Cloud computing, where access to the infrastructure incurs payment in real currency, simulation-based approaches offer significant benefits: they allow Cloud customers to test their services in a repeatable and controllable environment free of cost, and to tune out performance bottlenecks before deploying on real Clouds. On the provider side, simulation environments allow the evaluation of different kinds of resource leasing scenarios under varying load and pricing distributions. Such studies can aid providers in optimizing resource access costs with a focus on improving profits. In the absence of such simulation platforms, Cloud customers and providers have to rely either on theoretical and imprecise evaluations, or on trial-and-error approaches that lead to inefficient service performance and revenue generation.
Considering that none of the current distributed system simulators [4][7][9] offers an environment that can be directly used by the Cloud computing community, in this paper we propose CloudSim: a new, generalized, and extensible simulation framework that enables seamless modeling, simulation, and experimentation of emerging Cloud computing infrastructures and application services. Using CloudSim, researchers and industry-based developers can focus on the specific system design issues they want to investigate, without getting concerned with the low-level details of Cloud-based infrastructures and services. CloudSim offers the following novel features: (i) support for modeling and simulation of large-scale Cloud computing infrastructure, including data centers, on a single physical computing node; and (ii) a self-contained platform for modeling data centers, service brokers, and scheduling and allocation policies. Among the unique features of CloudSim are: (i) the availability of a virtualization engine, which aids in the creation and management of multiple, independent, and co-hosted virtualized services on a data center node; and (ii) the flexibility to switch between space-shared and time-shared allocation of processing cores to virtualized services. These compelling features of CloudSim would speed up the development of new algorithms, methods, and protocols in Cloud computing, hence contributing towards quicker evolution of the paradigm.

2. Related Work

Cloud computing


Cloud computing can be defined as a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources, based on service-level agreements established through negotiation between the service provider and consumers [1]. Some examples of emerging Cloud computing infrastructures are Microsoft Azure [2], Amazon EC2, Google App Engine, and Aneka [3]. The computing power in a Cloud computing environment is supplied by a collection of data centers, which are typically installed with hundreds to thousands of servers [9]. The layered architecture of a typical Cloud-based data center is shown in Figure 1. At the lowest layers there exist massive physical resources (storage servers and application servers) that power the data centers. These servers are transparently managed by the higher-level virtualization [8] services and toolkits, which allow their capacity to be shared among virtual instances of servers. These virtual instances are isolated from each other, which aids in achieving fault-tolerant behavior and an isolated security context.

Figure 1. Typical data center.

Emerging Cloud applications such as social networking, gaming portals, business applications, content delivery, and scientific workflows operate at the highest layer of the architecture. Actual usage patterns of many real-world applications vary with time, most of the time in unpredictable ways. These applications have different Quality of Service (QoS) requirements depending on time criticality and users' interaction patterns (online/offline).

Simulation
In the past decade, Grids [5] have evolved as the infrastructure for delivering high-performance services for compute- and data-intensive scientific applications. To support research and development of new Grid components, policies, and middleware, several Grid simulators have been proposed, such as GridSim [9], SimGrid [7], and GangSim [4]. SimGrid is a generic framework for simulation of distributed applications on Grid platforms. GangSim is a Grid simulation toolkit that provides support for modeling Grid-based virtual organisations and resources. GridSim, on the other hand, is an event-driven simulation toolkit for heterogeneous Grid resources. It supports modeling of Grid entities, users, machines, and networks, including network traffic.

Although the aforementioned toolkits are capable of modeling and simulating Grid application behaviors (execution, scheduling, allocation, and monitoring) in a distributed environment consisting of multiple Grid organisations, none of them is able to support the infrastructure and application-level requirements arising from the Cloud computing paradigm. In particular, there is very little or no support in existing Grid simulation toolkits for modeling of on-demand, virtualization-enabled resource and application management. Further, Clouds promise to deliver services on a subscription basis in a pay-as-you-go model to Cloud customers. Hence, Cloud infrastructure modeling and simulation toolkits must provide support for economic entities such as Cloud brokers and Cloud exchanges to enable real-time trading of services between customers and providers. Among the simulators discussed in this paper, only GridSim offers support for economic-driven resource management and application scheduling simulation.

Another aspect to consider is that research and development in Cloud computing systems, applications and services is still in its infancy. A number of important issues need detailed investigation along the Cloud software stack. Topics of interest to Cloud developers include economic strategies for provisioning virtualized resources to incoming user requests, scheduling of applications, resource discovery, inter-cloud negotiations, and federation of Clouds. To support and accelerate research on Cloud computing systems, applications and services, it is important that the necessary software tools are designed and developed to aid researchers and developers.

3. CloudSim Architecture
Figure 2 shows the layered implementation of the CloudSim software framework and its architectural components. At the lowest layer is the SimJava discrete event simulation engine [6], which implements the core functionalities required by higher-level simulation frameworks, such as queuing and processing of events, creation of system components (services, host, data center, broker, virtual machines), communication between components, and management of the simulation clock. Next follow the libraries implementing the GridSim toolkit [9], which support high-level software components for modeling multiple Grid infrastructures, including networks and associated traffic profiles, and fundamental Grid components such as resources, data sets, workload traces, and information services.

CloudSim is implemented at the next level by programmatically extending the core functionalities exposed by the GridSim layer. CloudSim provides novel support for modeling and simulation of virtualized Cloud-based data center environments, such as dedicated management interfaces for VMs, memory, storage, and bandwidth. The CloudSim layer manages the instantiation and execution of core entities (VMs, hosts, data centers, applications) during the simulation period. This layer is capable of concurrently instantiating and transparently managing a large-scale Cloud infrastructure consisting of thousands of system components. Fundamental issues, such as provisioning of hosts to VMs based on user requests, managing application execution, and dynamic monitoring, are handled by this layer. A Cloud provider who wants to study the efficacy of different policies for allocating its hosts would need to implement their strategies at this layer by programmatically extending the core VM provisioning functionality. There is a clear distinction at this layer in how a host is allocated to different competing VMs in the Cloud: a Cloud host can be concurrently shared among a number of VMs that execute applications based on user-defined QoS specifications.

Figure 2. Layered CloudSim architecture.

The top-most layer in the simulation stack is the User Code, which exposes configuration-related functionalities for hosts (number of machines, their specification, and so on), applications (number of tasks and their requirements), VMs, the number of users and their application types, and broker scheduling policies. A Cloud application developer can generate a mix of user request distributions, application configurations, and Cloud availability scenarios at this layer and perform robust tests based on the custom Cloud configurations already supported within CloudSim (a hypothetical sketch of such a scenario follows below).

As Cloud computing is a rapidly evolving research area, there is a severe lack of defined standards, tools and methods that can efficiently tackle infrastructure- and application-level complexities. Hence, in the near future there will be a number of research efforts, both in academia and industry, towards defining core algorithms, policies, and application benchmarking based on execution contexts. By extending the basic functionalities already exposed by CloudSim, researchers will be able to perform tests based on specific scenarios and configurations, hence allowing the development of best practices in all the critical aspects related to Cloud computing.
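To make the User Code layer concrete, here is a minimal, hypothetical sketch of the kind of scenario description assembled at that layer. All type names (HostSpec, VmSpec, WorkloadSpec) are invented for this illustration and are not classes from the CloudSim distribution; the numeric values mirror the test setup reported in Section 5.

```java
// Hypothetical sketch of a User Code scenario: host specifications, VM
// requests, and a workload. All types here are invented for illustration.
public class ScenarioSketch {
    record HostSpec(int cores, int mips, int ramMB) {}
    record VmSpec(int cores, int ramMB, int diskMB) {}
    record WorkloadSpec(int taskCount, long taskLengthMI) {}

    public static void main(String[] args) {
        HostSpec host = new HostSpec(1, 1000, 1024);   // 1 core @ 1000 MIPS
        int hostCount = 10_000;                        // size of the data center
        VmSpec vm = new VmSpec(1, 512, 1024);          // a small VM request
        WorkloadSpec work = new WorkloadSpec(500, 1_200_000); // MI per task

        System.out.printf("hosts=%d, vm=%s, tasks=%d%n",
                hostCount, vm, work.taskCount());
    }
}
```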

Figure 3. Effects of different scheduling policies on task execution: (a) space-shared for VMs and tasks, (b) space-shared for VMs and time-shared for tasks, (c) time-shared for VMs and space-shared for tasks, and (d) time-shared for VMs and tasks.

One of the design decisions we had to make as CloudSim was being developed was whether or not to extensively reuse existing simulation libraries and frameworks. We decided to take advantage of already implemented, tested, and validated libraries such as GridSim and SimJava to handle the low-level requirements of the system. For example, by using SimJava we avoided reimplementing event handling and message passing among components, which saved us a lot of time in software engineering and testing. Similarly, the use of the GridSim framework allowed us to reuse its implementation of networking, information services, files, users, and resources. Since SimJava and GridSim have been extensively utilized in conducting cutting-edge research in Grid resource management by several researchers, bugs that might compromise the validity of the simulation have already been detected and fixed. By reusing these long-validated frameworks, we were able to focus on critical aspects of the system that are relevant to Cloud computing, while taking advantage of the reliability of components that are not directly related to Clouds.

3.1. Modeling the Cloud


The core hardware infrastructure services related to Clouds are modeled in the simulator by a Datacenter component for handling service requests. These requests are application elements sandboxed within VMs, which need to be allocated a share of processing power on the Datacenter's host components. By VM processing we mean the set of operations related to the VM life cycle: provisioning of a host to a VM, VM creation, VM destruction, and VM migration.

A Datacenter is composed of a set of hosts that are responsible for managing VMs during their life cycles. A Host is a component that represents a physical computing node in a Cloud: it is assigned pre-configured processing capacity (expressed in millions of instructions per second, MIPS, per CPU core), memory, storage, and a scheduling policy for allocating processing cores to virtual machines. The Host component implements interfaces that support modeling and simulation of both single-core and multi-core nodes.

Allocation of application-specific VMs to Hosts in a Cloud-based data center is the responsibility of the Virtual Machine Provisioner component (refer to Figure 2). This component exposes a number of custom methods for researchers, which aid in the implementation of new VM provisioning policies based on optimization goals (user-centric, system-centric). The default policy implemented by the VM Provisioner is a straightforward one that allocates a VM to a Host on a First-Come-First-Served (FCFS) basis; a sketch of this default is shown below. System parameters such as the required number of processing cores, memory and storage, as requested by the Cloud user, form the basis for such mappings. Other, more complicated policies can be written by researchers based on infrastructure and application demands.

For each Host component, the allocation of processing cores to VMs is done according to a host allocation policy. This policy takes into account how many processing cores will be delegated to each VM, and how much of each processing core's capacity will effectively be attributed to a given VM. It is therefore possible to assign specific CPU cores to specific VMs (a space-shared policy), to dynamically distribute the capacity of a core among VMs (a time-shared policy), to assign cores to VMs on demand, or to specify other policies. Each Host component instantiates a VM scheduler component that implements the space-shared or time-shared policy for allocating cores to VMs. Cloud system developers and researchers can extend the VM scheduler component to experiment with custom allocation policies. Next, the finer details of the time-shared and space-shared policies are described.
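As a rough illustration of that default policy, the following is a minimal sketch of an FCFS allocation, under the assumption that a host qualifies when it has enough free cores, memory, and storage. The Host and Vm types here are simplified stand-ins, not the actual CloudSim classes.

```java
import java.util.List;

// Simplified stand-ins for the paper's components, not the CloudSim API.
class Host { int freeCores; int freeMemMB; int freeStorageMB; }
class Vm   { int cores; int memMB; int storageMB; }

class FcfsVmProvisioner {
    /** Allocate the VM to the first host with enough free capacity. */
    static Host allocate(List<Host> hosts, Vm vm) {
        for (Host h : hosts) {          // hosts examined in order: FCFS
            if (h.freeCores >= vm.cores
                    && h.freeMemMB >= vm.memMB
                    && h.freeStorageMB >= vm.storageMB) {
                h.freeCores -= vm.cores;       // reserve the capacity
                h.freeMemMB -= vm.memMB;
                h.freeStorageMB -= vm.storageMB;
                return h;
            }
        }
        return null; // no host can currently satisfy the request
    }
}
```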

3.2. Modeling the VM allocation


One of the key aspects that differentiate a Cloud computing infrastructure from a Grid is the massive deployment of virtualization technologies and tools. Hence, compared with Grids, Clouds contain an extra layer (the virtualization layer) that acts as an execution and hosting environment for Cloud-based application services. Traditional application mapping models that assign individual application elements to computing nodes therefore do not accurately represent the computational abstraction commonly associated with Clouds. For example, consider a physical data center host with a single processing core on which two VMs must be instantiated concurrently. Even though in practice the two VMs are isolated from each other in behavior (application execution context), the amount of resources available to each VM is constrained by the total processing power of the host. This critical factor must be considered during the allocation process, to avoid creating a VM that demands more processing power than is available in the host, and during application execution, since the task units in each virtual machine share time slices of the same processing core.

To allow simulation of different policies under different levels of performance isolation, CloudSim supports VM scheduling at two levels: first, at the host level and, second, at the VM level. At the first level, it is possible to specify how much of the overall processing power of each core in a host will be assigned to each VM. At the second level, each VM assigns a specific amount of its available processing power to the individual task units hosted within its execution engine. At each level, CloudSim implements both time-shared and space-shared resource allocation policies.

To better illustrate the difference between these policies and their effect on application performance, Figure 3 shows a simple scheduling scenario: a host with two CPU cores receives a request to host two VMs, each requiring two cores and running four task units (t1, t2, t3 and t4 in VM1; t5, t6, t7 and t8 in VM2). Figure 3(a) presents a space-shared policy for both VMs and task units: as each VM requires two cores, only one VM can run at a given instant, so VM2 can only be assigned the cores once VM1 finishes executing its task units. The same happens for tasks hosted within a VM: as each task unit demands only one core, two of them run simultaneously and the other two are queued until the earlier ones complete. In Figure 3(b), a space-shared policy is used for allocating VMs, but a time-shared policy is used for allocating individual task units within each VM: during a VM's lifetime, all tasks assigned to it dynamically context-switch until completion. This allocation policy lets task units start earlier, but it significantly delays the completion of the task units at the head of the queue. In Figure 3(c), time-shared scheduling is used for VMs and space-shared scheduling for task units. In this case, each VM receives a time slice of each processing core, and the slices are then distributed to task units on a space-shared basis. As the cores are shared, the amount of processing power available to each VM is lower than in the previous scenarios. As task unit assignment is space-shared, only one task runs per core at a time, while the others are queued for future consideration. Finally, in Figure 3(d) a time-shared allocation is applied to both VMs and task units: processing power is concurrently shared by the VMs, and each VM's share is concurrently divided among its task units. In this case there are no queues, either for virtual machines or for task units. A back-of-the-envelope check of these four cases is sketched below.
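Under the simplifying assumptions that every task unit needs T units of work and each physical core has unit capacity, the four policies of Figure 3 yield the completion times computed in this hypothetical snippet (not code from the paper):

```java
// Completion times for the Figure 3 scenario: one host with two unit-capacity
// cores, two VMs each requesting two cores, four tasks of T work per VM.
public class Figure3Sketch {
    public static void main(String[] args) {
        double T = 20.0; // e.g. minutes of work per task unit

        // (a) space-shared VMs and tasks: VM1 holds both cores and runs
        // tasks two at a time (t1,t2 then t3,t4); VM2 queues behind it.
        System.out.printf("(a) VM1 done at %.0f, VM2 done at %.0f%n", 2 * T, 4 * T);

        // (b) space-shared VMs, time-shared tasks: 4 tasks share VM1's two
        // cores at half speed each, so all of VM1's tasks finish together.
        System.out.printf("(b) VM1 done at %.0f, VM2 done at %.0f%n", 2 * T, 4 * T);

        // (c) time-shared VMs, space-shared tasks: each VM gets half of each
        // core; two tasks per VM run at half speed, then the next two.
        System.out.printf("(c) both VMs done at %.0f%n", 4 * T);

        // (d) time-shared VMs and tasks: all 8 tasks share the 2 cores at a
        // quarter of a core each; no queues, everything finishes together.
        System.out.printf("(d) all tasks done at %.0f%n", 4 * T);
    }
}
```

Under these assumptions the makespan is 4T in all four cases; the policies differ in when individual task units finish and how long they wait in queues, which is exactly the behavior Figure 3 illustrates.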

3.3. Modeling the Cloud market


Support for services that act as market makers, enabling capability sharing across Cloud service providers and customers through match-making services, is critical to Cloud computing. Further, these services need mechanisms to determine service costs and pricing policies. Modeling of costs and pricing policies is thus an important aspect to consider when designing a Cloud simulator. To allow modeling of the Cloud market, four market-related properties are associated with a data center: cost per processing, cost per unit of memory, cost per unit of storage, and cost per unit of used bandwidth. Costs for memory and storage are incurred during virtual machine creation; costs for bandwidth are incurred during data transfer. The remaining cost, inherited from the GridSim model, is associated with the use of processing resources, i.e., with the execution of user task units. So, if VMs are created but no task units are executed on them, only the costs for memory and storage are incurred. This behavior may, of course, be changed by users.
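The following is a minimal, assumption-laden sketch of that cost model; the rate names and units (per MB, per CPU-second) are illustrative choices made for this example, not the actual CloudSim fields.

```java
// A minimal sketch of the four-rate data-center cost model described above.
// Field names and units are invented for illustration, not the CloudSim API.
public class CostSketch {
    double costPerCpuSecond; // processing: charged only while tasks execute
    double costPerMemMB;     // charged once, at VM creation
    double costPerStorageMB; // charged once, at VM creation
    double costPerBwMB;      // charged per MB actually transferred

    /** Memory and storage costs are incurred when the VM is created. */
    double vmCreationCost(double memMB, double storageMB) {
        return memMB * costPerMemMB + storageMB * costPerStorageMB;
    }

    /** Processing and bandwidth costs are incurred only by running tasks. */
    double executionCost(double cpuSeconds, double transferredMB) {
        return cpuSeconds * costPerCpuSecond + transferredMB * costPerBwMB;
    }
}
```

Note how the split between the two methods captures the paper's point: a VM that is created but never runs a task unit accrues only the memory and storage charges.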

4. Design and Implementation of CloudSim


The class design diagram for the simulator is depicted in Figure 4. In this section, we provide finer details on the fundamental classes of CloudSim, which are the building blocks of the simulator.

Datacenter. This class models the core infrastructure-level services (hardware, software) offered by resource providers in a Cloud computing environment. It encapsulates a set of compute hosts (blade servers) that can be either homogeneous or heterogeneous with respect to their resource configurations (memory, cores, capacity, and storage). Furthermore, every Datacenter component instantiates a generalized resource provisioning component that implements a set of policies for allocating bandwidth, memory, and storage devices.

DatacenterBroker. This class models a broker, which is responsible for mediating between users and service providers, depending on users' QoS requirements, and deploys service tasks across Clouds. The broker, acting on behalf of users, identifies suitable Cloud service providers through the Cloud Information Service (CIS) and negotiates with them for an allocation of resources that meets the QoS needs of users. Researchers and system developers must extend this class to conduct experiments with their custom-developed application placement policies.

SANStorage. This class models a storage area network that is commonly available to Cloud-based data centers for storing large chunks of data. SANStorage implements a simple interface that can be used to simulate storage and retrieval of any amount of data, at any time, subject to the availability of network bandwidth. Accessing files in a SAN at run time incurs additional delays for task unit execution, due to the time elapsed in transferring the required data files through the data center's internal network.

VirtualMachine. This class models an instance of a VM, whose management during its life cycle is the responsibility of the Host component. As discussed earlier, a host can simultaneously instantiate multiple VMs and allocate cores based on predefined processor-sharing policies (space-shared, time-shared). Every VM component has access to a component that stores its characteristics, such as memory, processor, storage, and the VM's internal scheduling policy, which is extended from the abstract component called VMScheduling.

Cloudlet. This class models the Cloud-based application services (content delivery, social networking, business workflow) that are commonly deployed in data centers. CloudSim represents the complexity of an application in terms of its computational requirements. Every application component has a pre-assigned instruction length (inherited from GridSim's Gridlet component) and an amount of data transfer (both pre- and post-fetch) that needs to be undertaken for successfully hosting the application.

BWProvisioner. This is an abstract class that models the provisioning policy of bandwidth to VMs deployed on a Host component. Its function is to allocate network bandwidth to the set of competing VMs deployed across the data center. Cloud system developers and researchers can extend this class with their own policies (priority, QoS) to reflect the needs of their applications.

MemoryProvisioner. This is an abstract class that represents the provisioning policy for allocating memory to VMs. It models policies for allocating physical memory space to competing VMs. The execution and deployment of a VM on a host is feasible only if the MemoryProvisioner component determines that the host has enough free memory for the new VM deployment.

VMProvisioner. This abstract class represents the provisioning policy that a VM Monitor utilizes for allocating VMs to Hosts. Its chief functionality is to select an available host in a data center that meets the memory, storage, and availability requirements of a VM deployment. The default SimpleVMProvisioner implementation provided with the CloudSim package allocates VMs to the first available Host that meets these requirements; hosts are considered for mapping in sequential order. However, more complicated policies can easily be implemented within this component to achieve optimized allocations, for example selecting hosts based on their ability to meet QoS requirements such as response time or budget.

VMMAllocationPolicy. This is an abstract class implemented by a Host component that models the policies (space-shared, time-shared) required for allocating processing power to VMs. The functionalities of this class can easily be overridden to accommodate application-specific processor-sharing policies.

Figure 4. CloudSim class design diagram.

4.1. Entities and threading

As CloudSim programmatically builds upon the SimJava discrete event simulation engine, it preserves SimJava's threading model for the creation of simulation entities. A programming component is referred to as an entity if it directly extends the core Sim_Entity component of SimJava, which implements the Runnable interface. Every entity is capable of sending and receiving messages through SimJava's shared event queue. Message propagation (sending and receiving) occurs through the input and output ports that SimJava associates with each entity in the simulation system. Since threads incur considerable memory and processor context-switching overhead, having a large number of threads/entities in a simulation environment can become a performance bottleneck due to limited scalability. To counter this, CloudSim minimizes the number of entities in the system by implementing only the core components (Users and Datacenters) as inherited members of SimJava entities. This design decision is significant, as it allows CloudSim to model a really large-scale simulation environment on a computing machine (desktop, laptop) with moderate processing capacity. Other key CloudSim components, such as VMs, provisioning policies, and hosts, are instantiated as standalone objects, which are lightweight and do not compete for processing power. Hence, regardless of the number of hosts in a simulated data center, the runtime environment (Java virtual machine) needs to manage only three threads (User, Datacenter, and Broker).

As the processing of task units is handled by the respective VMs, their progress must be updated and monitored after every simulation step. To handle this, an internal event is generated, carrying the expected completion time of a task unit, to inform the Datacenter entity about future completion events. At each simulation step, each Datacenter invokes a method called updateVMsProcessing() for every host in the system, to update the processing of tasks running within the VMs. The argument of this method is the current simulation time, and the return value is the next expected completion time of a task running in one of the VMs on that host. The least time among all the finish times returned by the hosts is noted for the next internal event. At the host level, invocation of updateVMsProcessing() triggers an updateGridletsProcessing() method, which directs every VM to update the status of its task units (finished, suspended, executing) with the Datacenter entity. This method implements logic similar to that described for updateVMsProcessing(), but at the VM level. Once this method is called, VMs return the next expected completion time of the task units they currently manage, and the least completion time among all the computed values is sent to the Datacenter entity. As a result, completion times are kept in a queue that is queried by the Datacenter after each event-processing step. Completed tasks waiting in the queue are removed from it and sent back to the user. A sketch of this update protocol follows below.
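A simplified rendering of that update protocol: the method names updateVMsProcessing() and updateGridletsProcessing() follow the paper, but all types and fields are invented for illustration and do not match the real CloudSim classes.

```java
import java.util.List;

// Simplified stand-ins; method names follow the paper, the types do not.
class VmSketch {
    double[] remainingMI; // instructions left per running task unit
    double mipsPerTask;   // capacity each running task unit receives
                          // (policy-dependent; fixed here for simplicity)
    double lastUpdate;    // simulation time of the previous update

    // advance every task unit and report the next expected completion time
    double updateGridletsProcessing(double now) {
        double elapsed = now - lastUpdate;
        lastUpdate = now;
        double next = Double.MAX_VALUE;
        for (int i = 0; i < remainingMI.length; i++) {
            remainingMI[i] -= mipsPerTask * elapsed; // progress since last step
            if (remainingMI[i] > 0) {                // still executing
                next = Math.min(next, now + remainingMI[i] / mipsPerTask);
            }
        }
        return next;
    }
}

class HostSketch {
    List<VmSketch> vms;

    // the Datacenter calls this for every host at each simulation step
    double updateVMsProcessing(double now) {
        double next = Double.MAX_VALUE;
        for (VmSketch vm : vms) {
            next = Math.min(next, vm.updateGridletsProcessing(now));
        }
        return next; // earliest future completion; schedules the next event
    }
}
```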

4.2. Communication among Entities


Figure 5 depicts the flow of communication among core CloudSim entities. At the beginning of the simulation, each Datacenter entity registers itself with the CIS (Cloud Information Service) registry. The CIS provides database-level match-making services, mapping user requests to suitable Cloud providers. Brokers acting on behalf of users consult the CIS for the list of Clouds that offer infrastructure services matching the user's application requirements. If a match occurs, the broker deploys the application with the Cloud suggested by the CIS; a toy rendering of this handshake is sketched below.
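The following toy sketch mimics that register/query/deploy handshake. CisRegistry and BrokerSketch are made-up types for this example only, and the match-making is reduced to a caller-supplied predicate rather than the real QoS negotiation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

// Invented types illustrating the Figure 5 flow, not the CloudSim API.
class CisRegistry {
    private final List<String> providers = new ArrayList<>();

    void register(String datacenter) {  // Datacenters register at start-up
        providers.add(datacenter);
    }

    // match-making reduced to a simple predicate for brevity
    Optional<String> findMatch(Predicate<String> requirements) {
        return providers.stream().filter(requirements).findFirst();
    }
}

class BrokerSketch {
    void deploy(CisRegistry cis, Predicate<String> requirements) {
        cis.findMatch(requirements).ifPresentOrElse(
                dc -> System.out.println("deploying application on " + dc),
                () -> System.out.println("no suitable Cloud found"));
    }
}
```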

Figure 5. Simulation data flow.

The communication flow described so far is the basic flow of a simulated experiment. Some variations are possible depending on the policies in use. For example, messages from Brokers to Datacenters may require a confirmation, on the part of the Datacenter, that the action was executed, or the maximum number of VMs a user can create may be negotiated before VM creation.

5. Tests and Evaluation


In this section, we present the tests and evaluation we undertook to quantify the efficiency of CloudSim in modeling and simulating Cloud computing environments. The tests were conducted on a Celeron machine (1.86 GHz, 1 MB of L2 cache, 1 GB of RAM) running standard Ubuntu Linux 8.04 and JDK 1.6.

To evaluate the overhead of building a simulated Cloud computing environment consisting of a single data center, a broker, and a user, we performed a series of experiments in which the number of hosts in the data center was varied from 100 to 100,000. As the goal of these tests was to evaluate the computing power required to instantiate the Cloud simulation infrastructure, no attention was given to the user workload. For the memory test, we profiled the total physical memory used by the hosting machine to fully instantiate and load the CloudSim environment. The total delay in instantiating the simulation environment is the time difference between the following events: (i) the time at which the runtime environment (Java virtual machine) is directed to load the CloudSim program; and (ii) the instant at which CloudSim's entities and components are fully initialized and ready to process events.

Figures 6 and 7 present, respectively, the amount of time and the amount of memory required to instantiate the experiment as the number of hosts in the data center increases. The growth in memory consumption (see Figure 7) is linear, with an experiment with 100,000 machines demanding 75 MB of RAM. This makes the simulation suitable to run even on simple desktop computers with moderate processing power, because CloudSim's memory requirements, even for large simulated environments, can easily be met by such machines.

Figure 6. Time to simulation instantiation.

Regarding the time overhead of simulation instantiation, instantiation time grows exponentially with the number of hosts/machines. Nevertheless, the time to instantiate 100,000 machines is below 5 minutes, which is reasonable considering the scale of the experiment. We are currently investigating the cause of this behavior in order to avoid it in future versions of CloudSim.

The next test aimed at quantifying the performance of CloudSim's core components when subjected to user workloads such as VM creation and task unit execution. The simulation environment consisted of a data center with 10,000 hosts, where each host was modeled to have a single CPU core (1000 MIPS), 1 GB of RAM, and 2 TB of storage. The scheduling policy for VMs was space-shared, meaning only one VM was allowed to be hosted on a host at a given instant. We modeled the user (through the DatacenterBroker) to request the creation of 50 VMs with the following constraints: 512 MB of physical memory, 1 CPU core, and 1 GB of storage. The application unit was modeled to consist of 500 task units, each requiring 1,200,000 million instructions (20 minutes on the simulated hosts; checked below) to execute on a host. As networking was not a concern in these experiments, each task unit required only 300 kB of data to be transferred to and from the data center.
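A quick arithmetic check of the stated task length; this throwaway snippet is ours, not part of the framework:

```java
// 1,200,000 MI on a 1000-MIPS core: MI / MIPS = seconds of execution.
public class TaskLengthCheck {
    public static void main(String[] args) {
        long lengthMI = 1_200_000; // million instructions per task unit
        long mips = 1_000;         // capacity of one simulated core
        double seconds = (double) lengthMI / mips;     // 1200 s
        System.out.println(seconds / 60 + " minutes"); // 20.0 minutes
    }
}
```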

Figure 7. Memory usage during resource instantiation.

Figure 8. Task execution with space-shared scheduling of tasks.

After the creation of the VMs, task units were submitted in groups of 50 (one to each VM) every 10 minutes. The VMs were configured to use both the space-shared and the time-shared policies for allocating task units to processing cores. Figures 8 and 9 present task unit progress as simulation steps (time) increase, for the space-shared test and the time-shared test respectively. As expected, in the space-shared case every task took 20 minutes to complete, as each had dedicated access to a processing core. Since, under this policy, each task unit had its own dedicated core, the number of incoming tasks (queue size) did not affect the execution time of individual task units. In the time-shared case, however, the execution time of each task varied with the number of submitted task units. Under this policy, execution time is significantly affected, as the processing core is concurrently context-switched among the list of scheduled tasks. The first group of 50 tasks completed earlier than the others because the hosts were not overloaded at the beginning of execution. Towards the end, as more tasks reached completion, comparatively more hosts became available for allocation, and we accordingly observed improved response times for the tasks, as shown in Figure 9.

Figure 9. Task execution with time-shared scheduling of tasks.

6. Conclusion and Future Work

Recent efforts to design and develop Cloud technologies focus on defining novel methods, policies, and mechanisms for efficiently managing Cloud infrastructures. To test these newly developed methods and policies, researchers need tools that allow them to evaluate their hypotheses prior to real deployment, in an environment where tests can be reproduced. Especially in the case of Cloud computing, where access to the infrastructure incurs payment in real currency, simulation-based approaches offer significant benefits, as they allow Cloud developers to test the performance of their provisioning and service delivery policies in a repeatable and controllable environment, free of cost, and to tune out performance bottlenecks before deploying on real Clouds. To this end, we developed the CloudSim system, a framework for modeling and simulation of next-generation Clouds. As a completely customizable tool, it allows extension and definition of policies in all components of the software stack, which makes it suitable as a research tool that can handle the complexities arising from simulated environments. As future work, we plan to incorporate new pricing and provisioning policies into CloudSim, to offer built-in support for simulating the currently available Clouds. We also intend to provide support for simulating federated networks of Clouds, with a focus on designing and testing elastic Cloud applications.

References

[1] R. Buyya, C. S. Yeo, and S. Venugopal. Market-oriented cloud computing: Vision, hype, and reality for delivering IT services as computing utilities. In Proceedings of the 10th IEEE International Conference on High Performance Computing and Communications, 2008.
[2] D. Chappell. Introducing the Azure services platform. White paper, Oct. 2008.
[3] X. Chu et al. Aneka: Next-generation enterprise grid platform for e-science and e-business applications. In Proceedings of the 3rd IEEE International Conference on e-Science and Grid Computing, 2007.
[4] C. L. Dumitrescu and I. Foster. GangSim: a simulator for grid scheduling studies. In Proceedings of the IEEE International Symposium on Cluster Computing and the Grid, 2005.
[5] I. Foster and C. Kesselman (editors). The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, 1999.
[6] F. Howell and R. McNab. SimJava: A discrete event simulation library for Java. In Proceedings of the First International Conference on Web-Based Modeling and Simulation, 1998.
[7] A. Legrand, L. Marchal, and H. Casanova. Scheduling distributed applications: the SimGrid simulation framework. In Proceedings of the 3rd IEEE/ACM International Symposium on Cluster Computing and the Grid, 2003.
[8] J. E. Smith and R. Nair. Virtual Machines: Versatile Platforms for Systems and Processes. Morgan Kaufmann, 2005.
[9] R. Buyya and M. Murshed. GridSim: A toolkit for the modeling and simulation of distributed resource management and scheduling for Grid computing. Concurrency and Computation: Practice and Experience (CCPE), 14(13-15), Wiley Press, Nov.-Dec. 2002.
[10] A. Weiss. Computing in the clouds. NetWorker, 11(4):16-25, Dec. 2007.
[11] M. Armbrust, A. Fox, R. Griffith, A. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia. Above the Clouds: A Berkeley View of Cloud Computing. Technical Report UCB/EECS-2009-28, University of California at Berkeley, USA, Feb. 10, 2009.
