SaaS
PaaS
IaaS
Deployment Models
Private clouds
Community clouds
Public clouds
Hybrid clouds
We suggest one additional model: management models (trust and tenancy issues)
Self-managed
Third-party managed (e.g. public clouds and VPC)
Features
Use of internet-based services to support business processes.
Rent IT services on a utility-like basis.
Attributes
Rapid deployment
Low start-up costs / capital investment.
Costs based on usage or subscription.
Multi-tenant sharing of services/resources.
Essential characteristics
On-demand self-service.
Ultimately, if
the cloud provider's security staff are better than yours (and
leveraged at least as efficiently),
the web services interface does not introduce too many new
vulnerabilities, and
the cloud provider aims at least as high as you do at its security goals,
then cloud computing has better security.
cloud infrastructure, but has control over the operating system, storage, and deployed
applications, and possibly limited control over selected networking components.
Examples: Amazon S3, Microsoft Windows Azure SQL Database.
Cloud Deployment Models (Cloud usage model)
Cloud computing can also be divided into four main groups depending on usage or
deployment: public cloud, private cloud, community cloud, and hybrid cloud.
1. Private Clouds
Private clouds are typically owned and/or leased by a particular organization or by
individuals. They may be managed and operated by the organization, a third party, or a
combination of the two, and they may exist on or off premises.
Example: eBay
Providers offering on-demand self-service include Amazon Web Services (AWS),
Microsoft, Salesforce.com, and Google.
2. Broad network access
Capabilities are available over the network and accessed through standard mechanisms
using a wide range of client devices such as personal computers, laptops, and
even modern smartphones.
3. Resource pooling.
Resource pooling means that the provider's computing resources are pooled to serve
multiple clients. Depending on client demand, computing resources can be dynamically
assigned and reassigned. Clients need not supply their own storage, memory, network
bandwidth, or processing; these reside in the cloud rather than at the user's premises
or on the user's devices.
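The dynamic assign/reassign cycle described above can be sketched as a toy allocator (a hypothetical example for illustration only, not any real cloud API; the tenant names and capacity units are made up):

```python
class ResourcePool:
    """Toy multi-tenant pool: a shared stock of capacity units is
    dynamically assigned to tenants on demand and reassigned on release."""

    def __init__(self, capacity_units):
        self.free = capacity_units          # units not assigned to any tenant
        self.assigned = {}                  # tenant -> units currently held

    def allocate(self, tenant, units):
        """Assign units from the shared pool; fail if the pool is exhausted."""
        if units > self.free:
            return False
        self.free -= units
        self.assigned[tenant] = self.assigned.get(tenant, 0) + units
        return True

    def release(self, tenant, units):
        """Return units to the pool when a tenant's demand drops."""
        held = self.assigned.get(tenant, 0)
        units = min(units, held)
        self.assigned[tenant] = held - units
        self.free += units

pool = ResourcePool(100)
pool.allocate("tenant-a", 60)
pool.allocate("tenant-b", 30)
pool.release("tenant-a", 40)    # freed units can now serve other tenants
pool.allocate("tenant-c", 45)
```

The point of the sketch is that no tenant owns hardware: capacity released by one tenant immediately becomes available to another.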
4. Rapid elasticity.
Rapid elasticity refers to the ability of cloud resources to adapt flexibly to changing
client needs; this can sometimes be automatic. That is, it allows an application to
rapidly scale its usage both up and down as demand changes. To the client, this
capability often appears unlimited and can be purchased in any quantity at any time.
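A minimal sketch of the automatic case: a threshold-based scaling rule that doubles or halves the instance count as utilization moves outside a target band (the thresholds and bounds here are illustrative assumptions, not values from any particular cloud platform):

```python
def scale(instances, utilization, low=0.3, high=0.8, min_i=1, max_i=20):
    """Return the new instance count: scale up when utilization is high,
    scale down when it is low, bounded by configured limits."""
    if utilization > high:
        return min(instances * 2, max_i)    # demand spike: double capacity
    if utilization < low:
        return max(instances // 2, min_i)   # demand drop: halve capacity
    return instances                        # within the target band: hold

# E.g. 4 instances at 90% utilization scale up; 8 at 10% scale down.
up = scale(4, 0.9)
down = scale(8, 0.1)
```

Real autoscalers add hysteresis and cooldown periods so the count does not oscillate, but the up/down rule is the core of elasticity.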
5. Measured service
Cloud resource usage can be measured, monitored, controlled, and reported, thus
providing transparency and accountability for both the provider and the consumer of
the service. Cloud computing systems use a metering capability that automatically
controls and optimizes resource usage. This is done at a level of abstraction
appropriate to the type of service. Metered services include active user accounts,
storage, network bandwidth, processing, etc.
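The pay-per-use accounting behind measured service can be sketched as follows (a hypothetical meter; the resource names and rates are invented for illustration):

```python
from collections import defaultdict

class Meter:
    """Toy pay-per-use meter: records usage per resource kind and reports
    a bill, giving transparency to both provider and consumer."""

    def __init__(self, rates):
        self.rates = rates                  # price per unit, per resource kind
        self.usage = defaultdict(float)     # accumulated units per kind

    def record(self, kind, amount):
        self.usage[kind] += amount

    def bill(self):
        return sum(self.rates[k] * v for k, v in self.usage.items())

m = Meter({"storage_gb_h": 0.01, "cpu_h": 0.05})
m.record("storage_gb_h", 100)   # 100 GB-hours of storage
m.record("cpu_h", 10)           # 10 CPU-hours of processing
total = m.bill()                # 100*0.01 + 10*0.05 = 1.50
```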
makes this facility available, and customers can share the same infrastructure, thus
reducing costs and increasing effectiveness.
4. The Cloud Computing Era
The year 1999 saw the advent of the first cloud service, when
Salesforce created a website dedicated to delivering enterprise applications over the
internet. The once fuzzy dream of John McCarthy had come into being: computing
could now be sold like a utility. Although this was a success, it would take some time
until the model became widespread.
In 2002, Amazon launched Amazon Web Services (AWS), considered
the next major development in this field. It offered its customers services such as
storage, computation, and even, to a large degree, human intelligence. Then in 2006,
Amazon launched the Elastic Compute Cloud (EC2). This gave small companies and
individuals the means to run their own computer applications in the cloud.
In 2009, cloud computing reached a defining point when cloud
enterprise applications became browser based. Cloud services became publicly available,
an example being Google Apps. The big names in the industry also joined the cloud
computing bandwagon: Microsoft launched Windows Azure and Windows Azure SQL
Database, and other entrants include HP and Oracle. Moving on from here, the only way
forward is cloud computing. This will realize the dream in which everyone can
access the applications and services they require how, when, and as quickly as they need
them. There is no turning back.
CHALLENGES OF CLOUD COMPUTING
1. Security
The main hurdle to fast adoption of the cloud is customers' security concerns
(Ullah and Xuefeng, 2013). Security issues have played the most important role in
hindering the acceptance of cloud computing. The security issues possible in cloud
computing include availability, integrity, confidentiality, data access, data segregation,
privacy, recovery, accountability, multi-tenancy issues, and so on. Solutions to these
security issues vary, ranging over cryptographic techniques, particularly public key
infrastructure, among others. Understanding users' behavior and its implications for
network traffic is critical to the success of the future mobile TV industry.
As the mobile Internet environment broadens, how to restore peer-to-peer
operation for mobile hosts is gaining further attention. Here we carry out
empirical measurements of BitTorrent users in a commercial WiMAX network. In this
project we examine how handover in the WiMAX network affects BitTorrent performance,
how BitTorrent peers perform in terms of connectivity, stability, and capacity, and
how the BitTorrent protocol behaves depending on client mobility.
We observe that the drawbacks of BitTorrent for mobile users are characterized
by reduced connectivity between peers, short download session times, low
download throughput, insignificant upload contributions, and high signaling overhead.
The success of next-generation mobile communication systems depends
on the ability of service providers to engineer new value-added, multimedia-rich services,
which impose stringent constraints on the underlying delivery/transport architecture. Here
the reliability of real-time services is essential to the viability of any such service
offering.
The sporadic packet loss typical of wireless channels can be addressed using
appropriate techniques such as the widely used packet-level forward error correction (FEC).
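The simplest instance of packet-level FEC is a single XOR parity packet per block, which can recover any one lost packet in the block. The sketch below is a minimal illustration of that idea, not the adaptive scheme discussed in the text:

```python
def xor_parity(packets):
    """Compute one parity packet as the byte-wise XOR of k equal-length
    source packets (the simplest packet-level FEC code)."""
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """Recover a single lost packet (marked None) by XOR-ing the parity
    with every packet that did arrive."""
    missing = bytearray(parity)
    for p in received:
        if p is not None:
            for i, b in enumerate(p):
                missing[i] ^= b
    return bytes(missing)

block = [b"aaaa", b"bbbb", b"cccc"]
par = xor_parity(block)
lost = [b"aaaa", None, b"cccc"]   # one packet lost on the wireless channel
assert recover(lost, par) == b"bbbb"
```

One parity packet per k source packets trades k+1 transmissions for tolerance of a single loss; adaptive FEC varies this redundancy as the channel fluctuates.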
When designing channel-aware media streaming applications, two related and demanding
issues must be tackled: accuracy in characterizing channel fluctuations and
efficiency of application-level adaptation.
The first challenge requires thorough insight into channel fluctuations
and how they manifest at the application level, whereas the second concerns the way
those fluctuations are interpreted and handled by an adaptive FEC mechanism. Finally,
we evaluate the main issues that arise when designing a reliable media
streaming system for wireless networks.
In later-generation wireless networks, Internet service providers are expected to
offer services through several wireless technologies. Hence, mobile computers equipped
with multiple interfaces will be able to maintain simultaneous connections with different
networks and increase their data communication rates by aggregating the bandwidth
available on those networks.
To guarantee quality of service for these applications, this paper proposes a
dynamic QoS negotiation scheme that allows users to dynamically negotiate the service
levels required for their traffic and to obtain them through one or more wireless interfaces.
Such a bandwidth aggregation scheme implies transmitting the data of
a single application via multiple paths with different characteristics, which may result
in out-of-order delivery of data packets to the receiver and introduce additional delays
for packet reordering.
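The receiver-side reordering step can be sketched as follows (an illustrative toy buffer, not the paper's scheme): packets arriving early over a faster path are held until the gap in the sequence is filled, which is exactly the source of the extra delay mentioned above.

```python
class ReorderBuffer:
    """Toy receiver-side reordering for bandwidth aggregation: packets
    arriving out of order over multiple paths are buffered until the next
    expected sequence number is available, then released in order."""

    def __init__(self):
        self.next_seq = 0     # next sequence number owed to the application
        self.pending = {}     # early arrivals, keyed by sequence number

    def receive(self, seq, payload):
        """Accept one packet; return the (possibly empty) list of payloads
        that can now be delivered in order."""
        self.pending[seq] = payload
        delivered = []
        while self.next_seq in self.pending:
            delivered.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return delivered

buf = ReorderBuffer()
held = buf.receive(1, "B")    # arrived early via the faster path: held back
out = buf.receive(0, "A")     # fills the gap: both released in order
```

Until packet 0 arrives, packet 1 sits in the buffer; the reordering delay grows with the difference in path latencies, which is what the proposed negotiation scheme tries to minimize.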
Further, the proposed QoS negotiation system aims to ensure the continuity of
QoS perceived by mobile users while they move between different access
points, as well as fair use of network resources. The performance of the proposed
dynamic QoS negotiation system is investigated and compared against other schemes.
Finally the obtained results demonstrate the outstanding performance of the proposed
scheme as it enhances the scalability of the system and minimizes the reordering delay
and the associated packet loss.
The combination of increased data rates, dedicated multicast/broadcast services,
and the emergence of scalable video coding standards allows mobile operators to offer
multimedia-based services with a high quality of experience to mobile users. H.264
SVC offers three dimensions of scalability: temporal, spatial, and quality.
Here we present a simulation framework to assess the quality of scalable video
streamed over an LTE network using multiple objective quality metrics such
as PSNR, SSIM, blocking, and blurring. The framework integrates an LTE simulator
based on OPNET with quality analysis of SVC/H.264 compressed video,
using the metrics listed above.
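Of the metrics named above, PSNR is the simplest full-reference one: it compares the received frame against the original pixel by pixel. A minimal sketch (frames represented as flat lists of pixel values for illustration):

```python
import math

def psnr(ref, test, max_val=255):
    """Full-reference PSNR between two equal-size frames given as flat
    sequences of pixel values; higher is better, infinite when identical."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)

frame = [100, 120, 140, 160]    # tiny made-up reference "frame"
noisy = [101, 118, 143, 158]    # the same frame after lossy delivery
quality_db = psnr(frame, noisy)
```

Note that PSNR needs the undistorted reference frame at the measurement point, which is exactly why the results below favor no-reference metrics for real-life deployments.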
Thus, we evaluate the performance of scalable video delivery both in a
lossless scenario and in a scenario with packet losses in the LTE network. The
results advocate the use of no-reference evaluation metrics, along with a frame-drop
metric, over full-reference metrics, which cannot be used in real-life deployments. We also
observe that spatial scalability leads to the greatest degradation of image quality
compared with temporal and quality scalability.
Here we investigate the scheduling policy for collaborative execution in
mobile cloud computing. A mobile application is represented as a sequence of
fine-grained tasks forming a linear topology, each of which is executed either on the
mobile device or offloaded onto the cloud side for completion. While meeting a time
limit, the design objective is to minimize the energy consumed by the mobile device.
We formulate this minimum-energy task scheduling problem
as a constrained shortest path problem on a directed acyclic graph, and adapt the
canonical LARAC algorithm to solve it approximately. Numerical
simulation suggests that a one-climb offloading policy is energy efficient for the
Markovian stochastic channel: at most one migration from the mobile device to the
cloud takes place during the collaborative task execution. In addition, compared to
standalone mobile execution and cloud execution, the optimal collaborative execution
strategy can substantially reduce the energy consumed on the mobile device.
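The one-climb structure can be illustrated directly: for a linear chain of tasks, such a schedule runs a prefix locally, offloads one contiguous middle segment to the cloud, and finishes locally. The sketch below exhaustively enumerates these schedules under a deadline; it is not the paper's LARAC implementation, and all cost numbers are made up for illustration:

```python
def best_one_climb(local_e, local_t, cloud_t, tx_e, tx_t, deadline):
    """Minimum-energy one-climb schedule for a linear task chain.
    local_e/local_t: per-task energy/time on the mobile device;
    cloud_t: per-task time on the cloud (device energy ~0 while waiting);
    tx_e/tx_t: one-off migration energy/time (upload + download combined).
    Returns (energy, i, j) meaning tasks i..j-1 are offloaded, or None."""
    n = len(local_e)
    best = None
    for i in range(n + 1):
        for j in range(i, n + 1):           # offload tasks i..j-1 (maybe none)
            e = sum(local_e[:i]) + sum(local_e[j:])
            t = sum(local_t[:i]) + sum(local_t[j:])
            if i < j:                       # pay the migration cost once
                e += tx_e
                t += tx_t + sum(cloud_t[i:j])
            if t <= deadline and (best is None or e < best[0]):
                best = (e, i, j)
    return best

plan = best_one_climb(
    local_e=[3, 5, 4], local_t=[2, 4, 3],   # device is slow and costly
    cloud_t=[1, 1, 1], tx_e=2, tx_t=2,      # cloud is fast, migration cheap
    deadline=8)
```

With these invented costs, purely local execution misses the 8-unit deadline, so the cheapest feasible schedule offloads the whole chain; tightening or loosening the deadline shifts the offloaded segment, which is the trade-off the LARAC-based formulation optimizes on the DAG.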