
Zoran Pantic & M. Ali Babar
IT University of Copenhagen, Denmark
Nordic Symposium on Cloud Computing & Internet Technologies (NordiCloud)
August 21st & 22nd, 2012, Helsinki, Finland
Building Private Cloud with Open Source
Software for Scientific Environment
Zoran Pantic
Infrastructure Architect & Systems Specialist
Corporate IT @ University of Copenhagen
E-mail: zopa@itu.dk & zoran@pantic.dk
Academic profile: http://itu.academia.edu/ZoranPantic
Blog: http://zoranpantic.wordpress.com
LinkedIn: http://www.linkedin.com/in/zoranpantic
M. Ali Babar
Agenda
Non-technical part:
Why Private Cloud?
Why OSS?
Technical part:
Reflections on diverse IT-infrastructure aspects
OSS Private Cloud solutions:
UEC/Eucalyptus
OpenNebula
OpenStack
Conclusion
Questions? (also during the session)
Tutorial Goals
Understand the role and use of private cloud in specific environments, e.g., scientific & academic
Gain knowledge of the technologies for setting up a private cloud with open source software
Learn about the process for designing & implementing a private cloud solution
Appreciate the socio-technical & technical challenges involved and some potential strategies
Cloud Computing
“Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”
(A definition by the US National Institute of Standards and Technology, NIST)
Main Aspects of a Cloud System
Reproduced from Figure 1 of The Future of Cloud Computing: Opportunities for European Cloud Computing beyond 2010.
Commercial Efforts & R&Ds
From The Future of Cloud Computing: Opportunities for European Cloud Computing beyond 2010.
Service & Deployment Models
Infrastructure as a Service (IaaS): Amazon EC2, Eucalyptus, IBM Computing on Demand (CoD), VMware vSphere
Platform as a Service (PaaS): Google App Engine, Microsoft Azure, Force.com, Yahoo Open Strategy
Software as a Service (SaaS): Google Apps, Zoho, Salesforce CRM, Microsoft Cloud Services
Deployment models: Public Clouds, Private Clouds, Community Clouds, Virtual Private Clouds, Hybrid Clouds
(Figure: example services arranged by service model and deployment model)
Private Cloud
Private cloud has different meanings to different people
But basically, it's a cloud infrastructure set up, managed, and upgraded by an organization or community for their own use
Commercial vendors are entering this domain quite fast, and Open Source providers are also there: Eucalyptus, OpenNebula, OpenStack
Steps for Setting Up Private Cloud
Adopt a machine virtualization strategy
Profile application compute, memory, and storage usage and performance requirements
Design a virtual machine development consultancy
Accounting and recharge policies adapted to self-service
Architect a deployment and deploy a private cloud
Source: Five Steps to Enterprise Cloud Computing, a white paper of Eucalyptus Systems, Inc.
)h" Private Cloud* +,-
Us%ally, the b%#get is lo., an# the pro>ect sho%l# start as soon as
possible
5ro.ing strongly+
The nee# for processing large #ata vol%es
The nee# to conserve po.er by optii,ing server %tili,ation
"on*stan#ar# highly*a#aptable sol%tion nee#e#
Analy,ing large ao%nts of #ata to get res%lts
Many #ifferent research pro>ects in one organi,ation
)h" Private Cloud* -,-
%rivate clou!s-
?ave higher 32I than tra#itional infrastr%ct%re
Are ore c%stoi,able
Can =%ic!ly respon# to changes in #ean#s
$%pport rapi# #eployent
?ave increase# sec%rity
8oc%s on an organi,ation<s core b%siness
?ave effort re=%ire# for r%nning the ten#ing #o.n.ar#
)h" OSS*
In general+
/o.ering the costs !i"e" no licensing headaches%# @ the b%#gets
aren<t gro.ing * b%t the #ean#s are
Interchangeability & portability !general, avoiding vendor lock-in#
$ocio*organi,ational reasons
)nergy efficience
)Aaples+ U)C-)%calypt%s, 2pen"eb%la, 2pen$tac!, Boyent
$art2$
Private Cloud Challenges
Challenges:
Socio-technical
Technical
Socio-technical Challenges
Socio-technical challenges are mostly political and economic:
Existing structures oppose implementation of a private cloud
Weak transparency of who is in charge of systems and economy
Researchers cannot be market cost-effective
Administrators de facto in charge, instead of scientific groups
Tendency of implementing things because they are interesting and “fun”, while maybe there is no need for those systems
Technical Challenges
Private cloud maturity
Problems porting programming code
IT departments should be big enough, with enough expertise
OSS: the community cannot fix all your problems
Implementing Cloud Solutions
Determine the needs and their nature through extensive interaction with all the major stakeholders, e.g., project leaders
Top-down steering of the process
Design and implement a test case
End users also thoroughly test the solution, free of charge!
Make sure that implementation succeeds the first time!
Get a very clear picture of what services are to be offered, who will use them, what they will use them for, and how!
Private Cloud in Scientific Environment
Based on Open Source Software (OSS)
Focus on the logistical and technical challenges, and strategies of setting up a private cloud for a scientific environment
General scenarios:
Local DIY
OSS Private Cloud
Enterprise Private Cloud (with mgmt solution)
Virtual Private Cloud
... or just going Public Cloud
Focus on Scientific Environment
Difference in implementing for “infantry” and “supply troops”
“Infantry”: to support research, scientific computing and High Performance Computing (HPC)
“Supply”: to support daily operational systems and tasks, i.e. joint administration
Bookkeeping, administration, communications (telephony, e-mail, messaging)
“Infantry” (stateless instances) vs. “Supply” (stateful instances)
Scientific Environment, “Infantry” 1/2
Uses non-standard & advanced research instruments
Applicable in research, scientific computing and HPC, i.e.:
Generally, if users need VMs that they administer themselves (root access), it is more appropriate to supply them with machines from a private cloud than to give access to virtual hosts behind a firewall
Organizations like ITU (Denmark): for numerous different projects
Organizations like DCSC (Denmark): 1/3 of the jobs would be runnable on a private cloud
In HPC: only at the low end, for low-memory and low-core-count jobs
Scientific Environment, “Infantry” 2/2
Summarized suggestions:
Have social psychology in mind as an important factor
Consult the professor in charge of money for the project
Implement an open source solution: OpenStack, OpenNebula, UEC based on Eucalyptus, Joyent SmartOS, ...
Scientific Environment, “Supply”
Needs a stable and supported solution
Summarized suggestions:
Have social psychology in mind as an important factor
Consult the system owner in charge of money for the project
Implement a proprietary solution from a reputable provider:
Microsoft Hyper-V, VMware Virtual Infrastructure, ...
Sign a support agreement & negotiate a good SLA
CPU and Memory
Processor architecture:
Intel & AMD
Definitely 64-bit, for performance reasons
Multiprocessor, multicore, hyper-threading
Virtualization-extension-enabled hardware is a must:
Intel VT-x or AMD-V virtualization extensions (check by viewing /proc/cpuinfo)
Host's RAM: minimum 4 GB
Enable KSM (Kernel SamePage Merging)
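The /proc/cpuinfo check mentioned above can be scripted; a minimal sketch, assuming Linux-style cpuinfo text (the function name and the sample string are ours, for illustration):

```python
# Check /proc/cpuinfo-style text for hardware virtualization flags:
# "vmx" indicates Intel VT-x, "svm" indicates AMD-V.
def virtualization_support(cpuinfo_text):
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return None

# On a real host, read the file:
#   with open("/proc/cpuinfo") as f: print(virtualization_support(f.read()))
sample = "processor : 0\nflags : fpu vme msr vmx sse2\n"
print(virtualization_support(sample))  # Intel VT-x
```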
Storage Options
Disk interfaces: parallel & serial
Contemporary disk types:
SATA
SCSI
SAS
SSD
Hybrid drives
Storage T"pe! +,3
/ocal vs 3eote storage+
/ocal storage+
#is!s in the host itselv
DA$ @ attache# #irectly to the host
3eote storage+
"A$ * 8ile /evel $torage &"8$, $MB-CI8$'
Also #istrib%te# file systes &see i.e. Moose8$ an# 5l%ster8$'
$A" * Bloc! /evel $torage &8C-8Co), i$C$I'
2$$-free $A"-"A$ appliance eAaple+ "app*it , base# on
Z8$-"eAenta
Storage T"pe! -,3
$torage levels+
Bloc! @ bits store# se=%entially in a bloc! of fiAe# si,eL rea# & .rite ra.
#ata bloc!sL for file systes or DBM$s
8ile @ aintains physical location of files, apping the to bloc!s &i.e.
ino#e n%ber - pointers'
2b>ect @ #ata organi,e# in fleAible si,e# containers, ob>ects, consisting
of #ata &se=%ence of bytes' an# eta#ata &eAtensible attrib%tes
#escribing the ob>ect'L for static #ataL #istrib%te# storage sprea# accross
%ltiple #is! #rives an# serversL no Mcentral brain6 or Master point6 @
scalable, re#%n#ant, #%rable
Partitioning in /in%A %sing /ogical Eol%e Manger &/EM'
Physical Eol%e &PE'
/ogical Eol%e &/E'+ %ltiple PEs a!e one /E
Eol%e 5ro%p &E5'+ %ltiple /Es a!e one E5
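The PV / VG / LV chain can be sketched with the standard LVM tools; an illustrative sequence only (device names, sizes and the VG/LV names are placeholders; requires root):

```shell
pvcreate /dev/sdb /dev/sdc            # initialize two disks as Physical Volumes
vgcreate vg_cloud /dev/sdb /dev/sdc   # group the PVs into one Volume Group
lvcreate -L 100G -n lv_images vg_cloud  # carve a 100 GB Logical Volume out of the VG
mkfs.ext4 /dev/vg_cloud/lv_images     # put a file system on the LV
```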
Storage T"pe! 3,3
Dis! config%ration+
In#epen#ent #is!s &BBoD'
3AID+
M%ltiple #rives coprising one logical %nit
Can be base# on soft.are, har#.are or fir.are
$oe of the 3AID levels+
; @ bloc!*level striping .itho%t parity or irroring
9 @ irroring .itho%t parity or striping
N @ bloc!*level striping .ith #istrib%te# parity
H @ bloc!*level striping .ith #o%ble #istrib%te# parity
;9 &;O9' @ stripe# sets in a irrore# set
9; &9O;' @ irrore# sets in a stripe# set
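Usable capacity for the RAID levels listed above follows from simple arithmetic; a sketch assuming n identical disks, measured in units of one disk's capacity (the function is ours, not from any library):

```python
# Usable capacity (in units of one disk) for an array of n identical disks.
def usable_disks(level, n):
    if level == 0:      # striping only: all capacity usable
        return n
    if level == 1:      # n-way mirror: one disk's worth survives
        return 1
    if level == 5:      # one disk's worth of distributed parity
        return n - 1
    if level == 6:      # two disks' worth of distributed parity
        return n - 2
    if level == 10:     # mirrored pairs, then striped (n even)
        return n // 2
    raise ValueError("unsupported RAID level")

print(usable_disks(5, 4))   # 4 disks in RAID 5 -> 3 disks of usable space
print(usable_disks(6, 6))   # 6 disks in RAID 6 -> 4 disks of usable space
```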
Virtualization 1/2
Different types of virtualization:
Hardware
Storage
Network
Memory
Application
Desktop
...
Virtualization 2/2
Hardware virtualization:
Full virtualization: guest unmodified, unaware
HW-assisted virtualization: the hardware architecture supports virtualization
Partial virtualization: partially simulates the physical hardware of a machine; i.e. each guest has an independent address space
Paravirtualization: guest is aware that it's not “alone”; guest modification required (drivers)
OS-level virtualization (container-based virtualization): physical server virtualized at the OS level, enabling multiple isolated and secure virtualized servers to run on a single physical server; guest and host share the same OS
T"pe! of 6"pervi!or!
Types of hypervisors+
"ative - bare etal @ r%n #irectly on the host<s har#.are
?oste# @ r%n .ithin 2$
Ma>or virt%ali,ation ven#ors & technologies %se# in hypervisor layer+
http+--....clo%#cop%teinfo.co-virt%ali,ation &so%rce+ Pa%l Morse'
To#ays ost %se# hypervisors+
KEM-4)MU
Jen
Eirt%alBoA
EM.are
?yper*E
$art2$
KVM
KVM, “Kernel-based Virtual Machine”, http://www.linux-kvm.org
Linux kernel module that allows a user space program to utilize the hardware virtualization features of various processors (Intel and AMD processors: x86 and x86_64, PPC 440, PPC 970, S/390)
KVM is included in the kernel; a more recent kernel gives updated KVM features, but is less tested
Virtualization solution that can run multiple virtual machines running unmodified Linux or Windows guests
Supports .raw, .qcow2 and .vmdk disk image formats
Available as an integrated part of every Linux distribution since kernel 2.6.20
Components:
loadable kernel module “kvm.ko” that provides the core virtualization
processor-specific module “kvm-intel.ko” or “kvm-amd.ko”
KVM is only an interface that is called through a special system file, and requires QEMU to be a full virtualization environment
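Loading the modules and booting a guest through QEMU can be sketched as below; the image path, memory and CPU sizes are placeholders, not values from the slides (use kvm-amd instead of kvm-intel on AMD hardware):

```shell
# Load the core KVM module plus the processor-specific one.
modprobe kvm
modprobe kvm-intel

# Boot a guest from a qcow2 image, with KVM acceleration enabled.
qemu-system-x86_64 -enable-kvm -m 1024 -smp 2 \
    -drive file=/var/lib/images/guest.qcow2,format=qcow2 \
    -net nic -net user
```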
QEMU
QEMU, “Quick Emulator”, http://wiki.qemu.org
A generic open source machine emulator and virtualizer:
Emulator: runs OSes made for one machine on a different machine
Virtualizer: executes guest code directly on the host CPU
Executed under the Xen hypervisor, or using the KVM kernel module
Xen
Open Source virtualization technology, http://www.xen.org
Started as the XenoServer project at Cambridge University
Used as a standalone hypervisor, or as the hypervisor component in other cloud infrastructure frameworks
Supports .raw and .vmdk disk image formats
VirtualBox
Oracle VirtualBox, https://www.virtualbox.org
Free software released under the GNU GPL
An x86 virtualization platform, created by Innotek, purchased by Sun, and now owned by Oracle
Installed on a host OS as an application
VMware
VMware, http://www.vmware.com
Different hypervisors:
ESX: mainline product; commercial license
ESXi: mainline product, free (not OSS); boot from flash cards supported
Server: free (not OSS), installs on Linux & Windows
Workstation/Player: virtualization on user PCs
Supports the .vmdk disk image format
6"per.4
Microsoft ?yper*E http+--....icrosoft.co-en*%s-server*
clo%#-.in#o.s*server-hyper*v.aspA
3elease# in :;;P, ne. :;9: release eApecte# in "oveber
Eirt%ali,ation platfor that is integral part of 0in#o.s $erver
2nly for APH*HI
Can boot fro flash car# on servers otherboar#
Eariants+
$tan#*alone pro#%ct, free, liite# to coan# line interface
As ?yper*E role insi#e 0in#o.s $erver
$%pports .vh# #is! iage forat
SmartOS
Joyent SmartOS, http://smartos.org
Free; went Open Source in August 2011; descends from OpenSolaris / Illumos
Hypervisor powering Joyent's SmartDataCenter; can run private, public and hybrid clouds
Enables HW-level and OS-level virtualization in a single OS
Features: KVM, Zones, DTrace, ZFS
Networking Services 1/3
Providing basic network services (DNS, GW, NAT, ...) is a good idea
Physical & virtual networks
Physical network:
Implementing a private cloud using 2 or 3 networks: WAN, cloud public & cloud private
Firewall: OSS-based pfSense, to make the whole environment independent of the network infrastructure / environment where it will be “plugged in”
Networking Services 2/3
Virtual networks (i.e. Nicira, Xsigo):
Independence from network HW
Reproduction of the physical network
Operating model of computing virtualization
Compatibility with different hypervisors
Isolation between virtual and physical networks, and the control layer
Cloud-like scaling & performance
Programmatic provisioning & control
Networking Services 3/3
Network virtualization (example: Nicira)
&edundanc"
A%toatic-an%al failover-failbac!
Cl%sters+ active-active, active-passive &=%or%'
Private Clo%#
$oe ?A feat%res, b%t local to every provi#er
0or! in progress+Corosync O Pacea!er
MCorosync6 @ 2pen $o%rce cl%ster sol%tion
MPacea!er6 @ 2pen $o%rce ?A cl%ster reso%rce anager
Private Cloud Offerings
List of OSS Private Cloud offerings (source: Paul Morse):
http://www.cloudcomputeinfo.com/private-clouds
Covered:
Eucalyptus (Ubuntu Enterprise Cloud, UEC)
OpenNebula
OpenStack
Eucal"ptu!
0as b%n#le# .ith Ub%nt% &U)C'L no. Monly6 s%pporte#
&Ub%nt% is b%n#ling 2pen$tac! fro 99.9;'
U)C-)%calypt%s is an on*preise private clo%# 2$$ base#
platfor, sponsore# by )%calypt%s $ystes
$tarte# as research pro>ect in :;;S ( UC$B
/in%A base# @ 3?)/, Cent2$, Ub%nt%
$%pport for EM.are
8or scalable private an# hybri# clo%#s
?ybri# clo%#s achieve# by API copatibility .ith Aa,on<s
)C:, $F, an# IAM services
"e. feat%re since U)C+ )%calypt%s ?A
All fig%res ta!en fro http+--....e%calypt%s.co
Requirements
All components must be on physical machines (no VMs!)
Processor: Intel or AMD with 2 cores of 2 GHz
Min 4 GB RAM
Storage: min 30 GB for each machine; 100-250 GB and more recommended for SC & NC
Network: min 1 Gbps NICs, bridges configured on NCs
Linux: if Ubuntu, choose an LTS (Long Term Support) version
Hypervisors (Xen, KVM, VMware):
RHEL & CentOS must have Xen
Ubuntu must have KVM
VMware
SSH connectivity between machines
Components
Designed as a distributed system with a set of 5 (6) elements:
Cloud Controller (CLC)
Walrus Storage Controller (WS3)
Cluster Controller (CC)
Storage Controller (SC)
Node Controller (NC)
VMware Broker (Broker or VB), optional
Architectural =a"er!
Three levels+
9. Clo%# level
Clo%# Controller &C/C'
0alr%s $torage Controller
&0$F'
:. Cl%ster level
Cl%ster Controller &CC'
$torage Controller &$C'
EM.are Bro!er &Bro!er or EB'
F. Cop%ting level
"o#e Controller &"C'
Cloud Controller (CLC)
Entry point to the Eucalyptus cloud:
web interfaces for administering the infrastructure
web services interface (EC2/S3 compliant) for end users / client tools
Frontend for managing the entire UEC infrastructure
Gathers info on usage and availability of the resources in the cloud
Arbitrates the available resources, dispatching the load to the clusters
Walrus Storage Controller (WS3)
Equivalent to Amazon's S3
Bucket-based storage system with a put/get storage model
WS3 stores the machine images and snapshots
Persistent simple storage service, storing and serving files
Cluster Controller (CC)
Entry point to a cluster
Manages NCs and the instances running on them
Controls the virtual network available to the instances
Collects information on NCs, reporting it to the CLC
One or several per cloud
Storage Controller (SC)
Allows creation of block storage similar to Amazon's Elastic Block Storage (EBS)
Provides the persistent storage for instances at the cluster level, in the form of block-level storage volumes
Supports creation of storage volumes, attaching, detaching, and creation of snapshots
Works with storage volumes that can be attached to a VM or used as a raw block device (no sharing though)
Works with different storage systems (local, SAN, NAS, DAS)
VMware Broker (Broker or VB)
Optional component for Eucalyptus subscribers
Enables deploying VMs on VMware infrastructure
Responsible for arbitrating interactions between the CC and ESX/ESXi hypervisors
Co-located with the CC
Node Controller (NC)
Compute node (the “work horse”); runs and controls the instances
Supported hypervisors:
KVM (preferred, open source version)
Xen (open source version)
VMware (ESX/ESXi, for subscribers)
Communicates with both the OS and the hypervisor running on the node, and with the Cluster Controller
Gathers data about physical resource availability on the node and its utilization, and data about instances running on that node, reporting it to the CC
One or several per cluster
Plan Installation
Integration with LDAP or AD
Support for remote storage (SAN/NAS; check supported devices)
Choosing from “installing the NC on one server and all others on another” to “each component on a separate server”
Trade-off between simplicity and performance & HA
Installation
Using the Ubuntu+Eucalyptus bundled installation (not available in new versions of Ubuntu; since version 11.10 Ubuntu includes OpenStack instead)
Manually:
Install the OS
Verify the network (connectivity, FW, VLAN, DNS...)
Install the hypervisor
Configure bridges, NTP and MTA
Install Eucalyptus
Configure Eucalyptus (network modes, hypervisors, runtime environment)
Eventually configure HA
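The bridge configuration step can be illustrated with an /etc/network/interfaces fragment, as used by Ubuntu releases of that era; the interface names and addresses are placeholders:

```shell
# /etc/network/interfaces fragment: br0 bridges the physical NIC eth0,
# so that VM interfaces can be attached to the bridge.
auto br0
iface br0 inet static
    address 192.168.10.2
    netmask 255.255.255.0
    gateway 192.168.10.1
    bridge_ports eth0
    bridge_stp off
```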
Scale-out Possibilities
2 physical servers:
Server 1: CLC/WS3/CC/SC
Server 2: NC
3 physical servers:
Server 1: CLC/WS3
Server 2: CC/SC
Server 3: NC
4 physical servers:
Server 1: CLC
Server 2: WS3
Server 3: CC/SC
Server 4: NC
5 physical servers:
Server 1: CLC/WS3
Server 2: CC1/SC1
Server 3: NC1
Server 4: CC2/SC2
Server 5: NC2
Scaling Out
(Figure: one cloud spanning three clusters, each cluster containing multiple NCs)
Networking
Networking modes offering different levels of security and flexibility:
Managed
Managed No VLAN
System
Static
6igh Availabilit"
3e#%n#ancy * )%calypt%s ?A+
By config%ring ?A, priary an# secon#ary clo%# an# cl%ster
coponents are intro#%ce#
?ot*s.appable coponents+ C/C, 0alr%s, CC, $C, an# EB
M%st have F "ICs if fearing net.or! har#.are fail%re
8or ?A $Cs, s%pporte# $A"s nee#e#
"Cs are not re#%n#ant
)Aternally accessible coponents &clo%# level' %st have D"$
3o%n#*3obin s%pport
Arbitrator service %ses ICMP essages to test reachability
If all arbitrators fails to reach soe coponent, failover is initiate#
WebUI
UI using HybridFox
OpenNebula
An Open Source project aiming at implementing the industry standard for building and managing virtualized data centres and cloud infrastructure (IaaS)
Sponsors:
EU through various programs (via DSA, RESERVOIR, 4CaaSt, StratusLab, BonFIRE)
National grants
C12G Labs
Microsoft
All figures taken from http://opennebula.org
6i!tor"
Characteristics 1/3
Doesn't have specific infrastructure requirements, making it easy to fit into the existing environment
Try it on your laptop! Does not require any special hardware or software configuration (a single server + distro of your choice)
Supports implementations as Private, Hybrid (with both Bursting and Federation) and Public Cloud
Provides a Storage system (storing disk images in datastores; images can be OS installations, or data blocks), Template Repository (registering VM definitions), Virtual Networking & Management (CLI & Sunstone GUI; features live and cold migration, stop, resume, cancel)
Characteristics 2/3
Has great modularity, which eases the integration with other solutions
Implemented on a plugin model, making it easy to customize different aspects (virtualization, storage, authentication & authorization, ...)
Any action is performed by a bash script
Doesn't implement a “default” hypervisor
The core of OpenNebula is written in C++, making it robust and scalable
Monitoring: configurations of VMs and all monitoring information are stored in a (SQL) database
Characteristics 3/3
Uses common open industrial standards, i.e. the Amazon EC2 API and the Open Cloud Computing Interface (OCCI)
OpenNebula's native cloud API:
available as Java, Ruby, and XML-RPC APIs
gives access to all the functions
enables integration of own procedures
Security at a high level: host communication using SSH (RSA) and SSL
Quality: relies on the Community and its own QA
VNC sessions to running VMs supported
Main components
Main features (v3.6)
User Security & Multitenancy using Group Management
Virtual Data Centers
Control & Monitoring of Physical & Virtual Infrastructure
Supports multiple hypervisors, data stores, network integrations, datacenter monitoring (Ganglia)
Distributed Resource Optimization
High Availability
Hybrid Cloud & Bursting
Self-service provisioning portal
Internal Architecture 1/4
The three layers of the internal architecture:
Internal Architecture 2/4
Drivers communicate directly with the OS:
Transfer driver: manages the disk images on the storage system, which could be NFS or iSCSI, or copying using SSH
Virtual Machine driver: specific to the hypervisor implemented; manages the VMs running on the hosts
Information driver: specific to the hypervisor implemented; shows the current status of hosts and VMs
Internal Architecture 3/4
A set of components to control and monitor VMs, VNs, storage & hosts:
Request Manager: handles client requests
Virtual Machine Manager: manages & monitors VMs
Virtual Network Manager: manages virtual networks
Host Manager: manages & monitors physical resources
Database: persistent storage (state)
Internal Architecture 4/4
CLI: manual manipulation of the virtual infrastructure
Scheduler: invokes actions on VMs (using the XML-RPC interface)
Other: 3rd party tools (using the XML-RPC interface or the OpenNebula Cloud API)
OpenNebula & hypervisors
Xen
KVM/QEMU
VMware
OpenNebula & hardware
Processor requirement: CPU with virtualization support
Memory:
Host: minimum 4 GB
Guest: 256 MB for the smallest instance
Storage based on RAID: local disk for PoC, SAN for production systems
Network: gigabit network card(s), eventually bonding several cards together (performance & redundancy)
OpenNebula & system components
Frontend
Hosts
Image Repository
Physical network
OpenNebula & networking
The Service Network is recommended to be a dedicated network
A VM's network interface is connected to a bridge in the host (i.e. a host with two NICs, public and private, should have two bridges)
Create bridges with the same name in all the hosts
Drivers that may be associated with each host:
Dummy
FW
802.1Q
ebtables
Open vSwitch
VMware
Driver support per hypervisor:
Hypervisor | FW | Open vSwitch | 802.1Q | ebtables | VMware
KVM        | Yes | Yes         | Yes    | Yes      | No
Xen        | Yes | Yes         | Yes    | Yes      | No
VMware     | No  | No          | No     | No       | Yes
Installation 1/7
Installation steps:
Planning and preparing the installation
Installing the OS
Installing the OpenNebula software
Configuring the OpenNebula components
Installation 2/7
Planning & preparing: OpenNebula is a simple setup consisting of front end(s) and hosts (cluster nodes).
Basic components:
Front end
Hosts
Datastores
Service Network
VM networks
Installation 3/7
Storage types: shared & non-shared
Non-shared storage:
Simple to configure
Initial start of a VM will be slower, as the image is copied to the host
Shared storage:
Any host has access to the image repository
Any operation on a VM goes quicker because there is direct access to the images; no copying needed
In smaller environments or PoCs, implemented on the front end
In bigger environments, implemented on NAS/SAN
Installation 4/7
OS installation:
Choose a Linux distribution (i.e. Ubuntu)
Choose installation media: .iso or network
Use default installation steps, except possibly for partitioning
Partitioning:
If HW RAID exists, it will appear as a single disk; if SW RAID should be configured, it can be done after creating partitions
Partitions for system, user and swap files
Default user creation (oneadmin)
The same account and group are needed on both the front end & hosts
All the accounts need the same UID and GID (user & group IDs)
Installation 5/7
Front end:
Install the OpenNebula software
Requirements:
Needs access to storage (direct or via network)
Needs access to each host
SSH to hosts using SSH keys (without passwords, auto-add to known hosts)
Ruby (>= v1.8.7)
Hosts:
No OpenNebula software needed
Different hypervisors on different distros inside a cluster are possible
Requirements:
Hypervisor
SSH server
Ruby (>= v1.8.7)
Host should be registered in OpenNebula (onehost)
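Passwordless SSH from the front end to the hosts can be set up along these lines (run as the oneadmin user; the host name is a placeholder):

```shell
# Generate a key pair without a passphrase, then push the public key to a host.
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id oneadmin@host01

# Avoid interactive host-key prompts when new hosts are contacted.
cat >> ~/.ssh/config <<'EOF'
Host *
    StrictHostKeyChecking no
EOF
```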
Installation 6/7
Configuring the OpenNebula components:
Hypervisor: KVM by default (and easiest), but other drivers can be selected / modified
Host monitoring
Storage: shared filesystem used by default; can be changed
Networking
Users & Groups (admins, regular, public & service users; integration with LDAP infrastructure possible)
Sunstone (Web GUI with the same functionality as the CLI)
Accounting & Statistics (info on usage, accounting, graphs)
Zones (oZones server, managing Zones and VDCs)
Hybrid clouds (for peak resource usage)
Public clouds (using public interfaces, EC2 Query and OCCI)
Installation 7/7
Management tasks after the installation:
Check if daemons are running
Check passwordless inter-host connectivity
Check / enable KSM
Managing hosts:
Registering (adding a host to OpenNebula)
Deleting (removing a host, i.e. dismissing it)
Enabling/disabling (no monitoring nor launch of new instances)
6"brid cloud
A0$ )C: or copatibile
Public cloud
Giving access to the “outside world” using:
EC2 Query interface using the Amazon EC2 Query API
Open Cloud Computing Interface (OCCI)
Centralized management using oZones
Zones: several physical hosts with the same or different hypervisors, controlled by one front end
VDCs (Virtual Data Centers): several hosts from the same zone, logically grouped
&edundanc"
3e#%n#ant fronten#s, b%t no a%toatis
Use separate My$4/ bac!en# &tho%gh oZones c%rrently
s%ppors only $4/lite'
$%nstone can be #eploye# on a separate achine &not
necessarily on front en#'
OpenStack 1/4
IaaS platform for building cloud solutions using any of the deployment models
Open Source, released under the Apache license
Co-founded by NASA and Rackspace in 2010 as a joint open source project, with NASA delivering cloud compute code (“Nebula”), and Rackspace delivering cloud object storage (“Cloud Files”)
First public release, “Austin”, in 2010
Backed by i.e. HP, Cisco, IBM, RedHat, Dell, Citrix, Canonical, ...
All figures taken from http://www.openstack.org
OpenStack 2/4
Has considerable take-off in use ...
... though NASA reported moving a part of its infrastructure to Amazon, saving $1 million/yr
(http://www.wired.com/wiredenterprise/2012/06/nasa-web-services-openstack/)
Some contributors left NASA for the private sector (Nebula, Piston Cloud Computing, Rackspace, ...)
Active community:
http://forums.openstack.org
http://wiki.openstack.org
http://docs.openstack.org
OpenStack 3/4
Supported distros: Ubuntu, Debian, RHEL, CentOS, Fedora, SUSE, Piston Enterprise OpenStack, SwiftStack, Cloudscaling & StackOps
Releases: Austin (2010), Bexar, Cactus, Diablo (2011), Essex (2012, current stable), Folsom (under development)
Hypervisors: KVM, Xen, ESXi
OS-level virtualization also supported, i.e. LXC
Networking modes: Flat (bridging), VLAN (vlan-switch)
Trying it (one or multiple servers):
on a free “sandbox” hosted environment (trystack.org), or
locally using a documented script (devstack.org)
OpenStack 4/4
Written in Python
Consists of: Compute, Networking, Storage, Shared Services
Managed through a dashboard
Implements on standard hardware; supported on ARM
Compute
Provides on-demand computing resources by provisioning VMs
Access through APIs and web GUIs
Scales horizontally (scale-out)
Some features:
Manage CPU, memory, disk, network
Distributed and asynchronous architecture
Live VM management
Floating IPs
Security groups & RBAC (Role Based Access Control)
API with rate limiting and authentication
Resource utilization: allocating, tracking, limiting
VM image management & caching
Storage
Supports both Object Storage and Block Storage:
Object Storage: distributed, API-accessible, scale-out storage used by applications, for backup, archiving and data retention (static data)
Block Storage: enables block storage to be used by VMs; supports integration with enterprise storage solutions (i.e. NetApp, Nexenta, ...)
Some features:
Vertical and horizontal scalability
Huge & flat namespace
Built-in replication
RAID not required
Snapshot & Backup API
Networking
Managing networks and IP addresses
Pluggable, scalable and API-driven system
Flat networks & VLANs
Static IPs, DHCP & Floating IPs
Shared services
Dashboard:
GUI for admins and users, brandable
Pluggable / 3rd party: billing, monitoring, additional management
Identity Service:
Central directory of users mapped to the services they can access
Queryable list of all of the services deployed
Image Service:
Provides discovery, registration and delivery services for disk and server images
Stores images, snapshots, templates in OpenStack Object Storage
Supports the following image formats: raw, AMI, VHD, VDI, qcow2, VMDK, OVF
Service families
Nova: Compute Service
Swift: Object Storage Service
Glance: Image Registry & Delivery Service
Horizon: User Interface Service, “Dashboard”
Keystone: Identity Service
Quantum (in development): Virtual Network Service
Nova
Main part: the cloud computing fabric controller
One of the first projects; descends from NASA's Nebula
Provides an API to dynamically request and configure VMs
Two major components: messaging queue (RabbitMQ) and database, enabling asynchronous orchestration of complex tasks through message passing and information sharing
Components: Database, Web Dashboard, API, Auth Mgr, ObjectStore, Scheduler, Volume Worker, Network Worker, Compute Worker
All of its major components can be run on multiple servers (designed as a distributed application)
Supported virtualization: KVM, Xen, Citrix Xen, ESX/ESXi, Hyper-V, QEMU, Linux User Mode & Containers
Uses an SQL-based central database (in the future, for larger deployments, aggregated multiple data stores are planned)
Swift
Object/blob storage
One of the first projects; descends from Rackspace's Cloud Files
Components: Proxy Server, Ring, Object Server, Container Server, Account Server, Replication, Updaters, Auditors
Can be clustered using Proxy nodes and Storage nodes
Files cannot be accessed through the filesystem, but via an API client
Scalability and redundancy: writing multiple copies of each object to multiple storage servers within separate zones
Zone: an isolated storage server group
Isolation levels: different servers, racks, sections of a datacenter, datacenters
Best practice: write 3 replicas across 5 zones (distributed writes/reads)
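The zone-aware placement idea (each of the 3 replicas landing in a distinct zone out of 5) can be illustrated conceptually; this is our simplification for illustration, not Swift's actual ring algorithm:

```python
# Conceptual sketch of zone-aware replica placement (NOT Swift's ring code):
# deterministically pick 3 distinct zones out of 5 from the object's hash.
import hashlib

ZONES = ["z1", "z2", "z3", "z4", "z5"]
REPLICAS = 3

def place(object_name):
    # Derive a stable starting zone from the object's name.
    digest = hashlib.md5(object_name.encode()).hexdigest()
    start = int(digest, 16) % len(ZONES)
    # Walk the zone list so each replica lands in a different zone.
    return [ZONES[(start + i) % len(ZONES)] for i in range(REPLICAS)]

zones = place("photos/cat.jpg")
print(len(set(zones)))  # 3 distinct zones
```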
Glance
Discovers, registers and retrieves VM images
Uses a RESTful API for querying & retrieval
Supports various back-end storage solutions: a VM image can be stored on simple file systems and object storage systems (Swift)
Components: Glance API server, Registry Server, Store Adapter
Supported disk formats: raw, VHD, VMDK, qcow2, VDI, ISO, AMI, ARI, AKI
Supported container formats: OVF, AMI, ARI, AKI
Horizon
7e"!tone
Clo%# i#entity service
provi#es I#entity, To!en, Catalog an# Policy services
ipleents 2pen$tac! I#entity API
Quantum
Virtual network service (“Networking as a Service”)
Still under development; to be released with the “Folsom” release on 27th September 2012
Provides an API to dynamically request and configure virtual networks
The Quantum API supports extensions providing advanced networking (i.e. monitoring, QoS, ACLs, ...)
Plugins for Open vSwitch, Cisco, Linux Bridge, Nicira NVP, Ryu OpenFlow, NEC OpenFlow, MidoNet
Advanced setup
&edundanc"
Coing .ith M8olso6
MCorosync6 @ open so%rce cl%ster
?ave %ltiple $.ift an# "ova servers
Clo%# controller @ single point of fail%re &nova*api, nova*
net.or!'+
3%n %ltiple instances on %ltiple hosts &state is save# in DB'
Use &--multi host config%ration in "ova'
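In Essex-era Nova this was a flag in the nova.conf configuration file; a fragment sketch (the flag syntax varied between releases, so check the release notes for your version):

```shell
# /etc/nova/nova.conf fragment: run nova-network on every compute host,
# removing the cloud controller as a single point of failure for networking.
--multi_host=True
```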
Recommendations
Although still at an early stage (easier to install, but still hard to manage and maintain for a regular admin, and with a steep learning curve for admins & users), implementation is suggested, at an affordable, smaller scale
Implement on current/modern hardware
Keep the knowledge updated
Keep the software platform and hardware updated if possible
Monitor & analyze:
costs, available features and complexity, compared to budget, needs and internal resources available
Assess the implementation possibilities based on the analyses
Sources of Further Material 1/5
http://www.openstack.org/
http://opennebula.org/
http://www.ubuntu.com/
http://www.eucalyptus.com
http://www.napp-it.org/index_en.html
http://www.cloudcomputeinfo.com/virtualization
http://www.cloudcomputeinfo.com/private-clouds
http://www.linux-kvm.org
http://wiki.qemu.org
http://www.xen.org
https://www.virtualbox.org
http://www.vmware.com
http://www.microsoft.com/en-us/server-cloud/windows-server/hyper-v.aspx
http://smartos.org
Sources of Further Material 2/5
Armbrust, M., et al., 2010, A View of Cloud Computing, Communications of the ACM, 53(4), pp. 50-58.
Zhang, Q., Cheng, L., Boutaba, R., Cloud Computing: state-of-the-art and research challenges, Journal of Internet Services and Applications, 2010, 1:7-18.
The Future of Cloud Computing: Opportunities for European Cloud Computing Beyond 2010.
Chapman et al., 2010, Software architecture definition for on-demand cloud provisioning, in Proceedings of the 19th ACM International Symposium on High Performance Distributed Computing (HPDC '10).
Ali Babar, M.; Chauhan, M.A., A tale of migration to cloud computing for sharing experiences and observations, SECLOUD '11, ACM.
Sources of Further Material 3/5
http://nicira.com/
http://www.xsigo.com/
http://www.reservoir-fp7.eu/
http://www.c12g.com/
http://dsa-research.org/
http://portal.ucm.es/en/web/en-ucm
http://occi-wg.org/
http://opennebula.org/documentation:rel3.6:ganglia
http://www.nasa.gov/
http://www.rackspace.com/
http://www.nebula.com/
http://www.pistoncloud.com/
Sources of Further Material 4/5
http://www.pistoncloud.com/openstack-cloud-software
http://swiftstack.com/
http://www.cloudscaling.com/
http://www.stackops.com/
http://www.rabbitmq.com/
http://www.corosync.org/
http://www.clusterlabs.org/
“OpenNebula 3 Cloud Computing”, Giovanni Toraldo, Packtpub, May 2012
Eucalyptus Guides, Eucalyptus Systems, Jun 2012
“Deploying OpenStack”, Ken Pepple, O'Reilly, July 2011
OpenStack Manuals, docs.openstack.org, May 2012
Sources of Further Material 5/5
“Ubuntu Enterprise Cloud Architecture”, Technical White Paper, Simon Wardley, Etienne Goyer & Nick Barcet, August 2009
“Building a Private Cloud with Ubuntu Server 10.04 Enterprise Cloud (Eucalyptus)”, OSCON 2010
“Eucalyptus Beginner's Guide”, UEC Edition, 23 Dec 2010, Johnson D, Kiran Murari, Murthy Raju, Suseendran RB, Yogesh Girikumar
“Dell releases Ubuntu-powered cloud servers”, Joab Jackson, IDG News Service, NetworkWorld
Interview at Danish Center for Scientific Computing (DCSC), 30th March 2011
White Paper “Ubuntu - An Introduction to Cloud Computing”
Deployment Guide - Ubuntu Enterprise Cloud on Dell Servers SE
White Paper “Ubuntu Enterprise Cloud Architecture”, Wardley, Goyer, Barcet, August 2009
“Practical Cloud Evaluation from a Nordic eScience User Perspective”, Edlund, Koopmans, November 2011.
Questions?
Thank you!
Thank you for your attention!
Still having questions?
maba@itu.dk
zopa@itu.dk
