
EXECUTIVE GUIDE


Next-generation data centers: Opportunities abound

IT transformation begins at the data center, as enterprises embrace technologies such as virtualization and cloud computing to build adaptable, flexible and highly efficient infrastructures capable of handling today’s business demands.

Sponsored by APC
www.apc.com

Table of Contents
Profile: APC by Schneider Electric

Introduction
Next-generation data centers: The new realities

Trends
Elastic IT resources transform data centers
What IT pros like best about next-generation technology
Thinking outside the storage box

Emerging technologies
One in 10 companies deploying internal clouds
Google, Microsoft spark interest in modular data centers, but benefits may be exaggerated
The big data-center synch

Best practices
Seven tips for succeeding with virtualization
Don’t let the thin-provisioning gotchas getcha
The challenge of managing mixed virtualized Linux, Windows networks

Inside the Data Center
The Google-ization of Bechtel
Why San Diego city workers expect apps up and running in 30 minutes or less


Profile: APC by Schneider Electric


APC by Schneider Electric, a global leader in critical power and cooling services, provides industry-leading product, software and systems for home, office, data center and factory floor applications. Backed by the strength, experience, and wide network of Schneider Electric’s Critical Power & Cooling Services, APC delivers well-planned, flawlessly installed and maintained solutions throughout their lifecycle. Through its unparalleled commitment to innovation, APC delivers pioneering, energy-efficient solutions for critical technology and industrial applications. In 2007, Schneider Electric acquired APC and combined it with MGE UPS Systems to form Schneider Electric’s Critical Power & Cooling Services Business Unit, which recorded 2007 revenue of $3.5 billion (€2.4 billion) and employed 12,000 people worldwide. APC solutions include uninterruptible power supplies (UPS), precision cooling units, racks, physical security, and design and management software, including APC’s InfraStruXure® architecture, the industry’s most comprehensive integrated power, cooling and management solution. Schneider Electric, with 120,000 employees and operations in 102 countries, had 2007 annual sales of $25 billion (€17.3 billion). For more information on APC, please visit www.apc.com. All trademarks are the property of their owners.


Introduction
Next-generation data centers: The new realities
By Beth Schultz

Several years ago, Geir Ramleth, CIO of construction giant Bechtel, asked himself this question: If you could build your IT systems and operation from scratch today, would you recreate what you have?

Ramleth certainly isn’t the only one who has contemplated such a question. And the answer always seems to be “no.” Yesterday’s technology can’t handle the agility required by today’s businesses.

The IT transformations often begin at the data center. At Bechtel, for example, Ramleth began the company’s migration to an Internet model of computing by scrapping its existing data centers in favor of three new facilities (see “The Google-ization of Bechtel” in this guide). The next-generation data centers are all about adaptability, flexibility and responsiveness, and virtualization is a building-block technology.

The numbers show how tactical virtualization has become. As discussed in “Seven tips for succeeding with virtualization,” more than 4 million virtual machines will be installed on x86 servers this year, and the number of virtualized desktops could grow from less than 5 million in 2007 to 660 million by 2011, according to Gartner. More telling, however, is the shift from a tactical to a strategic mindset. The popularity of virtualizing x86 server and desktop resources has many enterprise IT managers reassessing ways to update already virtualized network and storage resources, too.

“Enterprise IT managers are going to have to start thinking virtual first and learn how to make the case for virtualization across IT disciplines,” advises James Staten, principal analyst at Forrester Research.

In the article “What IT pros like best about next-generation technology,” Michael Geis, director of IS operations for Lifestyle Family Fitness, says he has discovered how strategic storage virtualization can be. The fitness chain set out to resolve a performance bottleneck originating in its storage network, and wound up upgrading its data center infrastructure in a way that not only took care of the problem but also drastically changed how it made storage decisions.

Heterogeneous storage virtualization provides much-needed flexibility in the data center. “Five years ago, the day you made the decision to go with EMC or IBM or HP or anyone, you might get a great discount on the first purchase, but you were locked into that platform for three to five years,” Geis says. But now, every time the chain adds storage, its current vendor, IBM, has to work in a competitive situation, he says.

If virtualization is the starting point, an Internet model of computing, a la Bechtel’s vision, increasingly is the end goal. Many experts believe that corporate data centers ultimately will be operated as private clouds, much like the flexible computing networks already operated by Internet giants Amazon and Google, but specifically built and managed internally for an enterprise’s own users.

Besides Bechtel, corporations already building their own private clouds include such notable names as Deutsche Bank, Merrill Lynch and Morgan Stanley, according to The 451 Group. The research firm found in a survey of 1,300 corporate software buyers that about 11% of companies are deploying internal clouds or planning to do so. As discussed in “One in 10 companies deploying internal clouds,” that may not seem like a huge proportion, but it’s a sign that private clouds are real.

As Vivek Kundra, CTO for the District of Columbia government, says, private clouds are “definitely not hype.” Kundra, for one, says he plans to blend IT services provided from the district’s own data center with external cloud platforms such as Google Apps. And Gartner predicts that by 2012 private clouds will account for at least 14% of the infrastructure at Fortune 1000 companies, which will benefit from service-oriented, scalable and elastic IT resources.

Of course, on their way to the cloud-based data center of the future, enterprises will discover, deploy – and possibly discard – myriad technologies.

For example, some might embrace the concept of modularization, the idea of packaging servers and storage in a container with its own cooling system. Microsoft and Google have piqued interest in this approach with their containerized data center schemes. Others might look to integration, focusing on an emerging technology such as Fibre Channel over Ethernet to bring together data and storage on one data center fabric. Still others might turn the notion of the traditional storage-area network on its nose, using Web services that link applications directly to the storage they need.

In the new data center, possibilities abound.

Schultz, formerly Network World’s New Data Center editor, is now a freelance writer. She can be reached at bschultz5824@gmail.com.


Next-generation data centers: Opportunities abound

Section 1: Trends

Elastic IT resources transform data centers
Several IT trends converge as data centers evolve to become more adaptable, Gartner says
By Jon Brodkin, Network World, 12/04/2008

The enterprise data center of the future will be a highly flexible and adaptable organism, responding quickly to changing needs because of technologies like virtualization, a modular building approach, and an operating system that treats distributed resources as a single computing pool.

The move toward flexibility in all data center processes comes after years of building monolithic data centers that react poorly to change.

“For years we spent a lot of money building out these data centers, and the second something changed it was: ‘How are we going to be able to do that?’” says Brad Blake, director of IT at Boston Medical Center. “What we’ve built up is so specifically built for a particular function, if something changes we have no flexibility.”

Rapidly changing business needs and new technologies that require extensive power and cooling are necessitating a makeover of data centers, which represent a significant chunk of an organization’s capital costs, Blake notes.

For example, “when blade servers came out that completely screwed up all of our matrices as far as the power we needed per square foot, and the cooling we needed because these things sucked up so much energy, used so much heat,” he says.

Virtualization of servers, storage, desktops and the network is the key to flexibility in Blake’s mind, because hardware has long been tied too rigidly to specific applications and systems.

But the growing use of virtualization is far from the only trend making data centers more flexible. Gartner Group expects to see today’s blade servers replaced in the next few years with a more flexible type of server that treats memory, processors and I/O cards as shared resources that can be arranged and rearranged as often as necessary.

Instead of relying on vendors to decide what proportion of memory, processing and I/O connections are on each blade, enterprises will be able to buy whatever resources they need in any amount, a far more efficient approach. For example, an IT shop could combine 32 processors and any number of memory modules to create one large server that appears to an operating system as a single, fixed computing unit. This approach also will increase utilization rates by reducing the resources wasted because blade servers aren’t configured optimally for the applications they serve.

Data centers also will become more flexible by building in a modular approach that separates data centers into self-contained pods or zones which each have their own power feeds and cooling. The concept is similar to shipping container-based data centers, but data center zones don’t have to be enclosed. By not treating a data center as a homogenous whole, it is easier to separate equipment into high, medium and low heat densities, and devote expensive cooling only to the areas that really need it.

Additionally, this separation allows zones to be upgraded or repaired without causing other systems to go offline. “Modularization is a good thing. It gives you the ability to refresh continuously and have higher uptime,” Gartner analyst Carl Claunch says.

This approach can involve incremental build-outs, building a few zones and leaving room for more when needed. But you’re not wasting power because each zone has its own power feed and cooling

supply, and empty space is just that. This is in contrast to long-used design principles, in which power is supplied to every square foot of a data center even if it’s not yet needed.

“Historical design principles for data centers were simple – figure out what you have now, estimate growth for 15 to 20 years, then build to suit,” Gartner states. “Newly built data centers often opened with huge areas of pristine white floor space, fully powered and backed up by a UPS, water and air cooled, and mostly empty. With the cost of mechanical and electrical equipment, as well as the price of power, this model no longer works.”

While the zone approach assumes that each section is self-contained, that doesn’t mean the data center of the future will be fragmented. Gartner predicts that corporate data centers will be operated as private “clouds,” flexible computing networks which are modeled after public providers such as Google and Amazon yet are built and managed internally for an enterprise’s own users.

By 2012, Gartner predicts that private clouds will account for at least 14% of the infrastructure at Fortune 1000 companies, which will benefit from service-oriented, scalable and elastic IT resources.

Private clouds will need a meta operating system to manage all of an enterprise’s distributed resources as a single computing pool, Gartner analyst Thomas Bittman says, arguing that the server operating system relied upon so heavily today is undergoing a transition.

Virtualization became popular because of the failures of x86 server operating systems, which essentially limit each server to one application and waste tons of horsepower, he says. Now spinning up new virtual machines is easy, and they proliferate quickly.

“The concept of the operating system used to be about managing a box,” Bittman says. “Do I really need a million copies of a general purpose operating system?”

IT needs server operating systems with smaller footprints, customized to specific types of applications, Bittman argues. With some functionality stripped out of the general purpose operating system, a meta operating system to manage the whole data center will be necessary.

The meta operating system is still evolving but is similar to VMware’s Virtual Datacenter Operating System. Gartner describes the concept as “a virtualization layer between applications and distributed computing resources … that utilizes distributed computing resources to perform scheduling, loading, initiating, supervising applications and error handling.”
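Gartner’s description is abstract, so a toy model may help. The sketch below is illustrative only, not VMware’s Virtual Datacenter Operating System or any shipping product; every class and name is invented. It shows the core idea: workloads are submitted against one pool of distributed capacity, and a scheduling layer, not an administrator, decides which physical host runs each one.

from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpus: int          # total cores on this host
    mem_gb: int        # total memory on this host
    used_cpus: int = 0
    used_mem_gb: int = 0

    def fits(self, cpus, mem_gb):
        return (self.used_cpus + cpus <= self.cpus
                and self.used_mem_gb + mem_gb <= self.mem_gb)

class MetaOS:
    """Toy 'meta operating system': treats many hosts as one pool."""
    def __init__(self, hosts):
        self.hosts = hosts

    def schedule(self, workload, cpus, mem_gb):
        # First-fit on the least-loaded host; a real scheduler would
        # also weigh I/O, affinity, power and failure domains.
        for host in sorted(self.hosts, key=lambda h: h.used_cpus):
            if host.fits(cpus, mem_gb):
                host.used_cpus += cpus
                host.used_mem_gb += mem_gb
                return f"{workload} -> {host.name}"
        raise RuntimeError(f"pool exhausted for {workload}")

pool = MetaOS([Host("rack1-a", 32, 128), Host("rack1-b", 32, 128)])
print(pool.schedule("billing-db", cpus=8, mem_gb=64))
print(pool.schedule("web-tier", cpus=16, mem_gb=32))

A real meta operating system would layer live migration, error handling and policy on top of this placement loop, but the contract is the same: callers describe what they need, never where it should run.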
All these new concepts and technologies – cloud computing, virtualization, the meta operating system, building in zones and pods, and more customizable server architectures – are helping build toward a future when IT can quickly provide the right level of services to users based on individual needs, and not worry about running out of space or power. The goal, Blake says, is to create data center resources that can be easily manipulated and are ready for growth.

“It’s all geared toward providing that flexibility because stuff changes,” he says. “This is IT. Every 12 to 16 months there’s something new out there and we have to react.”


What IT pros like best about next-generation technology
Flexibility, cost savings and eco-friendly operations make the list
By Ann Bednarz, Network World, 10/20/2008

Do hundreds of gallons of used vegetable oil belong anywhere near a data center, let alone inside? Phil Nail thinks so.

Nail is CTO of AISO.Net, whose Romoland, Calif., data center gets 100% of its electricity from solar energy. Now he’s considering waste vegetable oil as an alternative to using diesel fuel in the Web hosting company’s setup for storing solar-generated power.

“We’re never opposed to trying something new,” says Nail, who has eliminated nearly 100 underutilized stand-alone servers in favor of four IBM System x3650 servers, partitioned into dozens of virtual machines using VMware software.

Server virtualization fits right into AISO.Net’s environmentally friendly credo. The company increased its average server-utilization level by 50% while achieving a 60% reduction in power and cooling costs through its consolidation project.

For Nail, virtualization technology lives up to the hype. Nevertheless, it isn’t always easy for IT executives to find the right technology to help a business stay nimble, cut costs or streamline operations. Read on to learn how Nail and three other IT professionals made big changes in their data centers, and what they like best about their new deployments.

No more vendor lock-in

Vendor lock-in is a common plight for IT storage buyers. Lifestyle Family Fitness, however, found a way out of the shackles. The 56-club fitness chain in St. Petersburg, Fla., set out to resolve a performance bottleneck it traced to its storage network, and wound up upgrading its data center infrastructure in a way that not only took care of the problem but also extended the life of its older storage arrays.

Lifestyle’s users were starting to notice that certain core applications, such as membership and employee records, were sluggish. IT staff confirmed the problem by looking at such metrics as the average disk-queue length (which counts how many I/O operations are waiting for the hard disk to become available), recalls Michael Geis, director of IS operations for the chain. “Anything over two is considered a bottleneck, meaning your disks are too slow. We were seeing them into the 60, 80 and 100 range during peak times,” he says.
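Geis’ numbers are easy to reproduce. On Windows, the counter he describes is perfmon’s “Avg. Disk Queue Length”; the sketch below is a rough Linux analogue that samples the in-flight I/O count from /proc/diskstats and applies the same rule of thumb. The device name, sample count and threshold here are placeholders, not recommendations.

# Rough Linux analogue of the disk-queue-length check Geis describes:
# sample the in-flight I/O count from /proc/diskstats and average it.
import time

def inflight_ios(device="sda"):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return int(fields[11])  # I/Os currently in progress
    raise ValueError(f"device {device} not found")

def average_queue_depth(device="sda", samples=30, interval=1.0):
    total = 0
    for _ in range(samples):
        total += inflight_ios(device)
        time.sleep(interval)
    return total / samples

if __name__ == "__main__":
    depth = average_queue_depth()
    # Rule of thumb from the article: a sustained average above 2
    # means I/O is queuing faster than the disks can service it.
    print(f"avg queue depth: {depth:.1f}",
          "(bottleneck)" if depth > 2 else "(ok)")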
After rolling out IBM’s SAN Volume Controller (SVC), queue lengths settled back below two, Geis says. The two-node clustered SVC software fronts EMC Clariion CX300 and IBM System Storage DS4700 disk arrays, along with two IBM Brocade SAN switches. SVC lets Lifestyle combine storage capacity from both vendors’ disk systems into a single pool that is manageable as a whole for greater utilization. An onboard cache helps speed I/O performance; SVC acknowledges transactions once they’ve been committed to its cache but before they’re sent to the underlying storage controllers.

A key benefit of virtualizing storage is the ability to retain older gear, rather than doing the forklift replacements typically required of storage upgrades, Geis says. “We didn’t have to throw away our old legacy equipment. Even though we’d had it for a few years, it still had a lot of performance value to us,” he says. Lifestyle uses the new IBM storage for its most performance-sensitive applications, such as its core databases and mail server, and uses the EMC gear for second- and third-tier storage.

Using storage gear from more than one vendor adds management overhead, however. “You have to have a relationship with two manufacturers and have two maintenance contracts. There’s also expertise to think about. Our storage team internally has to become masters of multiple platforms,” Geis says.

The payoff is worth it, however: “Now that I’ve got storage virtualization in place with the SVC, my next storage purchase doesn’t have to be IBM. I could go back and buy EMC again, if I wanted to, because I have this device in-between,” Geis says.

Heterogeneous storage virtualization gives buyers a lot more purchasing flexibility – and negotiating power. “Every time we add new storage, IBM has to work in a competitive situation,” Geis says.

“Five years ago, the day you made the decision to go with EMC or IBM or HP or anyone, you might get a great discount on the first purchase, but you were locked into that platform for three to five years,” he says.

What Geis likes best is the flexibility the system affords and “knowing that I don’t have to follow in the footsteps of everybody else’s storage practices, that I can pick and choose the path that we feel is best for our organization.”

Getting more out of resources

A key addition to Pronto.com’s Web infrastructure made all the difference in its e-commerce operations. Fifteen million people tap the Pronto.com comparison-shopping site each month to find the best deals on 70 million products for sale on the Web. If the site isn’t performing, revenue suffers.

Pronto.com wanted to upgrade its load balancers in conjunction with a move from a hosting provider’s facility in Colorado to a Virginia data center operated by parent company IAC (which also owns Ask.com, Match.com and Evite, among other Internet businesses). The New York start-up invested in more than load balancing, however, when a cold call from Crescendo Networks led to a trial of the vendor’s application-delivery controllers.

Load balancing is just one aspect of today’s application-delivery controllers, which combine such capabilities as TCP connection management, SSL termination, data compression, caching and network address translation. The devices manage server requests and offload process-intensive tasks from content servers to optimize Web application performance. “Our team knew load balancing really well, but we didn’t know optimization. And we didn’t know that optimization would be something that we’d really want,” recalls Tony Casson, director of operations at Pronto.com.

When Casson and his team tried out Crescendo’s AppBeat DC appliances, however, they were convinced. In particular, the devices’ ability to offload TCP/IP and SSL transactions from Web servers won them over.

A major benefit is that Pronto.com can delay new Web-server purchases even as its business grows. “It really has extended the life of our server platform,” Casson says. “The need for us to purchase new front-line equipment has been cut in half. Each Web server can handle approximately 1.5 times the volume it could before.”

Small investment, big payoff

IT projects don’t have to be grand to be game-changing. A low-priced desktop add-on is yielding huge dividends for the Miami-Dade County Public Schools.

The school district installed power-management software on 104,000 PCs at 370 locations. By automatically shutting down desktops that aren’t in use, the software has reduced the district’s average PC-on time from 20.75 hours per day to 10.3 hours. In turn, it’s shaved more than $2 million from the district’s $80 million annual power bill.

Best of all, because Miami-Dade already was using software from IT management vendor BigFix for asset and patch management, adding the vendor’s power-management component cost the district just $2 more per desktop. “There was very little effort to implement the program once we defined the operating parameters,” says Tom Sims, Miami-Dade’s director of network services.

Automated shutdown

Besides saving money, the power-management project has kick-started a new wave of green IT efforts – something that’s as important to the school district’s managers as it is to users. “We all want to save energy and keep the environment clean and functional for our kids, more so because we are a public school system,” Sims says.

Miami-Dade is working on the program’s second phase, the goal of which is to let IT administrators customize shut-down times on a school-by-school basis. That will result in more power saved and more reductions in carbon emissions.

The district also is eyeing the chance for even bigger savings by turning off air conditioning units in areas where desktop computers are powered off. IP-controlled thermostats will enable Miami-Dade to coordinate PC and air conditioning downtime. “The potential cost savings is even bigger than the desktop-power cost savings,” Sims says.

For that to happen, the IT group has been working more closely with facilities management teams — a collaboration Sims expects to grow. “There are IP-driven devices that will interface with all kinds of facilities equipment. These devices allow remote management and control by a central office via the organization’s data network. So, the possibilities seem limitless.”
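The logic behind the second phase Sims describes is simple to model. The sketch below is hypothetical, not BigFix’s actual product or API: it combines a per-school shutdown window with an idle check to decide when a desktop powers off. School names, windows and the idle threshold are all invented.

# Hypothetical agent-side policy in the spirit of the Miami-Dade
# project (not BigFix's real interface): a desktop powers off once
# it is idle and inside its school's shutdown window.
from datetime import datetime, time

# Phase two of the program: shutdown windows customized per school.
SHUTDOWN_WINDOWS = {
    "north-high":  (time(18, 0), time(6, 0)),   # 6 p.m. to 6 a.m.
    "lakeside-el": (time(16, 30), time(7, 0)),
}

def in_window(now, start, end):
    # A window that crosses midnight wraps around.
    if start <= end:
        return start <= now <= end
    return now >= start or now <= end

def should_shut_down(school, idle_minutes, now=None, min_idle=30):
    """Power off only if the PC is idle and inside its school's window."""
    now = now or datetime.now()
    start, end = SHUTDOWN_WINDOWS[school]
    return idle_minutes >= min_idle and in_window(now.time(), start, end)

# A machine idle for 45 minutes at 10 p.m. gets powered off;
# the same machine at 10 a.m. is left alone.
print(should_shut_down("north-high", 45, datetime(2009, 3, 2, 22, 0)))  # True
print(should_shut_down("north-high", 45, datetime(2009, 3, 2, 10, 0)))  # False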
Staying green

Going green is more than a buzzword for AISO.Net, which from its inception in 1997 has espoused eco-friendly operations.

Other companies are buying carbon offsets to ease their environmental conscience, but energy credits aren’t part of the equation for AISO.Net. The company gets its data center electricity from 120 roof-mounted solar panels. Solar tubes bring in natural sunlight, eliminating the need for conventional lighting during the day; and air conditioning systems are water-cooled to conserve energy.

As it did with its building infrastructure, AISO.Net overhauled its IT infrastructure with green savings in mind. By consolidating dozens of commodity servers to four IBM servers running VMware’s Infrastructure 3 software, it upped utilization while lowering electricity and cooling loads. CTO Nail sold the leftover servers on eBay. “Let somebody else have the power,” he says.

For Nail, green business is good business, and that’s what he likes best about it. “Maybe it costs a little bit more, but it definitely pays for itself, and it’s doing the right thing for the environment,” he says. Customers like environmentally friendly technology too: “More and more companies are looking at their vendors to see what kind of environmental policies they have,” he says.

Today AISO.Net is designing a rooftop garden for its data center; it estimates the green roof could reduce cooling and heating requirements by more than 50%. It’s also looking for an alternative way to store solar-generated power. That’s where the waste vegetable oil comes in. Nail wants to replace the company’s battery bank, which stores power from the solar panels, with a more environmentally friendly alternative. The idea is to retrofit a small generator to run not on diesel fuel but on recycled vegetable oil acquired from local restaurants and heated in 55-gallon drums. The generator in turn would feed power to air conditioning units, Nail says.

The idea came from seeing others use waste vegetable oil to run cars, Nail says. “We figured, why can’t we take that technology and put it into something that would run our air conditioning?” he notes. “We’re kicking that around, trying to design it and figure out how we’re going to implement it.”

Thinking outside the storage box
Unbridled growth in data storage and the rise in Web 2.0 applications are forcing a storage rethink. Is this the end of the storage-area network as we know it?
By Joanne Cummings, Network World, 01/26/2009

With storage growth tracking at 60% annually, according to IDC, enterprises face a dire situation. Throwing disks at the problem simply doesn’t cut it anymore.

Andrew Madejczyk, vice president of global technology operations at pre-employment screening company Sterling Infosystems, in New York, likens the situation to an episode of “House,” the popular medical drama.

“On ‘House,’ there are two ways to approach a problem. You treat the symptoms, or you find out what the root cause is and actually end the problem,” Madejczyk says. “With storage, up until now, the path of least resistance was to treat the symptoms and buy more disks” – a method that surely would ignite the ire of the show’s caustic but brilliant Dr. Gregory House.

Were the doctor prone to giving praise, he’d give a call out to enterprise IT managers who are rethinking this traditional approach to storage. He’d love that technologists are willing to go outside their comfort zones to find a solution, and he’d thrive on the experimentation and contentiousness that surround the diagnosis.

House probably would find an ally in Casey Powell, CEO at storage vendor Xiotech. “Everybody acknowledges the problem and understands it, but nobody’s solving it. As technologists, we have to step back, look at the problem and design a different way,” Powell says.

Optimizing the SAN

Today most organizations store some combination of structured, database-type and unstructured, file-based data. In most cases, they rely on storage-area network (SAN) technologies to ensure efficiencies and overall storage utilization, keeping costs down as storage needs increase.

In and of themselves, SANs aren’t enough, however. Enterprises increasingly are turning to technologies that promise to provide an even bigger bang for the buck, including these:

• Data deduplication, which helps reduce redundant copies of data so firms can shrink not only storage requirements but also backup times.
• Thin provisioning, which increases storage utilization by making storage space that has been overprovisioned for one application available to others on an as-needed basis (see the sketch after this list).
• Storage tiering, which uses data policies and rules to move noncritical data to slower, less expensive storage media and leave expensive Tier 1 storage free to handle only the most mission-critical applications.
• Storage resource management software, which helps users track and manage storage usage and capacity trends.
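To make the thin-provisioning bullet concrete, here is a minimal model of the allocate-on-write idea; the class and names are invented, not any array vendor’s API. Volumes are promised their full logical size up front, but physical blocks leave the shared pool only when first written.

# Minimal model of thin provisioning: logical capacity is promised
# up front, physical blocks are allocated only on first write.
class ThinPool:
    def __init__(self, physical_blocks):
        self.free = physical_blocks
        self.volumes = {}          # name -> {"logical": n, "mapped": set()}

    def create_volume(self, name, logical_blocks):
        # Overcommit is allowed: no physical blocks are reserved here.
        self.volumes[name] = {"logical": logical_blocks, "mapped": set()}

    def write(self, name, block):
        vol = self.volumes[name]
        if block >= vol["logical"]:
            raise IndexError("write past end of volume")
        if block not in vol["mapped"]:      # first touch allocates
            if self.free == 0:
                raise RuntimeError("pool exhausted - add disks or reclaim")
            self.free -= 1
            vol["mapped"].add(block)

pool = ThinPool(physical_blocks=1000)
pool.create_volume("erp", logical_blocks=800)    # promised 800
pool.create_volume("mail", logical_blocks=800)   # promised 800: 1600 > 1000
for b in range(100):
    pool.write("erp", b)                         # only 100 really in use
print(pool.free)                                 # 900 blocks still free

The same model shows the risk covered later in this guide (“Don’t let the thin-provisioning gotchas getcha”): the pool can be overcommitted and run dry while every volume still looks half empty.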
“In the classic SAN environment, these tools don’t just provide a partial solution. They allow you to make fundamental improvements,” says Rob Soderbery, senior vice president of Symantec’s Storage and Availability Management Group. Clients that have pursued these strategies even have been able to freeze storage spending for a year at a time, he says. “And when they get back on the storage spending cycle, they get back on at about half the spending rate they were at before,” he adds.

Although few IT executives report such dramatic reductions in storage spending, many are pursuing such strategies.

At Sterling, for example, moving from tape- to disk-based backups via Sepaton’s S2100-ES2 virtual tape library reduced the time it takes for nightly backups from 12 to just a few hours, Madejczyk says. Sepaton’s deduplication technology provides an added measure. In addition, he has virtualized more than 90% of his server environment, “reducing our footprint immensely,” and implemented EMC thin provisioning and storage virtualization.

Still, his company’s storage needs keep growing, Madejczyk says. “In this economy, Sterling is being very responsible and careful about what we spend on,” he says. “We’re concentrating on the data-management part of the problem, and we’re seeing results. But it’s a difficult problem to solve.”

Tom Amrhein, CIO at Forrester Construction in Rockville, Md., has seen similar growth. The company keeps all data associated with its construction projects in a project management database, so the vast majority of that stored data is structured in nature. Regulatory and compliance issues have led to increased storage needs nonetheless.

“Most companies need to keep their tax records for seven years, and that’s as long as they need to keep anything,” Amrhein says. “But tax records are our shorter-cycle data. Depending on the jurisdiction, the time we could be at fault for any construction defect is up to 10 years – and we’re required to have the

same level of discovery response for a project completed nine years ago as we would for a project that’s been closed out two weeks.”

Forrester Construction has cut down a bit on storage needs by keeping the most data-intensive project pieces – building drawings, for example – on paper. “Because the scanning rate is so high and paper storage costs so low, retaining those as physical paper is more cost-effective,” Amrhein says.

The real key to keeping costs in check, however, is storage-as-a-service, Amrhein says. IT outsourcer Connectria hosts the company’s main application servers, including Microsoft Exchange, SQL Server and SharePoint; project management, finance, CRM and Citrix systems. It handles all that storage, leaving Forrester Construction with a predictable, flat monthly fee.

“I pay for a set amount of gigabytes of storage as part of the SLA [service-level agreement], and then I pay $1 per gig monthly for any excess,” Amrhein explains. “That includes the backup, restore and all the services around those. I’m paying $25K a month to Connectria, plus paying for about 10GB over my SLA volume. That overage is a wash.”

For the firm’s unstructured data, Forrester Construction uses Iron Mountain’s Connected Backup for PCs service, which automatically backs up all PCs nightly via the Internet. If a PC is not connected to the Internet at night, the user receives a backup prompt on the next connection.

“With 60% of the people out of the office, this is perfect for us,” Amrhein says. “Plus, Iron Mountain helps us reduce the data volume by using deduplication,” he says. “Even for people on a job site with a wireless card or low-speed connection, it’s just a five- or 10-minute thing.”

Still, the unstructured side is where the construction company sees its biggest storage growth. E-mail and saved documents are the biggest problem areas.

The rise in Web 2.0 data

Forrester Construction is not alone there. In the enterprise, IDC reports, structured, transactional data will grow at a 27.3% compounded annual rate over the next three to five years. The rise in unstructured, file-based data will dwarf that growth rate, however. IDC expects the amount of storage required for unstructured, file-based data to increase at an unprecedented 69.4% clip. By 2010, enterprises for the first time will find unstructured storage needs outstripping traditional, structured storage demands.

The rub here is that although SANs are extremely efficient at handling structured, transactional data, they are not well optimized for unstructured data. “SANs are particularly ill-suited to Web 2.0, scale-out, consumer-oriented-type applications,” Symantec’s Soderbery says. “No. 1, the applications’ architecture is scale-out, so you have hundreds or thousands of systems working on the same problem instead of one big system, like you would have with a database. And SANs aren’t designed that way. And No. 2, these new applications – like storing photos on Facebook or video or display ads or consumer backup data – are tremendously data intensive.”

Symantec hit the wall with this type of data in supporting its backup-as-a-service offering, which manages 26 petabytes of data, Soderbery says. “That probably puts us in the top 10 or 20 storage consumers in the world. We could never afford to implement a classic Tier 1 SAN architecture,” he says.

Instead, Symantec went the commodity path, using its own Veritas Storage Foundation Scalable File Server software to tie it all together. “The Scalable File Server allows you to add file server after file server, and you get a single namespace out of that cluster of file servers. This in turn allows you to scale up your application and the amount of data arbitrarily. And the software runs on pure commodity infrastructure,” Soderbery explains. Plus, the storage communicates over a typical IP network vs. a more expensive Fibre Channel infrastructure.

Symantec’s approach is similar to that of the big cloud players, such as Google and Amazon.com. “We happen to build packaged software to enable this, whereas some of the early adopters built their own software and systems. But it all works the same way,” Soderbery says.

The prudent approach to storage as it continues to grow, Soderbery says, is to optimize and use SANs only for those applications that merit them – such as high-transaction, mission-critical ERP applications. Look to emerging commodity-storage approaches for more

scale-out applications, such as Web 2.0, e-mail and interactive call-center programs.

Does that mean enterprises need to support SANs and new cloud-like scale-out architectures to make sure they’re managing storage as efficiently as possible? Perhaps.

Eventually, however, the need to support unstructured, scale-out data will trump the need to support structured, SAN-oriented data, IDC research shows. With that in mind, smart organizations gradually will migrate most applications off SANs and onto new, less expensive, commodity storage setups.

A new enterprise approach

One interesting strategy could provide an evolutionary steppingstone in the interim: using Web services. Championed primarily by Xiotech, the idea is to use the Web-services APIs and standards available from such organizations as the World Wide Web Consortium (W3C) as the communications link between applications and storage.

“The W3C has a nifty, simple model for how you talk between applications and devices. It includes whole sets of standards that relate to how you provision resources in your infrastructure, back to the application,” says Jon Toigo, CEO of analyst firm Toigo Partners International. “All the major application providers are Web-services enabled in that they ask the infrastructure for services. But nobody on the storage hardware side is talking back to them.”

Nobody, that is, except Xiotech. Xiotech’s new Intelligent Storage Element (ISE) is the first storage ware to talk back, although other vendors quickly are readying similar technology, Toigo says. ISE, based on technology Xiotech acquired with Seagate Technology, is a commodity building-block of storage, supporting as many as 40 disk drives plus processing power and cache, that can be added to any storage infrastructure as needed. Xiotech claims ISE can support anything from high-performance transactional processing needs to scale-out Web 2.0 applications.

All storage vendors should work to Web-services-enable their hardware and software so they can communicate directly with applications, Xiotech’s Powell says. This would preclude vendor lock-in and let enterprises build storage environments using best-in-breed tools instead of sticking with the all-in-one-array approach. “They’d be able to add more storage, services or management, without having to add everything monolithically to a SAN,” Powell says.

Eventually Web services support will have virtualized storage environments realizing even greater efficiencies, to the point where applications themselves provision and de-provision storage. “Today, when we provision storage, we have to guess, and typically, we either over- or underprovision,” Powell says. “And then, when the user is no longer using it, we seldom go back and reclaim the storage. But the application knows exactly what it needs, and when it needs it. Via Web services, it can request what it needs on the fly, and as long as its request is within the parameters and policies we set up initially, it gets it.”

Web services already have proved an efficient storage tack at ISE user Raytown Quality Schools in Missouri, says Justin Watermann, technology coordinator for the school system. The system went with Xiotech shortly after it moved to a new data center and created an all-virtual server infrastructure. A big plus has been Xiotech’s Virtual View software, which uses Web services to communicate with VMware’s VirtualCenter management console for its ESX servers, Watermann says. He can manage his virtualized server and storage infrastructure from a single console.

“When you create a new data store, Virtual View shows you what port and LUN [logical unit number] is available to all of your ESX hosts in that cluster,” Watermann says. “And when you provision it, it uses Web services to communicate with VirtualCenter, and says, ‘OK, this is the data store for these ESX hosts.’ And you automatically have the data store there and available to use. You don’t even have to refresh or restore.”

That bit of automation saves on administration, but enabling the application to do the provisioning and de-provisioning would be an even greater boon, Watermann says. “It’s really hard to get more staff, and you only have so many hours in the day. If you don’t have to tie your staff up with the repetitive tasks of carving up space and assigning it, so much the better.”
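Powell’s “as long as its request is within the parameters and policies we set up initially” is, in effect, an admission-control check. The sketch below imagines that loop; the policy fields, payload and endpoint are hypothetical, not Xiotech’s actual Web-services interface.

# Hypothetical sketch of an application provisioning its own storage
# through a Web-services call, per Powell's description. The schema
# and policy are invented; no real vendor API is shown.
import json

POLICY = {"max_gb_per_request": 500, "allowed_tiers": {"tier2", "tier3"}}

def within_policy(req):
    return (req["size_gb"] <= POLICY["max_gb_per_request"]
            and req["tier"] in POLICY["allowed_tiers"])

def provision(req):
    """Admission control first; then, in a real system, an HTTP POST
    to the array's Web-services endpoint would carve out the LUN."""
    if not within_policy(req):
        raise PermissionError("request exceeds storage policy")
    payload = json.dumps(req)
    # e.g. urllib.request.urlopen("https://storage.example/provision",
    #                             data=payload.encode())  # hypothetical
    return {"status": "granted", "request": req}

# The application asks for exactly what it needs, when it needs it...
print(provision({"app": "claims-archive", "size_gb": 200, "tier": "tier3"}))
# ...and out-of-policy requests are refused rather than overprovisioned.
try:
    provision({"app": "scratch", "size_gb": 900, "tier": "tier1"})
except PermissionError as e:
    print("denied:", e)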
If they build it . . .

Now Watermann is using a Web service from Xiotech partner Eagle Software to improve the school system’s backup times.

Eagle, a storage reseller, provides Raytown with backup software from CommVault Systems. “The Web-services tool lets us mirror our data, pause that mirror, attach that to the backup server, back it up, then disconnect it from the backup server, unpause the mirror, and re-sync that data so we don’t have to push it across our Ethernet network,” Watermann says.
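Watermann’s sequence is a classic split-mirror backup, and laying it out as code makes the design point clear: production I/O keeps flowing while the backup server reads a frozen copy, off the production LAN. Each method below stands in for a Web-services call; none of this is CommVault’s or Eagle Software’s real interface.

# The split-mirror backup sequence Watermann describes, made concrete.
# Each method stands in for a Web-services call to the storage layer.
class MirroredVolume:
    def __init__(self, name):
        self.name = name
        self.mirror_paused = False
        self.attached_to = None

    def pause_mirror(self):
        # Freeze a point-in-time copy; production writes keep flowing
        # to the primary and are queued for later re-sync.
        self.mirror_paused = True

    def attach(self, host):
        assert self.mirror_paused, "attach a frozen mirror, not a live one"
        self.attached_to = host

    def backup(self):
        assert self.attached_to, "mirror must be attached to a backup host"
        print(f"backing up {self.name} via {self.attached_to} (off the LAN)")

    def detach_and_resync(self):
        self.attached_to = None
        self.mirror_paused = False   # mirror catches up on queued writes

vol = MirroredVolume("prod-vol1")
vol.pause_mirror()        # 1. pause the mirror
vol.attach("backup01")    # 2. attach it to the backup server
vol.backup()              # 3. back it up without crossing the network
vol.detach_and_resync()   # 4. disconnect, unpause, re-sync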
Like Eagle, companies are starting to develop pieces to put Web-services-enabled storage all together, Watermann adds. Such small, point approaches are the norm today, but experts say that in five or 10 years, every application and every device will use some kind of software glue, such as Web services, to provide a range of storage and IT services in an automated, efficient manner.

“It would have to be levels of software that create services that include computing resources, network and storage infrastructure, and the retention and reliability metrics associated with all of those components,” Symantec’s Soderbery says. “It will be a combination of the point solutions we see now, like VMware and SAN virtualization via Cisco and Brocade, plus thin provisioning, replication and deduplication. We’re going to require all of those things to work in concert with a level of software that ties them together cohesively to provide those appropriate levels of service to the application.”

Others, including Ken Steinhardt, CTO for customer operations at EMC, are less optimistic. “If someone could write something magical that does things that we’d love to have, wouldn’t that be great?” he asks rhetorically. That would take a miracle, Steinhardt says. “The tools to write software are out there, but the catch is it’s just not that simple. You need to be able to have a solution that works broadly across a range of applications as well, and typical, highly consolidated environments run a mix of broad, diverse apps, not just a single application. I don’t see it happening,” he says.

The Web services model is too much of a stretch to Steinhardt: “From a storage perspective, Web services are a completely separate issue. We’re talking about storing zeros and ones on a storage device. That’s pretty agnostic to the individual application and always has been,” he says.

Not so, analyst Toigo asserts. Web services provide a common framework, so by default, they can support every application. “People need to tell their vendors, ‘Look, I’m not buying your junk anymore if I can’t manage it using this common [Web services] metaphor,’” he says. “That puts you in the driver’s seat.”

Cummings is a freelance writer in North Andover, Mass. She can be reached at joanne@cummingslimited.com.


Next-generation data centers: Opportunities abound

Section 2: Emerging technologies

One in 10 companies deploying internal clouds
By Jon Brodkin, Network World, 12/15/2008

Enterprise IT shops are starting to embrace the notion of building private clouds, modeling their infrastructure after public service providers such as Amazon and Google. But while virtualization and other technologies exist to create computing pools that can allocate processing power, storage and applications on demand, the technology to manage those distributed resources as a whole is still in the early stages.

The corporations building their own private clouds include such notable names as Bechtel, Deutsche Bank, Morgan Stanley, Merrill Lynch and BT, according to The 451 Group. The research firm found in a survey of 1,300 corporate software buyers that about 11% of companies are deploying internal clouds or planning to do so. That may not seem like a huge proportion, but it’s a sign that private clouds are moving beyond the hype cycle and into reality.

“It’s definitely not hype,” says Vivek Kundra, CTO for the District of Columbia government, which plans to blend IT services provided from its own data center with external cloud platforms like Google Apps. “Any technology leader who thinks it’s hype is coming at it from the same place where technology leaders said the Internet is hype.”

At the center of cloud computing is a services-oriented interface between a provider and user, enabled by virtualization, says Gartner analyst Thomas Bittman. “When I move away from physical to virtual machines for every requirement, I’m drawing a layer of abstraction,” Bittman says. “What virtualization is doing is you [the customers] don’t tell us what server to get, you just tell us what service you need.”

While virtualization technologies for servers, desktops and storage are readily available, Gartner says that to get all the benefits of cloud computing, enterprises will need a new meta operating system that controls and allocates all of an enterprise’s distributed computing resources.

It’s not clear exactly how fast this technology will advance. VMware plans to release what might be considered a meta operating system with its forthcoming Virtual Datacenter Operating System.

But cloud computing is less a new technology than it is a way of using technology to achieve economies of scale and offer self-service resources that are available on demand, The 451 Group says. Numerous enterprises are taking on this challenge of building more flexible, service-oriented networks using existing products and methodologies.

Thin clients and virtualization are the key for Lenny Goodman, director of the



desktop management group at Baptist Memorial Health Care in Memphis, Tenn. Baptist uses 1,200 Wyse Technology thin clients, largely at patients’ bedsides, and delivers applications to them using Citrix XenApp application virtualization tools. Baptist also is rolling out virtual, customizable desktops to those thin clients using Citrix XenDesktop.

Just as Internet users can access Amazon, Google, Barnes & Noble or any Web site they wish to use from anywhere, Goodman wants hospital workers to be able to move among different devices and have the same experience.

“You get the advantage of taking that entire experience and making it roam without the nurse having to carry or push anything,” he says. “They can move from device to device.”

Goodman also says a cloud-based model where applications and desktops are delivered from a central data center will make data more secure, because it’s not being stored on individual client devices.

“If we relocate that data to the data center by virtualizing the desktop, we can back it up, we can secure it, and we can provide that data to the user wherever they are,” he says.

In the Washington, D.C., government, Kundra came on board in March 2007 with the goal of establishing a DC.gov cloud that would blend services provided from his own data center with external cloud platforms like Google Apps. Washington moved aggressively toward server virtualization with VMware, and made sure it had enough network bandwidth to support applications hosted on DC.gov.

The move toward acting as an internal hosting provider as well as accessing applications outside the firewall required an increased focus on security and user credentials, Kundra says. But that was a necessary part of giving users the same kind of anytime, anywhere access to data and applications they enjoy as consumers of services in their personal lives.

“The line is blurred,” he says. “It used to be you would come to work and only work. The blurring started with mobile technologies, BlackBerries, people doing work anytime, anywhere.”

While Kundra and Goodman have begun thinking of themselves as internal cloud providers, many other IT shops view cloud computing solely as it relates to acquiring software-as-a-service and on-demand computing resources from external providers such as Salesforce.

“Cloud computing is definitely the hot buzzword,” says Thomas Catalini, a member of the Society for Information Management and vice president of technology at insurance brokerage William Gallagher Associates in Boston. “To me it means outsourcing to a hosted provider. I would not think of it in terms of cloud computing to my own company. [Outsourcing] relieves me of having to buy hardware, software and staff to support a particular solution.”

Analyst Theresa Lanowitz of Voke, a strong proponent of using external clouds to reduce management costs, says building internal clouds is too difficult for most IT shops.

“That is a cumbersome task,” she says. “One of the big benefits of cloud computing is the fact that you have companies out there who can offer up things in a cloud. To build it on your own is quite an ambitious project. Where I see more enterprises going is down the path of renting clouds that have already been built out by some service provider.”

There is room for both internal and external cloud computing within the same enterprise, though. In Gartner’s view, corporations that build their own private clouds will also access extra capacity from public providers when needed. During times of increased demand, the meta operating system as described by Gartner will automatically procure additional capacity from outside sources, and users won’t necessarily know whether they are using computing capacity from inside or outside the firewall.
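Gartner’s bursting scenario reduces to a simple placement rule. The toy sketch below, with invented thresholds and names, shows the decision the meta operating system would make on the user’s behalf: run internally while the private pool has headroom, and procure public capacity transparently once it does not.

# Toy model of Gartner's hybrid scenario: place work on the private
# cloud while it has headroom, burst to a public provider when it
# doesn't. Threshold and names are invented for illustration.
BURST_THRESHOLD = 0.80   # burst once the private pool passes 80% busy

class HybridPool:
    def __init__(self, private_capacity):
        self.capacity = private_capacity
        self.used = 0.0

    def place(self, demand):
        """Return where the workload runs; the caller never has to know."""
        if (self.used + demand) / self.capacity <= BURST_THRESHOLD:
            self.used += demand
            return "private-cloud"
        return "public-provider"   # rented capacity outside the firewall

pool = HybridPool(private_capacity=100.0)
print(pool.place(50))   # private-cloud
print(pool.place(25))   # private-cloud (75% used)
print(pool.place(20))   # public-provider: would push the pool past 80%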
While “cloud” might strike some as an overused buzzword, Kundra views cloud computing as a necessary transition toward more flexible and adaptable computing architectures.

“I believe it’s the future,” Kundra says. “It’s moving technology leaders away from just owning assets, deploying assets and maintaining assets to fundamentally changing the way services are delivered.”


Google, Microsoft spark interest in modular data centers, but benefits may be exaggerated
Experts question energy efficiency claims
By Jon Brodkin, Network World, 10/13/2008

Interest in modular data centers is growing, fueled by high-profile endorsements from Microsoft and Google. But the model raises new management concerns, and efficiency claims may be exaggerated.

Modular, containerized data centers being sold by vendors such as IBM, Sun and Rackable Systems fit storage and hundreds, sometimes thousands, of servers into one large shipping container with its own cooling system. Microsoft, using Rackable containers, is building a data center outside Chicago with more than 150 containerized data centers, each holding 1,000 to 2,000 servers. Google, not to be outdone, secured a patent last year for a modular data center that includes “an intermodal shipping container and computing systems mounted within the container.”

[Photo caption: Rackable Systems’ ICE Cube portable data center can be fitted with as many as 22,400 processing cores in 2,800 servers.]

To hear some people tell it, a containerized data center is far easier to set up than a traditional data center, easy to manage and more power-efficient. It also should be easier to secure permits, depending on local building regulations. Who wouldn’t want one?

If a business has a choice between buying a shipping container full of servers, and building a data center from the ground up, it’s a no-brainer, says Geoffrey Noer, a vice president at Rackable, which sells the ICE Cube Modular Data Center.

“We don’t believe there’s a good reason to go the traditional route the vast majority of the time,” he says.

But that is not the consensus view by any stretch of the imagination. Claims about efficiency are over-rated, according to some observers.

Even IBM, which offers a Portable Modular Data Center and calls the container part of its green strategy, says the same efficiency can be achieved within the four walls of a normal building.

IBM touts a “modular” approach to data center construction, taking advantage of standardized designs and predefined components, but that doesn’t have to be in a container. “We’re a huge supporter of modular. We’re a limited supporter of container-based data centers,” says Steve Sams, vice president of IBM Global Technology Services.

Containers are efficient because they pack lots of servers into a small space, and use standardized designs with modular components, he says. But you can deploy storage and servers with the same level of density inside a building, he notes.

Container vendors often tout 40% to 80% savings on cooling costs. But according to Sams, “in almost all cases they’re comparing a highly dense [container] to a low-density [traditional data center].”

Containers also eliminate one scalability advantage related to cooling found in traditional data centers, according to Sams. Just as it’s more efficient to cool an apartment complex with 100 living units than it is to cool 100 separate houses, it’s more cost-effective to cool a huge data center than many small ones, he says. Air conditioning systems for containerized data centers are locked inside, just like the servers and storage, making true scalability impossible to achieve, he notes.

Gartner analyst Rakesh Kumar says it will take a bit of creative marketing for vendors to convince customers that containers are inherently more efficient


than regular data centers. Gartner is still analyzing the data, but as of now Kumar says, “I don’t think energy consumption will necessarily be an advantage.”

Finding buyers
That doesn’t mean there aren’t any advantages, however. A container can be up and running within two or three months, eliminating lengthy building and permitting times. But if you need an instant boost in capacity, why not just go to a hosting provider, Kumar asks.
“We don’t think it’s going to become a mainstream solution,” he says. “We’re struggling to find real benefits.”
Kumar sees the containers being more suited to Internet-based, “hyper-scale” companies such as Google, Amazon and Microsoft. Containerized data centers offer scalability in big chunks, if you’re willing to buy more containers. But they don’t offer scalability inside each container, once it has been filled, he says.
Container vendors tout various benefits, of course. Each container is almost fully self-contained, Rackable’s Noer says. Chilled water, power and networking are the only things from the outside world that must be connected to each one, he says. Rackable containers, which can be fitted with as many as 22,400 processing cores in 2,800 servers, are water-tight, fitted with locks, alarms and LoJack-like tracking units. Sun’s Modular Data Center can survive an earthquake — the company made sure of that by testing it on one of the world’s largest shake tables at the University of California in San Diego.
A fully-equipped Rackable ICE Cube costs several million dollars, mostly for the servers themselves, Noer says. The container pays for itself with lower electricity costs due to an innovative Rackable design that maximizes server density, Noer says.
But it’s still too early to tell whether containerized data centers are the way of the future. “We’re just at the cusp of broad adoption,” Noer says.
Potential use cases for containers include disaster recovery, remote locations such as military bases, or big IT hosting companies that would prefer not to build brick-and-mortar data centers, Kumar says.
A TV crew that follows sporting events may want a mobile data center, says Robert Bunger, director of business development for American Power Conversion. APC doesn’t sell its portable data center, but in 2004 it built one into a tractor-trailer as a proof-of-concept. It was resilient. “We pulled that trailer all over the country” for demos, Bunger notes.
But APC isn’t seeing much demand, except in limited cases. For example, a business that needs an immediate capacity upgrade but is also planning to move its data center in a year might want a container because it would be easier to move than individual servers and storage boxes.
UC-San Diego bought two of Sun’s Modular Data Centers. One goal is to contain the cost of storing and processing rapidly increasing amounts of data, says Tom DeFanti, principal investigator of the school’s GreenLight energy efficiency research project. But it will take time to see whether the container approach is more efficient. “The whole idea is to create an experiment to see if we can get more work per watts,” DeFanti says.
The Modular Data Center is not as convenient to maintain as a regular computer room, because there is so little space to maneuver inside, he says. “But it seems to me to be an extremely well-designed and thought-out system,” DeFanti says. “It gives us a way of dealing with the exploding amount of scientific computing that we need to do.”

Beware vendor lock-in
Before purchasing a containerized data center, enterprises should consider several issues related to their manageability and usefulness. Vendors often want you to fill the containers with only their servers, Kumar notes. Besides limiting flexibility at the time of purchase, this raises the question of what happens when those servers reach end-of-life. Will you need the vendor to rip out the servers and put new ones in, once again limiting your choice of technology?
“At the moment, most vendors will fill their containers only with their servers,” Kumar says.
IBM, however, says it uses industry-standard racks in its portable data center, allowing customers to buy whatever technology they like.
DeFanti said Sun’s Modular Data Center allows him the flexibility to buy a heterogeneous mix of servers and storage. Rackable, though, steers customers toward either its own servers or IBM BladeCenter machines through a partnership with IBM.
“I think vendors are learning that people want more flexibility,” DeFanti says.
Another consideration is failover capabilities, says Lee Kirby, who provides site assessments, data center designs and other services as the general manager of Lee Technologies. If one container goes down, its work must be transferred to another. Server virtualization will help provide this failover capability, and also make it easier to manage distributed containerized data centers — an important consideration for customers who want to distribute computing power and have it reside as close to users as possible, Kirby says.
“I think it is key that the combination of virtualization and distributed infrastructure produce a container that can be out of service without impacting the application as a whole,” Kirby says.
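Kirby’s failover scenario is worth making concrete. The sketch below, in Python, shows the kind of decision logic an orchestration layer applies when a container drops out: drain its virtual machines to whatever healthy containers have spare capacity. The inventory structure and capacity counts are illustrative assumptions, not any vendor’s API.

    # Sketch: drain a failed container by restarting its VMs elsewhere.
    # Inventory and capacity counts are illustrative; real orchestration
    # would go through a hypervisor's HA or live-migration feature.
    containers = {
        "container-a": {"healthy": False, "vms": ["web1", "web2", "db1"], "free_slots": 0},
        "container-b": {"healthy": True, "vms": ["app1"], "free_slots": 10},
    }

    def failover(containers):
        healthy = [n for n, c in containers.items() if c["healthy"]]
        for name, cont in containers.items():
            if cont["healthy"]:
                continue
            for vm in list(cont["vms"]):
                # Place each displaced VM on the healthy container
                # with the most spare capacity.
                target = max(healthy, key=lambda n: containers[n]["free_slots"])
                if containers[target]["free_slots"] < 1:
                    raise RuntimeError("no spare capacity for " + vm)
                cont["vms"].remove(vm)
                containers[target]["vms"].append(vm)
                containers[target]["free_slots"] -= 1
                print(f"restarted {vm} from {name} on {target}")

    failover(containers)

In practice this logic lives inside the virtualization platform’s high-availability features rather than a hand-rolled script; the point is simply that, once workloads are virtualized, a container becomes just another failure domain.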


The big data-center synch


With Fibre Channel over Ethernet
now a reality, enterprises prep for
data and storage convergence

By Dawn Bushaus, Network World, 01/26/2009

For the County of Los Angeles – with more than 10 million residents, the nation’s largest county – the timing couldn’t be better for the arrival of Fibre Channel over Ethernet on the data-center scene.
L.A. County has undertaken a server virtualization initiative, a data-center redesign and construction of a new data center scheduled for completion in late 2011. “When we put those three things together, we knew we needed a new design that could accommodate them all,” says Jac Fagundo, the county’s enterprise solutions architect.
Because it can unite data and storage networks on a 10 Gigabit Ethernet fabric, FCoE enabled Fagundo to turn that design vision into a reality.
The plan for the new data center calls for extending virtualization beyond hosting servers to data and storage networks, and includes an overhaul of the county’s stovepipe approach to storage and server deployments. “We realized we were going to have to integrate the storage,” Fagundo says, “so we decided, why not go a step further and integrate the storage and IP networks in the data center?”
When the county started looking at FCoE as an option, it turned to Cisco, its Ethernet switch vendor. It completed a proof-of-concept trial using the Nexus 7000 and 5000 switches – switches that represent Cisco’s biggest push yet into the enterprise data center. The 5000 supports a prestandard version of FCoE, while the 7000 will be upgraded with FCoE features once the American National Standards Institute finalizes the FCoE standards, probably in late 2009. Cisco currently is the only vendor offering FCoE support. Its chief Fibre Channel competitor, Brocade Communications, is beta-testing an FCoE switch. Juniper Networks also is considering adding FCoE capability to its switches – if customer demand warrants it, the company says.
L.A. County is replacing eight Cisco Catalyst 6500 Ethernet switches with two Nexus 7000 switches in its data-center core, and will add a Nexus 5000 switch to the data-center distribution network. The county expects to have live traffic running through the switches by spring.
“This is a multiyear project where the platform features will evolve as we migrate into it,” Fagundo says. “Our next-generation data center, of which the Nexus switches and FCoE are a part, is a design that we plan to implement over the next five years.”

A technology for tough times
For large enterprises operating distinct Fibre Channel storage-area networks (SAN) and Ethernet data networks, FCoE’s potential benefits and cost savings are promising, even though the technology requires new hardware.
“The rule of thumb is that during tight economic times, incumbent technologies do well and it’s not wise to introduce a new technology – unless your proposition is one of economic favor,” says Skip Jones, president of the Fibre Channel Industry Association (FCIA). “And that’s exactly what FCoE is.”
Jones has an obvious bias toward FCoE, but Bob Laliberte, analyst at Enterprise Strategy Group, agrees. “The economy will certainly get more companies to look at FCoE,” he says. “If vendors can show how FCoE can reduce the cost of powering, data-center cooling and cabling, that will be a compelling reason for companies to look at it.”
At a minimum, adopting FCoE lets a company consolidate server hardware and cabling at the rack level. That’s the first step in moving to FCoE. Then more consolidation can take place in the core switching fabric, and that reduces costs even further. Ultimately, storage arrays will support FCoE natively (NetApp already has announced such a product), making end-to-end FCoE possible.
L.A. County’s Fagundo expects substantial cost savings from using FCoE, although he declined to share detailed estimates. He did say, however, that power consumption alone will be cut in half when the eight Cisco Catalyst switches are consolidated into two Nexus switches. In addition, FCoE will let the county reduce server hardware and cabling by using a converged network adapter (CNA) to replace Ethernet network interface cards and Fibre Channel host bus adapters.
The cost savings in infrastructure, components, edge devices and cabling are compelling reasons to consider FCoE, agrees Ian Rousom, corporate data-center network architect at defense contractor Lockheed Martin in Bethesda, Md. “Cabling is often overlooked, but you can potentially eliminate one of two cable plants using FCoE. If you consolidate racks, you eliminate the biggest source of cable sprawl in the data center.”
Cisco estimates that enterprises can expect a 2% to 3% savings in power consumption during FCoE’s rack-implementation phase,



and a total of about 7% savings once FCoE is pushed into the data-center core.
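The rack-level consolidation behind those figures is easy to sanity-check. Assuming a server today carries a redundant pair of Ethernet NICs plus a redundant pair of Fibre Channel HBAs, and a redundant pair of CNAs after the FCoE migration — counts chosen for illustration, not Cisco’s numbers — the adapter and cable math works out as follows:

    # Rough adapter/cable arithmetic for one 40-server rack (illustrative).
    servers = 40
    nics, hbas = 2, 2      # redundant Ethernet NICs + FC HBAs today
    cnas = 2               # redundant converged network adapters under FCoE

    before = servers * (nics + hbas)
    after = servers * cnas
    print(f"adapters/cables before: {before}")     # 160
    print(f"adapters/cables after:  {after}")      # 80
    print(f"reduction: {1 - after / before:.0%}")  # 50%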
Tall orders for FCoE
The promise of FCoE lies in a merged data and storage network fabric, but making the integration happen isn’t easy and shouldn’t be done without careful planning. Typically in an enterprise, network and storage management teams don’t work together. That has to change with FCoE.
“Certainly at a minimum, collaboration between Ethernet and Fibre Channel departments will have to increase, and merging the two would not be a bad idea,” says Rousom, who is considering FCoE. “The people who manage the infrastructure have to agree on policy, change processes, troubleshooting policies and the configuration management process.”
That could be a tall order in many enterprises, FCIA’s Jones says. “Network and storage administrators drink different kinds of tea; they’re different breeds. There will be some turf wars.”
L.A. County has dealt with those differences by combining its data-center storage and network groups under a single management umbrella. Initially, the groups’ engineering staffs were concerned about merging, Fagundo says. “Then they started contributing to the design and proof of concept, and now they’re happy about it.”
Beyond cultural issues, enterprises must communicate with their server vendors to find out whether their platforms support CNAs as opposed to separate Ethernet and Fibre Channel adapters, Rousom says. IT managers also must consider security ramifications. “If the Fibre Channel network has a firewall or special security features, the FCoE products may not offer those as part of their feature sets,” he explains.
For now, FCoE is gaining traction with such early adopters as L.A. County, where unique circumstances have made the time right to consider a big change. Many more IT executives will take a more cautious approach, however, as Rousom is doing.
“The fact that FCoE is still in the standards process tells me that there is still potential for change at the hardware level before this is all complete,” Rousom says. “That, more than anything, is preventing me from buying it.”

Bushaus is a freelance technology writer in the Chicago area. She can be reached at dbushaus@mindspring.com.


Section 3: Best practices
Seven tips for succeeding with virtualization
Experts share best practices for optimizing strategic
virtualization initiatives

By Denise Dubie, Network World, 10/20/2008

As server virtualization projects gain scale and strategic value, enterprise IT managers must move quickly beyond tactical approaches to achieve best results.
Consider these Gartner forecasts: More than 4 million virtual machines will be installed on x86 servers this year, and the number of virtualized desktops could grow from less than 5 million in 2007 to 660 million by 2011. The popularity of virtualizing x86 server and desktop resources has many enterprise IT managers reassessing ways to update already virtualized network and storage resources, too.
Virtualization’s impact will spread beyond technology changes to operational upheaval. Not only must enterprise IT executives move from a tactical to a strategic mindset but they also must shift their thinking and adjust their processes from purely physical to virtual.
“Enterprise IT managers are going to have to start thinking virtual first and learn how to make the case for virtualization across IT disciplines,” says James Staten, principal analyst at Forrester Research. “This will demand they change processes. Technologies can help, but if managers don’t update their best practices to handle virtual environments, nothing will get easier.”
Here enterprise IT managers and industry watchers share best practices they say will help companies seamlessly grow from 30 to 3,000 virtual machines without worry.

1. Approach virtualization holistically
Companies considering standardizing best practices for x86-based server virtualization should think about how they plan to incorporate desktop, application, storage and network virtualization in the future.
IT has long suffered from a silo mentality, with technology expertise living in closed clusters. The rapid adoption of virtualization could exacerbate already strained communications among such IT domains as server, network, storage, security and applications.
“This wave of virtualization has started with one-off gains, but to approach the technology strategically, IT managers need to look to the technology as achieving more than one goal across more than one IT group,” says Andi Mann, research director at Enterprise Management Associates.
To do that, an organization’s virtualization advocates should champion the technology by initiating discussions among various IT groups and approaching vendors with a broad set of requirements that address short- and long-term goals. Vendors with technologies in multiple areas, such as servers and desktops, or with partnerships across IT domains could help IT managers better design their virtualization-adoption road maps. More important, however, is preventing virtualization implementations from creating more problems via poor communications or antiquated organizational charts, industry watchers say.
“With ITIL and other best-practice frameworks, IT has become better at reaching out to other groups, but the speed at which things change in a virtual environment could hinder that progress,” says Jasmine Noel, a principal analyst at Ptak, Noel and Associates. “IT’s job is to evolve with the technology and adjust its best practices, such as change management, to new technologies like virtualization.”

2. Identify and inventory virtual resources
Understanding the resources available at any given time in a virtual environment requires enterprise IT managers to enforce strict processes from a virtual machine’s birth through death.
Companies need a way to identify virtual machines and other resources throughout their life cycles, says Pete Lindstrom, research director at Spire Security. The type of virtual-machine tagging he suggests would let IT managers “persistently



identify virtual-machine instances over an extended period of time,” and help to maintain an up-to-date record of the changes and patches made to the original instance. The process would provide performance and security benefits because IT managers could weed out problematic virtual machines and keep an accurate inventory of approved instances.
“The ability to track virtual machines throughout their life cycles depends on a more persistent identity scheme than is needed in the physical world. IT needs to know which virtual resources it created and which ones seemed to appear over time,” Lindstrom explains. “The virtual world is so much more dynamic that IT will need granular identities for virtual machines and [network-access control] policies that trigger when an unknown virtual machine is in the environment. Rogue virtual machines can happen on the client or the hypervisor.”
For instance, using such tools as BMC Software’s Topology Discovery, EMC’s Application Discovery Manager or mValent’s Integrity, an IT manager could perform an ongoing discovery of the environment and track how virtual machines have changed. Manual efforts couldn’t keep pace with the configuration changes that would occur because of, say, VMware VMotion or Microsoft Live Migration technologies.
“IT has to stay on top of a lot more data in a much more dynamic environment,” O’Donnell says.
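The persistent identity scheme Lindstrom describes — combined with the expiration dates Staten recommends under tip 3 below — can be prototyped in a few lines. This is a minimal sketch of the record-keeping only, not a product; in a real shop the registry would live in a CMDB and the ID would be stamped into each virtual machine’s metadata so it survives cloning and migration.

    import uuid
    from datetime import date, timedelta

    # Sketch: a registry of persistent VM identities with expiration
    # dates attached at creation time. All names are illustrative.
    registry = {}

    def register_vm(owner, purpose, ttl_days=90):
        vm_id = str(uuid.uuid4())   # persists for the VM's whole life cycle
        registry[vm_id] = {
            "owner": owner,
            "purpose": purpose,
            "created": date.today(),
            "expires": date.today() + timedelta(days=ttl_days),
            "changes": [],          # running record of patches and edits
        }
        return vm_id

    def expired(as_of=None):
        as_of = as_of or date.today()
        return [v for v, rec in registry.items() if rec["expires"] < as_of]

    vm = register_vm("finance", "quarter-end test box", ttl_days=30)
    print(vm, "expires", registry[vm]["expires"])
    print("to archive or delete:", expired(date.today() + timedelta(days=31)))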
3. Plan for capacity
Just because virtual machines are faster to deploy than physical ones, the task shouldn’t be taken lightly. “If you are not careful, you can have a lot of virtual machines that aren’t being used,” says Ed Ward, senior technical analyst at Hasbro in Pawtucket, R.I. He speaks from the experience of supporting 22 VMware ESX host servers, 330 virtual machines, 100 workstations and 250 physical machines.
To prevent virtual-machine sprawl and to curb spending for licenses and power for unused machines, Ward says he uses VKernel’s Capacity Analyzer virtual appliance. It alerts him to all the virtual machines in his environment, even those he thought he had removed.
“There are cases in which you build a virtual machine for test and then for some reason it is not removed but rather it’s still out there consuming resources, even though it is serving no purpose,” Ward says. “Knowing what we already have and planning our investments based on that helps. We can reassign assets that have outlived their initial purpose.”
When they create virtual machines, IT managers also must plan for their deletion. “Assign expiration dates to virtual machines when they are allocated to a business unit or for use with a specific application; and when that date comes, validate the need is no longer there and expire the resource,” Forrester’s Staten says. “Park a virtual machine for three months and if it is no longer needed, archive and delete. Archiving keeps options open without draining storage resources or having the virtual machine sitting out there consuming compute resources.”

4. Marry the physical and virtual
IT managers must choose the applications supported by virtual environments wisely, say experts, who warn that few if any IT services will rely only on the virtual infrastructure.
“While some environments could support virtual-only clusters for testing, the more common scenario would have, for instance, two virtual elements and one physical one supporting a single IT service,” says Cameron Haight, a Gartner research vice president. “IT still needs to correlate performance metrics and understand the profile of the service that spans the virtual and physical infrastructures. Sometimes people are lulled into a false sense of security thinking the tools will tell them what they need to know or just do [the correlation] for them.”
IT managers should push their vendors for reporting tools that not only show what’s happening in the virtual realm but also display the physical implications -- and potentially the cause -- of an event. Detailed views of both environments must be married to correlate why events take place in both realms.
For instance, if utilization on a host server drops from 20% to 10%, it would be helpful to know the change came about because VMware Distributed Resource Scheduler (DRS) moved virtual machines to a new physical server, Haight says. In addition, knowing when and where virtual machines migrate can help prevent a condition dubbed “VMotion sickness” from cropping up in virtual environments. This occurs when virtual machines move repeatedly across servers -- and bring problems they might have from one server to the next, Haight says. Proper reporting tools, for example, could help an administrator understand that a performance problem is traveling with a virtual machine unbeknown to DRS.
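The correlation Haight describes reduces to joining two data streams: host utilization samples and the hypervisor’s migration event log. A minimal sketch, with both record formats invented for illustration:

    # Sketch: flag host-utilization changes that coincide with migrations.
    migrations = [{"vm": "app1", "src": "esx01", "dst": "esx02", "t": 100}]
    samples = [("esx01", 90, 0.20), ("esx01", 110, 0.10)]  # (host, time, cpu)

    def migrations_near(host, t0, t1, window=30):
        return [m for m in migrations
                if m["src"] == host and t0 - window <= m["t"] <= t1 + window]

    (host, t0, u0), (_, t1, u1) = samples
    if u1 < u0:
        for m in migrations_near(host, t0, t1):
            print(f"util drop on {host}: {m['vm']} moved to {m['dst']} at t={m['t']}")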
5. Eliminate virtual blind spots
The fluid environment created by virtualization often includes blind spots. “We monitor all physical traffic, and there is no reason why we wouldn’t want to do the same for the virtual traffic. It’s a huge risk not knowing what is going on, especially when the number of virtual servers is double what you have for physical boxes,” says Nick Portolese, senior manager of data center operations at Nielsen Mobile in San Francisco.
Portolese supports an environment with about 30 VMware ESX servers and 500 to 550 virtual machines. Early on, he realized he wasn’t comfortable with the amount of network traffic he could monitor in his virtual environment. Monitoring physical network traffic is a must, but he found the visibility into traffic within the virtual environment was non-existent.
Start-up Altor Networks provided Portolese with what he considered necessary tools to track traffic in the entire environment. Altor’s Virtual Network Security Analyzer (VNSA) views traffic at the virtual -- not just the network -- switch layer. That means inter-virtual-machine communications or even virtual desktop chatter won’t be lost in transmission, the company says. VNSA provides a comprehensive look at the virtual network and analyzes traffic to give network security managers a picture of the top application talkers, most-used protocols and aspects of virtualization relevant to security. It’s a must-have for any



virtual environment, Portolese says.
“We didn’t have anything to monitor the virtual switch layer, and
for me to try to monitor at the virtual port was very difficult. It was
impossible to tell which virtual machine is coming from where,”
Portolese explains. “You will get caught with major egg on your
face if you are silly enough to think you don’t have to monitor all
traffic on the network.”
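Whichever product does the collection, the analysis Portolese relies on is a straightforward aggregation of flow records captured at the virtual switch. A generic sketch of that step — the records themselves are invented; capturing them is the hard, product-specific part:

    from collections import Counter

    # Sketch: summarize inter-VM flow records into top talkers/protocols.
    flows = [
        {"src": "vm-web1", "dst": "vm-db1", "proto": "tcp/3306", "bytes": 9200000},
        {"src": "vm-web2", "dst": "vm-db1", "proto": "tcp/3306", "bytes": 7500000},
        {"src": "vm-web1", "dst": "vm-web2", "proto": "tcp/443", "bytes": 1100000},
    ]

    talkers, protocols = Counter(), Counter()
    for f in flows:
        talkers[f["src"]] += f["bytes"]
        protocols[f["proto"]] += f["bytes"]

    print("top talkers:", talkers.most_common(2))
    print("top protocols:", protocols.most_common(2))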

6. Charge back for virtual resources


Companies with chargeback policies should apply the prac-
tice to the virtual realm, and those without a set process should
institute one before virtualization takes off.
Converting physical resources to virtual ones might seem like
a no-brainer to IT folks, who can appreciate the cost savings and
administration changes, but business units often worry that having
their application on a virtual server might affect performance
negatively. Even if a company’s structure doesn’t support the IT
chargeback model, business units might be more willing to get
on board with virtualization if they are aware of the related cost
savings, Forrester’s Staten says.
“IT can provide some transparency to the other departments
by showing them what they can gain by accepting a virtual server.
This includes lower costs, faster delivery against [service-level
agreements], better availability, more-secure disaster recovery and
the most important one -- [shorter time to delivery]. It will take six
weeks to get the physical server, but a virtual server will be over in
more like six hours,” Staten says.
In addition, chargeback policies would be an asset to IT groups
looking to regain some of their investment in virtualization. At
Hasbro, IT absorbs the cost of the technology while the rest of the
company takes advantage of its benefits, Ward says. “The cost of
physical machines comes out of the business department’s budget,
but the cost of virtual machines comes out of the IT budget,” he
says.
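Chargeback doesn’t have to wait for a full metering product; even a simple rate card per virtual machine makes the costs — and the savings Staten describes — visible to business units. A sketch with placeholder rates, not benchmarks:

    # Sketch: monthly chargeback for a VM from assumed unit rates.
    RATES = {"vcpu": 12.00, "gb_ram": 5.00, "gb_disk": 0.25}  # $/month

    def monthly_charge(vcpus, ram_gb, disk_gb):
        return (vcpus * RATES["vcpu"]
                + ram_gb * RATES["gb_ram"]
                + disk_gb * RATES["gb_disk"])

    # A 2-vCPU, 8GB-RAM, 100GB-disk virtual server:
    print(f"${monthly_charge(2, 8, 100):.2f}/month")  # $89.00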

7. Capitalize on in-house talent


IT organizations also must update staff to take on virtualization.
Certification programs, such as the VMware Certified Professional
(VCP) and Microsoft’s Windows Server Virtualization, are available,
but in-house IT staff must weigh which skills they need and how
to train in them. “Certifications are rare, though I do have two VCPs
on my staff. Most IT professionals who are able to take the exam
and get certified would probably work in consulting,” says Robert
Jackson, director of infrastructure at Reliance Limited Partnership in Toronto.
With training costing as much as $5,000 per course, IT workers might not get budget approval. Gartner’s Haight recommends assembling a group of individuals from the entire IT organization into a center of excellence of sorts. That would enable the sharing of knowledge about virtualization throughout the organization.
“We surveyed IT managers about virtualization skills, and about one-quarter of respondents had a negative perspective about being able to retain those skills in-house,” Haight says. “Disseminating the knowledge across a team would make an organization more secure and improve the virtualization implementation overall with fewer duplicated efforts and more streamlined approaches.”
In the absence of virtualization expertise, Linux proficiency can help, Hasbro’s Ward says. VMware support staff seems to operate most comfortably with that open source operating system, he says.
In general, moving from pilot to production means increasing the staff for the daily care and feeding of a virtual environment, Ward says. “Tools can help, but they can’t replace people.”


Don’t let the thin-provisioning gotchas getcha


A how-to on getting the most out of this advanced
storage technology

By Sandra Gittlen, Network World, 01/26/2009

Next-generation storage is all about dynamic allocation of resources, and thin provisioning can get that job done quickly and easily – but not carefree.
As thin provisioning – also called dynamic
provisioning or flex volumes – becomes a
standard feature in virtual storage arrays,
IT executives and other experts warn that
dynamic resource allocation is not a one-size-
fits-all proposition. Applying the technology in
the wrong way could create a major disaster,
they caution.
“Vendors have made it so that IT teams
think, ‘Thin provisioning is so easy, why
wouldn’t we use it?’ But some major thinking
has to go into how your storage network is
actually architected and deployed to benefit
from the technology,” says Noemi Greyzdorf,
research manager at IDC.
In traditional storage networks, IT has to project the amount of storage a particular application will need over time, then cordon off that disk space. This means buying more hardware than is needed immediately, as well as keeping poorly utilized disks spinning ceaselessly – a waste of money and energy resources.
With thin provisioning, IT can keep storage growth in check because it need not commit physical disk space for the projected amount of storage an application requires. Instead, IT relies on a pool of disk space that it draws from as the application needs more storage. Having fewer idling disks means better capacity management, increased utilization, and lower power and cooling consumption.

Hold on a sec . . .
IT executives shouldn’t let these benefits, attractive as they are, blind them to the technology’s requirements, early adopters say. You can’t just flip the switch on your storage pool and walk away, says Matthew Yotko, senior IT director at New York media conglomerate IAC, speaking about one of the biggest misconceptions surrounding thin provisioning.
You’ve got to take the critical step of setting threshold alerts within your thin-provisioning tools because you’re allowing applications to share resources, Yotko says. Otherwise, you can max out your storage space, and that can lead to application shutdowns and lost productivity because users can’t access their data.
“You can get pretty close to your boundary, fast, and that can lead to panicked calls asking your vendor to rush you a bunch of disks. Alerting is an important aspect that we originally missed,” Yotko says.
Yotko has since integrated threshold alerts for IAC’s 3Par array into a central network management system, he says. Doing that lets him keep tabs on how myriad file, e-mail, domain-controller and Web-application servers for 15 business units are handling the shared resource pool. That pool supports more than 25TB of data, he adds.
Yotko also calculates the time between an application reaching its threshold and storage being drained, and adjusts his alerts accordingly. If the window is too close, he pads it. The goal is to allow sufficient time for adding capacity – and avoiding a disaster, he says.
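Yotko’s window is, at bottom, free capacity divided by observed growth, padded by procurement lead time. A minimal sketch of the calculation with assumed figures; a real tool would pull these values from the array and from trend data:

    # Sketch: time-to-exhaustion and alert threshold for a thin pool.
    pool_tb = 25.0             # physical capacity actually installed
    used_tb = 20.5             # current consumption across thin volumes
    growth_tb_per_week = 0.9   # observed growth trend
    lead_time_weeks = 3        # realistic procurement-plus-install time
    pad_weeks = 1              # padding, in the spirit of Yotko's practice

    weeks_left = (pool_tb - used_tb) / growth_tb_per_week
    if weeks_left <= lead_time_weeks + pad_weeks:
        print(f"ALERT: ~{weeks_left:.1f} weeks of headroom -- add capacity")
    else:
        print(f"OK: ~{weeks_left:.1f} weeks of headroom")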
By setting and fine-tuning his alerts, Yotko reports being able to realize utilization rates of more than 80% among his array, a flip from the less than 20% he had realized before thin provisioning.

No dumping here
Another common mistake IT teams make is thinking that every application is a candidate for thin provisioning, says Scott McCullough, manager of technical operations at manufacturer Mine Safety Appliances in Pittsburgh.
“The only applications that can take advantage of thin provisioning are those for which you can predict storage growth,” McCullough says. For that reason, he uses NetApp’s Provisioning Manager to oversee resource pools for his Web servers, domain controllers and Oracle database, but not the company’s high-volume



SQL Server. That server would quickly drain
any projected resource allotment, he says.
Before he thin-provisions an application,
McCullough studies its performance to make
sure it won’t endanger the pool. “It doesn’t make sense to take up all your resources and potentially starve other applications,” he says.
“You definitely need to be able to forecast or trend the application’s trajectory,” IDC’s Greyzdorf
says. Biomedical and other science programs
can be particularly tricky for thin provisioning,
she notes, because they can start off needing
200GB of storage and quickly skyrocket to 3TB
with the addition of a single project.
Choosing the wrong applications to
thin-provision not only endangers your
entire storage pool, but also negates any
management and budget relief you might
gain otherwise. Done correctly, thin provi-
sioning should reduce the overall time spent
configuring and deploying storage arrays. If
applications continuously hit their thresholds,
however, and you’re forced to add capacity on
the fly, that benefit is quickly negated, costing
you in terms of personnel and budget.
This concern has ResCare rolling out thin provisioning piecemeal, says Daryl Walls, manager of system administration at the Louisville healthcare support and services provider. “We are cautious about our deployments. We evaluate each implementation to see whether it makes sense from an application, server and storage point of view,” he says.
Once his applications have been thin-provisioned, Walls closely monitors them to make sure that usage patterns don’t change dramatically. In the worst case, that would require them to be removed from the pool. “A few times we’ve underestimated, usage has crept up on us, and
we’ve received alerts saying, ‘You’re at 70% to 80% utilization,’” he says. In those instances, IT teams must decide whether to expand the application’s allotment, procure more resources or move the application off the system.

What goes where
Thin provisioning can wreak havoc on your network if you don’t have proper allotment policies in place, says Matt Vance, CIO at Nutraceutical, a health supplements company in Park City, Utah.
“IT has always managed and controlled space utilization, but with thin provisioning you can get a false sense of security. We’ve found that even with a resource pool, you still need to take responsibility in managing the way people receive and use storage. Otherwise you wind up wasting space, and that’s hard to clean up after the fact,” Vance says.
For instance, being lax about monitoring the amount of space users and applications are absorbing can lead to overspending on hardware and software, and necessitate an increase in system management. This is particularly concerning in Vance’s environment, where the principal driver for moving to virtualization and thin provisioning was the need to bring high-performance database applications online quickly without breaking the bank on storage requirements.
Reporting tools have become essential at Nutraceutical. Each time an application nears its threshold, Vance turns to Compellent Technologies’ Storage Center software tools to analyze how the server used its storage space. “We then decide whether it was used appropriately or if we need to tweak our policies,” he says.
Vance says he is a solid proponent of thin provisioning, but he cautions his peers to stave off the complacency that automation can bring on: “We can’t let the pendulum swing so far toward automation that we forget to identify where IT still has to be managing its resources.”

Gittlen is a freelance technology editor in the greater Boston area. She can be reached at sgittlen@verizon.net.


The challenge of managing mixed virtualized Linux, Windows networks
Windows and virtualization are driving need for new
management standards, tools
By John Fontana, Network World, 10/27/2008

The sprawl of management consoles, the proliferation of data they provide and the rising use of virtualization are adding challenges to corporations looking to more effectively manage mixed Linux, Windows and cloud environments.
Traditional standards are being tapped in order to bridge the platform divide, and new ones are being created to handle technologies such as virtualization that create physical platforms running one technology but hosting virtual machines running something completely different. The goal is better visibility into what is going right or wrong – and why – as complexity rises on the computing landscape.
Some help is on the way. The Distributed Management Task Force (DMTF) has begun hammering out virtualization management standards it hopes will show up in products soon. Those standards will address interoperability, portability and virtual machine life-cycle management, as well as incorporate time-honored management standards such as the Common Information Model (CIM).
Vendors such as Microsoft, VMware and Citrix are on board with the DMTF and are creating and marketing their own cross-platform virtualization management tools for x86 machines. Linux vendors, including Novell and Red Hat, and traditional management vendors such as HP also are joining in.
To underscore the importance of heterogeneous management, Microsoft is supporting Linux within its virtualization management tools slated to ship by year-end rather than relying on third-party partners.
And the vendor has said it will integrate the OpenPegasus Project, an open source implementation of the DMTF’s CIM and Web-based Enterprise Management (WBEM) standards, so it can extend its monitoring tools to other platforms.
The trend toward services is forcing IT to think about management across systems that may have little in common, including the same LAN. Services are increasingly made up of numerous application components that can be running both internally and externally, complicating efforts to oversee all the piece parts, their platforms and their dependencies.
The big four management vendors, BMC, CA, HP and IBM, are handling the mixed-environment evolution by upgrading their monolithic platforms to better manage Linux as its use grows. And a crop of next-tier vendors, start-ups and open source players are angling for a piece of the pie by providing tools that work alone, as well as plug into the dominant management frameworks.
“We are starting to see IT put more mission-critical applications on Linux and from there you only start to see the stronger growth [of Linux],” says Ute Albert, marketing manager of HP’s Insight management platform. HP is boosting its Linux support with features HP already supports for Windows platforms, such as capacity planning.
Analyst firm Enterprise Management Associates reports that use of Linux on mainframes has grown 72% in the past two years while x86 Linux growth hit 57%.
In the trenches, users are moving to suck the complexity out of their environments and make sense not only of individual network and systems components but of composite services and how to aggregate data from multiple systems and feed results back to administrators and notification systems.

Console reduction
At Johns Hopkins University, managers are trying to reduce “console sprawl” in a management environment that stretches across 200 projects – many with their own IT support in some nine research and teaching divisions, as well as healthcare centers, institutes and affiliated entities.
Project leaders pick their own applications and platforms with about 90% to 95% running Windows and 5% to 10% on
heterogeneous management, Microsoft mission-critical applications on Linux 95% running Windows and 5% to 10% on



Linux. There are also storage-area networks, network devices, Oracle software, Red Hat, VMware, EMC, IronPort e-mail relays, and hardware from Dell, HP and IBM.
John Taylor, manager of the management and monitoring team, and Jamie Bakert, systems architect in the management and monitoring group, are responsible for 15,000 desktops and 1,500 servers, nearly 50% of the university’s total environment.
“Our challenge is we do not want to create another support structure,” says Taylor, who has standardized on Microsoft’s System Center management tools anchored by Operations Manager 2007 and Configuration Manager 2007.
Because Taylor doesn’t control what systems get rolled out, he is using Quest Software’s Management Xtensions for System Center to support non-Windows infrastructure.
“Quest allows us to bring in anything with a heartbeat,” Bakert says. And that allows for managing distributed applications, which incorporate multiple components on multiple platforms.
“Microsoft has a limited scope of what they are bringing into System Center at this point,” he says.
For instance, Bakert uses Quest Xtensions to monitor IronPort relays that work with Microsoft Exchange to ensure everything in the e-mail service is monitored in one tool.
The Quest tools also let Bakert store security events on non-Windows machines so he can report on both Windows and non-Windows platforms, which helps with collecting compliance data.
Taylor and Bakert also are beta testing Microsoft’s System Center Service Manager, slated to ship in early 2010, with hopes they can reduce System Center consoles from five to one. Eventually, Service Manager’s configuration management database will host data from Configuration Manager and Operations Manager, as well as incorporate ITIL, a set of best practices for IT services management, and the Microsoft Operations Framework.
Taylor and Bakert also are testing System Center’s Virtual Machine Manager, which will manage Windows, the VMware hypervisor and Suse Linux guest environments.

Virtualization getting
Ironically, Microsoft was first to support mixed hypervisor environments because it was last to release a hypervisor – Hyper-V.
Without the benefit of the in-development Microsoft code, VMware, Novell, Red Hat, HP and others are momentarily playing catch-up on cross-platform management support.
Novell is using its February 2008 acquisition of PlateSpin to support management across both physical and virtual environments. The company’s existing partnership and interoperability agreement with Microsoft has yielded virtualization bundles and



the company’s acquisition of Managed Objects will give IT admins and business managers a unified view of how business services work across both physical and virtual environments.
“In the data center we see that people are not saying consolidate [on a platform], they are saying give me a universal remote,” says Richard Whitehead, director of product marketing for data center solutions.
Red Hat also is developing its portfolio. Its February 2008 launch of the open source oVirt Project has a stated goal of producing management products for mixed environments.
“The oVirt framework will be used to control guests in a cloud environment, create pools of resources, create images, deploy images, provision images and manage the life cycle of those,” says Mike Ferris, director of product strategy for the management business unit at Red Hat.
HP has aligned its HP Insight Dynamics – Virtual Server Environment (VSE) with VMware and plans to add support for Microsoft’s Hyper-V in the next release, according to HP’s Albert. In addition, HP is increasing the feature set of its Linux management and monitoring support.
And while the vendors work on their tools, the DMTF is working on standards it hopes will be as common as existing DMTF standards CIM and WBEM.
The Virtualization Management Initiative (VMAN) released by the DMTF in September 2008 is designed to provide interoperability and portability standards for virtual computing. The initiative includes the Open Virtualization Format (OVF) for packaging up and deploying one or more virtual machines to either Linux or Windows platforms. Tools that are based on VMAN will provide consistent deployment, management and monitoring regardless of the hypervisor deployed.
“The truth is we have been working on this whole platform independence since 1998,” says Winston Bumpus, president of the DMTF, in regard to the organization’s goals.
Virtualization is only one of the DMTF’s initiatives. The group has started its interoperability certification program around its SMASH and DASH initiatives. The Systems Management Architecture for Server Hardware (SMASH), used to unify data center management, includes the SMASH Server Management (SM) Command Line Protocol (CLP) specification, which simplifies management of heterogeneous servers in the data center. The Desktop and Mobile Architecture for System Hardware (DASH) provides standards-based Web services management for desktop and mobile client systems.

Open source
Standards efforts are being complemented by open source vendors who are aligning their source-code flexibility with the interoperability trend.
Upstarts such as GroundWork, Likewise, Hyperic, OpenQRM, Zenoss and Quest’s Big Brother platform are working the open source route to build a united management front.
“We picked [tools] most people pick when they use open source, and we packaged them together,” says Dave Lilly, CEO of GroundWork. The company’s package includes 100 top open source projects, including Nagios, Apache and NMap. GroundWork also includes a plug-in it wrote to integrate Windows systems using Microsoft’s native Windows Management Instrumentation.
“We don’t provide the entire tool set you may want, but we at least take the time and energy out of providing the monitoring infrastructure,” Lilly says. Via standards, GroundWork can plug into other management tools such as service desk applications.
Other open source management resources include Open WS-Man, an XML SOAP-based specification for management using Web services standards. The project, which focuses on management of Linux and Unix systems, is an open source implementation of WS-Management, an industry standard protocol managed by the DMTF. There are other WS-Man variations such as the Java implementation called Wiseman.
“Interoperability is the end game,” DMTF’s Bumpus says. “You can have all the specs, but if you don’t have interoperability who cares.”
In today’s evolving data centers and services revolution it turns out a lot of IT managers are beginning to care very much.
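The practical payoff of CIM and WBEM is that one query path works against any compliant implementation, OpenPegasus among them. A sketch using the open source pywbem client; the host and credentials are placeholders, and CIM_ComputerSystem is a standard CIM class:

    import pywbem  # open source WBEM client library

    # Sketch: enumerate systems from any CIM/WBEM-compliant server.
    conn = pywbem.WBEMConnection("https://cimserver.example.com",
                                 creds=("monitor", "secret"),
                                 default_namespace="root/cimv2")

    for system in conn.EnumerateInstances("CIM_ComputerSystem"):
        print(system["Name"])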


Section 4: Inside the Data Center
The Google-ization of Bechtel
How the construction giant is transforming its IT operations
to emulate Internet leaders and embrace SaaS

By Carolyn Duffy Marsan, Network World, 10/29/2008

If you could build your IT systems and operation from scratch today, would you recreate what you have? That’s the question Geir Ramleth, CIO of construction giant Bechtel, asked himself several years ago.
The question – and the industry benchmarking exercise that followed – prompted Bechtel to transform its IT department and model it after Internet front-runners YouTube, Google, Amazon.com and Salesforce.com. After all, these companies have exploited the latest in network design, server and storage virtualization to reach new levels of efficiency in their IT operations. Ramleth wanted to mimic these approaches as Bechtel turned itself into a software-as-a-service (SaaS) provider for internal users, subcontractors and business partners.
After researching the Internet’s strongest brands, Bechtel scrapped all of its existing data centers and built three new facilities that feature the latest in server and storage virtualization.
Bechtel also designed a new Gigabit Ethernet network with hubs at Internet exchange points that it is managing itself instead of using carriers. Now, Bechtel is slashing its portfolio of software applications to simplify operations as well as the end user experience.
Dubbed the Project Services Network, Bechtel’s new strategy applies the SaaS computing model internally to provide IT services to 30,000 users, including 20,000 employees and eventually 10,000 subcontractors and other business partners.
We operate “as a service provider to a set of customers that are our own [construction] projects,” Ramleth said. “Until we can find business applications and SaaS models for our industry, we will have to do it ourselves, but we would like to operate with the same thinking and operating models as [SaaS providers] do.”
Nicholas Carr, author of several books including “The Big Switch: Rewiring the World from Edison to Google” which chronicles a shift to the SaaS model, called Bechtel’s strategy a smart move.
“For the largest enterprises, the very first step into the Internet cloud may well be exactly what Bechtel is doing: building their own private cloud to try to get the cost savings and flexibility of this new model,” Carr says. “Large companies have such enormous scale in their own IT operations that the outside providers, the true utility providers, just aren’t big enough yet…to make them a better option.”
Carr predicts, however, that Bechtel’s do-it-yourself SaaS strategy will be an interim step until the company is able to fully outsource its IT infrastructure. That may take as long as 10 years, he adds.
“My guess is that over time -- and maybe it will start with the HR system -- Bechtel will look outside and start running some aspects of its IT operations off of [SaaS] sites,” Carr says. “Then its cloud will start to blur with the greater Internet cloud.”
Bryan Doerr, CTO of utility computing provider Savvis, says many enterprises like Bechtel are interested in the SaaS model for applications that don’t differentiate them from their competition.
“The move to SaaS is simply a different delivery model for an application that has little to do with intellectual property or innovation,” Doerr says. “Once you make the decision to outsource to somebody else’s software, the decision to host it yourself or use a SaaS provider is about economics…. For many enterprises, licensing by end user seat is much more efficient than licensing as a bulk package, buying servers and storage and data center space, and then training people to self-host the application.”
Several business challenges are driving Bechtel’s SaaS strategy.
Bechtel is a leading construction,



engineering and project management firm with 42,500 employees and 2007 revenue of $27 billion. The privately held company is working in more far-flung locations, and it has difficulty finding and retaining talented employees to work on its projects.
Bechtel’s employees are demanding business software that is as intuitive as popular Web sites. The company doesn’t have time to train end users in software applications, nor can it afford to maintain hundreds of applications.
“We needed a different way of doing applications and supporting them,” Ramleth says. “We have more employees coming into the organization, and we need to get them up to speed fast.”
Another key business challenge is protecting Bechtel’s intellectual property when so many subcontractors and business partners have access to its network and data.
“A third of the people on our network are non-Bechtel employees. That exposure forms a security risk,” Ramleth says.
Bechtel started its transformation by trying to figure out how to revamp its software applications to operate more like leading Web sites. But what Bechtel discovered is that it had to fix the underlying IT infrastructure -- including data centers and networks -- before it could change its applications.
“Not only do you have to solve the IT architecture and the way you operate it, but you have to make sure that IT is accommodating Web applications that can operate more in an Internet mode than in an intranet mode,” Ramleth explains.
Perhaps most impressive is that Bechtel is transforming its IT operations without additional funding. Bechtel would not release its annual budget for its Information Systems & Technology group, but the company said it has 1,150 full-time employees in its IS&T group and 75 to 100 contractors.
“We’ve mostly paid for this by re-allocation of the budgets that we otherwise would have used for refresh and maintenance,” Ramleth says. We’re “doing a total change of the traditional way of doing things, and we have done it with very little, if any, incremental funding.”



Leader benchmarks
The transformation began with Bechtel’s IS&T group spending a year trying to figure out how to drive faster adoption of consumer technology such as Google and Amazon.com across the company.
“We asked ourselves: If we started Bechtel today, would we do IT the same way we are doing it today? The answer was no. If we had a clean slate, we wouldn’t do it the way we were doing it,” Ramleth says.
Ramleth decided to benchmark Bechtel’s IT operation against leading Internet companies launched in recent years. He zeroed in on YouTube, Google, Amazon.com and Salesforce.com for comparison.
Bechtel’s IS&T staff studied the available information about how these Internet leaders run their IT operations, and they interviewed venture capitalists with experience investing in consumer applications.
Bechtel came up with estimates for how much money YouTube spends on networking, Google on systems administration, Amazon.com on storage, and Salesforce.com on software maintenance. What Bechtel discovered is that its own IS&T group was lagging industry leaders.
“What we found were tremendous discrepancies between our metrics and what these guys were dealing with,” Ramleth says. “You can learn a tremendous amount from [companies] that have the privilege of starting recently.”
When Bechtel researched YouTube, it came to the conclusion that YouTube must be getting much less expensive network rates because otherwise it wouldn’t be able to send 100 million video streams a day for free. Bechtel estimated that YouTube spent $10 to $15 per megabit for bandwidth, while Bechtel is spending $500 per megabit for its Internet-based VPN.
YouTube was “paying a fraction of what we were paying,” Ramleth says. “We learned you have to be closer to the high-bandwidth areas and not haul the data away. We decided we better bring the data to the network, rather than bring the network to the data.”
Next, Bechtel studied how Google operated its servers. Bechtel estimated that Google used 12 system administrators for every 200,000 servers, or roughly 17,000 servers per system administrator. Bechtel, on the other hand, was operating with 1,000 servers per system administrator.
“What we learned is that you have to standardize like crazy and simplify the environment,” Ramleth says. “Google basically builds their own servers by the thousands or gets them built in a similar fashion, and they run the same software on it. So we had to get more simplified and standardized.”
Bechtel studied Amazon.com and determined that Amazon.com must have a better storage strategy if it is offering disk space for a fraction of Bechtel’s internal costs. While Amazon.com was offering storage for 10 cents per gigabyte per month, Bechtel’s internal rates in the United States were $3.75 per gigabyte.
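At Bechtel’s scale that per-gigabyte gap compounds dramatically. A quick back-of-the-envelope using the rates quoted above and an assumed 500TB footprint:

    # Monthly storage cost of an assumed 500TB at the two quoted rates.
    gb = 500 * 1024
    utility = gb * 0.10       # $0.10 per gigabyte per month
    internal = gb * 3.75      # $3.75 per gigabyte
    print(f"utility-style: ${utility:,.0f}/month")   # $51,200
    print(f"internal rate: ${internal:,.0f}/month")  # $1,920,000
    print(f"ratio: {internal / utility:.1f}x")       # 37.5x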
Ramleth says the key to reducing storage costs was not only to simplify and virtualize the storage environment, but also to drive up utilization.
“Our average utilization was 2.3%,” Ramleth says. With virtualization, “we now expect to have utilization in the 70% to 75% range.” However, he added that the new virtualized storage environment is “more complex to operate.”
Bechtel turned to Salesforce.com for its expertise in running a single application with millions of users. In contrast, Bechtel operates 230 applications, and it runs 3.5 versions per application, which means it maintains approximately 800 applications at any given time.
“When you look at Salesforce.com, not only are they running one application, but they are running one version and they are only running it in one location,” Ramleth says. “They upgrade that application four times per year, and they don’t disrupt the users by having to retrain them. Every time we have a new version, we have to retrain our users.”
With its benchmarking data in hand, Bechtel decided to revamp its IS&T operations to model itself as closely as possible after the SaaS model pioneered by these four Internet leaders.
“If you take the ideal world, everything is done as a service: computing, storage, software and operations,” Ramleth says. “It’s maybe the ultimate goal…but if you start where all the enterprises are today, that’s a very long road to go.”

A trinity of data centers
Once Bechtel committed to the SaaS model, the firm realized it needed to revamp its data centers to copy Google’s standardized, virtualized model.
Bechtel was operating seven data centers worldwide, but in 2007 replaced those with three new data centers, one each in the United States, Europe and Asia. The three data centers have identical hardware from HP, Cisco, Juniper and Riverbed. On the software side, Bechtel is using Microsoft and Citrix packages.
“The hardware is the same, the software is the same, and the same organization [is] managing all of them,” Ramleth says. “It’s like one data center, but it operates in three different locations.”
The new data centers have virtualized servers with utilization



of around 70%, Ramleth says. These centers boast the most energy-efficient design possible, so they are a fraction of the size of the older data centers and use significantly less electricity.
“In square footage, we’re down by a factor of more than 10,” Ramleth says. “Two-thirds of the power in a data center is chilling, and if I don’t have to chill that extra space…I get a dramatic reduction in power needs.”
The three new data centers are operational, and Bechtel expects to close all the older data centers by the end of 2009. Ramleth says one of the hardest aspects of the IS&T transition was closing data centers upgraded as recently as 2005.
“Six of our data centers were relatively modern. That was a tough thing. We finished a [data center] consolidation in 2005, and already in 2006 we started talking about doing a re-do of our data centers again. [Our IS&T staff] felt like they hadn’t really gotten dry from the last shower before they started getting wet again,” Ramleth says.

Do-it-yourself networking
At the outset of this research project, Bechtel was operating an IP VPN that it had installed in 2003. To drive down the cost of its network operations, Bechtel has redesigned that network using YouTube’s do-it-yourself model.
Bechtel has a Gigabit Ethernet ring connecting the three new data centers, with dual paths for failover. Bechtel is buying raw bandwidth from a variety of providers -- Cox, AboveNet, Qwest, Level 3 and Sprint -- but it is managing the network itself.
“We buy very little provisioned networking,” Ramleth says. “We do it ourselves because…it’s less costly than buying from others….We go to the Internet exchange points, to the carrier hotels, where the traffic terminates.”
So far, Bechtel has migrated the three new data centers to the new network, along with nine offices: San Francisco; Glendale, Ariz.; Houston; Oak Ridge, Tenn.; Frederick, Md.; London; New Delhi and Brisbane.
The new network “is about the same [cost] as what we paid before, but it offers a heck of a lot more capacity,” Ramleth says, adding that Bechtel is getting around 10 times more capacity for the same amount of money.
Ramleth says the biggest cost-saving of the new network design came from aggregating network traffic at Internet exchange points, which is what leading e-commerce vendors do.
“We found that for the amount of traffic and the amount of capacity that we put it in, we could do it cheaper ourselves,” Ramleth says. “This is not something for a small or medium-sized enterprise, but we found that we were big enough to be able to do it ourselves.”

Simple secure software
The next aspect of transforming Bechtel’s IS&T operations is migrating its applications to the new SaaS model.
Bechtel runs 230 software applications, 60% of which were developed internally. Ramleth says he hopes to have 20 to 30 of the firm’s key applications converted to the new SaaS model within a year.
So far, Bechtel has a dozen applications running in its new data centers, including e-mail, a Web portal, a time record system, and a workflow/document management application.
Bechtel’s ERP applications, including Oracle Financials for accounting and SAP for human resources and payroll, will be migrated to the new infrastructure by the end of 2009, Ramleth says.
“We will be getting applications in and getting them certified in this new environment,” Ramleth says. “The ones that we can’t totally certify, a few we will let die….For the low usage, infrequently used applications that are not as critical, we are using Citrix solutions to solve some of them so we don’t have to do too much with them. We’re doing that as a bridge to get from our past to our new infrastructure.”
Bechtel’s new portal is key to its SaaS strategy and its goal of having software applications that are as simple to use as Google.
In its research, Bechtel found that 80% of its end users want to get information out of an application but don’t need to understand how it works. With the new Project Services Network Portal, Bechtel’s end users can interact with applications without requiring training.
Bechtel built the portal using Microsoft’s SharePoint software. Ramleth says the portal “gives us consumerization and also gives us the new security model.”
Ramleth likens Bechtel’s security strategy to Amazon.com’s approach. With Amazon.com, users can browse freely and security is applied when a purchase is made. Similarly, Bechtel is trying to create Web applications that apply security only when needed.
“We will apply different forms of security based on what’s going on,” Ramleth says. “By using the portal, we can limit what you have access to by policy, by role, in a way where people won’t necessarily go in and find [our intellectual property] without having the right to access it.”
necting the three new data centers, with 60% of which were developed internally. Ramleth says the new portal and the
dual paths for failover. Bechtel is buying Ramleth says he hopes to have 20 to 30 of policy-based security model it provides
raw bandwidth from a variety of providers the firm’s key applications converted to are among the top benefits that Bechtel
-- Cox, AboveNet, Qwest, Level 3 and Sprint the new SaaS model within a year. is gaining out of the IS&T transformation
-- but it is managing the network itself. So far, Bechtel has a dozen applications effort. He says this benefit will be fully
“We buy very little provisioned net- running in its new data centers, including realized when Bechtel’s business partners
working,” Ramleth says.“We do it ourselves e-mail, a Web portal, a time record system, are migrated to the portal, too.
because…it’s less costly than buying from and a workflow/document management The SaaS model allows us “to bring in a
others….We go to the Internet exchange application. different cost model that can afford us to
points, to the carrier hotels, where the Bechtel’s ERP applications, including have a high-capacity, global, collaborative
traffic terminates.” Oracle Financials for accounting and SAP work environment,” Ramleth says.
So far, Bechtel has migrated the three for human resources and payroll, will be The risk for enterprises that don’t start
new data centers to the new network, migrated to the new infrastructure by the a SaaS migration strategy soon is that
along with nine offices: San Francisco; end of 2009, Ramleth says. their IT organizational structures will be a
Glendale, Ariz.; Houston; Oak Ridge, Tenn.; “We will be getting applications in competitive disadvantage, Ramleth warns.
Frederick, Md.; London; New Delhi and and getting them certified in this new “If they don’t start thinking about this
Brisbane. environment,” Ramleth says.“The ones soon, the changeover in the future will be
The new network “is about the same that we can’t totally certify, a few we will really hard,” Ramleth says.
[cost] as what we paid before, but it offers let die….For the low usage, infrequently
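
Ramleth’s power arithmetic is easy to sanity-check. The Python sketch below is a back-of-the-envelope illustration, not Bechtel’s model: the baseline wattage is invented, and it assumes cooling load scales with the floor space being chilled. Under those assumptions, a 10x cut in square footage with two-thirds of power going to chilling works out to roughly a 60% cut in total draw.

    # Back-of-envelope check of Ramleth's power claim.
    # Assumed baseline figure (illustrative only, not Bechtel's data):
    OLD_TOTAL_KW = 1000.0      # hypothetical total draw of the old data centers
    COOLING_SHARE = 2.0 / 3.0  # "two-thirds of the power in a data center is chilling"
    SPACE_FACTOR = 10.0        # "in square footage, we're down by a factor of more than 10"

    old_cooling_kw = OLD_TOTAL_KW * COOLING_SHARE
    old_it_kw = OLD_TOTAL_KW - old_cooling_kw

    # Simplifying assumption: cooling load scales with the floor space being
    # chilled, while the (now virtualized, ~70%-utilized) IT load stays the same.
    new_cooling_kw = old_cooling_kw / SPACE_FACTOR
    new_total_kw = old_it_kw + new_cooling_kw

    print(f"old total: {OLD_TOTAL_KW:.0f} kW, new total: {new_total_kw:.0f} kW")
    print(f"reduction: {1 - new_total_kw / OLD_TOTAL_KW:.0%}")
    # -> roughly a 60% cut in total power, all of it from chilling less space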
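
The article does not describe how the portal actually enforces those rules. As a rough illustration of the browse-freely, gate-the-sensitive-actions model Ramleth describes, here is a minimal Python sketch; the roles, actions and policy table are invented for the example and are not Bechtel’s.

    # Minimal sketch of apply-security-only-when-needed, in the spirit of the
    # Amazon.com analogy: browsing is open, sensitive actions are policy-checked.
    # Roles, actions and the policy table are invented for illustration.
    POLICY = {
        "view_public_docs": set(),                     # no role required: free browsing
        "download_design_docs": {"engineer", "admin"},
        "export_ip": {"admin"},                        # intellectual property stays gated
    }

    def is_allowed(user_roles: set, action: str) -> bool:
        """Allow the action if it is unrestricted or the user holds a required role."""
        required = POLICY.get(action)
        if required is None:
            return False          # unknown action: deny by default
        return not required or bool(user_roles & required)

    assert is_allowed({"visitor"}, "view_public_docs")
    assert not is_allowed({"visitor"}, "export_ip")
    assert is_allowed({"admin"}, "export_ip")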

Why San Diego city workers expect apps up and running in 30 minutes or less

Major VMware project makes application deployment a snap while easing the IT budget

By Jon Brodkin, Network World, 10/20/2008

Deploying a new application used to be a month-long headache and a budget-drainer at the San Diego Data Processing Corp. Now the process takes as little as 30 minutes and costs next to nothing, thanks to the extensive use of server virtualization.

The SDDPC, a private nonprofit that handles IT for the San Diego municipal government and its more than 10,000 city workers, embraced virtualization with a vengeance several years ago -- and has been reaping substantial rewards ever since.

The first goal was server consolidation, says Rick Scherer, a Unix systems administrator who spearheaded the SDDPC virtualization project. “We were just like everyone else: We had tons of x86 machines that were only being 10% utilized,” he says. “I presented to our directors that we could virtualize 20 machines into one box, get a better ROI and save tons of money plus data center space.”

Easier application deployment followed, which meant huge gains for business users, he adds.

Before using VMware’s ESX Server, every application upgrade or new deployment meant the SDDPC had to buy -- then install -- a new server. Such is the nature of the Windows operating system, Scherer says. “Anytime there was a new application, we couldn’t put it on a box with another application, because either those applications wouldn’t work properly or the [application] vendor didn’t support that.”

For example, when the city needed to upgrade a purchase-order system for outside contractors, SDDPC had to push the project out three to four weeks to get the infrastructure ready, Scherer says. The same three- to four-week wait was in store when the organization needed to boost processing power for a Citrix Systems Presentation Server deployment. Besides the annoyance, each new HP ProLiant server cost somewhere around $10,000, he says.

When SDDPC started with server virtualization, users were surprised at how speedily IT could turn up applications, Scherer says. “But what’s funny is now that we’ve been doing it so long, they expect it. It has put a damper on management,” he says. Users are disappointed now “if they put a request in for a server and it’s not up in a half-hour,” he adds. One of the only things preventing further virtualization right now is the time Scherer and his colleagues must devote to day-to-day tasks and other projects.

VMware loyalty
Before deploying server virtualization, the SDDPC had about 500 physical x86 machines, largely from HP. With VMware, the organization can consolidate as many as 35 virtual machines onto one physical server. Such density has allowed the organization to power off 150 physical servers; it now runs 292 virtual machines on 22 physical x86 servers -- leaving plenty of room for expansion. “A lot of those hosts aren’t even being used; they’re just for future growth,” Scherer says.

SDDPC also uses Sun’s virtualization technology on Sun Sparc servers, and now runs 120 logical servers on 90 boxes.

The goal is to virtualize as much as possible: “We’ve set an initiative: For any new application or service that needs to be deployed in our data center, we’re going to do everything we can to virtualize first. If there’s no way to virtualize it, we’ll look at physical hardware,” Scherer says, noting that the organization also is aggressively moving the city’s existing applications, as appropriate, to the virtual infrastructure.

VMware remains the server-virtualization tool of choice, even though products from rivals Citrix and Microsoft are now available and cost quite a bit less. Using VMware, SDDPC pays $10,000 to virtualize a four-socket machine, but that one physical host can support well over 20 virtual machines, Scherer says. If you buy 20 physical servers at $10,000 a pop, you’re shelling out $200,000, he says.
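
Scherer’s comparison is simple enough to spell out. The sketch below uses the figures quoted in this article -- roughly $10,000 per ProLiant server, $10,000 in VMware licensing per four-socket host, and a conservative 20 virtual machines per host; folding the host’s own hardware cost into the virtual side is our assumption, not SDDPC’s accounting.

    # Consolidation math using the figures quoted in the article.
    SERVER_COST = 10_000           # "each new HP ProLiant server cost somewhere around $10,000"
    VMWARE_COST_PER_HOST = 10_000  # "SDDPC pays $10,000 to virtualize a four-socket machine"
    VMS_PER_HOST = 20              # "well over 20 virtual machines" per host (conservative)

    workloads = 20
    physical_cost = workloads * SERVER_COST                             # $200,000
    hosts_needed = -(-workloads // VMS_PER_HOST)                        # ceiling division -> 1 host
    virtual_cost = hosts_needed * (SERVER_COST + VMWARE_COST_PER_HOST)  # $20,000

    print(f"physical: ${physical_cost:,}  virtual: ${virtual_cost:,}")
    # physical: $200,000  virtual: $20,000 -- a 10x gap before counting power and space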


Numerous management advantages, disaster-recovery capabilities and security also make the investment well worth it, Scherer says. The SDDPC operates two data centers and contracts with a co-location vendor in Chicago for disaster recovery. Initially, the organization included only mission-critical applications in its disaster-recovery plan, but it’s beginning to account for many other applications -- those it considers crucial though not necessarily mission-critical -- because virtualization makes it feasible to do so from a cost perspective, he says.

In addition, SDDPC makes extensive use of VMware’s VMotion, a live-migration function that moves virtual machines from one physical server to another without any downtime. VMotion comes in handy for balancing loads among servers, as well as for routine maintenance, Scherer says. “We can migrate virtual machines on a specific host to others within the cluster; this allows us to perform hardware and software upgrades with zero downtime and no impact to the customer,” he says.

As for the security benefits, if a virtual server becomes compromised, it’s easy to shut it down or isolate it into a separate network group. “If we [want to] have a Web zone, an application zone and a database zone, we can accomplish that with a lot less hardware. We’re virtualizing our network as well. Instead of having separate physical switches we have to pay for, maintain and manage, all of it can be done within the [VMware] ESX host just by creating separate virtual switches,” Scherer says.

SDDPC now is looking forward to its next big project, Scherer says -- virtualizing desktops. It’s testing several thin-client devices to replace many of the city’s 8,500 desktops, and plans to use VMware’s Virtual Desktop Manager to provision and manage clients. This software “includes the ability to create a work-from-home scenario. A user can go home, not necessarily have [a] thin client at home, but through a Web site connect to his desktop,” he says. “That could potentially eliminate the need for Citrix, which is a significant licensing cost to the city every year.”

Nevertheless, desktop virtualization will not happen immediately. The city refreshed most of its desktops in the past year, so the desktop virtualization project won’t kick off for the next two or three years, Scherer says.

Sticky points
Despite realizing benefits, Scherer has run into some roadblocks and challenges related to virtualization. His goal is to virtualize nearly everything, but vendor licensing setups are holding him back. The SDDPC has virtualized Citrix’s application-delivery software, SQL servers and all its Web services on VMware boxes. Vendor support issues, however, meant it could run SAP software only on virtual servers in its development environment, and run Exchange nowhere on a virtual infrastructure. SAP and Microsoft now support VMware, so Scherer will revisit those earlier decisions, he says. “Until virtualization is adopted by every single software company, you’re going to run into those issues,” he adds.

The potential for server overuse is another issue. Because it’s so easy to deploy new virtual machines, an overzealous IT pro runs the risk of overcommitting resources. It may be tempting to allocate nearly 100% of a server’s resources, but it’s better to leave room for future growth.

In addition, virtualization introduces a potential single point of failure, because dozens of applications and virtual machines may reside on one physical server. “Your biggest risk is having a hardware failure and then no place for those virtual machines to run,” Scherer says. That’s why it’s important to use such disaster-recovery features as live migration, and build out a strong network and storage system to support virtual servers, he says.

The SDDPC relies on about 250 terabytes of NetApp storage, and uses such features as data deduplication and thin-provisioning to maximize storage space and make sure each application has enough storage dedicated to it. Devoting too many physical and virtual machines to one storage array can be problematic.

“That recently happened: We had another application that wasn’t virtualized and was on the same array as our virtual-machine environment,” Scherer says. “Resources spiked up and caused contention on our virtual-machine farm. Luckily, we caught it in the early stage and moved the data off, but it was definitely a lesson learned,” he adds.

Beyond storage, IT pros who embrace virtualization need to design full multipath networks, Scherer says. “You want to make sure that if a link fails, you have full redundancy,” he says. “If all your virtual machines are running on a data store that has only one path and the link dies, that’s the same thing as unplugging all your ESX hosts, and you’re completely dead.”
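
Two of Scherer’s warnings -- don’t overcommit a host, and leave somewhere for displaced virtual machines to run after a hardware failure -- can be expressed as a single placement guard. A minimal Python sketch, with the 80% ceiling and the capacity figures invented for illustration rather than taken from SDDPC policy:

    # Guard against the overcommit trap: keep headroom on every host so that
    # VMs displaced by a hardware failure still have somewhere to run.
    # The 80% ceiling and the capacity units are illustrative, not SDDPC policy.
    HEADROOM_CEILING = 0.80  # never reserve more than 80% of a host's capacity

    def can_place(host_capacity_ghz: float, reserved_ghz: float, vm_demand_ghz: float) -> bool:
        """True if adding the VM keeps the host under the headroom ceiling."""
        return (reserved_ghz + vm_demand_ghz) <= HEADROOM_CEILING * host_capacity_ghz

    # A host with 32 GHz of CPU already reserving 22 GHz:
    print(can_place(32.0, 22.0, 2.0))   # True  -> 24.0 <= 25.6
    print(can_place(32.0, 22.0, 6.0))   # False -> 28.0 >  25.6, keep room for failover

The same gate generalizes to memory or storage; the point is simply that a placement request is refused before it erodes the failover margin, not after.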
