G00264126

Hype Cycle for Emerging Technologies, 2014


Published: 28 July 2014

Analyst(s): Hung LeHong, Jackie Fenn, Rand Leeb-du Toit

This Hype Cycle brings together the most significant technologies from
across Gartner's research areas. It provides insight into emerging
technologies that have broad, cross-industry relevance, and are
transformational and high-impact in potential.
Table of Contents
Analysis.................................................................................................................................................. 3
What You Need to Know.................................................................................................................. 3
The Hype Cycle................................................................................................................................ 4
New on the 2014 Hype Cycle for Emerging Technologies...........................................................7
Major Changes........................................................................................................................... 7
The Priority Matrix.............................................................................................................................9
Off the Hype Cycle......................................................................................................................... 10
On the Rise.................................................................................................................................... 11
Bioacoustic Sensing................................................................................................................. 11
Digital Security..........................................................................................................................12
Virtual Personal Assistants........................................................................................................ 14
Smart Workspace.....................................................................................................................16
Connected Home..................................................................................................................... 17
Quantified Self.......................................................................................................................... 19
Brain-Computer Interface......................................................................................................... 21
Human Augmentation...............................................................................................................22
Quantum Computing................................................................................................................ 24
Software-Defined Anything....................................................................................................... 26
Volumetric and Holographic Displays........................................................................................28
3D Bioprinting Systems............................................................................................................ 30
Smart Robots........................................................................................................................... 32
Affective Computing................................................................................................................. 33
Biochips................................................................................................................................... 35
Neurobusiness..........................................................................................................................37
Prescriptive Analytics................................................................................................................ 39
At the Peak.....................................................................................................................................40
Data Science............................................................................................................................ 40
Smart Advisors......................................................................................................................... 41
Autonomous Vehicles............................................................................................................... 43
Speech-to-Speech Translation................................................................................................. 45
Internet of Things......................................................................................................................46
Natural-Language Question Answering.....................................................................................48
Wearable User Interfaces..........................................................................................................50
Consumer 3D Printing.............................................................................................................. 52
Cryptocurrencies...................................................................................................................... 54
Complex-Event Processing.......................................................................................................56
Sliding Into the Trough....................................................................................................................59
Big Data................................................................................................................................... 59
In-Memory Database Management Systems............................................................................ 61
Content Analytics......................................................................................................................63
Hybrid Cloud Computing.......................................................................................................... 65
Gamification............................................................................................................................. 67
Augmented Reality................................................................................................................... 70
Machine-to-Machine Communication Services......................................................................... 71
Mobile Health Monitoring.......................................................................................................... 74
Cloud Computing..................................................................................................................... 76
NFC..........................................................................................................................................78
Virtual Reality............................................................................................................................ 80
Climbing the Slope......................................................................................................................... 82
Gesture Control........................................................................................................................ 82
In-Memory Analytics................................................................................................................. 84
Activity Streams........................................................................................................................86
Enterprise 3D Printing............................................................................................................... 87
3D Scanners.............................................................................................................................89
Consumer Telematics............................................................................................................... 91
Entering the Plateau....................................................................................................................... 93
Speech Recognition................................................................................................................. 93
Appendixes.................................................................................................................................... 94
Hype Cycle Phases, Benefit Ratings and Maturity Levels.......................................................... 96


Gartner Recommended Reading.......................................................................................................... 97

List of Tables
Table 1. Hype Cycle Phases.................................................................................................................96
Table 2. Benefit Ratings........................................................................................................................96
Table 3. Maturity Levels........................................................................................................................97

List of Figures
Figure 1. The Journey to Digital Business............................................................................................... 5
Figure 2. Hype Cycle for Emerging Technologies, 2014..........................................................................8
Figure 3. Priority Matrix for Emerging Technologies, 2014.................................................................... 10
Figure 4. Hype Cycle for Emerging Technologies, 2013........................................................................95

Analysis
What You Need to Know
This is the 20th anniversary of the Gartner Hype Cycle. The Emerging Technologies Hype Cycle was
the first Hype Cycle; it is now complemented by more than 120 others. As in other years, the
Hype Cycle for Emerging Technologies contains a representative set of technologies that attract
strong interest from our clients, along with technologies that Gartner believes are significant and
should be monitored. This Hype Cycle targets business strategists, chief innovation officers, R&D leaders,
entrepreneurs, global market developers and emerging technology teams by highlighting a set of
technologies that will have a broad-ranging impact across the enterprise. It is the broadest
aggregate Gartner Hype Cycle, selecting from the more than 2,000 technologies featured in
"Gartner's Hype Cycle Special Report for 2014." For information on interpreting and using Gartner's
Hype Cycles, see "Understanding Gartner's Hype Cycles."
Gartner recommends that enterprises scan the technologies on this Hype Cycle at least annually,
asking whether each technology could deliver significant value to customers or the enterprise.
As always, the scanning exercise should be extended to understand how others in your industry
may leverage these technologies. This year, we encourage enterprises to scan beyond the bounds
of their industry. One of the more prominent parts of a digital business strategy is the competitive
opportunity/threat section that identifies how industry dynamics and competition may change
because of digital technologies. For example, the popularity of wearables is forcing convergence in
the areas of health, fitness and consumer electronics. What were traditionally sporting equipment
brands are now health and fitness companies that could be competing or partnering with any
combination of technology companies and healthcare providers to deliver health services.
Use this Hype Cycle to identify which technologies are emerging, and use the concept of digital
business transformation to identify which business trends may result.

The Hype Cycle


The theme for 2014 is digital business. As enterprises embark on the journey to become digital
businesses, they will leverage technologies that today are considered to be "emerging."
Understanding where your enterprise is on this journey and where you need to go will not only
determine the amount of change expected for your enterprise, but also map out which combination
of technologies supports your progression.
As set out on the Gartner road map to digital business (see Figure 1 in the HTML or PDF versions of
this document and "Get Ready for Digital Business With the Digital Business Development Path"),
there are six progressive business era models that your enterprise can identify with today and
aspire to tomorrow:

Stage 1: Analog

Stage 2: Web

Stage 3: E-Business

Stage 4: Digital Marketing

Stage 5: Digital Business

Stage 6: Autonomous


Figure 1. The Journey to Digital Business

[Figure: a road map of the six business-era stages, from Analog, Web and E-Business (before the Nexus of Forces) to Digital Marketing, Digital Business and Autonomous (after the Nexus of Forces), comparing each stage's focus, outcomes, entities (people, business, things), disruptions and representative technologies, and marking the shift from change of degree to change of kind.]

Source: Gartner (July 2014)


Since the Hype Cycle for Emerging Technologies is purposely focused on emerging
technologies, it mostly supports the last three of these stages: digital marketing, digital business
and autonomous. Let's take a look at each of these three stages in detail, and the corresponding
technologies:

Digital Marketing (Stage 4): The digital marketing stage sees the emergence of the Nexus of
Forces (mobile, social, cloud and information). Enterprises in this stage focus on new and more
sophisticated ways to reach consumers, who are more willing to participate in marketing efforts
to gain greater social connection, or product and service value. Buyers of products and services
have more brand influence than previously, and they see their mobile devices and social networks
as preferred gateways. Enterprises at this stage grapple with tapping into buyer influence to grow
their business. Enterprises seeking to reach this stage should consider the following technologies
on the Hype Cycle:

Software-defined anything, volumetric and holographic displays, neurobusiness, data
science, prescriptive analytics, complex-event processing, big data, in-memory DBMS,
content analytics, hybrid cloud computing, gamification, augmented reality, cloud
computing, NFC, virtual reality, gesture control, in-memory analytics, activity streams and
speech recognition

Digital Business (Stage 5): Digital business is the first postnexus stage on the road map and
focuses on the convergence of people, business and things. The Internet of Things (IoT) and the
concept of blurring the physical and virtual worlds are strong concepts in this stage. Physical
assets become digitalized and become equal actors in the business value chain alongside
already-digital entities such as systems and apps. 3D printing takes the digitalization of physical
items further and provides opportunities for disruptive change in the supply chain and
manufacturing. The ability to digitalize attributes of people (for example, their health vital signs) is
also part of this stage. Even currency (which is often thought of as digital already) can be
transformed (for example, cryptocurrencies). Enterprises seeking to go past the Nexus of
Forces technologies to become a digital business should look to these additional technologies:

Bioacoustic sensing, digital security, smart workspace, connected home, 3D bioprinting
systems, affective computing, speech-to-speech translation, Internet of Things,
cryptocurrencies, wearable user interfaces, consumer 3D printing, machine-to-machine
communication services, mobile health monitoring, enterprise 3D printing, 3D scanners and
consumer telematics

Autonomous (Stage 6): Autonomous represents the final postnexus stage. This stage is
defined by an enterprise's ability to leverage technologies that provide humanlike or human-replacing
capabilities. Using autonomous vehicles to move people or products and using
cognitive systems to write texts or answer customer questions are all examples that mark the
autonomous stage. Enterprises seeking to reach this stage to gain competitiveness should
consider these technologies on the Hype Cycle:

Virtual personal assistants, human augmentation, brain-computer interface, quantum
computing, smart robots, biochips, smart advisors, autonomous vehicles, and
natural-language question answering


Although we have categorized each of the technologies on the Hype Cycle into one of the digital
business stages, enterprises should not limit themselves to these technology groupings. Many early
adopters have embraced quite advanced technologies (for example, autonomous vehicles or smart
advisors) while they continue to improve nexus-related areas (for example, mobile apps).

New on the 2014 Hype Cycle for Emerging Technologies


This Hype Cycle features new entrants that enable a more fine-grained analysis of major trends. The
following technologies have been added to the 2014 Hype Cycle and were not part of the 2013
Hype Cycle, although many have been previously featured on this and other Gartner Hype Cycles:

Data science: added to reflect the growing need to combine mathematical know-how,
business domain expertise and modern computer science

Software-defined anything: added to reflect the virtualization of any IT resource and the
emerging area of software-defined physical assets

Cryptocurrencies: added because of the hype and potential significance of cryptocurrencies
like bitcoin

Hybrid cloud computing: added due to the importance of recognizing that most cloud
architectures pursued by enterprises will be a hybrid

Smart advisors: added to reflect the emergence and importance of cognitive-based systems
and advisors

Connected home: added as an important part of the Internet of Things

Digital security: added to capture the growing importance of securing people, systems and things

Smart workspace: added to recognize the emerging application of IoT to the day-to-day work
environment

Major Changes

Mobile robots changed to smart robots to reflect a broader class of robots.

Virtual assistants replaced with virtual personal assistants. Virtual assistants were specific to
customer service (for example, chatbots). Virtual personal assistants include much
broader use cases (for example, Siri or Google Now).


Figure 2. Hype Cycle for Emerging Technologies, 2014

[Figure: the Hype Cycle chart plots expectations against time, positioning each of the profiled technologies along the curve through its five phases: Innovation Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment and Plateau of Productivity. Markers indicate when each technology will reach the plateau: less than 2 years, 2 to 5 years, 5 to 10 years, more than 10 years, or obsolete before plateau. As of July 2014.]

Source: Gartner (July 2014)


The Priority Matrix


This Hype Cycle has an above-average number of technologies with a benefit rating of
transformational or high. This is a deliberate goal of the selection process. We aim to highlight
technologies that are worth adopting early because of their potentially high impact. However, the
actual benefit often varies significantly across industries. Therefore, planners should ascertain which
opportunity relates closely to their organizational requirements:

Two to five years to mainstream adoption: These technologies are focused on digital
marketing stage (Nexus-related) areas, such as cloud (cloud computing, hybrid cloud
computing) and information/analytics-related areas (in-memory DBMS and data science). The
only exception is enterprise 3D printing, which is a digital business stage technology.

Five to 10 years to mainstream adoption: Here, we find a mix of technologies that span all
three stages on the journey to become a digital business. However, with the exception of big
data and complex-event processing, most of the technologies are centered in the digital
business and autonomous stages (digital security, smart workspace, 3D bioprinting systems,
autonomous vehicles, consumer 3D printing, Internet of Things, cryptocurrencies, machine-to-machine communication services, smart advisors, virtual personal assistants).

More than 10 years to mainstream adoption: Human augmentation is the only technology
area that has been identified in this range. The cultural and ethical acceptance required for
employees, customers and citizens to augment themselves will cause this area to take many
years to reach mainstream adoption.


Figure 3. Priority Matrix for Emerging Technologies, 2014

[Figure: a matrix positioning each profiled technology by benefit rating (transformational, high, moderate or low) against years to mainstream adoption (less than 2 years, 2 to 5 years, 5 to 10 years, more than 10 years); the transformational placements are summarized in the bullets above. As of July 2014.]

Source: Gartner (July 2014)

Off the Hype Cycle


Because this Hype Cycle pulls from such a broad spectrum of topics, many technologies are
featured in a specific year because of their relative visibility, but are not tracked over a longer period
of time. Technology planners can refer to Gartner's broader collection of Hype Cycles for items of
ongoing interest. The following technologies that appeared in the "Hype Cycle for Emerging
Technologies, 2013" do not appear in this year's report:

Electrovibration

Mesh networks: sensor

Biometric authentication methods

Smart dust

On the Rise
Bioacoustic Sensing
Analysis By: Roberta Cozza
Definition: Bioacoustic sensing captures natural acoustic conduction properties in the human body
using different sensing technologies. An example of this technology is Skinput, which allows the
skin to be used as a finger input surface. When a finger taps on the skin, the impact creates acoustic
signals that are captured by a bioacoustic sensing device. Variations in bone density, size and the
different filtering effects created by soft tissues and joints create distinct acoustic locations of
signals, which are sensed, processed and classified by software.
Position and Adoption Speed Justification: This technology is being developed by researchers
from Microsoft and the Human-Computer Interaction Institute of Carnegie Mellon University in
Pittsburgh. In a prototype system, researchers focused on touch inputs on the arm and hand, and
created an armband device for sensing. They evaluated different input locations, such as the
fingertips and along the forearm.
To augment the experience, the technology can also be integrated with a pico projector that projects
dynamic graphical interfaces onto the hand or forearm. For example, a telephone keypad can be
projected onto the palm of the hand, allowing real-time dialing without the use of a mobile phone.
Researchers have also developed a scrolling interface for projection onto the forearm. Users tap the
top or bottom of the UI to scroll up or down, or go back one level in the UI hierarchy. Users can
perform a simple pinching gesture with their thumb and fingers. Accuracy of 95.5% for five input
locations on the whole arm has been demonstrated.
The technology is in the early stages of development, and future efforts will need to improve on the
noninvasiveness of wearable bioacoustic sensor devices. Additionally, the disturbance from
acoustic signals coming from other motions of the body will need to be reduced, particularly in
walking or running scenarios (such as operating an MP3 player while jogging and using Skinput).
The input method is limited to quick skin taps, which in its current form does not permit more
elaborate common gestures like sliding or dragging. Additionally, body mass index fluctuations can
decrease sensing accuracy, and there is a steep learning curve in setting up the solution.


Since its first appearance at Microsoft TechFest 2010, this project has remained under
development, with no commercial product available or expected in at least the next five
years. The latest developments focus on significantly decreasing the size of the armband and
improving input accuracy. Research is ongoing, but we have no reason to move this technology
forward on this Hype Cycle.
Another example is the bioacoustic system from AT&T Labs, which has developed a prototype
bioacoustic data transfer system that can send digital keys as vibrations through the body's
bones, enabling a door to open only to the unique acoustic "signature" of the homeowner. This
works in conjunction with piezo sensors and a device such as a smartphone: if the bioacoustic
signature matches on both the device and the door knob, the door unlocks. Another application that
AT&T is looking into is the exchange of data between people, where contact information is
transferred via a handshake.
User Advice: Advances in this technology should be monitored and considered in scenarios where
users can benefit from always-available and easily accessible input without direct access to the
keypad of a device, such as a mobile phone or portable music player.
Business Impact: Using the human body as an input surface is an interesting concept for UIs. It
could enable consumers to use larger and easily accessible additional input surface areas for
interaction, compared with the small surface areas offered by the touchscreens on handsets.
Users could benefit by having large surfaces for input without needing to carry extra items. In
addition, this type of input would allow accurate "eyes-free" touch interactions, because of our
natural sense of body configuration (proprioception). Unlike other external input devices, most
interactions could be performed without looking at the surface of a device. Experiments have also
demonstrated a good level of accuracy in the input. Other external input approaches, such as smart
fabrics or wearable computing, typically require an input device to be built into a piece of clothing,
which is more complex.
Benefit Rating: High
Market Penetration: Less than 1% of target audience
Maturity: Embryonic
Sample Vendors: AT&T; Microsoft

Digital Security
Analysis By: Earl Perkins
Definition: Digital security is the result of extending current security and risk practices to protect
digital assets of all forms in the digital business and ensure that relationships among those assets
can be trusted.
Position and Adoption Speed Justification: Gartner defines "digital" as all electronically tractable
forms and uses of information and technology, including such forms as social media, cloud-based
services, embedded software and systems, operational technologies, and the Internet of Things
(IoT). Digital security technology is the convergence of information security, IT security, operational
technology (OT) security, IoT security and physical security technologies. It is the result of digital
impacts on security and risk organizations, and on process and technology architecture, and is the
next stage of enterprise security's evolution. Digital security's mission is to mitigate digital risk. The
evolving role of digital business in the enterprise places digital security at the post-Technology
Trigger phase of the Hype Cycle.
Cybersecurity awareness is growing among business leaders and is increasingly considered a required
part of new and existing business designs. Cybersecurity designs involve assets in the physical
world (OT and IoT) connected to new, nontraditional partners beyond the enterprise, adding a layer
of technology that creates peer-to-peer relationships among businesses, people and things. Digital
security aims to protect all assets in this new environment and ensures that relationships between
those assets can be trusted. Digital security expands present-day risk and security management
practices. It includes, but is not limited to, cybersecurity practices, and incorporates services from
outside of the business. Digital security is the means business leaders can use to leverage
cybersecurity to its full business advantage and helps extend security leaders' roles in becoming full
business partners.
User Advice: CIOs and enterprise architects should accelerate their efforts to become relevant in
organizations' business plans involving OT and IoT, and should align resources and processes to
foster integrated collaboration with security architecture, planning, management and operations.
Product managers should pursue new partners in security technologies and services to ensure that
business efforts to embrace OT and IoT assets will be accommodated. Strategic planners should
expand their knowledge and awareness of industrial automation and control, physical security, and
embedded system designs to accommodate long-term planning for digital security architecture.
Information security managers should establish organizational responsibilities for selected team
members to coordinate with OT counterparts and business managers to embrace the IoT in their
initiatives. Those managers must reshape enterprise security practices to be more inclusive and
collaborative across business disciplines that include industrial, commercial and consumer
enterprises. Information security managers can ultimately transform themselves into digital security
managers as their responsibilities expand into the digital business.
Business Impact: Digital security will reshape information security, IT security, OT security,
physical security, and related security processes and organizations, and will allow security leaders
to better relate to business processes. This will occur in the following areas:

Business scenario planning: Digital security will now be part of the business initiative planning
cycle to counter the expanding complexity of multiple asset relationships, multiple partners and
providers, and technology combinations.

Restructuring due to merger, acquisition or divestiture: Digital security practices will offset the
inertia often experienced due to conflicting IT/OT/IoT requirements in different companies by
delivering a security architecture and design layer more adaptive to such changes. This will not
be an overnight realization, but will evolve as digital security maturity improves.


Supply chain security: Digital security will be a means to coordinate and enforce security
practices across supply chain partnerships, including those that use cloud-based services to
deliver business solutions. The enforcement will be driven by specific business mandates
relative to trust with those relationships.

Security management and operations: Digital security teams will provide more direct, relevant
data to business teams involved in applications and services that use digital security
technologies and services as part of their business intelligence efforts. Cloud-based security
services will transform digital security practices by leveraging scale and capability in coverage.

Benefit Rating: Transformational


Market Penetration: Less than 1% of target audience
Maturity: Embryonic
Sample Vendors: Accenture; BMW; GE; Google; IBM; Intel; Vodafone
Recommended Reading: "Agenda Overview for Digital Business, 2014"
"Digital Business Forever Changes How Risk and Security Deliver Value"
"Digital Business: 10 Ways Technology Will Disrupt Existing Business Practices"

Virtual Personal Assistants


Analysis By: Tom Austin; Brian Manusama; Kenneth F. Brant
Definition: A virtual personal assistant (VPA) performs some of the functions of a human personal
assistant. It observes its user's behavior, and builds and maintains data models, with which it draws
inferences about people, content and contexts. It does so to predict its user's behavior and needs,
build trust and, eventually, with permission, act autonomously on its user's behalf. It makes
everyday tasks easier (by prioritizing emails, for example) and its user generally more effective (by
highlighting the most important content and interactions).
Position and Adoption Speed Justification: VPAs represent a "perfect storm": a compelling
vision, a great leap forward in technology, plentiful supply, and significant demand driven by
transformational benefits.
Vision:
Apple's 1987 video "Knowledge Navigator" envisions a VPA.
The head of Microsoft's artificial intelligence (AI) research provides more recent examples in the
video "Making Friends With Artificial Intelligence: Eric Horvitz at TEDxAustin."
Technology:
There are new and better algorithms (such as deep neural nets), much better hardware, and large
bodies of information (big data) with which to train the systems underlying VPAs.

Supply:
There are already scores of VPA precursors, which lack one or more of the defining characteristics
of VPAs. Precursors include virtual assistants in customer service applications (such as Nuance's
Nina), conversational agents (such as Apple's Siri), and contextually aware proactive search
features (such as those emerging in Google Now).
Google's Gmail Priority Inbox, introduced in 2010, is a narrow-scope VPA that organizes the user's
email based on analysis of past behavior and content. Microsoft and IBM are expected to introduce
similar capabilities by the end of 2014. Microsoft's new Outlook feature is code-named "Clutter,"
while IBM's first VPA will appear in "Mail Next."
We predict that Google, Microsoft and IBM will introduce more fully featured, opt-in VPAs in their
cloud office systems in 2015 and 2016. At the Google I/O conference in 2013, Google outlined its
"Knowledge Graph" efforts. At its SharePoint Conference 2014, Microsoft described its "Office
Graph" and a client code-named "Oslo." Both look like strong precursors to more fully featured,
conversational, opt-in VPAs that are likely to appear in the medium term. IBM has yet to reveal its
plans, but we expect it to have a lot to offer in the same time frame.
Venture capital investments in AI-related businesses are booming, and many startups are being
acquired very early on, leading us to believe that there will be no shortage of supply of VPAs (or
their subsystems and precursors).
Demand:
Initial demand for VPAs is likely to be driven by individual "bring your own" experiments, followed by
more serious investigations by enterprises into whether VPAs can deliver a transformative
advantage. Since the late 20th century, most progress in end-user-facing ad hoc tools has been
disappointing, due to a lack of compelling new user benefits. VPAs may be the first new technology
this century to present a real justification for investing ahead of everyone else (see "The IT Role in
Helping High-Impact Performers Thrive").
This will not be a "winner takes all" segment. There will be many different VPAs for individuals and
enterprises to consider. Individuals may use several VPAs with different specializations, such as
health-related VPAs to help with diet, exercise, the quantified self, relationships and psychological
well-being; VPAs to serve as personal shoppers; personal-career development and financial-management
VPAs; and others for office-specific tasks like calendar management, email handling
and external information monitoring.
User Advice: IT leaders should:

Encourage experimentation, while creating opportunities for employees to share experiences
and recommendations. Lead by doing.

Prepare for mail-centered VPAs first, followed by a blossoming of the full range of capabilities
envisioned in Apple's 1987 video and more.


Recognize that privacy, security and innovation are at odds. Watch cautiously while
encouraging experimentation. Imposing too many controls too soon due to a lack of trust in
your employees could eliminate the opportunity to outflank competitors. Equally, though,
granting your employees too much trust could be self-defeating, unless you keep careful watch.

Carefully measure the impact of VPAs on people's behavior and performance. Use an ever-evolving set of metrics, identified by observation and crowdsourcing.

Business Impact: VPAs have the potential to transform the nature of work and the structure of the
workplace. They could upset career structures and enhance workers' performance. But they have
challenges to overcome beyond simply moving from research labs to product portfolios. It is far too
early to determine whether, or how, they will overcome privacy concerns (although opt-in
requirements make sense). Individuals will think long and hard about what they want each VPA to
see and who else might view that information. Similarly, enterprises will be concerned about
employees exposing confidential information via VPAs.
Benefit Rating: Transformational
Market Penetration: Less than 1% of target audience
Maturity: Embryonic
Sample Vendors: Apple; Google; Highspot; IBM; Microsoft; Nuance
Recommended Reading: "The IT Role in Helping High-Impact Performers Thrive"
"Cool Vendors in Smart Machines, 2014"
"Top 10 Strategic Technologies The Rise of Smart Machines"
"The Disruptive Era of Smart Machines Is Upon Us"
"Market Insight: Virtual Assistants Will Make Cognizant Computing Functional and Simplify App
Usage"

Smart Workspace
Analysis By: Mike Gotta; Matthew W. Cain; Tom Austin
Definition: Smart workspace brings embedded programmability to the physical work environment
that surrounds employees, such as meeting rooms, cubicles, in-building open spaces, home offices
or mobile settings, whether people are physically and/or virtually together. In the smart workspace,
"objects" (whiteboards, building interfaces, large digital displays, workstations, mobile devices,
wearable interfaces) participate in work activities via communications features that create a network
of "things," which contextually facilitate people's interactions.
Position and Adoption Speed Justification: The Internet of Things (IoT) has gained enormous
attention because of its potential to merge the physical with the digital, resulting in new business
models. There is growing interest in how the enterprise environment can exploit this physical/digital
integration in similar ways that improve workforce engagement and the employee experience.
Smart workspace is primarily the result of the intersection of five trends:

IoT

Digitalization of business processes

Smart machines

Digital workplace graphs

The digital workplace

Smart workspace adopts a people-centric focus on how a connected enterprise of things can improve
employee performance, promote new ways of working, and take advantage of smart machines
(such as virtual personal assistants, smart advisors and other "software things"). Smart workspace
innovation will be influenced by embedded technology advances in nonenterprise environments,
such as appliances, cities, fashion, security, transportation, homes and consumer electronics.
Smart workspace will also be constrained by the pace of the dependent contributors listed above
(all bar IoT). There are also synergies between smart workspace and quantified self, as personalized
sensors provide employees with analytics and feedback on mood, stress, posture and where
they spend most of their attention (on tasks, applications or conversations, for example).
User Advice: Enterprise strategists focusing on a digital workplace strategy and digitalized
business processes should follow smart workplace trends and look for deployment opportunities.
Emerging applications will expand beyond traditional productivity scenarios to include situations
that are more industry- and process-specific, such as an insurance professional using a digital pen
that interacts directly with back-end processing systems, or a patient remotely monitored via a
wearable interface in their home that interfaces with diagnostic systems and advises healthcare
professionals to improve care delivery.
Business Impact: Smart workspace is the application of IoT to the day-to-day work environment.
Strategies for IoT, digitalized processes, smart machines, digital workplace graphs and the digital
workplace should be used to inform adoption. Smart workspace will trigger its own form of
consumerization ("bring your own thing"), as employees will add their own objects to a smart
workplace environment. Automation impacts are broad, related to costs, efficiencies and business
effectiveness.
Benefit Rating: Transformational
Market Penetration: 1% to 5% of target audience
Maturity: Embryonic

Connected Home
Analysis By: Fernando Elizalde


Definition: A connected home is networked to enable the interconnection of multiple devices,
services and apps, ranging from communications and entertainment to healthcare and security.
These services and apps are delivered over multiple interlinked devices, providing a connected
experience for the household and enabling its inhabitants to control and monitor the home remotely.
Position and Adoption Speed Justification: The connected home is a concept that overarches
several technologies, devices, applications, services and industries. As such, it is defined in this
technology profile to provide a framework for the Hype Cycle of the same name.
The connected home concept has been around for a while. It has evolved from the "smart home"
idea to a much more complex concept that expands, without being exhaustive, to:

Media entertainment

Home security

Monitoring and automation

Health and fitness

Education

Energy-management products and services

Until recently, aspects of the connected home such as home automation systems or wireless audio
systems were viewed as luxury household items. In the past 12 to 18 months, solutions at mass-market
prices have been introduced, placing the connected home closer to the average household budget.
The connected home exists today mostly as silos of services and products, with underlying enabling
technologies that sometimes compete with each other. So far, few companies offer a managed,
integrated connected home experience, and the concept is becoming increasingly complex. There is
confusion over terms, and overlap among apps, services, devices and connection methods.
Yet the interconnection of home electronics and devices has been simplified enormously in the past
few years, with content and information being distributed throughout the home via a variety of
devices. This is largely the result of several things, including:

The maturity of access technologies (such as broadband, Wi-Fi and 4G)

The development and standardization of radio technologies, including low-energy networking
standards (such as Bluetooth LE, ZigBee and Z-Wave), which have allowed low-cost wireless
connectivity to be added to any device in the home

The simplification of user interfaces

In recent months, the market has seen the introduction of several initiatives to create true
connected home ecosystems. Many of them are being driven by carrier service providers such as
AT&T, Deutsche Telekom and Telefonica; others by technology providers such as Technicolor,
iControl and Insteon. Yet some of these solutions are focused more on home automation and
energy management than on a full connected home solution. More recently, vendors such as
Samsung, Google and Apple have announced intentions to provide partial or complete connected
home ecosystems.
Whether technology and service providers in the connected home succeed is likely to be influenced
by factors such as the evolution of the technology that drives not only new services and apps, but
also consumer expectations; changing business models; and regional differences.
User Advice:

Develop partnership strategies that build on your existing expertise in devices, services and
customer relationships. Provide a unified user experience and compelling integrated connected
home solutions.

Partner with software providers for a unified platform. Base your solutions on standardized
protocols and home gateways to speed up market adoption.

Offer ease of use and reasonable hardware costs, differentiating on the quality of experience of
the services you offer by providing efficient support.

Business Impact: Connected home solutions affect a wide spectrum of manufacturers (white
goods, entertainment electronics, home automation, security, fitness and health products), as well
as service providers ranging from energy utilities and surveillance to healthcare providers,
communications and digital entertainment services.
Benefit Rating: High
Market Penetration: Less than 1% of target audience
Maturity: Emerging
Sample Vendors: ADT; Apple; AT&T; Deutsche Telekom; Google; iControl; Insteon; Samsung
Electronics; Technicolor
Recommended Reading: "Market Trends: An Integrated Approach Will Pay Dividends in the
Connected Home"
"Market Trends: New Money-Making Apps and Services for the Connected Home"
"Market Trends: CSPs Invite Themselves Into the Connected Home"

Quantified Self
Analysis By: Mike Gotta; Whit Andrews; Frank Buytendijk
Definition: Quantified self is a movement promoting the use of self-monitoring through a wide
variety of sensors and devices. Applications or services based on user data about activities,
biometrics, environment and experiences provide a higher level of value from wearable and mobile
devices, mobile apps, sensors and other "things" that offer self-tracking analytics, cross-sensor

aggregation, social facilitation, observational learning and individualized coaching. Many different
entities will provide these applications.
Position and Adoption Speed Justification: Analysis of this data allows individuals to gain a better
understanding of their experiences and improve their wellbeing. Integration with social media allows
users to connect with peers, share information, gain community support and learn from others. The
quantified self movement has become a catalyst for the socialization of new types of technology
and behavior. However, we now believe it will take five to 10 years before these are adopted by the
mainstream due to cultural concerns (surveillance), societal acceptance (etiquette), and business
model fluctuations.
Although there are multiple types of applications, the most successful commercial implementations
can be found in sports, fitness and health. There are thousands of fitness and health-related apps in
smartphone app stores. Although application scenarios are broad, the dominant use case focuses
on motion trackers and vital-sign monitoring (blood pressure and heart rate). However, application
scenarios are expanding into areas such as mood monitoring and food/nutrition.
The breadth of devices is itself evolving rapidly. Many objects are being turned into sensor-based devices, including helmets, sneakers, glasses, watches, clothing and jewelry. The popularity
of these devices and the immaturity of the technology can sometimes cause privacy, stability and
quality issues. Proliferation of devices and apps without standards-based interoperability has
created a market opportunity for new entrants to focus on data aggregation and normalization.
Quantified self is also beginning to move into the workplace. For example, the inclusion of wearable
devices and self-tracking apps as part of corporate wellness programs is becoming an aspect of
employee engagement and digital workplace initiatives. Strategists are also looking at the potential
of quantified self to improve personal and business productivity.
User Advice: The number and variety of personal devices and self-tracking mobile apps that collect
data and provide feedback to users are increasing. Many different entities, such as device makers,
brands, software vendors, health-related firms, and developers of virtual personal assistants and
smart machines will provide these applications.
While a dedicated community of people is interested in quantified self as a life philosophy to
improve their own well-being, there are other populations interested in it to obtain medical insight or
improve more serious health conditions for themselves or in their caregiver role.
Marketers, innovation teams and community strategists should examine quantified self to help
create a more social and collaborative brand experience, while leveraging personal analytics to
establish greater customer intimacy.
Business Impact: Business strategists should ensure that proper policies and controls are in place
to address user privacy concerns related to sharing personal data gathered via wearable devices,
sensors and mobile apps. Organizations also need to invest in community management processes,
and ensure that the personal participation needs and goals of community members are addressed.
As people connect with peers, build relationships and interact with each other through the use of
wearable devices, sensors and mobile apps, there may be a need for customized applications and

unanticipated integration with other sites or internal systems. There are also behavioral, cultural and
societal factors that come into play, which strategists need to address early in design activities.
As more people use mobile and social technologies to collect and assemble data about themselves
and their immediate surroundings, business opportunities emerge to apply insights gained from
personal analytics and community participation to improve brand/customer relationships and
product/service innovation. Within the workplace, organizations can create quantified-self incentives
or requirements for employees to apply such analytics to measure performance or well-being, or to
track employees in hazardous environments for health and safety reasons.
Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: Fitbit; Jawbone; Nike
Recommended Reading: "Technology Overview: Quantified Self"

Brain-Computer Interface
Analysis By: Jackie Fenn
Definition: A brain-computer interface is a type of user interface, whereby the user voluntarily
generates distinct brain patterns that are interpreted by the computer as commands to control an
application or device. The best results are achieved by implanting electrodes into the brain to pick
up signals. Noninvasive techniques are available commercially that use a cap or headband to detect
the signals through external electrodes.
Position and Adoption Speed Justification: Brain-computer interfaces remain at an embryonic
level of maturity, although we continue to advance them slightly along the Hype Cycle to
acknowledge the growing visibility of several game-oriented products (such as those from Emotiv
and NeuroSky) in the emerging field of neurogaming. The major challenge for this technology is
obtaining a sufficient number of distinctly different brain patterns to perform a range of commands; typically, fewer than five patterns can be distinguished. However, this proves sufficient to play interactive games and control equipment or even some vehicles. One approach that operates well within these constraints is to watch for the distinctive brain pattern associated with recognizing a desired goal: for example, brain-driven typing flashes letters on the screen until the desired letter
is recognized by the user's brain. Further advances are likely to arise from research on activating
prosthetic limbs, whereby functional magnetic resonance imaging (fMRI) and other brain-scanning
techniques are being used to identify people's natural brain patterns when performing various
actions (such as closing their hands). fMRI is also proving effective in reading emotions and
determining what type of object a person is looking at or thinking about. The Obama
administration's decade-long Brain Activity Map project will also drive improved interpretation of
brain signals. Several of the commercial systems also recognize facial expressions and eye
movements as additional input.


Outside of medical uses, such as communication for people with "locked in" syndrome (a condition
in which a patient is aware and awake but cannot move or communicate verbally), other hands-free
approaches, such as speech recognition, gaze tracking or muscle-computer interfaces, offer faster
and more-flexible interaction than brain-computer interfaces. The need to wear a headband to
recognize the signals is also a serious limitation in most consumer or business contexts.
Researchers at Brown University have succeeded in reading brain signals from a low-power
wireless system implanted in animals for more than a year, paving the way for research on human
brain signal implants during the next decade.
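To make the "fewer than five patterns" constraint concrete, the sketch below classifies synthetic EEG-style feature windows into a three-command vocabulary with scikit-learn. Everything here is fabricated for illustration; a real system would need per-user calibration and far more signal processing:

```python
# Hedged sketch: mapping EEG feature windows to a tiny command set.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
COMMANDS = ["left", "right", "select"]        # deliberately small vocabulary

# Fake calibration data: 200 windows of 32 band-power features each.
X = rng.normal(size=(200, 32))
y = rng.integers(len(COMMANDS), size=200)
X += np.eye(len(COMMANDS), 32)[y] * 2.0       # nudge class means apart

clf = LinearDiscriminantAnalysis().fit(X, y)

# Classify one new window (synthesized to resemble the "right" class).
window = rng.normal(size=(1, 32)) + np.eye(len(COMMANDS), 32)[1] * 2.0
print(COMMANDS[int(clf.predict(window)[0])])  # most likely prints "right"
```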
User Advice: Treat brain-computer interfaces as a research activity. Some niche gaming and disability-assistance use cases might become commercially viable for simple controls; however, these will lack the capabilities needed to generate significant use in mainstream business IT.
Business Impact: Most research is focused on providing severely disabled individuals with the
ability to control their surroundings. Commercialization is centered on novelty game interfaces and
applications that help users become more aware of their own brain state and thus better able to relax or focus. As wearable technology becomes more commonplace, applications will
benefit from hybrid techniques that combine brain, gaze and muscle tracking to offer hands-free
interaction.
Benefit Rating: Moderate
Market Penetration: Less than 1% of target audience
Maturity: Embryonic
Sample Vendors: Brain Actuated Technologies; Emotiv; InteraXon; neurowear; Neural Signals;
NeuroSky; Personal Neuro Devices
Recommended Reading: "Cool Vendors in Human-Machine Interface, 2013"
"Market Trends: New Technologies Benefit Employees and People With Disabilities"
"Maverick* Research: The Future of Humans: Get Ready for Your Digitally, Chemically and
Mechanically Enhanced Workforce"
D. Orenstein, "Brown Unveils Novel Wireless Brain Sensor," Brown University, 28 February 2013
L.R. Hochberg, D. Bacher, B. Jarosiewicz, N.Y. Masse, J.D. Simeral, J. Vogel, S. Haddadin, J. Liu,
S.S. Cash, P. van der Smagt, J.P. Donoghue, "Reach and Grasp by People With Tetraplegia Using a
Neurally Controlled Robotic Arm," National Center for Biotechnology Information (NCBI), 16 May
2012

Human Augmentation
Analysis By: Jackie Fenn


Definition: The field of human augmentation focuses on creating cognitive and physical
improvements as an integral part of the human body. An example is using active control systems to
create limb prosthetics with characteristics that can exceed the highest natural human
performance.
Position and Adoption Speed Justification: Human augmentation moves the world of medicine,
wearable devices and implants from techniques to restore normal levels of performance and health
(such as cochlear implants and eye laser surgery) to techniques that take people beyond levels of
human performance currently perceived as "normal." In the broadest sense, technology has long offered the ability for superhuman performance, from night-vision glasses (or even a simple flashlight) that help people see in the dark, to a financial workstation that lets a trader make split-second decisions about highly complex data.
Although most techniques and devices are developed to assist people with impaired function,
development of superhuman capabilities has started. Power-assisted exoskeletons provide
increased strength and endurance to soldiers and caregivers. Hearing aids, such as the GN
ReSound LiNX, offer their wearers superior hearing ability through wireless real-time adjustments on
a mobile phone app; for example, these may be used to mute music and increase directional focus
in a noisy environment. Researchers are experimenting with creating additional senses for humans, such as the ability to sense a magnetic field (mimicking the homing instinct of birds and marine mammals), and with sensory substitution, such as allowing a blind person to drive a car by
translating visual information into vibrations. Brain stimulation techniques, such as transcranial
direct current stimulation, are proving effective in enhancing concentration and accuracy. To date,
these systems are worn or strapped onto the body, rather than surgically attached or implanted; but
with advances such as thought activation of mechanical limbs, the distinction between "native"
versus augmented capabilities will start to blur.
Increasing specialization and job competition are demanding levels of performance that will drive
more people to experiment with enhancing themselves. Augmentation that reliably delivers
moderately improved human capabilities will become a multibillion-dollar market during the next
quarter century. However, the radical nature of the trend will limit it to a small segment of the
population for most of that period. The rate of adoption will vary according to the means of
delivering the augmentation. Drugs are already used extensively for off-label performance
enhancement, such as anabolic steroids for strength and modafinil for alertness and concentration.
Wearable devices are likely to be adopted more rapidly than those involving surgery, although
individuals are already experimenting with implanting technology for purposes such as storage and
listening to music. The huge popularity of cosmetic surgery is an indicator that even surgery is not a
long-term barrier, given the right motivation.
Ethical controversies regarding human augmentation will emerge even before the technology
becomes commonplace. Several states have already passed bills banning employers from requiring
chip implants as a condition of employment. Future legislation will need to tackle topics such as
whether an employer is allowed to prefer a candidate with augmented capabilities over a "natural"
one. Longer term, the potential for genetic and epigenetic manipulation to improve desirable
characteristics will further inflame deep ethical divides.


User Advice: Organizations aiming to be very early adopters of technology, particularly those
whose employees are engaged in physically demanding work, should track lab advances in areas
such as strength, endurance or sensory enhancement. Employers will need to weigh the value of
human augmentation against the growing capabilities of robot workers, particularly as robots may
involve fewer ethical and legal minefields than augmentation. Cognitive enhancement through
technology is already represented by the growing use of, and dependence on, instant mobile access to information and community, and organizations must continue to be ready for consumer- and employee-led adoption of the latest wearable or even implantable technology. Organizations
can gain an early understanding of some of the opportunities and issues by tracking the Quantified
Self movement, which promotes self-monitoring through a wide variety of sensors and devices with
a goal of improving physical and mental well-being.
Business Impact: The impact of human augmentation and the ethical and legal controversies
surrounding it will first be felt in industries and endeavors demanding extreme performance, such
as the military, emergency services and sports. In parallel, consumer applications using sensory
enhancement through augmented reality (for example, collision alerts or "friend nearby"
notifications) will be delivered initially through mobile or wearable devices.
Benefit Rating: Transformational
Market Penetration: Less than 1% of target audience
Maturity: Embryonic
Sample Vendors: Cyberdyne; Raytheon
Recommended Reading: "Maverick Research: The Future of Humans: Get Ready for Your
Digitally, Chemically and Mechanically Enhanced Workforce"
"Technology Overview: Quantified Self"
"Conjuring Images of a Bionic Future"
"Blind Man Drives High-Tech Car at Daytona Speedway"
FeelSpace belt for directional awareness

Quantum Computing
Analysis By: Jim Tully
Definition: Quantum computers use quantum mechanical states for computation. Data is held in
quantum bits (qubits), which have the ability to hold all possible states simultaneously. This
property, known as "superposition," gives quantum computers the ability to operate exponentially
faster than conventional computers as word length is increased. The data held in qubits is
influenced by data held in other qubits, even when physically separated. This effect is known as
"entanglement." Achieving both superposition and entanglement is extremely challenging.


Position and Adoption Speed Justification: A large number of technologies are being researched
to facilitate quantum computing. These include:

Lasers

Superconductivity

Nuclear magnetic resonance (NMR)

Quantum dots

Trapped ions

No particular technology has found favor among a majority of researchers, supporting our position
that the topic remains in the relatively early research stage.
Hardware based on these technologies is unconventional, complex and leading-edge, yet most
researchers agree that hardware is not the core problem. Effective quantum computing will require
the development of algorithms (quantum algorithms) that will solve real-world problems while
operating in the quantum state. The lack of these algorithms is a significant problem, although a
few have been developed. The output is typically in the form of a probability distribution, requiring
multiple runs to achieve a more accurate result.
One example is Grover's algorithm, designed for searching an unsorted database. Another is Shor's
algorithm, for integer factorization. Many of the research efforts in quantum computing use one of
these algorithms to demonstrate the effectiveness of their solution.
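As a concrete illustration, the following Python sketch simulates Grover's search classically with NumPy. The qubit count and marked item are arbitrary choices for the example; a real quantum computer is not simulated this way, and the classical simulation cost still grows exponentially with qubit count:

```python
# Classical NumPy simulation of Grover's search over N = 2**n_qubits items.
import numpy as np

def grover_search(n_qubits, marked):
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))           # uniform superposition
    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state[marked] *= -1                      # oracle: flip marked amplitude
        state = 2 * state.mean() - state         # diffusion: reflect about mean
    probabilities = state ** 2
    return int(np.argmax(probabilities)), float(probabilities.max())

index, prob = grover_search(10, marked=613)
print(index, round(prob, 3))                     # 613, found with probability near 1
```

The quadratic speedup shows up in the iteration count: roughly the square root of the search space size, versus the N/2 lookups an unstructured classical search needs on average.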
The first execution of Shor's algorithm was carried out in 2001 by IBM and Stanford University.
Since then, the focus has been on increasing the number of qubits available for computation. The
latest published achievement is a factorization of the number 21 at the University of Bristol in 2012.
The technique used in that case was to reuse and recycle qubits during the computation process in
order to minimize the required number of qubits. The practical applications indicated by these
examples are clearly very limited in scope, and we expect this situation to continue through the next
10 years or more.
D-Wave Systems has demonstrated various configurations of quantum computers, based on
supercooled chips. These systems focus on the use of quantum techniques for a range of
optimization applications. The technique finds the mathematical minimum of an objective function very quickly.
Lockheed Martin, NASA and Google are making use of D-Wave's products and services for, among
other things, research on machine learning.
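To make the minimum-finding idea tangible, here is a minimal classical simulated-annealing sketch. It is only a conceptual analogy for the optimization behavior described above; quantum annealing hardware operates on entirely different physical principles, and the objective function here is invented:

```python
# Classical simulated annealing on a toy objective with many local minima.
import math, random

def energy(x):
    return x * x + 10 * math.sin(3 * x)

x, temp = random.uniform(-10, 10), 10.0
while temp > 1e-3:
    candidate = x + random.uniform(-0.5, 0.5)
    delta = energy(candidate) - energy(x)
    # Accept downhill moves always; uphill moves with Boltzmann probability.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = candidate
    temp *= 0.999

print(f"Approximate minimum near x = {x:.3f}, energy = {energy(x):.3f}")
```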
To date, D-Wave's demonstrations have involved superposition but have not demonstrated
entanglement in any significant way. Without quantum entanglement, D-Wave computers cannot
attack the major algorithms demonstrated by the smaller quantum computers that do achieve
entanglement.
Most of the research we observe in quantum computers relates to specialized and dedicated
applications. Given the focus and achievements of research in quantum computing, Gartner's view is that general-purpose quantum computers will never be realized; they will instead be dedicated to
a narrow class of use. This suggests architectures where traditional computers offload specific
calculations to dedicated quantum acceleration engines. A lack of programming tools such as
compilers is another factor that is restricting the broader potential of the technology. Specific
applications include optimization, code breaking, image analysis and encryption.
The technology continues to attract significant funding, and a great deal of research is being carried
out. However, we have not seen any significant progress on the topic over the past year, although
publicity and hype have increased a little.
User Advice: If a quantum computer offering appears, check its usefulness across the range of
applications that you require. It will probably be dedicated to a specific application, and this may be
too narrow to justify a purchase. Check if access is offered as a service. D-Wave has now moved in
this direction, and it may be sufficient, at least for occasional computing requirements. Some user organizations may require internal computing resources, for security or other reasons. In these cases, use of the computer on a service basis, at least initially, would offer a good foundation on which to evaluate its capabilities.
Business Impact: Quantum computing could have a huge effect, especially in areas such as
optimization, code breaking, DNA and other forms of molecular modeling, large database access,
encryption, stress analysis for mechanical systems, pattern matching, image analysis and (possibly)
weather forecasting. "Big data" analytics is likely to be a primary driver over the next several years.
Benefit Rating: High
Market Penetration: Less than 1% of target audience
Maturity: Embryonic
Sample Vendors: D-Wave Systems; Delft University of Technology; IBM; Stanford University;
University of Bristol; University of Michigan; University of Southern California; Yale University

Software-Defined Anything
Analysis By: Philip Dawson
Definition: Software-defined anything (SDx) is a collective term that encapsulates the growing
market momentum for improved standards for infrastructure programmability and data center
interoperability driven by automation inherent to cloud computing, DevOps and fast infrastructure
provisioning. As a collective, SDx also incorporates various initiatives like OpenStack, OpenFlow,
the Open Compute Project and Open Rack, which share similar visions.
Position and Adoption Speed Justification: The trend to use the terminology "software defined"
started with software-defined networking (SDN), which enables a separation of the networking logic
and policies into software, from the individual devices. Because SDN separates the hardware and
software, this potentially decouples the purchasing decision and may allow the adoption of generic
hardware, which would become very disruptive. As SDx matures, the scope to extend this concept to servers and storage will grow as well. While SDx is cloud-like, it does not generally include self-selection, metering and chargeback models.
Individual SDx terms range from embryonic to beyond the Peak, although the collective term is
emerging. SDx is achieved through the concept of an infrastructure policy framework and
interoperability through open APIs (although not necessarily standard APIs). Gartner takes this
concept one step further in that the future of IT infrastructure will be model-based, with business
KPIs (such as throughput, uptime, response time, input/output per second, etc.) driving the
selection of infrastructure to meet the service needs, which in turn fosters repeatable engineering
and a direct connection between business requirements and infrastructure. The goal of SDx is to
abstract conventional, proprietary vendor hardware/software-specific implementations so that users
have less lock-in and more vendor choice over time. Most SD initiatives start life as vendor-led
strategies that encourage the creation of communities around proprietary interfaces, and true
interoperability only follows as the market commoditizes. Relative maturity and the speed of
evolution between different SDx definitions also vary widely.
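As a hedged sketch of that KPI-driven, model-based selection, the fragment below matches a workload's business KPIs against candidate infrastructure profiles. All profile names, KPI fields and thresholds are invented for illustration and correspond to no vendor's API:

```python
# Hypothetical sketch of KPI-driven infrastructure selection under an SDx
# policy framework; profiles and KPI targets are illustrative only.
REQUIRED = {"uptime": 99.95, "response_ms": 50, "iops": 20000}

PROFILES = [
    {"name": "commodity-pool", "uptime": 99.9,  "response_ms": 80, "iops": 10000},
    {"name": "flash-tier",     "uptime": 99.95, "response_ms": 40, "iops": 50000},
    {"name": "ha-cluster",     "uptime": 99.99, "response_ms": 30, "iops": 60000},
]

def meets(profile, required):
    # Uptime and IOPS must meet or exceed targets; latency must not exceed it.
    return (profile["uptime"] >= required["uptime"]
            and profile["response_ms"] <= required["response_ms"]
            and profile["iops"] >= required["iops"])

candidates = [p["name"] for p in PROFILES if meets(p, REQUIRED)]
print(candidates)  # ['flash-tier', 'ha-cluster']
```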
SDx is seen by vendors as a way of abstracting infrastructure away from the software, management
and high availability (HA)/disaster recovery (DR) characteristics of a given workload. Across the
spectrum of SDx definitions, true standards and interoperability are weak, and mechanisms for
defining and policing standards are only slowly emerging. Many vendor differentiation claims focus
on basic infrastructure positioning or, at best, infrastructure and platform delivery. In order to
achieve full potential, SDx messaging that is aimed at transforming hardware deployment must
venture more aggressively into the application and software space. Some SDx definitions are more
naturally suited to workload transformation. OpenStack, for example, defines APIs and functionality
of the infrastructure and is supported by many vendors, thus delivering a standard interoperability
layer that can counter Amazon APIs. An additional benefit of new APIs is that new applications can
be written to bring new value at the automation layer. This will potentially create a whole new
industry segment.
Meanwhile, it is very easy for some vendors to blur the distinction between different SDx definitions,
or between an SDx definition and its "Open" stack counterpart. For instance, SDN and OpenFlow
are closely related initiatives to drive more open networking standards; they are not analogous to
each other. Similarly, OpenStack is an example of an SDx that will be supported by most relevant
vendors, but many of them will seek to create alternative SDx approaches to benefit their own
platform characteristics. Therefore, a vendor may be a nominal member of the OpenStack
community, but in reality prefer its own SDx to OpenStack (to monetize it or enable greater
technology lock-in). SDx definitions may also be thinly disguised variants of existing concepts; for
instance, most of the implementations of software-defined storage (SDS) today are really variants of
the storage resource management (SRM) concept that has been commonplace for a decade.
User Advice: SDx provides a way for vendors to leverage their installed base presence to drive
broader ecosystem acceptance from users and partners in their own domains. In doing so, there is
a danger that this defeats the purpose of greater substitutability and commoditization. Across
domains, standards are patchy (and some will take years to evolve), but SDx represents a powerful
set of trends that will become increasingly tangible over time, especially where they force vendor
collaboration that benefits user choice and heterogeneity. Over time, as SDx matures, the development of standards that multiple vendors align with will drive broader infrastructure component choice and interoperability; in the medium term, these standards are likely to come from vendor alliance ecosystems and their internal partnering, rather than being fully open. Properly positioned and
articulated, SDx will become the next level of differentiation, integration and management of
infrastructure/platform positioning and needs across multiple vendors in a client portfolio.
Business Impact: As individual SDx technology silos evolve and consortiums arise, look for
emerging standards and bridging capabilities to benefit your portfolio, but challenge individual
technology suppliers to demonstrate their commitment to true interoperability standards within their
specific domains. While openness will always be a claimed vendor objective, different
interpretations of SDx definitions may be anything but open. Vendors of SDN (network), SDDC (data
center), SDS (storage), SDC (compute) and SDI (infrastructure) technologies are all trying to maintain
leadership in their respective domains, while deploying SDx initiatives to aid market adjacency
plays. So vendors that dominate a sector of the infrastructure may only reluctantly abide by standards that have the potential to lower margins and open broader competitive opportunities, even when the consumer would benefit from simplicity, cost reduction and consolidation efficiency.
Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: Cisco; EMC; HP; IBM; Intel; Microsoft; NetApp; Symantec; VMware

Volumetric and Holographic Displays


Analysis By: Stephen Prentice
Definition: Volumetric displays create visual representations of objects in three dimensions, with a
360-degree spherical viewing angle in which the image changes as the viewer moves around.
Unlike most 3D planar displays, which create the illusion of depth through visual techniques
(stereoscopic or autostereoscopic), volumetric displays create lifelike images in three-dimensional
space.
Holographic displays can recreate a 3D image, but they are not true volumetric displays.
Position and Adoption Speed Justification: Volumetric displays have barely emerged from the
laboratory. The iconic volumetric image of Princess Leia created by R2-D2 in the first Star Wars
movie (released in 1977) remains an elusive, yet aspirational, goal.
True volumetric displays fall into two categories: swept volume displays, and static volume displays.
Swept volume displays use the persistence of human vision to recreate volumetric images from
rapidly projected 2D "slices." One approach is to project images onto a rapidly rotating mirror inside
a protective enclosure (to protect viewers from injury, should they attempt to touch the images).
Static volume displays use no major moving parts within the image display volume, but rather, rely
on a 3D volume of active elements (volumetric picture elements, or voxels) that change color (or
transparency) to create a 3D image within the display volume. Low-resolution displays may use transparent elements such as LEDs, while some higher-resolution displays use techniques such as
pulsed lasers that are directed by scanning mirrors to create balls of glowing plasma at the location
of each voxel.
Swept and static volumetric displays suffer from the significant dangers of rapidly moving parts or
ionized particles in the vicinity of people, especially because the volumetric nature of the generated
image convinces the brain that it is solid and "real" and, therefore, can be touched. In all cases, the
volume of data required to generate a volumetric image is considerable: typically on the order of
1,000 times more to create a 24-bit voxel image (1,024 layers on the z-axis) than the corresponding
2D image. In all cases, the amount of CPU processing required is equally significant, compared with
creating a 2D image.
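The arithmetic behind that data-volume claim is easy to reproduce; the frame resolution below is an illustrative assumption, not a standard:

```python
# Rough arithmetic for the ~1,000x voxel data claim (illustrative resolution).
width, height, depth = 1920, 1080, 1024   # 1,024 layers on the z-axis
bytes_per_element = 3                     # 24-bit color

planar = width * height * bytes_per_element
volumetric = planar * depth

print(f"2D frame:    {planar / 1e6:.1f} MB")      # ~6.2 MB
print(f"Voxel frame: {volumetric / 1e9:.1f} GB")  # ~6.4 GB, 1,024x the 2D frame
```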
Holograms can be deployed as an alternative to a volumetric display, but with a more restricted
viewing angle. It should be noted that the term "holographic display" is frequently (but incorrectly)
applied to any image that creates an appearance of 3D. Some current theatrical and conferencing
displays allow realistic images to appear out of thin air and can, with care, allow individuals to walk
"around" them. However, they are simply 21st-century implementations of the 19th-century
Pepper's ghost illusion, using high-intensity projectors and Mylar display films, and not true
volumetric or holographic displays.
Several companies, including InnoVision Labs, Sony and Realfiction, have demonstrated 3D or
holographic images generated from their projectors, but none of these has been commercialized yet.
Competing with volumetric and holographic displays, 3D displays such as those increasingly found
in televisions create a visual impression of depth, but rely on spatially multiplexed images that
deliver different views to each eye and allow the brain to reconstruct a 3D representation. They are
planar displays that simulate depth through visual effects, rather than true volumetric displays that
create an image in a display volume with real depth.
User Advice: Outside of specialized areas, where budgets are not significant constraints, this
technology remains firmly in the lab, rather than in commercial applications. Current technologies
limit the size of volumetric space that can be displayed, and the mechanical solutions create
potentially dangerous, rapidly moving parts. Until alternative approaches can be delivered (which
seems unlikely in the near future), volumetric displays will remain an extremely niche product.
Concurrently, the rapid growth and continuing development of 3D televisions in the mainstream
markets threaten to overwhelm the continuing development of volumetric and holographic displays
outside of specialized markets.
Business Impact: General applications are not well-developed for business use. To date, simple
applications in marketing have been deployed, usually targeted at high-end retail environments,
and there are some specialized applications for geospatial imaging to enhance 2D maps, and for
use in architectural rendering. However, most of these can be achieved at much lower costs using
other more-commercialized technologies, such as 3D displays. Potential application areas include
medical imaging, consumer entertainment and gaming, and design, but costs will need to fall dramatically before true volumetric displays become viable for these uses.


Benefit Rating: Low


Market Penetration: Less than 1% of target audience
Maturity: Embryonic
Sample Vendors: HP; Musion; Realfiction; Sony

3D Bioprinting Systems
Analysis By: Vi Shaffer
Definition: 3D bioprinting systems produce tissue and "products" that function like human organs.
The process is directed by medical imaging data and software that specifies the design of living
tissue and organs, plus the printing device to create usable tissue or a functioning human organ
from an individual's own or other cells.
Position and Adoption Speed Justification: This technology profile was previously named 3D
Bioprinting. The change in 2014 to 3D Bioprinting Systems better reflects the nature of this profile.
3D bioprinting for medical patient application is an arena with very complex scientific and adoption
challenges to overcome, and very profound potential impact when they are conquered. We have
nudged this up a bit again, based on another year of tangible breakthroughs. However, it is still
early in the Hype Cycle, requiring substantial further R&D. We combine the tracking of "relatively
easier" tissue delivery for scientific R&D (which has a low barrier for adoption) and "very difficult"
organ generation for human transplantation in this profile. Because of progress toward
commercialization targeted to the pharmaceutical industry, we project Time to Plateau as five to 10
years, noting that development for human transplantation will cover an unpredictable course,
including regulatory and adoption hurdles on top of pure R&D. Players agree that the earlier
("easier") use cases will be coming in the areas of drug testing/screening, and tumor and wound
studies. Thus, research organizations and the pharmaceutical industry are the earliest beneficiaries.
Other early medical uses of 3D printing have provided interesting life-saving anecdotes, such as custom stents for infants or personalized prosthetics. While important, these are not included as
bioprinting examples in this category.
While significant experimental and scientific informatics hurdles need to be overcome before broad
adoption, even within the earlier R&D/life science market, advances are coming steadily now, from
both academic research centers and commercial companies like Organovo (which calls itself a
"three-dimensional biology company").
So far, 2014 has seen a small amount of progress in this early-stage tissue and organ production
effort for patient use. Important milestones in the past year include:

Wyss Institute for Biologically Inspired Engineering at Harvard University announced it had
successfully printed multiple types of cells and blood vessels, a combination it says is
necessary to create more complex tissue. The team addressed this problem by incorporating
blood vessels into a mix of living cells and extracellular matrix.


Organovo (still operating with slight revenue and substantial grant funding) has made a series of
announcements about progress in its tissue delivery business; for example, it has initiated contracting for toxicity testing using its 3D human liver tissue for select pharmaceutical
companies for preclinical drug discovery programs. (These are not functioning livers for
transplantation into humans, but are a sign of milestones met in the commercial arena.) The
company also announced new collaborative agreements with the U.S. National Institutes of
Health to develop eye tissues, integrate 3D bioprinting with traditional drug screening
technologies and develop more clinically predictive tissue models using its NovoGen MMX
Bioprinter.

A new competitor, Regenovo, exhibited internationally for the first time at the 2014 International
CES. The company was founded to commercialize technology developed out of Zhejiang
University of Science and Technology in Hangzhou, the capital of Zhejiang Province in eastern
China. The company announced it has printed an ear cartilage sample made from real tissue,
and has been dubbed "the Chinese Organovo."

Further fueling interest, in October 2013, the EuroStemCell organization, a collaboration of more
than 90 European stem cell and regenerative medicine research labs funded by the European
Commission's Seventh Framework Programme, held an event on "Opportunities and
Challenges in 3D Bioprinting" in Cambridge, England.

User Advice: Life science companies and academic medical centers that lead in the investigation of
such potential breakthroughs will be participating in, or closely following, approaches to tissue
engineering. Although this area falls more into the realm of major emerging technologies and life
science or biomedical developments, as opposed to "classic" healthcare IT, it illustrates the
continuing significance of IT's application to the transformation of medicine. Uses like this are still
far in the future.
HDO CIOs are getting closer both to the core, clinical processes of healthcare, and to biomedical
device and clinical engineering departments. Tracking technology advances such as this one reminds the CIO of the constant potential for dramatic medical innovations and of the weighty changes underway in the landscape of medical technologies.
In addition, the detailed organ design, bioprinter device used, and organ production and placement
data will no doubt need to be incorporated into the EHR system of the future, and custom organs
would be one more type of computerized order set. This is yet another example of how the volume
and variety of data to incorporate into EHR systems and enterprise data warehouses will continue to
explode in years to come.
Business Impact: 3D bioprinting is one approach to a difficult dream for tissue engineers: to fulfill engineering designs and market demand for tissues, functioning human organs, arteries and
the like. This is one of the most dramatic examples of the potential breakthroughs that the future
fusion of medicine, engineering and IT may hold. The impact of successful commercialization on the
business of healthcare and on its definition of services offered will be profound, creating an
unprecedented demand for new, custom production services of replacement organs. It would
change the business fundamentals of currently lucrative transplant centers, and offer an intriguing
service line for medical tourism centers. Moreover, it would create new dilemmas with regard to cost-benefit analysis and medical-necessity approvals for public and private payers and
policymakers.
Benefit Rating: Transformational
Market Penetration: Less than 1% of target audience
Maturity: Embryonic
Sample Vendors: Cornell Creative Machines Lab; EOS; Organovo; Regenovo; TeVido BioDevices;
The University of Iowa; Wake Forest Institute for Regenerative Medicine
Recommended Reading: "Predicts 2014: 3D Printing at the Inflection Point"
"Technology Overview for Material Extrusion 3D Printing"
"Market Trends: 3D Printing, Worldwide, 2013"

Smart Robots
Analysis By: Kenneth F. Brant
Definition: Smart robots are literally smart machines that have a physical form factor (unlike virtual personal assistants and smart advisors) and that can work autonomously in the physical world and learn from their experiences. Smart robots sense conditions in their local environments,
recognize and solve basic problems, and learn how to improve. Some have a functional form, such
as warehouse robots from Amazon's Kiva subsidiary, while others have humanoid appearances,
such as Baxter from Rethink Robotics. They may work alongside humans or replace human labor.
Position and Adoption Speed Justification: While industrial robots have been around for a long
time and are certainly more advanced in their life cycles, the subset of smart robots is much newer
and has had significantly less adoption to date. That is why smart robots are positioned at the
midpoint between the Technology Trigger and the Peak of Inflated Expectations. Hype and
expectations will continue to build around these smart robots over the next few years as a dynamic
set of large and small suppliers develops more solutions across the wide spectrum of generic and
industry-specific use cases. Several recent key events have expedited the adoption speed we now
expect to see in this category: (1) the acquisition of Kiva Systems by Amazon and Amazon's
subsequent plans to deploy 10,000 Kiva robots to fill customer orders by the end of 2014; (2)
Google's acquisition of Boston Dynamics and seven other robotics companies within a six-month
span in the second half of 2013 and its ability to incorporate machine learning in these acquired
robot assets; (3) Rethink Robotics' launch of Baxter, which can work alongside human employees
to perform simple assembly line tasks by being shown what to do (rather than requiring
programming), at prices starting around $25,000; and (4) the transfer of military technology to
commercial and consumer robotics from companies like iRobot. These events will create a
competitive race on the supply side of the market to build scale in this category, now that we have
witnessed initial pilots and limited trials on the demand side of the market. Users, too, will race to
find competitive advantage after leaders in their industry segment have begun their journey with
smart robots.

User Advice: Consider smart robots as real substitutes and complements to your human
workforce. Begin pilots designed to assess product capability and maturity, feasibility for your
business processes, and potential returns on investment. Users in e-commerce, manufacturing,
distribution and retail segments, healthcare, and government services that have high labor costs
associated with repetitive workflows and requirements for agility and worker safety should explore
the benefits of smart robots in their operations now and plan their adoption in stages.
Business Impact: Smart robots will make their first business impact across a spectrum of product
and service-centric industries. Their ability to do physical work, with greater potential reliability,
lower costs, greater safety and higher productivity, is common across these industries. Their initial
impact will be greatest in industries that have the highest cost of labor in commercial industry
operations (like material handling and logistics in manufacturing, healthcare and retail) or face the
highest risk to their human workforce in public-sector operations (like inspecting and defusing
bombs or investigating natural disasters or other threats to citizens and military personnel). Their
business impact will be in improving productivity, reducing the costs of labor in industries that face
great employee attrition and training expenses, and increasing agility in industries that have repetitive
but changing work routines. The ability for organizations to assist, replace or redeploy their human
workers in more value-adding activities creates potentially high, but not transformational,
benefits. Typical and potential use cases include medical materials handling; hazardous waste
materials disposal; prescription filling and delivery; patient care; direct materials handling; stock
replenishment; product assembly; finished goods movements; product pick and pack; e-commerce
order fulfillment; package delivery; shopping assistance and customer care; and citizen protection.
Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: Aethon; Amazon (Kiva); Google; Honda; iRobot; Intelligent Hospital Systems;
InTouch Health; Panasonic; Rethink Robotics; Swisslog; Symbotic; VGo Communications

Affective Computing
Analysis By: Jan-Martin Lowendahl
Definition: Affective computing technologies sense the emotional state of a user (via sensors,
microphone, cameras and/or software logic) and respond by performing specific, predefined
product/service features, such as changing a quiz or recommending a set of videos to fit the mood
of the learner. Affective computing tries to address one of the major drawbacks of online learning
versus classroom learning: the teacher's capability to immediately adapt the pedagogical
situation to the emotional state of the student in the classroom.
Position and Adoption Speed Justification: True affective computing technology, with multiple
sensor input, is still mainly at the proof-of-concept stage in education, but it is gaining more interest
as online learning expands and seeks means to scale with retained or increased quality. A major
hindrance in its uptake is the lack of consumerization of the needed hardware and software involved. Because students use their personal devices, the technology has to be inexpensively available to them before education institutions can deploy affective computing software. However, products such as
Affectiva's Affdex or ThirdSight's EmoVision are promising because they enable relatively low-cost,
packaged access to affective computing functionality, even if these particular products are geared
toward testing media/advertising impact on consumers. Another industry, the automotive industry,
is more advanced. Here, the technology has not yet found its way into mainstream vehicle
production, but lightweight emotion detection (for example, detecting tiredness behind the wheel) is an option in trucks on the market today. Addressing issues such as driver distraction and driving while tired creates more awareness for mood sensing in a practical and ubiquitous product: the car.
The leading research lab in this field is MIT's Affective Computing Research Group, which has many
projects and is working on sensors, such as wristband electrodermal activity sensors connected by
Bluetooth to a smartphone, and software, such as the MIT Mood Meter, which assesses the mood on
campus based on frequency of smiles as captured by ordinary webcams. Developments like these
can speed up the application of affective computing in education, but the road ahead still seems
long due to complexity. It is possible that there needs to be a breakthrough in a more consumer-oriented area, such as gaming, before affective computing can be applied at a larger scale. One thing
that might jump-start implementation would be if facial recognition services for identification and
proctoring in online learning, from companies such as Smowl and KeyLemon, were implemented
more often and if affective computing were sold as an add-on to that kind of service. An interesting
and more specialized branch of affective computing involves robots such as the emote project. This
"artificial tutor" approach has many interesting possibilities. It uses a robot's movements to
strengthen affective feedback with the student, but it has the drawback of needing a physical robot.
The latter is likely to make this approach more costly for education institutions and delay
implementation.
Successful affective computing will most likely involve a complex architecture in order to combine
sensor input and provide an accurate response in real time. Mobile learning via cloud services and
handheld devices, such as smartphones and tablets, is likely to play a key role in the first few
generations, with a larger market penetration due to the relatively controlled ecosystem it provides
(high-capacity computing combined with a discrete device with many sensors). As content (for
example, textbooks) becomes more digitized and is consumed on devices that have several
additional sensors (for example, tablets with cameras and accelerometers), interesting opportunities
will arise to mash up the capabilities of, for example, Knewton's Adaptive Learning Platform and
ThirdSight's EmoVision, making affective computing for untutored learning more accessible. This
could potentially increase the number of data points available for statistically based adaptive
learning.
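As a purely illustrative sketch of such sensor fusion (the scores, threshold and responses below are invented, not taken from any product), a first-generation system might reduce several per-sensor estimates to one mood label that selects a predefined pedagogical response:

```python
# Illustrative only: fuse hypothetical sensor scores into a mood label and a
# predefined response, echoing the quiz/video adaptation in the definition.
from statistics import mean

def estimate_mood(face_score, voice_score, skin_conductance):
    # Each input is a 0..1 frustration estimate from a separate sensor.
    frustration = mean([face_score, voice_score, skin_conductance])
    return "frustrated" if frustration > 0.6 else "engaged"

RESPONSES = {
    "frustrated": "offer an easier quiz item and a short explainer video",
    "engaged": "continue at the current difficulty level",
}

mood = estimate_mood(face_score=0.8, voice_score=0.7, skin_conductance=0.5)
print(mood, "->", RESPONSES[mood])  # frustrated -> offer an easier quiz item...
```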
Altogether, this merits a position that is still in the trigger phase, with at least 10 years until it
reaches the Plateau of Productivity.
User Advice: Most institutions should only continue to follow the research and development of
affective computing in education and other industries. However, in order to be prepared for the
strategic tipping point of implementation, institutions should start estimating the potential impact in
terms of possible pedagogical gains and financial impact, such as increased retention for online
learning. Institutions with a large online presence, or that want to exploit the hype for brand recognition, should get involved now. Partner with automotive suppliers, consumer electronics
companies and universities (particularly online) to further explore this field.
Affective computing can involve collecting sensitive data about students, which makes it important
to ensure that privacy laws are complied with and user concerns are addressed (such as policy about if, when
and how data is stored). Preferably, any use of affective computing should involve an "opt-in"
process.
Business Impact: Affective computing is an exciting area with the potential to bring back a bit of
the lost pedagogical aspects of classroom learning and to increase the personalization of online
learning. One important advantage of this technology is that, even if it is inferior to a face-to-face
student-teacher interaction, it scales well beyond the 100-plus-student lectures that today offer
limited individual pedagogical adaptivity. A potential complement or competitor to remedy the scalability problem is the social-media-based peer-mentoring approach, as exemplified by Livemocha and, more recently, by massive open online courses (MOOCs). In the Livemocha example,
a sufficient scale of the community of quality subject matter mentors can be reached by tapping the
full Internet community of more than 2 billion users.
In general, affective computing is part of a larger set of approaches to further personalize the
educational experience online. Another example is adaptive learning that depends on the statistical
data of learners in the same pedagogical situation. It is also related to context-aware computing in
general.
The ultimate aim of affective computing in education is to enhance the learning experience of the
student, which should result in tangible results like higher grades, faster throughput and higher
retention. These results will benefit students, institutions and society.
Benefit Rating: Moderate
Market Penetration: Less than 1% of target audience
Maturity: Emerging
Sample Vendors: Affectiva; Affective Media; IBM; Pearson Education; ThirdSight
Recommended Reading: "Business Model Innovation Examples in Education"

Biochips
Analysis By: Jim Tully
Definition: Biochips relate to a number of technologies that involve the merging of semiconductor
and biological sciences. The most common form is based on an array of molecular sensors
arranged on a small surface, typically referred to as a "lab-on-chip." The underlying mechanism
utilizes microfluidic micro-electromechanical systems (MEMS) technology. These devices are used
to analyze biological elements such as DNA, ribonucleic acid and proteins, in addition to certain
chemicals.


Position and Adoption Speed Justification: Operation of lab-on-chip devices normally involves
loading a reference quantity of the material to be tested onto the array before shipping to the place
of use. The sample to be tested is then placed onto the array at the point of use. The chip detects if
the samples match and sends an appropriate signal.
A number of uses for lab-on-chip devices are emerging:

Medical applications for clinical purposes. An area of focus for specific devices is a biochip
to detect flu viruses. H5N1 (bird flu) and H1N1 (swine flu) versions have been produced.
Urinalysis is another application for detection of urinary tract infection and kidney function. One
of the benefits of this technology is faster analysis than traditional techniques, because multiple
tests can be carried out in parallel.

Detection of food pathogens. This involves the analysis of bacterial contaminants in food and
water. STMicroelectronics and Veredus Laboratories have developed a device that can detect
E. coli, salmonella, listeria and other pathogens. The device can detect more than 10 such
pathogens simultaneously.

Chemical and biohazard analysis. Further extensions of the technology are aimed at chemical
analysis, particularly for detecting explosives and biohazards.

Mobile device manufacturers have experimented with the addition of biochips onto cases of mobile
phones. This could facilitate health screening services offered by mobile operators or their partners.
Biometric sensing, including DNA fingerprinting development, is also receiving attention.
The use of biochips for clinical applications is at the stage where penetration is growing steadily in
clinical laboratories. The need to demonstrate consistent accuracy outside of R&D laboratories is a
challenge for biochip vendors. In some markets, U.S. Food and Drug Administration approval is
needed, and this delays the time to market considerably for this use of the technology.
Few biochip applications have been taken to a level where they can be administered by
nonspecialists. For significant market growth to occur, biochip applications will need to move from
specialist laboratories into doctors' surgeries and, later, into the consumer market. It will take at
least five years for biochips to enter doctors' surgeries, while consumer biochips for self-diagnosis
are probably five to 10 years away. The potential demand could be huge, provided the costs can
be made sufficiently low.
Another form of biochip is less well-developed and involves the growing of biological material on
silicon for implanting into the human body. For example, the growth of neurons on silicon is a
promising area for retina and hearing implants. The generic term "biochip" refers to all these
variations and usage of the technology.
The amount of reported progress in biochips has stalled in the past year or two, caused in some
part by the depressed macroeconomic conditions. We have, therefore, held its position constant on
the 2014 Hype Cycle.
User Advice:

Companies and government organizations involved in analyzing complex organic molecules or biological materials should examine the potential benefits of biochips.

Biochips are likely to be viable sooner than you realize, and specific areas of relevance include:
medical diagnosis, pollution monitoring, biological research, food safety, airport security and
military uses (biological warfare).

CIOs in health provider organizations should recognize that the amount of data produced from
large-scale use of biochips would be very considerable. The devices are likely to become
connected and, therefore, be part of the Internet of Things. Secure transmission and storage of
data will be essential.

Biochips represent an emerging market for vendors that have MEMS/microfluidic capabilities.
Packaging vendors in particular should take note of this technology.

Capturing the growth opportunity for medical applications will, in practice, require an
understanding of clinical processes to move this technology through clinical trials.

Business Impact:

There could be a significant impact on healthcare if the technology fulfills its promise of faster
and more accurate diagnosis, particularly during epidemics.

Other businesses will also be affected, most notably mobile device manufacturers and mobile
operators, physical security screening organizations and semiconductor vendors.

Benefit Rating: High


Market Penetration: Less than 1% of target audience
Maturity: Emerging
Sample Vendors: Affymetrix; Agilent Technologies; Imec; Owlstone; STMicroelectronics
Recommended Reading: "Silicon Technology and Biotechnology Paths Converge"

Neurobusiness
Analysis By: Jackie Fenn
Definition: Neurobusiness is the capability of applying insights from neuroscience, behavioral
science and psychology to improve business outcomes.
Position and Adoption Speed Justification: "Neurobusiness" is a relatively new term referring to
the use of psychology and other social sciences to deliver actionable business insight. Popularized
over the past few years by a flood of behavioral science books (see Recommended Reading), the
sometimes counterintuitive findings from decades of psychological research are being applied to a
range of business challenges and opportunities. Neuroscience, in particular, offers a growing ability
to monitor, understand and affect the physical mechanisms of the brain, which in turn promises precision in influencing attitudes, actions and behavior. For example, neurobusiness can offer
insight into perception, reasoning, reward responses and people's sense of belonging. Further
insights will be gained from advances in brain-scanning techniques, such as functional magnetic
resonance imaging (fMRI), which detects patterns of neural activity based on blood flow in various
regions of the brain. However, early enthusiasm will inevitably lead to exaggerated claims, and we
expect that at least a decade's worth of research and experimentation will be needed before neurobusiness
achieves its full potential.
User Advice: Larger companies with discretionary R&D budgets and a desire to lead their industries
should start experimenting with neurobusiness techniques. Consumer brands are most likely to be
first in engaging and realizing benefits. The purpose should be to test the saliency of the techniques
and gradually build knowledge and competency in applying them. Ideas for initial projects might
include a neuromarketing project for brand insights, redesigning the customer experience with Web
and call center interactions, management training on decision bias, or gamifying a corporate
innovation program. Business software intended to support any kind of decision could take
advantage by being designed to counterbalance human "irrationality" (for example, in risk management). However, organizations must be aware of potential backlash from a "creepiness"
factor and concerns over privacy.
Business Impact: Neurobusiness has the potential to deliver a broad impact across industries and
across many areas of organizational activity. Specific areas of focus will be:

Marketing. Because marketing is all about engagement and influence, marketing professionals
have long been early adopters of psychology and behavioral science. They have also been the
first to adopt neuroscience lessons in a formal way in neurometric research, which aims to
understand customers' brain responses to marketing stimuli.

Customer Experience. Organizations with customer-facing opportunities are continually trying to increase the number of "moments of delight" and the psychological addiction to a product or
service. Engagement techniques such as gamification and emotional design are increasingly
being applied to the customer experience, and the next frontier will be more direct and precise
application of the neuroscience of engagement.

Employee Performance and Decision Support. A productive area for neurobusiness is enhancing employee creativity, productivity and decision making, for example, by addressing
challenges such as unconscious decision biases or adopting practices such as mindfulness
training. The insights can be effectively delivered as training and coaching or embedded into
software and website design. This will drive top executive and top professional personal and
team performance improvement.

Human Capital Management. Many human resources professionals in large organizations are
already deeply involved in tracking and applying social, behavioral and organizational
psychology and change management, and they are adding neuroscience to the list of relevant
research disciplines that inform their programs. Targets for behavioral change might include
innovation, creativity, ethical awareness or productivity.

In the long term, neurobusiness will emerge as a high-impact business discipline across industries
(thus the Benefit Rating of High), although the benefits will be focused on significantly improving the
effectiveness of current activities rather than creating whole new approaches.
Benefit Rating: High
Market Penetration: Less than 1% of target audience
Maturity: Emerging
Recommended Reading: "Innovation Insight: Neurobusiness Validates Behavioral Sciences as a
Transformational Business Discipline"
"Maverick* Research: Living and Leading in the Brain-Aware Enterprise"
"Maverick* Research: Myths and Realities in the Brain-Aware Enterprise"
"Maverick* Research: Socially Centered Leadership"
"Your Brain at Work: Strategies for Overcoming Distraction, Regaining Focus, and Working Smarter
All Day Long," David Rock, HarperCollins, 2009
"Thinking, Fast and Slow," Daniel Kahneman, Farrar, Straus and Giroux, 2012
"Predictably Irrational: The Hidden Forces That Shape Our Decisions," Dan Ariely, HarperCollins,
2010

Prescriptive Analytics
Analysis By: Lisa Kart; Alexander Linden
Definition: The term "prescriptive analytics" describes a set of analytical capabilities that specify a
preferred course of action. The most common examples are optimization methods such as linear
programming, decision analysis methods such as influence diagrams, and predictive analytics
working in combination with rules.
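
As an illustrative aside, the optimization flavor of prescriptive analytics can be sketched in a few lines of Python with SciPy's linear programming solver; the product-mix numbers below are hypothetical, and real deployments would derive objectives and constraints from business data.

    # Minimal sketch of prescriptive analytics as linear programming.
    # The objective: recommend how much of two products to make.
    from scipy.optimize import linprog

    profit = [-40, -30]          # linprog minimizes, so negate the profits
    A_ub = [[1, 1],              # machine hours: x1 + x2 <= 40
            [2, 1]]              # labor hours:  2*x1 + x2 <= 60
    b_ub = [40, 60]

    result = linprog(profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print("Prescribed production plan:", result.x)  # the recommended action
    print("Expected profit:", -result.fun)

The output here is not a forecast but a decision, a concrete course of action, which is what distinguishes prescriptive analytics from the descriptive and predictive forms.
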
Position and Adoption Speed Justification: Although the concepts of optimization and decision
analytics have existed for decades, they are now re-emerging along with greater awareness of
advanced analytics and hype around big data. Decision management is a newer concept,
materializing over the last decade, that recognizes the value of using analytics (such as predictive
models) together with rules to make mainly operational decisions. Prescriptive analytics differs from
descriptive, diagnostic, and predictive analytics in that its output is a decision. This recommended
decision can be delivered to a human in a decision support environment, or can be coded into a
system for decision automation.
Some use cases are very mature, such as optimization in supply chain and logistics, cross-selling, database marketing and churn management, but many new use cases are emerging with as yet unknown potential. Therefore, it is still early days for broad adoption and awareness.

Because prescriptive analytics often leverages predictive methods, its adoption tends to be higher
among companies that have built predictive capabilities. Over the next three to five years, Gartner
expects the hype around prescriptive analytics to increase and it will ultimately enter the Trough of
Disillusionment as pitfalls and limitations come to light. It will reach a period of broader adoption
and productivity in five to 10 years.
User Advice: A quick way to get started is by leveraging prescriptive analytics applications aimed
at business users, such as those from River Logic, Decision Lens, Pros or Earnix. To build these
capabilities in-house, the best option is hiring individuals with prescriptive or predictive analytics
experience (often with operations research, management science, statistics, machine learning or,
increasingly, analytics and data science degrees). The alternative is to work with an experienced
service provider that can help you avoid pitfalls, demonstrate some initial success and learn about
the process. For organizations heavily using predictive analytics, decision management solutions
such as those from FICO, IBM and SAS are an easy way to get started.
Business Impact: Prescriptive analytics has extremely wide applicability in business and society. It
can apply to strategic, tactical and operational decisions to reduce risk, maximize profits, minimize
costs, or more efficiently allocate resources. Significant business benefits are common, obtained by
improving the quality of decisions, reducing costs or risks, and increasing efficiency or profits. An
important part of the approach is geared to making trade-offs among multiple objectives and
constraints. A critical success factor is having senior business leaders closely involved such that
decisions about trade-offs are aligned with organizational objectives.
Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: Ayata; Decision Lens; Earnix; FICO; Frontline Systems; Gurobi; IBM; Pros; River
Logic; SAS
Recommended Reading: "Extend Your Portfolio of Analytics Capabilities"
"Find the Best Approach to Decision Management"
"Advanced Analytics: Predictive, Collaborative and Pervasive"

At the Peak
Data Science
Analysis By: Alexander Linden; Lisa Kart
Definition: Data science is the discipline of extracting nontrivial knowledge from often complex and
voluminous data, in order to improve decision making. It involves a variety of core steps ranging
from business and data understanding, data preparation, modeling/optimization/simulation and
evaluation, to deployment of analytic models. It leverages approaches from disciplines such as
statistics and statistical learning, signal processing and pattern recognition, operations research,
machine learning and decision science.
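
To make those core steps concrete, the sketch below runs a deliberately minimal prepare-model-evaluate loop in Python with scikit-learn on synthetic data; every name and number is illustrative, and a real project would spend most of its effort on the business-understanding and data-preparation steps this sketch compresses away.

    # Minimal sketch of the data science loop: prepare, model, evaluate.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score

    # Data preparation (synthetic data stands in for real sources).
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # Modeling.
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Evaluation; deployment would apply the fitted model to new data.
    scores = model.predict_proba(X_test)[:, 1]
    print("Holdout AUC:", roc_auc_score(y_test, scores))
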
Position and Adoption Speed Justification: Data science is, to some extent, a replacement term
for data mining, but is also much more: data science is the unification of several quantitative
disciplines (statistics, machine learning, operations research, computational linguistics and others). For the first time, computer scientists, operations researchers, statisticians and others are all willing to unite behind the banner of "data science," which is a very profound development.
During the past year, this notion of data science has become much better understood as the
quantitative set of skills and methodologies in the analytics range of capabilities (see "Extend Your
Portfolio of Analytics Capabilities"). The fact that many highly regarded academic
institutions now offer data science courses, and often even degrees, means that this term has been
generally accepted. In addition, organizations hiring data scientists and building data science teams
are on the rise. Gartner expects that, within a few years, the term "data science" will gain
widespread recognition as an umbrella term for many forms of sophisticated analytics.
User Advice: Organizations that want to increase the maturity of their analytics and extend their
portfolio of analytics capabilities need to develop data science skills to leverage new big data
sources and demonstrate business value using predictive and prescriptive (and often diagnostic)
capabilities. However, organizations must recognize that data scientists are in very short supply; recruiting them internally may be difficult, but not impossible.
Business Impact: Data science drives a vast array of use cases across all industries: customer
relationship management, optimization and automation of diverse production processes, drug
research, quality and risk management, smart cities, smart systems and many more.
Benefit Rating: Transformational
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Recommended Reading: "An Eight-Question Decision Framework for Buying, Building and
Outsourcing Data Science Solutions"
"Extend Your Portfolio of Analytics Capabilities"
"Who's Who in Advanced Analytics"
"Magic Quadrant for Advanced Analytics Platforms"
"Organizational Principles for Placing Advanced Analytics and Data Science Teams"

Smart Advisors
Analysis By: Kenneth F. Brant

Definition: Smart advisors are a class of smart machines that deliver the best answers to users'
questions based on their analysis of large bodies of ingested content and knowledge of the users'
needs. Therefore, natural-language processing is essential for content and context matching.
Curating the right bodies of content (including real-time accessions), along with training and testing,
is necessary for smart advisors to excel at making probabilistic determinations. Smart advisors get
better at performing their role as they work with their users.
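
As a rough, hedged illustration of the content-and-context matching described above, the Python sketch below ranks a tiny curated corpus against a user question by TF-IDF cosine similarity using scikit-learn; the documents and question are invented, and commercial smart advisors layer far richer natural-language processing, training and probabilistic scoring on top of this basic retrieval step.

    # Toy sketch of matching a user question to curated content.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    corpus = [
        "Treatment guidelines for seasonal influenza in adults",
        "Retail promotion effectiveness by store cluster",
        "Underwriting rules for life insurance applicants",
    ]
    question = ["Which promotions work best for this store?"]

    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(corpus)   # ingested content
    q_vector = vectorizer.transform(question)        # the user's need

    scores = cosine_similarity(q_vector, doc_vectors)[0]
    best = scores.argmax()
    print("Best-matching source:", corpus[best])
    print("Similarity score:", round(float(scores[best]), 2))
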
Position and Adoption Speed Justification: Although many current offerings are being beta-tested and others are still in early trials, smart advisors like IBM's Deep Blue and Watson
have had much well-publicized success in making the right determinations about chess moves and
quiz show answers. Therefore, this technology is positioned nearer the Peak of Inflated
Expectations than we would expect based on its limited adoption to date. Tests and trials will
determine how quickly the next generation of technology will evolve to provide a wider swath of
user acceptance. Smart advisors are being developed both by megavendors like IBM and by
startups like Engage3 and Lumiata; so there is a small but interesting mix of players on the supply
side. A sign that this market is accelerating will be the appearance of several new entrants in 2014
and 2015 to create more competition and accelerate innovation and best practices in marketing and
deployment. Programs announced by IBM Watson Group to expand the ecosystem and licensing of
smart advisor technology to industry and role-based experts have the potential to expedite
adoption speed, but much remains to be seen with regard to how effective these programs will be.
Industry leaders in healthcare (Memorial Sloan Kettering Cancer Center and MD Anderson),
insurance (ANZ and USAA), and media (Nielsen) have already deployed smart advisors; Infosys'
recent partnership with IPsoft is a sign that the value proposition of smart advisors is gaining
traction in the IT services segment, as well.
User Advice: Enterprises in healthcare, retail, financial services and any other service industry with
relatively high labor costs, big, dynamic and largely unstructured datasets, and needs for highly
individualized consumer advice should explore smart advisors within the next 12 months and
assess their readiness for pilot projects. Pilots should not only establish the capability of the
technology to reduce costs and improve service levels but also explore their ability to monetize
previously unexplored commercial avenues in big data. Therefore, design and review of feasibility
tests should include not only general managers of current business units (to vet possible cost and
productivity improvements in current operations) but also the chief data officers, the chief digital
officers and the visionaries of big data monetization and digital business architecture at your firm.
Business Impact: Smart advisors will impact the industries where the presence of big, dynamic
and largely unstructured data is compounded by the need for highly individualized
recommendations, such as matching medical science findings with personal medical records, or matching promotional offers at particular retail stores with shoppers' needs. Theoretically the
smart advisor can advise across a spectrum of use cases, including enterprise users (such as
healthcare service providers), ancillary service providers (such as healthcare payers) or the end
consumers of products and services (such as patients). However, the economics of developing the
smart advisor plus channels and workable business models put the price and complexity of highly
personalized smart advisors out of the reach of the consumer end user today. This may change, but
first we expect enterprises will purchase smart advisors for their own use and for their clients' use.
For the enterprise deploying smart advisors, the business impacts can be lower costs and greater
reliability (substituting for human labor), differentiated service and brand enhancement (often in
tandem with human labor) or a combination of both. By deploying smart machines, healthcare
providers and payers can potentially improve service outcomes and reduce waste in the healthcare
system by improving medical diagnoses and treatments; retailers and consumer goods
manufacturers can improve customer satisfaction and competitive positioning while optimizing
trade promotion expenses. IT services providers can potentially improve reliability of service, reduce
employee recruitment and training costs, and improve service outcomes.
Benefit Rating: Transformational
Market Penetration: Less than 1% of target audience
Maturity: Emerging
Sample Vendors: AlchemyAPI; Digital Reasoning; Engage3; Fluid; IBM Watson Group; IPsoft;
Lumiata

Autonomous Vehicles
Analysis By: Thilo Koslowski
Definition: An autonomous vehicle is one that can drive itself from a starting point to a
predetermined destination in "autopilot" mode using various in-vehicle technologies and sensors,
such as lasers, radars and cameras, as well as advanced driver assistance systems, software, map
data, GPS and wireless data communication.
Position and Adoption Speed Justification: Advancements in sensor, positioning, imaging,
guidance, artificial intelligence (AI), mapping and communications technologies, combined with
advanced software and cloud computing, are gaining in precision to bring the autonomous vehicle
closer to reality. However, complexity challenges remain before autonomous vehicles can achieve
the reliability levels needed for actual consumer use cases. The development of autonomous
vehicles largely depends on sensor and map data technologies. Sensor data needs high-speed data
buses and very high-performance computing processors to provide real-time route guidance,
navigation and obstacle detection and analysis. The introduction of self-driving vehicles will occur in
three major phases: from automated, to autonomous, to driverless vehicles. Each phase will require
more-sophisticated and reliable capabilities that rely less on human driving intervention.
First applications of autonomous vehicles will occur during this decade, and early examples might
be limited to specific road and driving scenarios (for example, only on highways and not in snow
conditions). During 2013, several automakers, including Nissan and Daimler, announced plans to
launch self-driving vehicle offerings by 2020. In addition to continued efforts by automotive
companies, continued efforts by technology companies (such as Google, Here [Nokia], QNX, Intel
and Nvidia) are helping to achieve critical advances in autonomous driving, and to educate
consumers on the benefits and maturity of the technology. Further, autonomous machine efforts in
other industries (including the defense and transportation sector, as well as law enforcement and
entertainment) are also accelerating progress in key technologies needed for self-driving vehicles,
such as AI.

During 2013, autonomous vehicle efforts were prominently featured by mainstream media, leading to unrealistic and inflated expectations. Key challenges for the realization of
autonomous vehicles continue to be centered on cost reductions for the technology, but they
increasingly include legal and ethical considerations, such as liability and driver-related aspects. For
example, can an intoxicated driver use an autonomous vehicle? Can children use a self-driving
vehicle? How should a self-driving vehicle behave when it has to decide between running over a pet
versus causing property damage? While legal requirements are beginning to be addressed on an
international level (for example, changes to the Vienna Convention on Road Traffic), the pace of
technology innovations and individual country and state legislation will likely initially result in
specific, limited-use cases for self-driving vehicles.
User Advice: Automotive companies, service providers, governments and technology vendors (for
example, software, hardware, sensor, map data and network providers) should collaborate to share
the cost and complexity of experimentation with the required technologies, carefully balancing
accuracy objectives with user benefits.
Consumer education is critical to ensure that demand meets expectations once autonomous
vehicle technology is ready for broad deployment. For example, drivers will need to be educated on
how to take over manually in case an autonomous vehicle disengages due to technical error or to
changing environmental conditions. Specific focus needs to be applied to the transitional phase of implementing autonomous or partially autonomous vehicles alongside an existing older fleet of non-enabled vehicles. This will have implications for driver training, licensing and liability (for example, insurance).
Business Impact: Automotive and technology companies will be able to market autonomous
vehicles as having innovative driver assistance, safety and convenience features, as well as an
option to reduce vehicle fuel consumption and to improve traffic management. The interest of
nonautomotive companies highlights the opportunity to turn self-driving cars into mobile-computing
platforms, ideal for the consumption and creation of digital content, including
location-based services and vehicle-centric information and communication technologies.
Autonomous vehicles are also a part of mobility innovations and new transportation services that
have the potential to disrupt established business models. For example, autonomous vehicles will
eventually lead to new offerings that highlight mobility-on-demand access over vehicle ownership,
by having driverless vehicles pick up occupants when needed. Societal benefits from reduced
accidents, injuries and fatalities and improved traffic management can be significant, and could
even slow down or potentially reverse other macroeconomic trends. For example, if people can be
productive while being driven in an autonomous vehicle, living near a city center to be close to work
won't be as critical, which could slow down the process of urbanization.
Benefit Rating: Transformational
Market Penetration: Less than 1% of target audience
Maturity: Embryonic
Sample Vendors: Anki; Bosch; Continental Automotive Systems; Google; Intel; Knightscope;
Mobileye; Nokia; Nvidia; Valeo; ZMP

Recommended Reading: "Industry Convergence: The Digital Industrial Revolution"
"U.S. Government Must Clarify Its Terms to Boost V2V Technology Adoption"
"Predicts 2014: Automotive Companies' Technology Leadership Will Determine the Industry's
Future"
"German Consumer Vehicle ICT Study: Demand for In-Vehicle Technologies Continues to Evolve"
"Google Moves Autonomous Cars Closer to Reality"

Speech-to-Speech Translation
Analysis By: Adib Carl Ghubril
Definition: Speech-to-speech translation involves translating one spoken language into another. It
combines speech recognition, machine translation and text-to-speech technology.
Position and Adoption Speed Justification: Speech-to-speech translation entails three steps:
converting speech into text; translating text; and finally, converting text to speech. In effect,
anything that may be converted to text may be translated. Microsoft's Bing platform, running on
Windows 8.1, and Google Translate offer speech translation; furthermore, their optical character
recognition middleware allows the user to touch or select an on-screen character, from a graphic or
photo, and listen to the description or translation of that resulting text in any of the supported
languages.
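
A hedged sketch of that three-step pipeline is shown below in Python; the three stage functions are hypothetical stubs standing in for real speech recognition, machine translation and text-to-speech services, so the flow of data, not the stub logic, is the point.

    # Minimal sketch of the speech-to-speech translation pipeline.
    def recognize_speech(audio: bytes, lang: str) -> str:
        return "hello"  # stand-in for a real speech recognition service

    def translate_text(text: str, source: str, target: str) -> str:
        return {"hello": "hola"}.get(text, text)  # stand-in for machine translation

    def synthesize_speech(text: str, lang: str) -> bytes:
        return text.encode()  # stand-in for text-to-speech audio rendering

    def speech_to_speech(audio: bytes, source: str, target: str) -> bytes:
        text = recognize_speech(audio, lang=source)         # 1. speech -> text
        translated = translate_text(text, source, target)   # 2. translate text
        return synthesize_speech(translated, lang=target)   # 3. text -> speech

    print(speech_to_speech(b"...", "en", "es"))

Because errors compound across the three stages, overall quality is bounded by the weakest stage.
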
While there has been little adoption of the technology by enterprises to date, due to accuracy
limitations and response times, the availability of low-cost mobile consumer products may drive
interest and progress for higher-end applications. We continue to anticipate rising hype and
capabilities during the next two years, and a growing breadth of applicability during the next five
years. In August 2013, Facebook purchased Mobile Technologies, makers of the speech-to-speech
application "Jibbigo," and this acquisition is a reflection of Facebook's ambition to enable online
interaction.
Vendors can build on their speech recognition know-how (such as what Apple has done with Siri)
to create a translation system that can be used to support dialogue. Meanwhile, platform-specific
applications from independent developers like SayHi Translate continue to expand the
breadth of users' options. Also, experiments are being conducted with a multimodal approach, in
which information from gestures and facial expression is being used to execute translation in
context with dialogue. IBM is tackling out-of-vocabulary words by devising a machine that interacts
with the user to ascertain where the linguistic mistake was made.
User Advice: Do not view automated translation as a replacement for human translation but, rather,
see it as a way to deliver approximate translations for limited dialogues in which no human
translation capability is available. Evaluate whether low-cost consumer products can help during
business travel or first-responder situations. Leading-edge organizations can work with vendors and
labs to develop custom systems for constrained tasks.

Business Impact: Consumer mobile applications are the first to attract significant interest. Potential
enterprise applications include on-site interactions for fieldworkers, as well as government security
and emergency and social service interactions with the public. In the U.N. General Assembly, more
than 20 languages are spoken, and the yearly meeting transcription fees are significant; finding ways to automate that work would provide welcome cost relief.
Speech-to-speech translators can help improve the social interaction between foreign soldiers and
local inhabitants in the urban settings of modern-day theaters of war. In the longer term,
multinational call centers and internal communications in multinational corporations will benefit,
particularly for routine interactions. However, internal collaborative applications may be limited
because strong team relationships are unlikely to be forged if the only way to communicate is through automated translation.
Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: Cellictica; Facebook; Google; IBM; Microsoft; Philips; Science Applications
International Corp.; SpeechTrans

Internet of Things
Analysis By: Hung LeHong
Definition: The Internet of Things (IoT) is the network of physical objects that contain embedded
technology to communicate and sense or interact with their internal states or the external
environment.
Position and Adoption Speed Justification: Enterprises vary widely in their progress with the IoT.
At a simple level, adoption can be classified into three categories. But even within an enterprise,
there can be groups that have different levels of progress with the IoT; therefore, the enterprise
would exhibit a combination of these categories:

Enterprises that already have connected things but want to explore moving to an IoT: These enterprises are no strangers to the benefits and management of connected things/assets. They are experienced in operational technology, which is an internal industrial/business form of digital modernization. However, they are unfamiliar with the new Internet-based, big-data-based, mobile-app-based world. They can be equally optimistic and hesitant about moving their assets (and adding new connected assets) to this unfamiliar Internet world.

Enterprises that are unfamiliar with the IoT, but are exploring and piloting use cases: Most of these enterprises are focused on finding the best areas to implement the IoT while trying to understand the technology.

Product manufacturers that are exploring connecting their products to provide new value and functionality to their customers: It seems that every week there is a new story about a
consumer or industrial product that is now connected. However, the large enterprises often wait
and see how the startups are doing before moving forward.
Standardization (data standards, wireless protocols, technologies) is still a challenge to more-rapid
adoption of the IoT. A wide number of consortiums, standards bodies, associations and
government/region policies around the globe are tackling the standards issues. Ironically, with so
many entities each working on their own interests, we expect the lack of standards to remain a
problem over the next three to five years.
In contrast, dropping costs of technology, a larger selection of IoT-capable technology vendors and
the ease of experimenting continue to push trials, business cases and implementations forward.
Technology architecture for the IoT is evolving from one where the thing/asset contains most of the
computing resources and data storage to an architecture in which the thing/asset relies on the
cloud, smartphone or even the gateway for computing and connectivity capabilities. As the IoT
matures, we expect to see enterprises employ a variety of architectures to meet their needs.
User Advice: Enterprises should pursue these activities to increase their capabilities with the IoT:

CIOs and enterprise architects:

Work on aligning IT with OT resources, processes and people. Success in enterprise IoT is
founded in having these two areas work collaboratively.

Ensure that EA teams are ready to incorporate IoT opportunities and entities at all levels.

Look for standards in areas such as wireless protocols and data integration to make better
investments in hardware, software and middleware for the IoT.

Product managers:

Consider having your major products Internet-enabled. Experiment and work out the
benefits to you and customers in having your products connected.

Start talking with your partners and seek out new partners to help your enterprise pursue
IoT opportunities.

Strategic planners and innovation leads for enterprises with innovation programs:

Experiment and look to other industries as sources for innovative uses of the IoT.

Information management:

Increase your knowledge and capabilities with big data. The IoT will produce two challenges with information: volume and velocity. Knowing how to handle large volumes and/or real-time data cost-effectively is a requirement for the IoT.

Information security managers:

Assign one or more individuals on your security team to fully understand the magnitude of
how the IoT will need to be managed and controlled. Have them work with their OT
counterparts on security.

Business Impact: The IoT has very broad applications. However, most applications are rooted in
four usage scenarios. The IoT will improve enterprise processes, asset utilization, and products and
services in one of, or in a combination of, the following ways:

Manage: Connected things can be monitored and optimized. For example, sensors on an asset can be optimized for maximum performance or for increased yield and uptime.

Charge: Connected things can be monetized on a pay-per-use basis. For example, automobiles can be charged for insurance based on mileage.

Operate: Connected things can be remotely operated, avoiding the need to go on-site. For example, field assets such as valves and actuators can be controlled remotely.

Extend: Connected things can be extended with digital services such as content, upgrades and new functionality. For example, connected healthcare equipment can receive software upgrades that improve functionality.

These four usage models will provide benefits in the enterprise and consumer markets.
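
As an illustrative aside on the "manage" scenario, the short Python sketch below has a connected asset report telemetry to a gateway for remote monitoring; the endpoint, asset name and readings are all hypothetical, and production systems would typically use a purpose-built protocol such as MQTT rather than plain HTTP.

    # Minimal sketch of a connected asset publishing telemetry.
    import json, time, urllib.request

    GATEWAY_URL = "http://gateway.example.com/telemetry"  # hypothetical endpoint

    def publish(reading: dict) -> None:
        req = urllib.request.Request(
            GATEWAY_URL,
            data=json.dumps(reading).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)  # the gateway relays to the cloud backend

    for _ in range(3):  # stand-in for an always-on reporting loop
        publish({"asset_id": "pump-17", "temp_c": 71.4, "ts": time.time()})
        time.sleep(60)  # one reading per minute; monitoring happens server-side
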
Benefit Rating: Transformational
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: Atos; Axeda; Bosch; Cisco; Eurotech; GE; Honeywell; IBM; Kickstarter; LogMeIn;
Microsoft; QNX; Schneider Electric; Siemens
Recommended Reading: "Uncover Value From the Internet of Things With the Four Fundamental
Usage Scenarios"
"The Internet of Things Is Moving to the Mainstream"
"The Information of Things: Why Big Data Will Drive the Value in the Internet of Things"
"Agenda Overview for Operational Technology Alignment With IT, 2013"

Natural-Language Question Answering


Analysis By: Whit Andrews; Hanns Koehler-Kruener; Tuong Huy Nguyen
Definition: Natural language question answering (NLQA) technology is composed of applications
that provide users with a means of asking a question in plain language. A computer or service can
answer it meaningfully while maintaining the flow of interaction.

Position and Adoption Speed Justification: The challenges in effective interpretation of idiomatic
interrogative speech, matching it to knowledge bases of potentially infinite scope, and the selection
of a limited number of answers (even just one) remain profoundly difficult. Simple answers such as
the one answer available for a trivia question are far easier than the multivariate, nuanced answers
inherent in real human communication (for example, "Cold or flu? Why not cold AND flu!").
IBM captured the attention of the world in February 2011 when Watson (a Smart Advisor) won a
quiz show, and now the technology is maturing into a variety of sophisticated products. It joins a
long line of immediately fascinating and broadly constrained custom-made knowledge-calculation
devices. This has been followed by the mainstream introduction of simple conversational assistants
from Apple, Google and, most recently, Microsoft. Apple's Siri launched in 2011 as a new way for
users to interact with informational systems. It incorporates speech-to-text technology with natural-language processing query analysis to wow users (at least some of the time). In 2013, Google
featured such technologies in the keynote for its closely and avidly watched Google I/O. Such
attention defines a peak of hype. Facebook's Graph Search project also allows for semantically rich
queries, albeit with less ambiguity. In April 2014, Microsoft introduced Cortana: a virtual personal
assistant for the Windows Phone operating system. As precursor products, these conversational
assistants should evolve into virtual personal assistants between 2015 and 2017.
Solutions ultimately must discover means of communication with humans that are intuitive,
effective, swift and dialogic. They benefit significantly from context, either detected (as in
geographical location) or explicit (as with products that have a specific goal, such as health
diagnostics). The ability to conduct even a brief conversation, with context, antecedent
development and retention, and relevancy to individual users is in its infancy. However,
nonconversational, information-centered answers are indeed already possible with the right
combination of hardware and software. Also, as in all technology categories, the availability of such
resources can only become cheaper and easier. More than five years will pass before such
capabilities are commonplace in industry, government or any other organizational environment, but they will ultimately be available to leaders in such categories.
User Advice: The computing power required to accomplish a genuinely effective trivia competitor is
expensive, but will become less so with time. Any projects founded on such facility must be
experimental, but in the foreseeable future will include diagnostic applications of many kinds as well
as commercial advice and merchandising, and strategic or tactical decision support.
"Augmentation" of human activity and decision making is the key thought. No decision support
application comes, fully formed, from nothing; it will be expert humans who build it, design the
parameters and develop the interface. Humans will, similarly, evaluate its advice and decide how to
proceed. A good idea is to begin with experimental technologies, such as chatbots, and to work
toward more sophisticated technologies as they become commercially accessible.
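
To ground that starting point, the toy Python sketch below shows the chatbot end of the spectrum: a keyword-matched FAQ responder with invented content. Genuine NLQA systems replace this lookup with linguistic parsing, context tracking and ranked answers, but even a stub like this can expose workflow and content-curation questions early.

    # Toy FAQ chatbot: the experimental low end of NLQA.
    FAQ = {
        "hours": "Our support desk is open 9 a.m. to 5 p.m., Monday to Friday.",
        "password": "Use the self-service portal to reset your password.",
    }

    def answer(question: str) -> str:
        q = question.lower()
        for keyword, response in FAQ.items():
            if keyword in q:
                return response
        return "I don't know yet; routing you to a human agent."

    print(answer("How do I reset my password?"))
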
NLQA is positioned to be a strong enabler for other technologies such as virtual personal assistants,
cognitive computing and speech recognition. These technologies can serve as two-way
steppingstones toward building an effective NLQA system.
Business Impact: Ultimately, the ability for line workers or unschooled consumers to achieve
effective responses from machines without using expertise in framing queries will generate new
kinds of information exploitation by further diminishing information friction. Given a limited set of
answers and an effective means of capturing plain language requests, it is easy to see computers
more effectively providing guidance in various environments. Diagnostic support in healthcare (whether for expert or nonexpert users) and consumer services (such as those Siri provides) are example use cases.
Benefit Rating: High
Market Penetration: Less than 1% of target audience
Maturity: Emerging
Sample Vendors: Apple; Cognitive Code; EasyAsk; Expect Labs; HP; IBM Watson; Microsoft;
Nuance; Sherpa Software; Vlingo; Wolfram Alpha
Recommended Reading: "The Nexus of Forces Is Driving the Adoption of Semantic Technologies,
but What Does That Mean?"
"Siri and Watson Will Drive Desire for Deeper and Smarter Search"
"CIO Advisory: Why CIOs Should Be Concerned About Siri and Other Voice-Controlled Assistants"
"Sherpa: The End of Search as You Know It for CRM"

Wearable User Interfaces


Analysis By: Angela McIntyre
Definition: Wearable user interfaces describe the interaction between humans and computing
through electronics designed to be worn on the body. They may sense the human body or the
environment around the wearer and transmit the information to a smartphone or to the cloud.
Ideally, wearable user interfaces are unobtrusive, always on, wirelessly connected and provide
timely information in context. Examples of wearable electronics are smart watches, smart glasses,
smart clothing, fitness monitor wristbands, sensors on the skin and audio headsets.
Position and Adoption Speed Justification: This past year saw the hype on wearable user
interfaces reach the Peak of Inflated Expectations and become tempered with realism about the
value consumers perceive in new wearable devices. Devices are not viewed as stylish by
consumers; the data collected from wearable sensors yields insights that are marginally useful to
wearers. Apps and algorithms that can interpret noisy data from wearable sensors are needed to
increase the usefulness of recommendations. Use cases in which wearable user interfaces are more
convenient than smartphones are limited. Smartphones are gaining sensors and apps that give
them health and fitness tracking capabilities similar to wearables. Nonetheless, apps on
smartphones are enabling new capabilities and insights from wearables.
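
As a small, hedged illustration of the noise-handling algorithms called for above, the Python sketch below applies an exponential moving average to raw heart-rate samples before any recommendation logic runs; the values are invented, and shipping products use far more sophisticated signal processing.

    # Toy smoothing of noisy wearable sensor data.
    def smooth(samples, alpha=0.3):
        estimate = samples[0]
        smoothed = []
        for s in samples:
            estimate = alpha * s + (1 - alpha) * estimate  # damp spikes
            smoothed.append(round(estimate, 1))
        return smoothed

    raw = [72, 74, 120, 73, 75, 71]  # 120 is a motion artifact
    print(smooth(raw))
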
Yet interest in wearable interfaces remains high, and Google is fostering an ecosystem that is
expected to gain traction. Android Wear will enable a consistent user interface across different
types of wearables. For example, Android-based wearables will have a similar user experience in
the layout of glanceable displays, gestures for navigating among apps, and the use of voice commands for control and information access. Google and affiliated service providers have potential access
to personal information gathered by other wearable devices using services on an Android Wear
platform.
Data from wearable devices can be combined with data from other devices in the Internet of Things
and from other sources on the Internet, adding to big data. Apps, services and virtual personal
assistants (VPAs) will provide increasingly useful insights to wearers as part of cognizant computing
by using personal data collected from wearable electronics. The consumer trends for adoption are
being driven by quantified self, convenience and the desire for immediate alerts, especially
regarding social networks. Tracking data for medical reasons will be a longer-term driver for
wearables.
Over the next 10 years, wearable user interfaces will enable services to become more personalized
to the preferences and needs of the user through contextual information and bio-data gathered
through wearable electronics. Similarly, wearable devices will serve as controllers for other devices
in the Internet of Things. For example, consumers with Nest thermostats can control them remotely
through Google Glass. Likewise, the Pebble smartwatch can take a photo with a GoPro camera
and start a car engine remotely. These are early examples of how wearable user interfaces will
become increasingly integrated into daily life.
User Advice: Invest now in deployments or pilots for wearable user interfaces in the enterprise.
Start with wearables for mobile workers who cannot conveniently or safely put aside what they have
in their hands to use a phone or tablet, such as employees using tools or equipment, or who need
to keep their heads up or to hold on for safety.
Engage with software developers now on augmented reality use cases specific to your business
needs. Augmented reality solutions are in development for head-mounted displays (HMDs).
However, robust software solutions using augmented reality beyond checklists will take an
additional two to five years of development. The battery life of present HMDs lasts only a couple of
hours for uses such as streaming video. Until at least full-day battery life is available, workers will
find wearables inconvenient or impractical to use.
Where time-motion efficiency is essential to productivity, such as in call centers and logistics
organizations, employers are investigating wearables, such as gaze tracking through audio
headsets and location tracking through badges. Explore solutions that lead to recommendations to
increase worker productivity or to monitor employees in physically demanding work environments.
Encourage the workforce to be healthier by implementing wellness programs that include wearable
fitness trackers, and work with providers on advances in algorithms for fitness trackers. Fitness
trackers in wristbands or other forms are motivating to people who want to be less sedentary. The
general health of the consumer or employee can be measured with wearables, including body
temperature, heart rate, heart rate variability (stress) and potentially blood glucose.
Explore longer-range opportunities for always-on information access through smart glasses or
smart watches through voice input and video, but evaluate risks before heavily investing. Create
policies around personal privacy and the restrictions around taking pictures in the workplace. Data
security risk will also increase with the rise in content sharing among devices that are interacting
across personal networks.
Business Impact: Early industries to adopt wearable electronics are aerospace and police,
followed by sports, field service, manufacturing, logistics, transportation, oil and gas, retail, and
healthcare. The healthcare market stands to benefit from wearable user interfaces that enable
mobile health monitoring, especially for heart conditions. Wearable cameras are ready for
deployment now for use cases such as police/security and inspections. Field service and
manufacturing are using streaming video to an expert who sees what the wearer sees, which is
useful for training or expert assistance. Sports teams are using wearables on players for an "in the game" perspective and for tracking the performance of athletes. Augmented reality solutions on HMDs have the
promise to increase productivity by providing part labels, checklists, maps and instructions
superimposed on real-world views.
Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: Aliph; Eleksen; Epson; Eurotech; Fitbit; FitLinxx; General Dynamics Itronix;
Google; Kopin; LXE; Motorola; Oculus VR; Pebble; Plantronics; Recon Instruments; Samsung; Sony;
Vuzix
Recommended Reading: "Cool Vendors in Wearable Electronics, 2014"
"Innovation Insight: Smartglasses Bring Innovation to Workplace Efficiency"
"Market Trends: Enter the Wearable Electronics Market With Products for the Quantified Self"
"Fellows Interview: Thad Starner, Wearable Computing Pioneer"

Consumer 3D Printing
Analysis By: Pete Basiliere; Nick Jones
Definition: Consumer 3D printing is the market for devices that are typically used in the home by
family members as well as prosumers. The 3D printers are generally based on low-cost material
extrusion technology and sit in the $500-to-$1,000 price band, though prices can range up to $2,500.
Position and Adoption Speed Justification: 3D printing is a 30-year-old technology, but only
within the last five years has the technology advanced to printing on devices priced less than
$1,000.
3D printing by consumers is an emerging market. A sign of the market's growth is the consolidation
of its technology providers. For example, Stratasys completed its acquisition of MakerBot in August
2013. Interestingly, the major 2D printer manufacturers continue on the sidelines, mainly conducting
research or providing OEM capabilities to third parties. Indeed, Xerox has been an original
equipment supplier for 17 years but still does not put its name on a 3D printer.
Gartner has predicted that by 2015, seven of the 50 largest multinational retailers will sell 3D
printers through their physical and online stores. Most of these devices will be in the "consumer"
market although many will no doubt be purchased by enterprises.
For many consumers, however, do-it-yourself kits to build a 3D printer costing a few hundred
dollars are too much trouble, while assembled 3D printers costing up to $2,500 are too expensive,
although that may not be the case for many "makers," a term popularized by Wired magazine's
former editor-in-chief, Chris Anderson. More than hobbyists or early adopters, makers are
enthusiasts and entrepreneurs who create and collaborate on screen, hone their ideas online and in
real-world communities, and employ cutting-edge technologies to produce unexpected results.
What sets makers apart from other consumers is their inquisitive, collaborative approach to problem
solving, coupled with access to hardware and software tools unavailable to earlier generations of
tinkerers and inventors.
However, for the general consumer population, which inevitably compares the cost of a 3D printer
to the $100 to $250 they may spend on a 2D printer, the price is too rich. A wide range of
rudimentary 3D printers that extrude plastic material is on the market, some with price points in the
$500 range. Even these may be too expensive for general consumer use, especially when the cost
to license or purchase 3D creation software tools is factored into the purchase. As a result, the
dominant near-term consumer use of 3D printing will be the purchase of items, whether made by an artist, sold by a consumer goods company or available through an online service bureau.
User Advice: While continuing to keep abreast of 3D print technology developments, physical and
online retailers must explore the use of this technology by experimenting with low-volume
manufacturing of high-margin, custom-designed pieces (for example, fashion jewelry and eyeglass frames) sold through in-store kiosks and Web portals. Market studies must determine
the materials that consumers prefer and the price points they are willing to meet. Retailers also
must test selling 3D printers in their physical and online stores. Successful consumer use of 3D
printers requires an ecosystem (software, materials and printer) that is more complex than that
associated with 2D printing on paper. Retailers need to think carefully about how they will support
this technology with the customer service basics: essential things that a customer expects when
shopping with the retailer, such as stock availability, warranties and postsale service and support.
Business Impact: Consumer 3D printing is a classic example of how the use of an established
technology, in this case, additive manufacturing, transitions over time from one that is prohibitively
expensive for all but manufacturing organizations, to one that has pricing within the grasp of
consumers. The hype in the general press has heightened consumer awareness of the technology.
The news about manufacturing guns and other weapons with 3D printers complicates the
opportunity. And, yes, 3D printer costs continue to come down as the printers become more readily
available, enabling consumers to manufacture their own custom-designed items, ranging from
jewelry to weaponry. Retailers selling 3D printers or items produced with 3D printers must
investigate the legal implications of customers using devices sold by them to manufacture
potentially lethal weapons, just as they must take steps to ensure that 3D-printed items made per
their customers' orders comply with local copyright and related laws.
Benefit Rating: Transformational
Market Penetration: Less than 1% of target audience
Maturity: Emerging
Sample Vendors: 3D Systems; Afinia; Beijing TierTime; MagicFirm; Stratasys; XYZprinting
Recommended Reading: "Cool Vendors in 3D Printing, 2013"
"How 3D Printing Disrupts Business and Creates New Opportunities"
"Use the Gartner Business Model Framework to Determine the Impact of 3D Printing"
"3D Photo Booth Will Help Drive Awareness and Momentum for 3D Printing"

Cryptocurrencies
Analysis By: David Furlonger
Definition: Cryptocurrencies are virtual money, created by private entities without the backing of
governments, transacted using digital mediums. Unlike virtual money in the form of coupons or
tokens, they are generated via computer-originated cryptographic mechanisms, often in the form of
puzzle solving, that securely remit digital information through the use of private and public keys.
Mathematical regimes usually limit and control currency production or issuance.
Position and Adoption Speed Justification: Cryptocurrencies have risen to notoriety through the
publicity surrounding Bitcoin. Bitcoins are created via a process of mining: the solving of a computational puzzle. Coins are awarded for each puzzle solution, and prior Bitcoin transactions are simultaneously verified. The somewhat opaque nature of Bitcoin's original founding and organization, together with huge volatility in its value, has prompted significant research and writing about its use as
an alternative currency. However, it is not the only such currency to be created. Others include
Litecoin, Peercoin (also referred to as PPCoin), Freicoin, Terracoin, Devcoin and Namecoin,
although none have as much circulation. (A recent list can be found at https://en.bitcoin.it/wiki/
List_of_alternative_cryptocurrencies.)
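
The puzzle-solving mechanism itself can be illustrated with a toy proof-of-work sketch in Python: search for a nonce whose hash carries a required number of leading zeros. This is only the shape of the idea; real Bitcoin mining hashes block headers against a network-set difficulty target at vastly larger scale.

    # Toy proof-of-work: the "mining" puzzle in miniature.
    import hashlib

    def mine(block_data: str, difficulty: int = 4) -> int:
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if digest.startswith("0" * difficulty):
                return nonce  # hard to find, but trivially verified by anyone
            nonce += 1

    print("Winning nonce:", mine("prior-transactions-hash"))

The asymmetry in the sketch, costly to solve but cheap to verify, is what lets a decentralized network agree on transaction history without a central authority.
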
Although cryptocurrencies are potentially very secure (transactions can be readily verified but very
difficult to defraud), they also carry significant issues that undermine the principles of money as a
store of value, unit of account and medium of exchange:

Questionable governance and transparency surrounding the original author: There is a public log of transactions, but the computer addresses don't identify individuals, only the possessor of the key that unlocks the computer wallet.

Lack of regulation: See "Legality of Bitcoins by Country," "Japan's Authorities Decline to Step
In on Bitcoin," "Report: Spain's Tax Authority Weighs In on Bitcoin," "Bitcoin Recognized by
Germany as 'Private Money'" and "IRS Tax Ruling Implications for Bitcoin."

Complicated generation/issuance.

Limited ubiquity.

Limited usability: See https://en.bitcoin.it/wiki/Trade. Now, customers using a Gyft mobile gift card can use Bitcoins to buy a virtual gift card that can be used at more than 200 retailer point-of-sale terminals.

Limited exchangeability: However, some exchanges now exist (for example, Preev.com and
Coinmill.com, as well as payment mechanisms such as BitPay) to facilitate currency conversion
into mainstream currencies and goods and services.

Mixed speed of exchange acceptability and authorization: For example, Litecoin transactions
are confirmed every 2.5 minutes.

With Bitcoins, puzzle complexity increases with every solution, raising the requirements for
increased computational power.

For PPCoin, a lottery system allocates new coins based on a formula that determines how many
PPCoins someone already holds.

Concerns about their use in potentially illegal activities: Although cryptocurrencies are
allegedly very secure, regulators have been following their development and announced in April
that they would be supervised under anti-money-laundering rules.

Data storage requirements.

Computer network stability.

Market stability: For example, the demise of Mt. Gox and other Bitcoin exchanges. Since 2009, Bitcoin prices have been seven times more volatile than the price of gold and eight times more volatile than the S&P 500 stock index (see "Are Crypto-Currencies the Future of
Money?").

Lack of cryptocurrency operating standards (In 2012, the Bitcoin Foundation was formed to try
to standardize, protect and promote the use of the currency.) And, according to FT Alphaville,
some recent additions to the cryptocurrency field are now also replicating traditional systems,
such as BlackCoin.

User Advice:

Monitor the development of cryptocurrencies to assess the likelihood of customer adoption and
usage; separate the payment aspects from the medium-of-exchange capabilities.

Discuss with customers the perceived value proposition for their use of these currencies to
better inform the development of products, loyalty schemes and partnerships, as well as to
understand the value to be gleaned from information capture from transactions.

Discuss with regulators their supervision and monitoring of cryptocurrencies as part of the
global financial marketplace and the impact on business operations.

Assess employee use of these currencies to protect against operational risk in the event of
unintended compliance problems.

Plan (technology and business road maps) for the potential integration of cryptocurrencies with
mainstream mediums of exchange; for example, review changes to data systems, execution
and risk systems (see "Bitcoin Now on Bloomberg").

Business Impact: It is easy to miss the most important point with all the "noise" surrounding
individual currencies. The most critical issue is that these mediums of exchange hold the potential
to enable a pure person-to-person or entity-to-entity medium to transfer any kind of digital value,
less expensively and faster than traditional mechanisms. Moreover, these currencies are not subject
to the control of a single country or jurisdiction. And they afford users a degree of anonymity and
security, including the lack of transaction repudiation.
Therefore, the currency itself is not as relevant as the mechanisms on which these currencies are based, and that has fundamental ramifications for governments, financial institutions and the
world as a whole.
Benefit Rating: Transformational
Market Penetration: 1% to 5% of target audience
Maturity: Adolescent
Recommended Reading: "Future of Money: Financial Inclusion Requires More Than Traditional
Money"
"Future of Money: Using Cloud Capacity as a Currency"
"The Nexus of Forces Is Reshaping the Future of Money"
"Future of Money: Virtual Money Drives Rapid Growth in Online Gaming Industry"
"Future of Money: Virtual Currency and Gamification Drive Financial Literacy and Revenue
Opportunities"

Complex-Event Processing
Analysis By: W. Roy Schulte; Nick Heudecker; Zarko Sumic
Definition: Complex-event processing (CEP), sometimes called event stream processing, is a
computing technique in which incoming data about what is happening (event data) is processed as
it arrives to generate higher-level, more-useful, summary information (complex events). Complex
events represent patterns in the data, and may signify threats or opportunities that require a
response from the business. One complex event may be the result of calculations performed on a
few or on millions of base events (input) from one or more event sources.

Position and Adoption Speed Justification: CEP is transformational because it is the only way to
get information from event streams in real time. It will inevitably be adopted in multiple places within
virtually every company. However, companies were initially slow to adopt CEP because it is so
different from conventional architecture, and many developers are still unfamiliar with it. CEP has
moved slightly further past the Peak of Inflated Expectations, but it may take up to 10 more years
for it to reach its potential on the Plateau of Productivity.
CEP has already transformed financial markets: the majority of equity trades are now conducted
algorithmically in milliseconds, offloading work from traders, improving market performance, and
changing the costs and benefits of alternative trading strategies. It is also essential to earthquake
detection, radiation hazard screening, smart electrical grids and real-time location-based marketing.
Fraud detection in banking and credit card processing depends on correlating events across
channels and accounts, and this must be carried out in real time to prevent losses before they
occur. CEP is also essential to future Internet of Things applications where streams of sensor data
must be processed in real time.
Conventional architectures are not fast or efficient enough for some applications because they use
a "save-and-process" paradigm in which incoming data is stored in databases in memory or on
disk, and then queries are applied. When fast responses are critical, or the volume of incoming
information is very high, application architects instead use a "process-first" CEP paradigm, in which
logic is applied continuously and immediately to the "data in motion" as it arrives. CEP is more
efficient because it computes incrementally, in contrast to conventional architectures that reprocess
large datasets, often repeating the same retrievals and calculations as each new query is submitted.
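As a minimal sketch of the "process-first" paradigm (illustrative Python, not any vendor's API), the engine below maintains running state and emits an updated result per incoming event, rather than re-querying a stored dataset:

    # Illustrative sketch: incremental ("process-first") computation on a stream.
    from collections import deque

    class SlidingAverage:
        # Maintains a windowed average incrementally, one event at a time.
        def __init__(self, window):
            self.events = deque()
            self.window = window
            self.total = 0.0

        def on_event(self, value):
            # O(1) state update instead of rescanning all stored events.
            self.events.append(value)
            self.total += value
            if len(self.events) > self.window:
                self.total -= self.events.popleft()
            return self.total / len(self.events)

    avg = SlidingAverage(window=3)
    for price in [10.0, 11.0, 13.0, 9.0]:
        print(avg.on_event(price))  # an updated summary emitted per input event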
Two forms of stream processing software have emerged in the past 15 years. The first were CEP
platforms that have built-in analytic functions such as filtering, storing windows of event data,
computing aggregates and detecting patterns. Modern commercial CEP platform products include
adapters to integrate with event sources, development and testing tools, dashboard and alerting
tools, and administration tools. More recently, the second form, distributed stream computing platforms (DSCPs) such as Amazon Web Services Kinesis and open-source offerings including Apache Samza, Spark and Storm, was developed. DSCPs are general-purpose platforms without
full native CEP analytic functions and associated accessories, but they are highly scalable and
extensible so developers can add the logic to address many kinds of stream processing
applications, including some CEP solutions.
User Advice:

Companies should use CEP to enhance their situation awareness and to build "sense-and-respond" behavior into their systems. Situation awareness means understanding what is going on so that you can decide what to do.

CEP should be used in operational activities that run continuously and need ongoing monitoring. This can apply to fraud detection, real-time precision marketing (cross-sell and upsell), factory floor systems, website monitoring, customer contact center management, trading systems for capital markets, transportation operation management (for airlines, trains, shipping and trucking) and other applications. In a utility context, CEP can be used to process a combination of supervisory control and data acquisition (SCADA) events and "last gasp" notifications from smart meters to determine the location and severity of a network fault, and then to trigger appropriate remedial actions (a minimal correlation sketch follows this list).

Companies should acquire CEP functionality by using an off-the-shelf application or SaaS offering that has embedded CEP under the covers, if a product that addresses their particular business requirements is available.

When an appropriate off-the-shelf application or SaaS offering is not available, companies should consider building their own CEP-enabled application on an iBPMS, ESB suite or operational intelligence platform that has embedded CEP capabilities.

For demanding, high-throughput, low-latency applications, or where the event processing logic is primary to the business problem, companies should build their own CEP-enabled applications on commercial or open-source CEP platforms (see examples of vendors below) or DSCPs.

In rare cases, when none of the other tactics are practical, developers should write custom CEP
logic into their applications using a standard programming language without the use of a
commercial or open-source CEP or DSCP product.
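The utility use case above can be made concrete with an illustrative sketch (hypothetical event shapes, not a product API) that correlates smart-meter "last gasp" events by feeder to localize a probable fault:

    # Illustrative only: correlate "last gasp" meter events to localize a fault.
    from collections import Counter

    # Hypothetical event stream: each meter reports its feeder as power drops.
    last_gasps = [
        {"meter": "m1", "feeder": "F7"},
        {"meter": "m2", "feeder": "F7"},
        {"meter": "m3", "feeder": "F2"},
        {"meter": "m4", "feeder": "F7"},
    ]

    counts = Counter(event["feeder"] for event in last_gasps)
    feeder, hits = counts.most_common(1)[0]
    if hits >= 3:  # enough co-located outages implies a network fault
        print(f"Probable fault on feeder {feeder}; trigger remedial action")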

Business Impact: CEP:

Improves the quality of decision making by presenting information that would otherwise be
overlooked.

Enables faster response to threats and opportunities.

Helps shield business people from data overload by eliminating irrelevant information and
presenting only alerts and distilled versions of the most important information.

CEP also adds real-time intelligence to operational technology (OT) and business IT applications.
OT is hardware and software that detects or causes a change through the direct monitoring and/or
control of physical devices, processes and events in the enterprise. For example, utility companies
use CEP as a part of their smart grid initiatives, to analyze electricity consumption and to monitor
the health of equipment and networks.
CEP is one of the key enablers of context-aware computing and intelligent business operations.
Much of the growth in CEP usage during the next 10 years will come from the Internet of Things,
digital business and customer experience management applications.
Benefit Rating: Transformational
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: Amazon; Apache; EsperTech; Feedzai; IBM; Informatica; LG CNS; Microsoft;
OneMarketData; Oracle; Red Hat; SAP; SAS (DataFlux); ScaleOut Software; Software AG;
SQLstream; Tibco Software; Vitria; WSO2

Recommended Reading: "Use Complex-Event Processing to Keep Up With Real-time Big Data"
"Best Practices for Designing Event Models for Operational Intelligence"

Sliding Into the Trough


Big Data
Analysis By: Mark A. Beyer
Definition: Big data is high-volume, high-velocity and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making.
Position and Adoption Speed Justification: Big data has crossed the Peak of Inflated
Expectations. There is considerable debate about this, but when the available choices for a
technology or practice start to be refined, and when winners and losers start to be picked, the worst
of the hype is over.
It is likely that big data management and analysis approaches will be incorporated into a variety of
existing solutions, while simultaneously replacing some of the functionality in existing market
solutions (see "Big Data Drives Rapid Changes in Infrastructure and $232 Billion in IT Spending
Through 2016"). The market is settling into a more reasonable approach in which new technologies
and practices are additive to existing solutions and creating hybrid approaches when combined
with traditional solutions.
Big data's passage through the Trough of Disillusionment will be fast and brutal:

Tools and techniques are being adopted before expertise is available, and before they are
mature and optimized, which is creating confusion. This will result in the demise of some
solutions and complete revisions of some implementations over the next three years. This is the
very definition of the Trough of Disillusionment.

New entrants into this practice area will create new, short-lived surges in hype.

A series of standard use cases will continue to emerge. When expectations are set properly, it becomes easier both to measure the success of any practice and to identify failure.

Some big data technologies represent a great leap forward in processing management. This is
especially relevant to datasets that are narrow but contain many records, such as those associated
with operational technologies, sensors, medical devices and mobile devices. Big data approaches
to analyzing data from these technologies have the potential to enable big data solutions to
overtake existing technology solutions when the demand emerges to access, read, present or
analyze any data. However, inadequate attempts to address other big data assets, such as images,
video, sound and even three-dimensional object models, persist.
The larger context of big data is framed by the wide variety, and extreme size and number, of data
creation venues in the 21st century. Gartner clients have made it clear that big data technologies
must be able to process large volumes of data in streams, as well as in batches, and that they need
an extensible service framework to deploy data processes (or bring data to those processes) that
encompasses more than one variety of asset (for example, not just tabular, streamed or textual
data).
It is important to recognize that different aspects and varieties of big data have been around for more than a decade; it is only recent market hype about legitimate new techniques and solutions that has created this heightened demand.
Big data technologies can serve as unstructured data parsing tools that prepare data for data
integration efforts that combine big data assets with traditional assets (effectively the first-stage
transformation of unstructured data).
User Advice:

Focus on creating a collective skill base. Specifically, skills in business process modeling,
information architecture, statistical theory, data governance and semantic expression are
required to obtain full value from big data solutions. These skills can be assembled in a data
science lab or delivered via a highly qualified individual trained in most or all of these areas.

Begin using Hadoop connectors in traditional technology and experiment with combining
traditional and big data assets in analytics and business intelligence. Focus on this type of
infrastructure solution, rather than building separate environments that are joined at the level of
analyst user tools.

Review existing information assets that were previously beyond analytic or processing
capabilities ("dark data"), and determine if they have untapped value to the business. If they
have, make them the first, or an early, target of a pilot project as part of your big data strategy.

Plan on using scalable information management resources, whether public cloud, private cloud
or resource allocation (commissioning and decommissioning of infrastructure), or some other
strategy. Don't forget that this is not just a storage and access issue. Complex, multilevel,
highly correlated information processing will demand elasticity in compute resources, similar to
the elasticity required for storage/persistence.

Small and midsize businesses should address variety issues ahead of volume issues when
approaching big data, as variety issues demand more specialized skills and tools.

Business Impact: Use cases have begun to bring focus to big data technology and deployment
practices. Big data technology creates a new cost model that has challenged that of the data
warehouse appliance. It demands a multitiered approach to both analytic processing (many
context-related schemas-on-read, depending on the use case) and storage (the movement of "cold"
data out of the warehouse). This resulted in a slowdown in the data warehouse appliance market
while organizations adjusted to the use of newly recovered capacity (suspending further costs on
the warehouse platform) and moved appropriate processing from a schema-on-write approach to a schema-on-read approach.
In essence, the technical term "schema on read" means that if business users disagree about how
an information source should be used, they can have multiple transformations appear right next to
each other. This means that implementers can do "late binding," which in turn means that users can
see the data in raw form, determine multiple candidates for reading that data, determine the top
contenders, and then decide when it is appropriate to compromise on the most common use of
data and to load it into the warehouse after the contenders "fight it out." This approach also
provides the opportunity to have a compromise representation of data stored in a repository, while
alternative representations of data can rise and fall in use based on relevance and variance in the
analytic models.
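A minimal illustration of this late binding (generic Python with hypothetical field names): the raw record is stored as-is, and competing read-time transformations coexist until one is promoted into the warehouse:

    # Illustrative sketch: schema-on-read ("late binding") over the same raw data.
    import json

    raw = '{"cust": "C-17", "amt": "42.50", "ts": "2014-07-28T10:00:00"}'

    def read_as_revenue(record):
        # One candidate interpretation: the amount as revenue in dollars.
        return {"customer": record["cust"], "revenue_usd": float(record["amt"])}

    def read_as_activity(record):
        # A competing interpretation: the record as an activity event.
        return {"customer": record["cust"], "active_on": record["ts"][:10]}

    record = json.loads(raw)
    # No schema was imposed at write time; each reader binds one at read time.
    print(read_as_revenue(record))
    print(read_as_activity(record))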
Benefit Rating: Transformational
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: Cloudera; EMC; Hortonworks; IBM; MapR; Teradata
Recommended Reading: "Big Data Drives Rapid Changes in Infrastructure and $232 Billion in IT
Spending Through 2016"
"Big Data' Is Only the Beginning of Extreme Information Management"
"How to Choose the Right Apache Hadoop Distribution"
"The Importance of 'Big Data': A Definition"

In-Memory Database Management Systems


Analysis By: Roxane Edjlali
Definition: An in-memory DBMS (IMDBMS) is a DBMS that stores the entire database structure in
memory and accesses all the data directly, without the use of input/output instructions to store and
retrieve data from disks, allowing applications to run completely in-memory. This should not be
confused with a caching mechanism, which stores and manages disk blocks in a memory cache for
speed. IMDBMSs are available in both row-store and column-store models, or a combination of
both.
Position and Adoption Speed Justification: IMDBMS technology has been around for many years
(for example, IBM solidDB, McObject's eXtremeDB and Oracle TimesTen). However, we have seen
many new vendors emerging during the past three years. SAP has been leading with SAP Hana,
which now supports hybrid transactional/analytical processing (HTAP). Other major vendors
(Teradata, IBM and Microsoft), except Oracle, have added in-memory analytic capabilities as part of
their DBMSs. Oracle is due to deliver it in 2014, and Microsoft SQL Server 2014 has also added in-memory transactional capabilities. Small, innovative vendors also continue to emerge, both in the relational area (MemSQL, for example) and in the NoSQL area (Aerospike, for example).
The adoption by all major vendors demonstrates the growing maturity of the technology and the
demand from customers looking at leveraging IMDBMS capabilities as part of their information
infrastructure. While SAP Hana is leading the charge, with 3,000 customers and hundreds in
production, the addition of in-memory capabilities by all major players should further accelerate
adoption of IMDBMS technology during the next two years.
Many use cases are supported by IMDBMS. For example, solidDB and TimesTen were originally
developed for high-speed processing of streaming data for applications such as fraud detection,
with the data then written to a standard DBMS for further processing. Others, such as Altibase,
Aerospike and VoltDB, focus on high-intensity transactional processing. Some IMDBMSs, such as Exasol, ParStream and Kognitio, are dedicated to in-memory analytical use cases. Finally, the ability to support both analytical and transactional (aka HTAP) use cases on a single copy of the data is gaining traction in the market, led by SAP and now Microsoft, along with smaller emerging players such as Aerospike and MemSQL.
The promise of the IMDBMS is to combine, in a single database, both the transactional and
analytical use cases without having to move the data from one to the other. It enables new business
opportunities that would not have been possible previously, by allowing real-time analysis of
transactional data. One example is in logistics, where business analysts can offer customers
rerouting options for potentially delayed shipments proactively, rather than after the fact, hence creating a unique customer experience. Another example comes from online gambling, whereby
computing of the handicap could occur as a match is ongoing. To support such use cases, both the
transactional data and the analytics need to be available in real time. While analytical use cases
have seen strong adoption, for most organizations IMDBMS for HTAP technology remains three
years away.
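As a loose illustration of the single-copy HTAP idea (using SQLite's in-memory mode as a stand-in; commercial IMDBMS products differ substantially in architecture and scale), a transactional write is analyzed immediately, with no data movement to a warehouse:

    # Illustrative only: SQLite in-memory mode as a stand-in for an IMDBMS.
    import sqlite3

    con = sqlite3.connect(":memory:")  # the entire database lives in memory
    con.execute("CREATE TABLE shipments (id INTEGER, route TEXT, delay_min INTEGER)")

    # Transactional write: a shipment update arrives.
    con.execute("INSERT INTO shipments VALUES (1, 'HAM-ROT', 95)")
    con.commit()

    # Analytical read on the same copy, with no ETL to a separate warehouse.
    route, avg_delay = con.execute(
        "SELECT route, AVG(delay_min) FROM shipments GROUP BY route").fetchone()
    if avg_delay > 60:
        print(f"Offer proactive rerouting options on {route}")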
User Advice:

Continue to use IMDBMS as a DBMS for temporary storage of streaming data where real-time
analysis is necessary, followed by persistence in a disk-based DBMS.

IMDBMS for analytic acceleration is an effective way of achieving increased performance.

The single most important advancement is HTAP as a basis for new, previously unavailable
applications taking advantage of real-time data availability, with IMDBMS for increased
performance and reduced maintenance. Organizations should monitor technology maturity and
identify potential business use cases to decide when to leverage this opportunity.

Vendor offerings are evolving fast and have various levels of maturity. Compare vendors from
both the technology and pricing perspectives.

Business Impact: These IMDBMSs are rapidly evolving and becoming mature and proven, especially for reliability and fault tolerance. As the price of memory continues to decrease, the potential for the business is transformational:

The speed of the IMDBMS for analytics has the potential to simplify the data warehouse model
by removing development, maintenance and testing of indexes, aggregates, summaries and
cubes. This will lead to savings in terms of administration, improved update performance, and
increased flexibility for meeting diverse workloads.

The high performance implies that smaller systems will do the same work as much larger
servers, which will lead to savings in floor space and power. While the cost of acquisition of an
IMDBMS is higher than a disk-based system, the total cost of ownership of an IMDBMS should
be less over a three- to five-year period because of cost savings related to personnel, floor
space, power and cooling.

HTAP DBMSs will enable an entire set of new applications. These applications were not
possible before, because of the latency of data moving from online transaction processing
systems to the data warehouse. However, this use case is still in its infancy.

Benefit Rating: Transformational


Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: Aerospike; Exasol; IBM; Kognitio; McObject; MemSQL; Microsoft; Oracle;
ParStream; Quartet FS; SAP; Teradata; VoltDB
Recommended Reading: "Who's Who in In-Memory DBMSs"
"Cool Vendors in In-Memory Computing, 2013"
"Taxonomy, Definitions and Vendor Landscape for In-Memory Computing Technologies"
"SAP's Business Suite on Hana Will Significantly Impact SAP Users"

Content Analytics
Analysis By: Carol Rozwell; Rita L. Sallam
Definition: Content analytics is a family of technologies that process content and the behavior of
users in consuming content to derive answers to specific questions and find patterns that drive
action. Content types include text of all kinds, such as documents, blogs, news sites, customer
conversations (both audio and text), video, and interactions occurring on the social Web. Analytic
approaches include text analytics, graph analytics, rich media and speech analytics, video analytics,
as well as sentiment, emotional intent and behavioral analytics.
Position and Adoption Speed Justification: The multiplicity of applications and the diverse range
of analytical techniques and vendors indicate that content analytics is still emerging. Some
techniques such as text analytics are relatively mature, while there is a great deal of hype
surrounding some deployments of content analytics, such as sentiment analysis, and the use of
other techniques such as emotional analysis and video analytics is still very nascent.
Use of both general- and special-purpose content analytics applications continues to grow, whether
they are procured as stand-alone applications or added as extensions to search and content
management applications. The greatest growth comes from generally available content resources,
such as social data, public news feeds and documents, open data, contact center records and
post-sale service accounts. This leads to heavy uptake in CRM. Additionally, open-source
intelligence seeks to use content analytics for better understanding of public and semipublic
sentiment. Other areas, such as HR, are also leveraging content to optimize organizational
efficiency and hiring. Specific industries such as healthcare, life sciences, utilities and transportation
are leveraging insights across content and structured data to optimize processes and business
models. Software as a service (SaaS) vendors are emerging, offering APIs to let snippets of content
be programmatically sent to and analyzed in the cloud. This is an important development and will
help speed up adoption.
Another factor driving the interest in content analytics is the huge volume of information available to
be analyzed and the speed with which it changes.
User Advice: Enterprises should employ content analytics to automate the data preparation
process. It can replace time-consuming, manual and complex human analyses, such as reading,
summarizing and suggesting actionable insight in service records or postings resident in social
media. Look for opportunities to combine these new insights with analysis from traditional,
enterprise and other structured data sources to enhance existing analytics processes or to create
new ones. Firms should identify the analytics functions that are most able to simplify and drive new
intelligence into complex business and analytic processes. Users should identify vendors with
specific products that meet their requirements, and should review customer case studies both
within and outside their industries to understand how others have exploited these technologies. An
oversight group can support application sharing, monitor requirements, and understand new
content analytics to identify where they can improve key performance indicators (KPIs) and use the
content analysis result as input to the predictive analytic model. Appropriate groups for such roles
may already exist. They might already be devoted to associated technologies or goals, such as
content management, advanced analytics, social software, people-centered computing, or specific
business application categories such as marketing, CRM, security or worker productivity. Social
networking applications can be used to deliver information, gain access to customers and
understand public opinion that may be relevant. New skills and tools (such as in linguistics, natural-language processing, image processing and machine learning) beyond those needed for traditional
business intelligence will be required.
It is important to note that there are risks in assuming that content analytics can effectively substitute for human analysis. In some cases, false signals may end up requiring more human effort to
sort out than more rudimentary monitoring workflows. The best practice is to optimize the balance
between automation and oversight. Until the tools mature, experts in the field of what the tool is
analyzing will be required to provide advice in context.
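As a deliberately trivial sketch of the automation-with-oversight balance (a fixed word list; real content analytics products use statistical natural-language processing and machine learning), lexicon-based sentiment scoring surfaces candidate alerts for human review:

    # Illustrative only: naive lexicon-based sentiment; real tools use NLP/ML.
    import re

    POSITIVE = {"great", "love", "fast", "helpful"}
    NEGATIVE = {"broken", "slow", "refund", "angry"}

    def score(text):
        words = set(re.findall(r"[a-z]+", text.lower()))
        return len(words & POSITIVE) - len(words & NEGATIVE)

    posts = ["Love the new app, support was helpful",
             "Still broken and slow, I want a refund"]
    for post in posts:
        s = score(post)
        label = "POSITIVE" if s > 0 else "NEGATIVE" if s < 0 else "NEUTRAL"
        print(label, "-", post)  # a human reviews only what automation surfaces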
Business Impact: Content analytics is used to support and enhance a broad range of analytic
functions. It can:

Provide new insights into analytic processes to identify high-priority clients, the next best
action, product problems, and customer sentiment and service problems

Analyze competitors' activities and consumers' responses to a new product

Support security and law enforcement operations by analyzing photographs

Relate effective treatments to outcomes in healthcare

Detect fraud by analyzing complex behavioral patterns

Optimize asset management through preventative and predictive maintenance

Complex results can be represented as visualizations and embedded in analytic applications, making them easier for people to understand and take action.
Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: Attensity; Basis Technology; CallMiner; Clarabridge; Connotate; HP; IBM;
Nexidia; Nice Systems; Raytheon BBN Technologies; SAS; Temis; Thomson Reuters (ClearForest);
Trampoline Systems; Transparensee; uReveal; Utopy
Recommended Reading: "Use Search and Content Analytics to Increase Sales"
"How to Expand Your Information Infrastructure for Analytics With Content"
"Cool Vendors in Content and Social Analytics, 2014"
"How Crowdsourcing Can Reduce the Reliability of Social Media Analytics"
"Three Ways to Improve Your Content and Social Analytics"

Hybrid Cloud Computing


Analysis By: David W. Cearley; Donna Scott
Definition: Gartner defines hybrid cloud computing as the coordinated use of cloud services across
isolation and provider boundaries among public, private and community service providers, or
between internal and external cloud services. Like a cloud computing service, a hybrid cloud computing service is scalable, has elastic IT-enabled capabilities and self-service interfaces, and is delivered as a shared service to customers using Internet technologies. However, a hybrid cloud service crosses isolation and provider boundaries.
Position and Adoption Speed Justification: Hybrid cloud computing is the coordinated use of
cloud services across isolation and provider boundaries among public, private and community
service providers, or between internal and external cloud services. Hybrid cloud computing does
not refer to using internal systems and external cloud-based services in a disconnected or loosely
connected fashion. Rather, it implies significant integration or coordination between the internal and
external environments at the data, process, management or security layers.
Virtually all enterprises have a desire to augment internal IT systems with those of cloud services for
various reasons, including for capacity, financial optimization and improved service quality. Hybrid cloud computing can take a number of forms. The following approaches can be used individually or in combination to support a hybrid cloud computing approach within and across the various layers, for example, infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS):

Joint security and management: Security and/or management processes and tools are applied to the creation and operation of internal systems and external cloud services.

Workload/service placement and runtime optimization: Using data center policies to drive placement decisions to resources located internally or externally, as well as balancing resources to meet SLAs, such as for availability and response time (a minimal placement sketch follows this list).

Cloudbursting: Dynamically scaling out an application from an internal, private cloud platform to an external public cloud service based on the need for additional resources.

Development/test/release: Coordinating and automating development, testing and release to production across private, public and community clouds.

Availability/disaster recovery (DR)/recovery: Coordinating and automating synchronization, failover and recovery between IT services running across private, public and/or community clouds.

Cloud service composition: Creating a solution with a portion running on internal systems and another delivered from the external cloud environment, in which there are ongoing data exchanges and process coordination between the internal and external environments.

Dynamic cloud execution: The most ambitious form of hybrid cloud computing combines joint security and management, cloudbursting and cloud service compositions. In this model, a solution is defined as a series of services that can run in whole or in part on an internal private cloud platform or on a number of external cloud platforms, in which the software execution (internal and external) is dynamically determined based on changing technical (for example, performance), financial (for example, cost of internal versus external resources) and business (for example, regulatory requirements and policies) conditions.

We estimate no more than 20% of large enterprises have implemented hybrid cloud computing
beyond simple integration of applications or services. This declines to 10% to 15% for midsize
enterprises, which mostly are implementing the availability/disaster recovery use case. Most
companies will use some form of hybrid cloud computing during the next three years. Some
organizations are implementing cloud management platforms (CMPs) to drive policy-based
placement and management of services internally or externally. A fairly common use case is in the
high availability (HA)/DR arena where data is synchronized from private to public or public to private
for the purposes of resiliency or recovery. A less common but growing use case (due to
complexities of networking and latency) is cloudbursting. The grid computing world already
supports hybrid models executing across internal and external resources, and these are
increasingly being applied to cloud computing. More sophisticated, integrated solutions and
dynamic execution interest users, but are beyond the current state of the art.
Positioning has advanced significantly in a year (from peak to postpeak midpoint) as organizations embrace the public cloud, integrate it into their business processes and internal services, and
engage in designing cloud-native and optimized services. While maturing rapidly, the reality is that
hybrid cloud computing is a fairly immature area with significant complexity in setting it up in
operational form. Early implementations are typically between private and public clouds, and not
often between two different public cloud providers. Technologies that are used to manage hybrid
cloud computing include CMPs, but also specific services supplied by external cloud and
technology providers that enable movement and management across internal and external cloud
resources. Most hybrid cloud computing technologies and services seek to lock in customers to
their respective technologies and services, as there are no standard industry approaches.
User Advice: When using public cloud computing services, establish security, management and
governance models to coordinate the use of these external services with internal applications and
services. Where public cloud application services or custom applications running on public cloud
infrastructures are used, establish guidelines and standards for how these elements will combine
with internal systems to form a hybrid environment. Approach sophisticated integrated solutions,
cloudbursting and dynamic execution cautiously, because these are the least mature and most
problematic hybrid approaches. To encourage experimentation and cost savings, and to prevent
inappropriately risky implementations, create guidelines/policies on the appropriate use of the
different hybrid cloud models. Consider implementing your policies in CMPs, which implement and
enforce policies related to cloud services.
Business Impact: Hybrid cloud computing leads the way toward a unified cloud computing model,
in which there is a single cloud that is made up of multiple sets of cloud facilities and resources
(internal or external) that can be used, as needed, based on changing business requirements. This
ideal approach would offer the best-possible economic model and maximum agility. It also sets the
stage for new ways for enterprises to work with suppliers and partners (B2B), and customers
(business-to-consumer), as these constituencies also move toward a hybrid cloud computing
model. In the meantime, less ambitious hybrid cloud approaches still allow for cost optimization,
flexible application deployment options, and a coordinated use of internal and external resources.
Benefit Rating: Transformational
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: Apache CloudStack; BMC Software; HP; IBM; Microsoft; OpenStack; RightScale;
VMware
Recommended Reading: "Hybrid Cloud Network Architectures"
"Hybrid Cloud Is Driving the Shift From Control to Coordination"
"Cloud Storage Gateways: Enabling the Hybrid Cloud"

Gamification
Analysis By: Brian Burke

Definition: Gamification is the use of game mechanics and experience design to digitally engage
and motivate people to achieve their goals. Gartner has recently redefined gamification; in this new
definition, it is distinguished by its digital engagement model and the focus on motivating players to
achieve their goals (see "Redefine Gamification to Understand Its Opportunities and Limitations").
Position and Adoption Speed Justification: In the 2014 Hype Cycle, gamification has moved from the Peak of Inflated Expectations in 2013 to the start of its entry into the Trough of Disillusionment today. According to Google Trends, over the past year, the hype surrounding gamification overall has leveled off, and the number of critics of gamification is increasing. But client inquiries indicate the focus for gamification has clearly shifted from being primarily consumer-facing and marketing-driven to becoming primarily an enterprise concern, with a focus both internal and external to the
organization. Internal to organizations, gamification is being used in recruiting, onboarding, training,
wellness, collaboration, performance, innovation, change management and sustainability. This trend
is set to accelerate as larger vendors, such as salesforce.com, begin to integrate game mechanics
and analytics into their software offerings. In addition to externally focused solutions targeting customers or communities of interest, there is also an increasing number of gamification solutions focusing on specific communities of interest, particularly in civic, health and innovation areas.
Gamification leaders such as Nike, Khan Academy and Quirky demonstrate that gamification can
have a huge positive impact on engagement when applied in a suitable context. However,
gamification has significant challenges to overcome before widespread success occurs. Designing a gamified solution is no easy task; successful solutions are focused on enabling players to achieve their goals. Player goals and organizational goals must be aligned, and only then can the
organizational goals be achieved as a consequence of players achieving their goals. Successful
gamified solutions design an experience for players that takes them on a journey to achieving their
goals. Designing for engagement (rather than for efficiency) is a new skill, and one that is in short
supply in IT organizations. This will hinder the development of the trend over the next three years.
User Advice: Gamification builds motivation into a digital engagement model, and can be used to
add value to products and to deepen relationships by changing behaviors, developing skills or
driving innovation. The target audiences for gamification are customers, employees and
communities of interest.
Organizations planning to leverage gamification must clearly understand the goals of the target
audience they intend to engage, how those goals align with organizational goals and how success
will be measured.
Gamification technology comes in three forms:

General-purpose gamification platforms delivered as SaaS that integrate with custom-developed and vendor-supplied applications

Purpose-built solutions supplied by a vendor to support a specific usage (for example, innovation management or service desk performance)

Purely custom implementations

Organizations must recognize that simply including game mechanics is not enough to realize the
core benefits of gamification. Making gamified solutions sufficiently rewarding requires careful
planning, design and implementation, with ongoing adjustments to keep users interested. Designing
gamified solutions is unlike designing any other IT solution, and it requires a different design
approach. Few people have gamification design skills, which remains a huge barrier to success in
gamified solutions.
Organizations are beginning to use gamification as a means to motivate employees and customers.
Implementing gamification means matching player goals to target business outcomes, in order to
engage people on an emotional level, rather than on a transactional level.
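As a bare-bones sketch of the mechanics layer only (illustrative Python; the hard part, aligning player and organizational goals and designing the experience, is not code), points and levels track a player's progress toward his or her own goal:

    # Illustrative only: the mechanics layer (points and levels toward a goal).
    LEVELS = [0, 100, 250, 500]  # cumulative points required for each level

    class Player:
        def __init__(self, name, goal_points):
            self.name, self.points, self.goal = name, 0, goal_points

        def record_action(self, action, reward):
            self.points += reward
            level = sum(1 for threshold in LEVELS if self.points >= threshold)
            progress = min(self.points / self.goal, 1.0)
            print(f"{self.name}: {action} (+{reward}) level {level}, "
                  f"{progress:.0%} toward goal")

    player = Player("alex", goal_points=500)  # a player goal, aligned with the org's
    player.record_action("completed training module", 120)
    player.record_action("shared a best practice", 60)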
Business Impact: Gamification can increase the effectiveness of an organization's digital business
strategy. It provides a means of packaging motivation and delivering it digitally to add value to
products and relationships. While many of the concepts in gamification have been around for a long
time, the advantage of a digital engagement model is that it scales to virtually any size, with very
low incremental costs. Its use is relevant, for example, to marketing managers, product designers,
customer service managers, financial managers and HR staff, whose aim is to bring about longer-lasting and more-meaningful interactions with customers, employees or the public.
Although gamification can be beneficial, it's important to design, plan and iterate on its use to avoid
the negative business impacts of unintended consequences, such as behavioral side effects or
gamification fatigue.
User engagement is at the heart of today's "always connected" culture. Incorporating game
mechanics encourages desirable behaviors, which can, with the help of carefully planned scenarios
and product strategies, increase user participation, improve product and brand loyalty, advance
learning and understanding of a complex process, accelerate change adoption, and build lasting
and valuable relationships with target audiences.
Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: Badgeville; BigDoor; Bunchball; RedCritter
Recommended Reading: "Redefine Gamification to Understand Its Opportunities and Limitations"
"Technology Overview for Gamification Platforms"
"Business Model Games: Driving Business Model Innovation With Gamification"
"Gamification: Engagement Strategies for Business and IT"
"Best Practices for Harnessing Gamification's Potential in the Workplace"
"Gamification: The Serious Side of Games Can Make Work More Interesting"

Augmented Reality
Analysis By: Tuong Huy Nguyen; CK Lu
Definition: Augmented reality (AR) is the real-time use of information in the form of text, graphics,
audio and other virtual enhancements integrated with real-world objects and presented using a
heads-up display (HUD)-type display or projected graphics overlays. It is this "real world" element
that differentiates AR from virtual reality. AR aims to enhance users' interaction with the
environment, rather than separating them from it.
Position and Adoption Speed Justification: Although verticals such as automotive and military have been using AR for many years, this technology entered the mainstream driven by the interest in, and proliferation of, mobile devices and geolocation services. Recent focus has shifted back to vision-based identification AR. This technology supplements location-dependent AR and provides additional use-case scenarios.
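At its core, location-dependent AR decides whether a geotagged point of interest falls inside the device camera's field of view. A stripped-down sketch (flat-earth approximation and hypothetical coordinates, adequate only as an illustration):

    # Illustrative sketch: is a geotagged POI inside the camera's field of view?
    import math

    def bearing_deg(lat1, lon1, lat2, lon2):
        # Flat-earth approximation, adequate only over short distances.
        dx = (lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
        dy = lat2 - lat1
        return math.degrees(math.atan2(dx, dy)) % 360

    def in_view(device_heading, poi_bearing, fov_deg=60):
        diff = abs((poi_bearing - device_heading + 180) % 360 - 180)
        return diff <= fov_deg / 2

    device = (48.8584, 2.2945)   # hypothetical GPS fix of the device
    poi = (48.8606, 2.3376)      # hypothetical geotagged point of interest
    bearing = bearing_deg(*device, *poi)
    print(in_view(device_heading=90, poi_bearing=bearing))  # overlay label if True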
A growing number of brands, retailers, manufacturers and companies in various verticals have
shown interest in, or are using, AR to enhance internal and/or external business processes. Hype
around AR has stabilized. This has allowed more companies to look beyond the initial hype to
explore AR's potential to provide business innovation, enhance business processes and provide
high value to external clients. The biggest challenge for external-facing AR is gimmicky implementations: solutions that provide the consumer no value. This will potentially limit consumer interest in, and adoption of, the technology. Internal-facing implementations have better
potential for adoption to bring business value because they won't be hindered by consumer
preferences. Advancement of heads-up displays will further encourage use of AR as an enterprise tool.
Beyond audience-based challenges, a number of factors will continue to hinder AR adoption.

Rigorous device requirements restrict the information that can be conveyed to the end user. Cloud computing initiatives will alleviate some of this burden.

Data costs for always-on connectivity.

Privacy concerns for both location and visual identification-based AR.

A lack of standardization for AR browsers' data structures.

User Advice:

Communications service providers: Examine whether AR would enhance the user experience
of your existing services. Compile a list of AR developers with which you could partner, rather
than building your own AR from the ground up. Provide end-to-end professional services for
specific vertical markets, including schools, healthcare institutions and real estate agencies, in
which AR could offer significant value.

Mobile device manufacturers: Recognize that AR provides an innovative interface for your
mobile devices. Open discussions with developers about the possibility of preinstalling
application clients on your devices and document how developers can access device features.

AR developers: Take a close look at whether your business model is sustainable, and consider
working with CSPs or device manufacturers to expand your user base; perhaps by offering
white-label versions of your products. Integrate AR with existing tools, such as browsers or
maps, to provide an uninterrupted user experience.

Providers of search engines and other Web services: Get into AR as an extension of your
search business. AR is a natural way to display search results in many contexts.

Mapping vendors: Add AR to your 3D map visualizations.

Early adopters: Examine how AR can bring value and ROI to your organization and your
customers by offering branded information overlays. For workers who are mobile (including
factory, warehousing, maintenance, emergency response, queue-busting or medical staff),
identify how AR could deliver context-specific information at the point of need or decision.

Brands, marketers and advertisers: Use AR to bridge your physical and digital marketing
assets and drive increased engagement with your user base.

Business Impact: AR is used to bridge the digital and physical world. This has an impact on both
internal- and external-facing solutions. For example, internally, AR can provide value by enhancing
training, maintenance and collaboration efforts. Externally, it offers brands, retailers and marketers the ability to seamlessly combine physical campaigns with their digital assets.
CSPs and their brand partners can leverage AR's ability to enhance the user experience within their
location-based service (LBS) offerings. This can provide revenue via set charges, recurring
subscription fees or advertising. Handset vendors can incorporate AR to enhance UIs, and use it as
a competitive differentiator in their device portfolio. The growing popularity of AR opens up a market
opportunity for application developers, Web services providers and mapping vendors to provide
value and content to partners in the value chain, as well as an opportunity for CSPs, handset
vendors, brands and advertisers.
Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Adolescent
Sample Vendors: Catchoom; Daqri; GeoVector; Google; HP; Layar; Metaio; Mobilizy; Nokia;
Qualcomm; Tonchidot; Total Immersion; Zugara
Recommended Reading: "Top Recommendations to Prepare for Augmented Reality in 2013"
"Innovation Insight: Augmented Reality Will Become an Important Workplace Tool"
"Innovation Insight: Smartglasses Bring Innovation to Workplace Efficiency"

Machine-to-Machine Communication Services


Analysis By: Sylvain Fabre; Eric Goodness

Definition: Managed machine-to-machine (M2M) communication services encompass integrated and managed infrastructure, application and IT services to enable enterprises to connect, monitor and control business assets and related processes over a fixed or mobile connection. Managed M2M services contribute to existing IT and/or operations technology (OT) processes. M2M communication services are the connectivity services for many Internet of Things (IoT) implementations.
Position and Adoption Speed Justification: M2M technology continues to fuel new business
offerings and support a wide range of initiatives, such as smart meters, road tolls, smart cities,
smart buildings and geofencing assets, to name a few.
The key components of an M2M system are:

Field-deployed wireless devices with embedded sensors or radio frequency identification (RFID)
technology

Wireless and wireline communication networks, including cellular communication, Wi-Fi, ZigBee, WiMAX, generic DSL (xDSL) and fiber to the x (FTTx) networks

A back-end network that interprets data and makes decisions (for example, e-health applications are also M2M applications); a minimal back-end sketch follows this list
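As a minimal back-end sketch (hypothetical message shape; real deployments use protocols such as CoAP or MQTT over the networks listed above), the interpret-and-decide role reduces to a few lines:

    # Illustrative only: a back end interpreting device telemetry.
    import json

    def handle(message):
        reading = json.loads(message)
        # The back end interprets the data and decides on an action.
        if reading["kind"] == "smart_meter" and reading["kwh"] > 50:
            return f"alert: abnormal draw at meter {reading['device_id']}"
        return "ok"

    # A field device would transmit something like this over a cellular link.
    msg = json.dumps({"device_id": "SM-0042", "kind": "smart_meter", "kwh": 73})
    print(handle(msg))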

There are currently few service providers that can deliver end-to-end M2M services. The value
chain remains fragmented. Service providers are trying to partner with others to create a workable
ecosystem.
M2M services are currently provided by three types of provider:

M2M service providers. Mobile virtual network operators and companies associated with an
operator that can piggyback on that operator's roaming agreements (for example, Wyless, Kore
Telematics and Jasper Wireless).

Communications service providers (CSPs). Some CSPs, such as Orange in Europe and AT&T
in North America, have quietly supplied M2M services for several years. However, CSPs are
now marketing M2M services more vigorously, and those without a strong M2M presence so far
are treating it more seriously by increasing their marketing or creating dedicated M2M service
divisions (for example, T-Mobile, Telenor and Vodafone).

M2M service aggregators. These encompass traditional outsourcers and emerging players
that bundle connectivity into systems resale and integration (such as Modus Group or Integron).

One of the key technology factors that may affect M2M service deployment is the capability to
support mobile networks. Early M2M services were smart meters, telematics and e-health monitors,
which are expected to be widely used in the future. In its Release 10, the Third Generation
Partnership Project (3GPP) worked on M2M technology to enhance network systems in order to
offer better support for machine-type communications (MTC) applications. The 3GPP's TS 22.368
specification describes common and specific service requirements for MTC. The main functions
specified in Release 10 are overload and congestion control, and the recently announced Release
11 investigates additional MTC requirements, use cases and functional improvements to existing
specifications. End-to-end real-time security will also become an important factor as more critical vertical applications are brought into cellular networks.
Another key factor on the technology side that may impact mass deployment of M2M
communication services is the level of standardization. Some key M2M technology components
(RFID, location awareness, short-range communication and mobile communication technologies,
for example) have been on the market for quite a long time, but there remains a lack of the
standardization necessary to make M2M services cost-effective and easy to deploy, therefore
enabling this market to take off. M2M standardization may involve many technologies (such as the
Efficient XML Interchange [EXI] standard, Constrained Application Protocol [CoAP] and Internet
Protocol Version 6 over Low-Power Wireless Personal Area Networks [IPv6/6LoWPAN]) and
stakeholders, including CSPs, RFID makers, telecom network equipment vendors and terminal
providers. The European Telecommunications Standards Institute has a group working on the
definition, smart-metering use cases, functional architecture and service requirements for M2M
technology.
We expect that M2M communication services will be in the Trough of Disillusionment in 2015.
Procurement teams will perceive that prices are too high and the space unnecessarily complex (for example, roaming or multi-country implementations), especially when contrasted with consumer/wearables IoT, which will use the smartphone as a gateway to the Internet.
User Advice: As M2M communications grow in importance, regulators should pay more attention
to standards, prices, terms and conditions. For example, the difficulty of changing operators during
the life of equipment with embedded M2M technology might be seen by regulators as potentially
monopolizing. Regulators in France and Spain already require operators to report on M2M
connections, and we expect to see increased regulatory interest elsewhere.
For the end user, the M2M market is very fragmented because no single end-to-end M2M provider
exists. A number of suppliers offer enterprise users monitoring services, hardware development,
wireless access services, hardware interface design and other functions. As a result, an adopter has
to do a lot of work to integrate the many vendors' offerings. On top of this, business processes may
need redefining.
While M2M is usually part of a closed loop OT environment run by engineering, it could be
facilitated and exploited by an aligned IT and OT approach. In some cases, M2M may be deployed
and supported by IT departments with adequate skills and understanding.
An enterprise's M2M technology strategy needs to consider the following issues:

Scope of deployment

System integration method

Hardware budget

Application development and implementation

Wireless service options

Wireless access costs

Business Impact: M2M communication services have many benefits for enterprise users,
governments and CSPs. They can dramatically improve the efficiency of device management. As
value-added services, they also have considerable potential as revenue generators for CSPs. The
success of these services will be important for CSPs' business growth plans.
M2M communication services are expected to be the critical enablers for many initiatives that fall
under the "smart city" umbrella and contribute to the IoT. Examples are smart grid initiatives with
connected smart grid sensors to monitor distribution networks in real time, and smart transportation
initiatives with embedded telematics devices in cars to track and control traffic. M2M
communication services will also connect billions of devices, causing further transformation of
communication networks.
M2M communication services should be seen as an important set of facilitating technologies for use
in operational technologies. At an architectural level, particular care should be taken when choosing
M2M solutions to ensure they facilitate the alignment, convergence or integration of operational
technology with IT.
As CSPs' M2M portfolio broadens and goes beyond connectivity, the number of solutions aimed at
specific industry verticals is growing at a fast rate. Most CSPs with M2M offerings provide vertically
integrated, end-to-end solutions in the area of automotive, utilities, transport and logistics, and
healthcare, the latter of which is experiencing particularly fast growth for CSPs.
Benefit Rating: Transformational
Market Penetration: Less than 1% of target audience
Maturity: Adolescent
Sample Vendors: AT&T; Jasper Wireless; KDDI; Kore Telematics; Modus Group; Orange France;
Qualcomm; Sprint; Telefonica; Telenor; Verizon; Vodafone; Wyless

Mobile Health Monitoring


Analysis By: Barry Runyon; Jessica Ekholm
Definition: Mobile health monitoring is the use of mobile devices and information and
communications technologies to actively monitor the health of patients. Patients are provided
mobile and wearable monitoring devices that capture physiological metrics, such as blood
pressure, glucose level, pulse, blood oxygen level and weight, and then transmit or stage the patient
data for analysis, review and intervention.
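A toy sketch of the capture-and-flag loop (hypothetical thresholds and field names; real devices and clinical rules are regulated and far more nuanced):

    # Illustrative only: flag out-of-range readings for clinician review.
    READINGS = [  # metrics a wearable might stage for transmission
        {"patient": "p7", "metric": "glucose_mgdl", "value": 310},
        {"patient": "p7", "metric": "spo2_pct", "value": 97},
    ]
    THRESHOLDS = {"glucose_mgdl": (70, 180), "spo2_pct": (92, 100)}  # hypothetical

    for reading in READINGS:
        low, high = THRESHOLDS[reading["metric"]]
        if not low <= reading["value"] <= high:
            # In practice this triggers clinician review and intervention.
            print(f"ALERT {reading['patient']}: {reading['metric']}="
                  f"{reading['value']} outside [{low}, {high}]")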
Position and Adoption Speed Justification: Advances in smartphone platforms, sensor
technologies, cellular networks, cloud computing and mobile and wearable medical devices have
removed many of the technical barriers to mobile health monitoring. Industry cooperatives such as
the Continua Health Alliance, the ZigBee Alliance and the Bluetooth Special Interest Group have
furthered the cause of device interoperability. Over this past year, we have seen an increased
interest in mobile health monitoring, driven by a number of factors:

Smartphone use by adults is at an all-time high.

The increased burden of chronic disease in emerging markets, many of which have poor
landline coverage and better mobile coverage, is generating interest from government
healthcare agencies in deploying mobile versions of home health monitoring devices.

Interest is growing among healthcare delivery organizations (HDOs) in developed and emerging
markets in using mobility to overcome the "location dependence" limitation of home health
monitoring technologies. The use of portable or wearable devices opens the possibility of
monitoring active, mobile patients continually and in real time.

The popularity of personal health record (PHR) applications is enabling healthcare consumers to
create Web-based healthcare data repositories that are able to accept data from health and
fitness monitoring devices.

The so-called "quantified self" is generating increasing fascination. Sports product manufacturers, such as Adidas and Nike, are offering motion trackers that help create a better
jogging experience. Professional sports teams use a variety of dedicated sensors and devices
to measure the performance of team players. The widespread adoption of smartphones with
low-cost applications that enable mobile health monitoring is leading to growing interest among
healthcare consumers in self-monitoring.

Affordable, wireless gateways connect to standard home health monitors (weight, blood
glucose and blood pressure) and automate data collection and secure transmission.

Smartphone manufacturers will begin to incorporate biosensor and monitoring technologies into the devices, making it easier for medical application developers to deploy their functionality and less expensive for the consumer.

The number of wearable devices on the market with the potential to help both patients and
clinicians monitor vital signs and symptoms has increased dramatically.

Acceptance of cloud-based services by HDOs is increasing.

Despite growing interest, most deployments of mobile health monitoring are pilot projects. HDOs,
for the most part, are not yet convinced that the business case for mobile health monitoring is viable
and have not yet shown the organizational commitment to develop sustainable services on a large
scale. In 2012, Telcare (see "Cool Vendors in Healthcare Providers, 2012") began shipping U.S.
Food and Drug Administration (FDA)-cleared glucometers that automatically connect to a cellular
data network and integrate with Telcare's own website, payer call centers and the electronic health
records of healthcare providers. The ease of deployment of products such as Telcare's should help move some pilots to larger-scale, operational programs, driven in part by the fact that consumer
mobile blood glucose monitoring devices (such as iHealth's Wireless Smart Gluco-Monitoring
System) are gaining greater acceptance with consumers and employers. As mobile health
monitoring evolves and its clinical uses become more clearly defined, it will most likely fragment into certain submarkets focused on particular clinical areas, such as obesity, chronic obstructive
pulmonary disease (COPD), diabetes and cardiac care.
User Advice: Whether mobile health monitoring pilots evolve into operational deployments depends
on the ability of HDOs to overcome certain obstacles, including legal and licensing restrictions,
inconsistent reimbursement by healthcare payers, and the reality that mobile health monitoring will
require new staffing and workforce considerations and new business processes for dealing with
remotely generated patient data, as well as new ways of integrating this information into their
business and clinical systems.
HDOs should focus on the process and business issues raised by mobile health monitoring. It is
essential to develop the ability to manage large numbers of mobile devices and remote patients, to
change business and clinical processes to handle remotely generated patient data, and to change
the staffing model to be able to orchestrate time-critical interventions for patients.
HDOs should not rush to replace home health monitoring in favor of mobile health monitoring.
Mobile health monitoring will be used as a supplement or alternative to home health monitoring and
will likely be used to serve certain types of patients (such as younger, more active and more tech-savvy patients).
Business Impact: If deployed appropriately, mobile health monitoring will enable closer monitoring
and faster intervention in the care of certain groups of patients. Mobile health monitoring can
improve patient engagement, enhance the patient experience and increase adherence to care
plans.
Benefit Rating: Moderate
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: Abbott Diabetes Care; Aerotel Medical Systems; Ambio Health; Ideal Life;
Johnson & Johnson; Medic4all; Medtronic; Nonin Medical; OBS Medical; Preventice; Ringful Health;
Roche; Tunstall Healthcare; Withings
Recommended Reading: "Analytics Gets Personal With the Quantified Self"
"Failure to Address Organizational Issues Will Derail Telemedicine Initiatives"
"A Framework for Understanding Telehealth, Telemedicine and Other Remote Healthcare Delivery
Solutions"
"Survey Analysis: Telemedicine Initiatives Reflect Pragmatism in Adoption"
"2014 Strategic Road Map for the Real-Time Healthcare System"

Cloud Computing
Analysis By: David Mitchell Smith

Definition: Cloud computing is a style of computing in which scalable and elastic IT-enabled
capabilities are delivered as a service using Internet technologies.
Position and Adoption Speed Justification: Cloud computing remains a very visible and hyped
term, but, at this point, it is approaching the Trough of Disillusionment. There are many signs of
fatigue, rampant cloudwashing and disillusionment (for example, highly visible failures). Cloud computing remains a major force in IT, and vendor messaging around it is still increasing. Every IT vendor has a cloud strategy, although many aren't cloud-centric and some of their cloud strategies
are in name only. Users are changing their buying behaviors, and, although they are unlikely to
completely abandon on-premises models or source all complex, mission-critical processes as
services through the cloud in the near future, there is a movement toward consuming services in a
more cost-effective way and toward enabling capabilities not easily done elsewhere. Much of the
focus is on agility, speed and other non-cost-related benefits.
Cloud computing has been, and continues to be, one of the most hyped terms in the history of IT.
Its hype transcends the IT industry and has entered popular culture, which has had the effect of
increasing hype and confusion around the term. In fact, cloud computing hype is literally "off the charts," as Gartner's Hype Cycle does not measure amplitude of hype (that is, a heavily hyped term such as cloud computing rises no higher on the Hype Cycle than anything else).
Although the hype has long since peaked, there is still a great deal of hype surrounding cloud
computing and its many relatives. Although the Hype Cycle does not measure amplitude, cloud still
has more hype than many other technologies that are actually at or near the Peak of Inflated
Expectations. Variations, such as private cloud computing and hybrid approaches, compound the
hype and reinforce that one dot on a Hype Cycle cannot adequately represent all that is cloud
computing.
The hype around cloud computing continues to evolve as the market matures. Initial hype about
cost savings has now focused more on the business benefits that organizations would realize due
to a shift to cloud computing. While some organizations have realized some cost savings, more and
more are focusing on other benefits, such as agility, speed, time to market and innovation.
User Advice: User organizations must demand road maps for the cloud from their vendors. Users
should look at specific usage scenarios and workloads, map their view of the cloud to that of
potential providers and focus more on specifics than on general cloud ideas. Understanding the
service models involved is key.
Vendor organizations must begin to focus their cloud strategies on more specific scenarios and
unify them into high-level messages that encompass the breadth of their offerings. Differentiation in
hybrid cloud strategies must be articulated and will be challenging as all are "talking the talk," but
many are taking advantage of the even broader leeway afforded by the term. Cloudwashing should
be minimized.
Cloud computing involves many components, and some aspects are immature. Care must be taken
to assess maturity and assess the risks of deployment. Tools such as cloud service brokerages can
help.


As user organizations contemplate the use of cloud computing, they should establish a clear
understanding of the expected benefits of a move to the cloud. Likewise, organizations should
clearly understand the trade-offs associated with cloud models to reduce the likelihood of failure.
Benefits and trade-offs should be well-understood before embarking on a cloud computing
strategy.
Business Impact: The cloud computing model is changing the way the IT industry looks at user
and vendor relationships. As service provisioning (a critical aspect of cloud computing) grows,
vendors must become providers, or partners with service providers, to deliver technologies
indirectly to users. User organizations will watch portfolios of owned technologies decline as service
portfolios grow. The key activity will be to determine which cloud services will be viable, and when.
Potential benefits of cloud include cost savings and capabilities (including concepts that go by
names like agility, time to market and innovation). Organizations should formulate cloud strategies
that align business needs with those potential benefits.
Benefit Rating: Transformational
Market Penetration: 5% to 20% of target audience
Maturity: Early mainstream
Sample Vendors: Amazon; Google; Microsoft; salesforce.com; VMware
Recommended Reading: "Agenda Overview for Cloud Computing, 2014"

NFC
Analysis By: Mark Hung
Definition: Near Field Communication (NFC) is a wireless technology that enables a variety of
contactless applications, such as tap-to-act, information exchange, device pairing, mobile
marketing and payments. It has an operating range of 10 cm or less using the 13.56 MHz frequency
band. Three user modes are defined for NFC operation:

Card emulation

Tag reading

Peer-to-peer

These modes are based on several ISO/IEC standards, including ISO 14443 A/B, ISO 15693 and
ISO 18092. The NFC Forum is the industry group that specifies the use of these standards.
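To make the tag-reading mode concrete, the sketch below builds the byte layout of a single NDEF text record by hand. It is a minimal illustration of the NFC Forum record format; production code would normally rely on a maintained NDEF library or the platform's NFC stack.

```python
def ndef_text_record(text, lang="en"):
    """Build one NFC Forum NDEF text record in short-record form.

    Header byte 0xD1 sets MB=1, ME=1, SR=1 and TNF=0x01 (well-known type 'T').
    The payload starts with a status byte: UTF-8 flag plus language-code length.
    """
    lang_bytes = lang.encode("ascii")
    text_bytes = text.encode("utf-8")
    payload = bytes([len(lang_bytes)]) + lang_bytes + text_bytes
    if len(payload) > 255:
        raise ValueError("short-record form supports payloads up to 255 bytes")
    return bytes([0xD1, 1, len(payload)]) + b"T" + payload

msg = ndef_text_record("Hello NFC")
print(msg.hex())  # d1010c5402656e48656c6c6f204e4643
```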
Position and Adoption Speed Justification: For the past decade, NFC has been a technology
looking for a solution. Originally intended as the foundation for next-generation payment systems
using smart cards, it never caught on due to the lack of a compelling value proposition. In
November 2010, Google breathed new life into NFC by embedding it in its Google Nexus S
smartphone. This became the first widely available smartphone with built-in NFC. Soon after this release, Google enhanced the Android OS to eventually support all three modes specified by the NFC Forum, another first. Currently, all the major smartphone OS vendors, with the notable exception of Apple, provide native support for NFC:

Android: Acer, Asus, HTC, Huawei, Lenovo, LG, Motorola, Samsung, Sony and ZTE

BlackBerry OS: BlackBerry

Windows Phone: Nokia and Samsung

By embedding NFC in the smartphone platform, the hardware and software companies hope to
move beyond payments and provide the developer community with another tool to foster innovative
applications. Several smartphone and consumer electronics companies have been particularly
aggressive in exploring new NFC uses:

Samsung: It has highlighted several NFC use cases, such as video exchange, in its
commercials. Some of these use cases have been used to distinguish its Galaxy line of
smartphones from the iPhone.

Sony: It has introduced a complete line of consumer electronics devices, such as TVs, remote
controls, boomboxes, speakers and headsets, with NFC capabilities built in.

LG: It has expanded NFC capabilities into home appliances, such as refrigerators and vacuum
cleaners.

Nintendo: The inclusion of NFC in the Wii U's GamePad enables NFC functionality for future game play. The company also plans to use it for digital payments.

Disney: Its NFC-enabled Disney Infinity line of games and toys and its MyMagic+ wristband for its theme parks met with great commercial success in 2013.

NFC payment, however, with multiple parties that have differing interests and agendas, remains the
most complex and time-consuming application to implement. For the next few years, growth of
NFC will be primarily in smartphones and the surrounding digital ecosystem devices, such as
tablets, PCs, printers and TVs. For NFC to take off in payments, a compelling case must be made
for the merchants and the financial ecosystem to invest in the necessary infrastructure. The recent introduction of Host Card Emulation (HCE) in Android 4.4 (KitKat) may finally help NFC payments get off the ground, at least for the supply side of the equation.
In other markets, NFC has started to get more traction. In transportation, proprietary contactless
technologies (such as NXP Semiconductors' Mifare) have dominated the market. New industry
organizations, such as the Open Standard for Public Transport (OSPT) Alliance, are now looking to
promote standards-based NFC for the transportation application. In the enterprise, vendors such as
HID Global are now promoting NFC-based solutions for both physical access (for example, building
entry) and IT access (for example, server login). Finally, NFC is also starting to emerge in automotive
applications, including the following:

Bluetooth pairing of mobile devices to in-vehicle infotainment systems


In-vehicle Wi-Fi configuration

Configuration of personalized driver settings and preferences (for example, seat position,
temperature, audio system, and instrument and infotainment displays)

Keyless entry and authentication

ID and access control for car rental and car-sharing applications

Access to vehicle data for diagnostics, maintenance, system status and so forth

User Advice:

Electronic equipment manufacturers should carefully examine NFC's possible use cases and
determine which of their mobile, computing, communications and consumer electronics devices
can benefit from NFC's inclusion.

Software developers should explore the combination of NFC with a smartphone's other
capabilities to bring about innovative applications to bridge the online and physical worlds.

Wireless connectivity semiconductor vendors should re-examine their product road map and
decide how to offer this capability to their customers, whether through partnership, acquisition
or organic development. This will become a checkbox item for connectivity on smartphones
within two to three years.

Business Impact: NFC can bring about unrealized applications by embedding identity in a
multifunction computing and communications platform, such as the smartphone. Although NFC will
have the most impact at the consumer level at first, it may eventually have a strong influence on
context-aware computing and security control in many different industries and enterprises.
Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: Broadcom; Disney; Google; Nintendo; Nokia; NXP Semiconductors; Samsung;
Sony
Recommended Reading: "Innovation Insight: NFC Bridges Mobile Devices, People and Things"

Virtual Reality
Analysis By: Brian Blau
Definition: Virtual reality (VR) provides a computer-generated 3D environment that surrounds a user
and responds to that individual's actions in a natural way, usually through immersive head-mounted
displays (HMDs) and head tracking. Gloves providing hand tracking and haptic (or touch-sensitive)
feedback may be used as well. Room-based systems provide a 3D experience for multiple
participants; however, they are more limited in interaction capabilities.


Position and Adoption Speed Justification: Virtual reality is used in high-end simulation and
training applications, including military simulation and training like flight simulators, truck operator
training in specialized environments (such as mines), and accident investigation in several
industries. It is also used in scientific visualization and modeling, including geomodeling in the oil
industry and genome mapping, as well as for product design, where VR systems are used to
experience automobile or equipment design. Fully immersive theme park rides are also considered
VR because of their use of computer-generated graphics, but they are limited to a playback
experience and are often not responsive to user input. Entertainment-based VR that uses immersive
and interactive storytelling techniques could disrupt major markets by creating new types of
entertainment experiences and user interfaces versus the traditional movie theater and television flat
screen approach that has been the norm since movies were invented more than 100 years ago.
Medical professionals will use VR for telepresence doctoring or even remote surgery. Immersive military applications are more advanced than other types of VR, and the time to plateau of 5 to 10 years is consistent with adoption in the consumer market and with more traditional, consumer-like usage in businesses.
VR experiences are typically used with HMDs. The best-known is the Oculus Rift, a 2012 crowdfunded Kickstarter project that raised $2.4 million to develop a fully immersive HMD targeted at the video game industry. A retail version of the Oculus Rift is still in development and, in an unexpected move, Facebook acquired Oculus in April 2014 for $2 billion. In March 2014, Sony announced Project Morpheus, its HMD and VR effort targeted at console gaming. Now that Oculus and Sony are competing for VR experiences, the consumer VR market will likely take off by 2015, which could usher in a true expansion of the VR technology and solutions market.
Despite these well-established niches and recent HMD advances, reliance on specialized interfaces
has kept VR from becoming a mainstream technology in business or entertainment. Virtual worlds such as Second Life and IMVU (which show a 3D environment on a 2D screen rather than immersing the user inside a room or an HMD) and the more recent rise of the Oculus Rift have all contributed to the rekindled interest in VR. However, major technology and usability
advances are still required for a low-cost, broadly used immersive virtual environment. In the
meantime, growing popularity of 3D entertainment using 3D glasses and, increasingly, 3D smart
television screens and projections that do not require glasses may relegate immersive VR to
permanent niche status. Currently, augmented reality applications (which superimpose information
on the user's view of the real world rather than blocking out the real world) or mixed-reality
scenarios (where HMDs and context-aware software are used in a hybrid augmented/virtual
environment) are more popular technology approaches to the problem of marrying immersive VR to
a consumer setting.
While VR can be amazingly sophisticated, the level of customization can come at a high cost.
Recent advances in consumer technologies may help ease these obstacles. Standards (such as for
artificial intelligence scripting, object metadata and avatar identity data) are becoming more popular
due to the increased use of social networking technologies (such as management of social media
profiles) and broadening use of public identities. On the development side, technologies like cloud
graphics processing units and mobile video games, as well as the proliferation of broadband
access, will allow application developers to integrate VR more easily into their products.


User Advice:

Evaluate VR for video game development or mission-critical training and simulation activities
because it can offer higher degrees of fidelity than simple screen-based systems.

Consider VR for exploring design issues in the early stages of decision making for high-cost
products or architectural designs.

Business Impact: Virtual reality can support a wide variety of simulation and training applications,
including rehearsals and response to events. VR can also shorten design cycles through immersive
collaboration, and enhance the user interface experience for scientific visualization, education and
entertainment.
Benefit Rating: Moderate
Market Penetration: 1% to 5% of target audience
Maturity: Emerging
Sample Vendors: Barco; Digital ArtForms; Facebook; Mechdyne; Presagis; Sony; Virtual Heroes;
Vuzix; WorldViz
Recommended Reading: "Innovation Insight: Augmented Reality Will Become an Important
Workplace Tool"
"Maverick* Research: Surviving the Rise of 'Smart Machines,' the Loss of 'Dream Jobs' and '90%
Unemployment'"
"Cool Vendors in Human-Machine Interface, 2014"
"Top 10 Technology Trends Impacting the Oil and Gas Industry in 2014"

Climbing the Slope


Gesture Control
Analysis By: Stephen Prentice
Definition: Gesture control is the ability to recognize and interpret movements of the human body
to interact with and control a computer system without direct physical contact.
Position and Adoption Speed Justification: The broad proliferation of forward-facing video
cameras on devices, from handheld to large wall-mounted, has accelerated gesture recognition up
the slope toward mainstream adoption in a broad range of business and personal applications,
beyond the early developments in the gaming sector. In multiuser environments or where more
accuracy is required, assisted gesture control which makes use of additional physical objects
(such as gloves and wands with inertial sensors) can be used to enhance the interpretation or
resolution of detectable movements. At the same time, alternate sensing technologies are being
commercialized (such as the Leap Motion Controller). These offer submillimeter discrimination within a limited, or desktop-sized, zone. The continuing growth of tablets and large-screen
smartphones exposes the restrictions imposed by their limited screen size, and the emergence of
"hover and swipe" rather than the existing "touch and swipe" may enable a richer set of commands
(and a broader range of applications) for these devices. The most recent development is the
explosion of wearable devices (for example, fitness bands, smartwatches and augmented reality
devices such as Google Glass) in which gestural movements recognized by video or by inertial
sensors play a key role in the user interface. Still to come is the likelihood of gesture control playing
a key role in controlling autonomous devices (such as robots) in a mixed human/robot workplace
environment.
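As a minimal illustration of the inertial-sensor approach, the sketch below classifies a horizontal swipe from a burst of accelerometer samples. The threshold, axis convention and sample values are illustrative assumptions, not any vendor's recognition algorithm.

```python
def detect_swipe(accel_x, threshold=8.0):
    """Classify a swipe from raw accelerometer x-axis samples (m/s^2).

    A naive detector: the sign of the largest excursion decides direction."""
    peak = max(accel_x, key=abs, default=0.0)
    if abs(peak) < threshold:
        return "no-gesture"
    return "swipe-right" if peak > 0 else "swipe-left"

samples = [0.2, 1.5, 9.8, 4.1, 0.3]  # synthetic burst of motion to the right
print(detect_swipe(samples))  # swipe-right
```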
Composite interfaces (combining gesture, movement, facial and voice recognition) can create a
rich, immersive and intuitive interface to deliver new capabilities in very competitive environments.
Specific business applications are emerging. Several major retailers are now using in-store virtual
mirrors, which use gesture control to enable users to select garments and see them superimposed
on their bodies. The ability to interact from a distance (and from behind a window) opens up
applications in digital signage, banking and other areas. Healthcare applications (in areas such as
physiotherapy, fitness and well-being) are emerging, tracking developments in computer gaming.
The ability to control devices without physical contact (or while wearing nitrile gloves) has significant
benefits in reducing transfer of infectious materials.
In the near-term and midterm future, we anticipate:

The traditional control paradigms will no longer be appropriate, due to the growing availability of
gesture-controlled devices, the rapidly increasing accuracy of these devices and the growing
number of devices requiring control, many of which are becoming embedded into the fabric of
our environment. Gesture control allows control from the distant "lean-back zone" to the
immediate "lean-in zone," and down to the ultimate "wearable" zone. It is looking increasingly
significant as a primary interaction paradigm, with the ability to transform the way humans
interact with a new generation of computers.

With mainstream products in the gaming market now well-established, gesture control is
moving quickly through the Hype Cycle, and the growing availability of options advances it from
"adolescent" to "early mainstream" in terms of maturity.

While mainstream adoption in gaming is happening quickly, the time to plateau in the enterprise
space will be longer, but the changing nature of devices (especially in the consumer area) is
forcing the pace of adoption to accelerate.

A market-defined "language" of universally recognized gestures (similar to what has happened in


the multitouch area) will emerge to form the base from which more specialized control can be
developed.
User Advice: Gesture control is just one element in a collection of technologies (including voice
recognition, facial recognition, location awareness, 3D displays and augmented reality) that
combine well to reinvent human-computer interaction, especially around wearable devices.
Enterprises should:


Evaluate handheld and camera-based gesture recognition for potential business applications
involving controlling screen displays from a distance (the "lean-back" operating zone).

Evaluate wearable devices to see where they may be employed to enable new modes of
interaction.

Evaluate the emerging generation of desktop-oriented devices, and consider what role they
may play in the "lean-in" operating zone.

Consider how these may be combined with location-based information and augmented-reality
displays.

Even the simplest use of gesture, movement or touch can be introduced to existing products
(especially in the handheld space) to enhance the user experience.
Business Impact: The ability to interact and control without physical contact frees the user and
opens up a range of intuitive interaction opportunities, including the ability to control devices and
large screens from a distance. For smaller desktop, handheld and wearable devices, the ability to
control the device without physical contact opens up valuable possibilities in a variety of markets,
but especially in healthcare applications (where physical contact may result in the transfer of
infectious material). Gesture control also benefits the design aesthetics of touch-based devices,
allowing users to avoid unsightly fingerprints on their devices.
Benefit Rating: High
Market Penetration: 1% to 5% of target audience
Maturity: Early mainstream
Sample Vendors: Apple; Atheer Labs; eyeSight; Elliptic Labs; GestureTek; Google; Gyration; iNUI
Studio; Leap Motion; Microsoft; Nintendo; Oblong; SoftKinetic; Sony

In-Memory Analytics
Analysis By: Kurt Schlegel
Definition: In-memory analytics is an alternative business intelligence (BI) performance layer in
which detailed data is loaded into memory for fast query and calculation performance against large
volumes of data. This approach obviates the need to manually build relational aggregates and
generate precalculated cubes to ensure analytics run fast enough for users' needs.
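A toy illustration of the idea, assuming the pandas library as the in-memory layer: detail rows are held in RAM and an ad hoc aggregate is computed on demand, with no pre-built relational aggregate or cube.

```python
import pandas as pd

# Load detailed transaction data fully into memory once; queries and
# aggregations then run against RAM rather than precomputed structures.
sales = pd.DataFrame({
    "region":  ["East", "East", "West", "West", "West"],
    "product": ["A", "B", "A", "A", "B"],
    "revenue": [120.0, 75.5, 200.0, 90.0, 60.0],
})

# Ad hoc aggregate computed on the fly.
print(sales.groupby("region")["revenue"].sum())
```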
Position and Adoption Speed Justification: Declining memory prices, coupled with widespread
adoption of 64-bit computing, continue to prime the market for in-memory analytics. Most BI
vendors are now positioning in-memory analytics as a key component of their BI offerings, and the
use of DRAM and NAND Flash memory to speed up analytics will soon be ubiquitous as part of
vendor platforms. Forward movement into the Slope of Enlightenment reflects the acceptance of in-memory analytics, with more use cases and clear performance benefits becoming apparent.


In-memory analytics is no longer a fringe technology; it is increasingly becoming the dominant performance layer for BI and analytic application architectures. The time taken to reach the Plateau of Productivity was less than two years (previously this technology remained two to five years away from the plateau for several years).
User Advice: For response-time issues and bottlenecks, IT organizations should consider the
performance improvement that in-memory analytics can deliver, especially when run on 64-bit
infrastructure. Users should be careful to use in-memory analytics as a performance layer and not
as a substitute for a data warehouse. In fact, users considering in-memory analytics should also be aware of how their requirement for speedier query processing and analysis could be addressed by the use of in-memory processing in the underlying databases feeding BI, or via in-memory databases or data grids.
BI and analytic leaders need to be aware that in-memory analytics technology has the potential to
subvert enterprise-standard information management efforts through the creation of in-memory
analytic silos. Where it is used in a stand-alone manner, organizations need to ensure they have the
means to govern its usage and that there is an unbroken chain of data lineage from the report to the
original source system, particularly for system-of-record reporting.
Finally, it is becoming apparent that, as the scale of in-memory analytics deployments grows, performance tuning is still needed, either by the return of some aggregation at data load, or by managing application design against user concurrency requirements and the sizing of hardware and available RAM.
Business Impact: BI and analytic programs can benefit broadly from the fast response times
delivered by in-memory computing, and this in turn can improve the end-user adoption of BI and
analytics. The reduced need for database indexing and aggregation enables database
administrators to focus less on the optimization of database performance and more on value-added
activities. Additionally, in-memory analytics by itself will enable better self-service analysis because
there will be less dependence on aggregates and cubes built in advance by IT.
However, from an analyst user perspective, faster queries alone are not enough to drive higher
adoption. In-memory analytics is of maximum value to users when coupled with interactive
visualization capabilities or used within data discovery tools for the highly intuitive, unfettered and
fast exploration of data.
Benefit Rating: Moderate
Market Penetration: 20% to 50% of target audience
Maturity: Early mainstream
Sample Vendors: IBM; Microsoft; MicroStrategy; Oracle; Qlik; SAP; SAS; Tibco Software
Recommended Reading: "Need for Speed Powers In-Memory Business Intelligence"


Activity Streams
Analysis By: Nikos Drakos
Definition: An activity stream is a publish-and-subscribe notification mechanism and conversation
space typically found in social networking applications. It lists activities or events relevant to a
person, group or topic within the application. A participant subscribes to or "follows" entities, such
as other participants or business application objects, to track their related activities (a project
management application may add status information, for example), while a physical object
connected to the Internet may report its state, such as a flight delay.
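A minimal sketch of the publish-and-subscribe mechanism this definition describes; the names and event shapes are illustrative, not any product's API.

```python
from collections import defaultdict

class ActivityStream:
    """Participants follow entities (people, application objects, topics)
    and receive the events those entities publish."""

    def __init__(self):
        self.followers = defaultdict(set)   # entity -> set of subscribers
        self.inbox = defaultdict(list)      # subscriber -> received events

    def follow(self, subscriber, entity):
        self.followers[entity].add(subscriber)

    def publish(self, entity, event):
        for subscriber in self.followers[entity]:
            self.inbox[subscriber].append((entity, event))

stream = ActivityStream()
stream.follow("alice", "project-x")  # alice follows a business application object
stream.publish("project-x", "status changed to 'at risk'")
print(stream.inbox["alice"])  # [('project-x', "status changed to 'at risk'")]
```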
Position and Adoption Speed Justification: Activity streams are popular in consumer social
networking sites such as Facebook and Twitter, as well as in enterprise social networking
applications. Activity streams aggregate notifications from the system as well as information about
the activities of other "followed" individuals ('likes', comments, shared items, for example). In
business environments, activity streams may also contain information about events that are pushed
into a stream from business applications. Activity streams have the potential to become a general-purpose mechanism for personalized dashboards through which to disseminate and filter information; a mechanism for connecting groups and communities; and a rich "presence" mechanism.
Beyond notifications and conversations, it is also possible to use live widgets or gadgets (for example, a simple browser-based or mobile interactive application) to notify someone about an event, as well as allow them to interact with that notification. For example, a notification about a
survey may include some data collection, or a notification about an expense report may contain
action buttons that can open the report or allow an authorized user to approve it. Activity streams
populated with automated notifications provide a simple mechanism that can stimulate and focus
conversations around specific events, as well as broaden visibility and participation across different
groups.
Activity streams can be exposed within many contexts, including various places within a social
network site (for example, a profile, group or topic page); or they can be embedded within an
application (for example, an email sidebar or beside a business application record).
User Advice: Tools that help individuals expand their "peripheral vision" with little effort can be
useful. Being able to choose to be notified about the ideas, comments or activities of others on the
basis of who they are or the strength of a relationship is a powerful mechanism for managing
information from an end user's perspective. Unlike email, with which the sender may miss
interested potential recipients or overload uninterested ones, publish-and-subscribe notification
mechanisms such as activity streams enable recipients to fine-tune and manage more effectively
the information they receive.
Activity streams should be assessed in terms of their relevance as general-purpose information
access and distribution mechanisms. Most established enterprise software vendors, as well as
many specialist ones, have introduced activity streams in their products, and it is important to be
ready to understand their implications in terms of business value, cost and risk.


There are several aspects of enterprise implementation that will require particular caution. They
include:

The richness of privacy controls that enable users to manage who sees what information about
their activities

The ability to organize information that has been arbitrarily gathered through an activity stream,
and to determine its authoritativeness

The ability to mitigate compliance and e-discovery risks

The ability to cope with information overload from overly popular activity streams

The increasing possibility of multiple, overlapping activity streams in the same organization, as each internal or external system introduces its own.

Business Impact: The obvious application of activity streams is coordinating activity in dispersed
teams or in overseeing multiparty projects. Regular status updates that are collected automatically
as individuals interact with various systems can keep those responsible up to date, as well as keep
different participants aware of the activities of their peers. Activity streams can help a newcomer to
a team or activity understand who does what and how things are generally done.
The popularity of activity streams with users of consumer social networks will drive interest and
usage in business environments. Activity streams are likely to become a key mechanism in business
information aggregation, distribution and filtering.
Benefit Rating: Moderate
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: Citrix; Facebook; Google; IBM; Igloo; Jive; Microsoft; Oracle; salesforce.com;
SAP; Tibco Software; VMware
Recommended Reading: "Boost Collaboration With 'Social Everywhere' Application Architectures"

Enterprise 3D Printing
Analysis By: Marc Halpern; Zalak Shah
Definition: 3D printing is an additive technique that uses a device to create physical objects from
digital models. "Enterprise" refers to private- or public-sector organizations' use of 3D printing for
product design, development and prototyping, as well as educational institutions at all levels.
Enterprise 3D printing also includes the use of 3D printers in a manufacturing process to produce
finished goods.
Position and Adoption Speed Justification: 3D printing technologies have been available for
product prototyping and short-run parts manufacturing for almost 30 years. Yet, enterprise 3D printing is still an adolescent market, with 5% to 20% market penetration, characterized by evolving
technology capabilities, methodologies and associated infrastructure and ecosystems despite the
age of the technology.
Until recently, 3D printing was used primarily for prototyping new designs. The pace of adoption for a broader range of enterprise activities was slow because the cost of printers was too high and because, until recent years, 3D printers could not print parts with structural strength suitable for most mechanical use.
Today, manufacturers are beginning to seriously consider, and in some cases, already are using, 3D
printing for manufacturing new and replacement parts, as well as the tools, jigs and fixtures used in
the manufacturing or assembly of other finished goods.
Today, while the range of materials that can be 3D printed is narrow and only slowly expanding, enterprises are evaluating the "crossover" point at which the total cost of long-run, traditionally manufactured parts is less than the total cost of short-run 3D-printed items. While the material range, finished-part quality and total cost factor into the enterprise's decision making, so too does the recognition that some innovative new designs with unusual or complex geometry can be produced with 3D printing and are difficult or impossible to produce with any of the traditional manufacturing technologies.
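The crossover evaluation reduces to simple break-even arithmetic, sketched below with entirely hypothetical costs: traditional manufacturing carries a fixed tooling cost but a lower unit cost, while 3D printing has no tooling cost but a higher unit cost.

```python
def crossover_quantity(tooling_cost, unit_cost_traditional, unit_cost_3dp):
    """Quantity beyond which traditional manufacturing (tooling plus a low
    unit cost) is cheaper in total than 3D printing (no tooling, higher
    unit cost)."""
    if unit_cost_3dp <= unit_cost_traditional:
        raise ValueError("3D printing is already cheaper per unit at any volume")
    return tooling_cost / (unit_cost_3dp - unit_cost_traditional)

# Hypothetical figures: $50,000 injection-molding tool, $2 per molded part,
# $12 per 3D-printed part.
q = crossover_quantity(50_000, 2.0, 12.0)
print(f"Traditional manufacturing wins beyond ~{q:,.0f} parts")  # ~5,000 parts
```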
As the technology continues to develop, providers are introducing lower-cost devices with better
functionality and a wider range of materials to choose from. The 3D printers at this end of the
market have become more-common, practical office and lab devices that, in some cases, fit on a
desktop.
User Advice:

Experts knowledgeable in 3D printing materials must validate that the mechanical characteristics of 3D-printed parts are suitable for their intended use.

Those responsible for managing the manufacturing costs of produced parts must weigh the
trade-offs between printing 3D parts versus employing traditional manufacturing approaches.

Enterprises must consider use of 3D printing to create the jigs, fixtures and cutting tools used as part of traditional manufacturing processes. If printing finished parts is cost-prohibitive, using 3D printing to produce such factory tooling can make an enterprise's manufacturing operations more cost-effective and responsive.

Those responsible for service and repairs should consider use of 3D printing to produce
replacement parts. This can be particularly cost-effective if original parts were very expensive
or, for old equipment, the spare parts are no longer available.

Those responsible for 3D printing strategy should ensure that users are adequately trained in 3D
modeling techniques needed to produce parts and products via 3D printers.

Business Impact:


3D printing makes the creation of unique, customized products more scalable across many manufacturing industries. This is particularly true for dental products and medical devices. It also facilitates co-creation of products with end customers.

3D printing replacement and spare parts can significantly reduce the amount of inventory and
warehouse space that enterprises need to maintain. It can also extend the lifetime of products, because replacement parts or parts needed for upgrades can be 3D printed.

3D printing of tools, jigs and fixtures can reduce manufacturing costs and make manufacturers more agile in delivering to customers faster.

Plummeting prices of the low-end, consumer-focused material extrusion 3D printers producing plastic items will encourage enterprises to use them in the creation of prototypes by product development groups. Use of more and cheaper prototypes improves design for manufacturability and overall product quality.

3D printing could potentially increase concerns about intellectual property theft across
manufacturers.

Benefit Rating: Transformational


Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: 3D Systems; EnvisionTEC; Eos Systems; ExOne; Formlabs; Mcor Technologies;
Stratasys
Recommended Reading: "Cool Vendors in 3D Printing, 2014"
"Use the Gartner Business Model Framework to Determine the Impact of 3D Printing"
"How 3D Printing Disrupts Business and Creates New Opportunities"
"3D Photo Booth Will Help Drive Awareness and Momentum for 3D Printing"

3D Scanners
Analysis By: Marc Halpern
Definition: A 3D scanner is a device used across industrial and consumer enterprises, including
retail, that captures data about the shape and appearance of real-world objects to create 3D
models of them. A 3D scanner captures the characteristics of the object, ranging from products and
facilities to human body shapes including bones, teeth, and ears (for example, for fitting hearing
aids), and converts them into digital form.
Position and Adoption Speed Justification: Gartner began seeing the use of 3D scanners among
manufacturers during the late 1990s. The earliest users adopted 3D scanners to reverse engineer
designs, create medical devices such as custom hearing aids, and do quality control of manufactured parts. "Clouds of points" from scanned parts were compared with 3D models built
with nominal dimensions to see the fit of points on actual parts to the idealized geometry. Software
companies created an innovation called the "soft gauge," which added tolerance zones to the
computer-aided design (CAD) models so that checks could be made that the points lie within the
tolerance zones. Manufacturers have also used scanners to scan factories in order to create 3D
models of those factories. Those models help them plan upgrade construction projects.
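A toy version of the "soft gauge" idea follows: scanned points are compared against nominal geometry, flagging any point whose deviation exceeds the tolerance zone. The circular cross-section and figures are illustrative assumptions; production tools evaluate full CAD surfaces.

```python
import math

def soft_gauge_check(points, center, nominal_radius, tolerance):
    """Return scanned (x, y) points whose radial deviation from a nominal
    circular cross-section falls outside the +/- tolerance zone."""
    cx, cy = center
    out_of_tolerance = []
    for (x, y) in points:
        deviation = math.hypot(x - cx, y - cy) - nominal_radius
        if abs(deviation) > tolerance:
            out_of_tolerance.append(((x, y), round(deviation, 3)))
    return out_of_tolerance

# Illustrative scan of a 10 mm radius bore with a +/- 0.05 mm tolerance zone.
scan = [(10.02, 0.0), (0.0, 9.97), (-10.08, 0.0)]
print(soft_gauge_check(scan, (0.0, 0.0), 10.0, 0.05))  # flags (-10.08, 0.0)
```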
3D scanners are finding a consumer market by enabling users, who may not have access to CAD or
3D modeling software or who may not be proficient in their use, to more easily create a CAD
drawing of an item by starting with a file that replicates the original. Achieving proficiency in the CAD software tools normally used to create files for 3D printing is difficult for many people. Scanners with capabilities that are well-suited to consumers and many enterprises are available at prices ranging from $3,000 down to as little as $600. Continued technological advancements, improved functionality and price decreases
in 3D scanners will mean consumers can justify a modest expenditure to try 3D image capture and
3D printing. With the technology becoming less expensive and relatively simple to use, consumers
and enterprises are purchasing more.
User Advice: Users must optimize the density of scanned points to capture the detail needed on scanned parts without making clouds of points too dense. This is particularly important for scans of large volumes, such as a factory.
Scanner, camera, and 2D and 3D printer manufacturers must continue research and development
work aimed at improving 3D scanner price, usability and performance. The 3D printer technology
providers, in particular, must ensure scanners enable consumers and enterprises to easily create
the files that can be used to print 3D output on their devices.
Educational institutions must use low-cost 3D scanners not only for engineering and architectural
courses as a complement to traditional design programs, but also in creative arts programs (for
instance, to enable students to artistically modify items from nature). Manufacturing enterprises
must explore use of 3D scanning technology in product design, rapid prototyping and reverse engineering. Whether in an enterprise or an educational institution, 3D scanners must be used in
conjunction with design and creative programs that employ 3D printers to produce physical output
from CAD software and other similar software.
Business Impact: Practical uses for 3D scanners will continue growing as their features improve
and prices decline. Sales will grow as their use becomes more widespread, driving down purchase
costs and enabling more enterprises and consumers to justify their purchase.
The commercial market for 3D scanning and printing applications will continue expanding into
architectural, education, engineering, geospatial, medical and short-run manufacturing. In the
"maker" and consumer markets, scanners must have a lower cost before they will enjoy widespread
acceptance for artistic endeavors, custom or vanity applications (such as "fabbing" [the
manufacture of one-off parts] and the modeling of children's toys, pets and gamers' avatars).
Benefit Rating: High
Market Penetration: 5% to 20% of target audience


Maturity: Adolescent
Sample Vendors: 1010data; 3D Digital; 3D Systems; Artec Group; Autodesk; Creaform; Dacuda;
David Vision Systems; Eos Systems; HP; Konica Minolta; Matterform; NextEngine; Roland
Recommended Reading: "Cool Vendors in Consumer Devices, 2013"

Consumer Telematics
Analysis By: Thilo Koslowski
Definition: Consumer telematics represents end-user-targeted, vehicle-centric information and
communication technologies (vehicle ICTs) and services that use embedded technology or mobile
and aftermarket devices. Network-enabled cars for consumers provide in-vehicle services, such as
emergency assistance, navigation and routing, traffic information, local search (for example, for
charging stations or restaurants), and concierge services.
Position and Adoption Speed Justification: As a result of growing consumer demand for
telematics and vehicle ICT, automakers are increasingly exploring opportunities to offer cost-effective, cloud-based solutions that ensure sustainable business models without substantial
upfront investments. Rather than having to develop the required technology (that is,
communications hardware) and resource infrastructure (that is, call centers) in-house, automotive
companies continue to engage third-party providers that will take over the development,
management and billing of vehicle-centric telematics services. In addition, companies are looking
for automated, Web-based services that leverage online or server-based information and make it
accessible in a vehicle (for example, getting directions to a point of interest, such as a restaurant).
The value chain for telematics and connected-vehicle offerings continues to change and will focus
on extending mobile applications and services (from the mobile and Internet service industries) to
vehicles, in addition to creating specific automotive functions (for example, expanding application
ecosystems, such as those based on Android applications, to the vehicle). Telematics service
providers (TSPs) will face competition from market entrants coming from the IT industry that will
aggregate other third-party wireless content and develop core technological value propositions from
a mobile device perspective. These companies will also include smaller software, hardware and
content providers that target specific aspects of a holistic consumer telematics application and
work closely with automakers or system integrators to ensure compatibility and reliability.
Consumer telematics is also increasingly developed for the automotive aftermarket by TSPs,
network providers and insurance providers. In mature automotive markets, such as the U.S. and
Western Europe, and some quickly growing emerging markets like China, most manufacturers will
offer consumer telematics in approximately 70% of their new vehicle models by 2020.
User Advice: As telematics and connected-vehicle services, applications, technology and content
providers emerge, vehicle and device manufacturers (for example, consumer electronics
companies) will have to choose the providers that best fit their business and technology
requirements. Companies wanting to offer connected-vehicle services to consumers should take
advantage of the emerging offerings in the mobile- and location-based service space. The market is becoming more mature, and vendors have made significant investments in building the expertise,
resources and partnerships that can help companies accelerate their vehicle ICT launches.
Furthermore, vehicle manufacturers and device manufacturers must differentiate between core,
vehicle-centric telematics offerings that are embedded in a vehicle (most safety and security
applications) and personal telematics offerings (primarily information and entertainment services),
which consumers access by integrating portable devices with the vehicle.
To enable device-to-vehicle and service-to-vehicle integration concepts, vehicle manufacturers must collaborate with consumer electronics companies, service and content providers (regarding interfaces), and connectivity solution providers. The introduction of electric vehicles (EVs) will give consumer
telematics a boost, because seamless EV ownership experiences will greatly benefit from
connected data services (for example, finding the next charging station and informing drivers of the remaining range).
Automotive companies should consider their choices in growing the connected-vehicle ecosystem
by identifying best-of-breed technology providers, instead of a single-solution approach. Both
options have their benefits and disadvantages; however, with increasing in-house expertise for the
connected vehicle, automotive companies can be more selective in their partner choices to better balance innovation and cost objectives (for example, innovation in connected-vehicle offerings should reside with the automakers).
Business Impact: Consumer telematics provides an opportunity to differentiate product and brand
values (for example, infotainment access and human-machine interface experience) and to excel in
new or complemented customer experiences, to create new revenue sources (for example,
preferred listings for infotainment content), to collect vehicle-related quality and warranty
information via remote diagnostics, and to capture consumer insights.
Benefit Rating: High
Market Penetration: 20% to 50% of target audience
Maturity: Adolescent
Sample Vendors: Airbiquity; Apple; AT&T; GM (OnStar); Google; Intel; Jasper Technologies;
Microsoft; Nokia (Here); Nvidia; SiriusXM; Sprint; Telogis; Verizon; WirelessCar
Recommended Reading: "GM's Global LTE Strategy Aims for New Connected-Vehicle
Experiences"
"Predicts 2014: Automotive Companies' Technology Leadership Will Determine the Industry's
Future"
"Nokia's Here Zeros In on the Cross-Platform Connected-Driver Experience"
"Livio Buy Boosts Ford's In-Vehicle Application Standardization Efforts"
"SiriusXM Gets Serious About the Connected-Vehicle Market"


Entering the Plateau


Speech Recognition
Analysis By: Adib Carl Ghubril
Definition: Speech recognition systems convert human speech into text or machine instructions.
Position and Adoption Speed Justification: Speech recognition has gained the momentum it
needs to move more rapidly toward mainstream adoption as vendors recognize its value in
enriching touch and in-air gesture interactions. Speech is a primary form of human interaction and is
now deemed crucial in enabling the notion of users doing what is "natural."
With the top cloud service providers (IBM, Microsoft, Google, Amazon, Apple and Samsung) all mobilizing resources in speech recognition systems, the number of applications making use of speech recognition is rising. Apple's purchase of Novauris signals a plan to improve the
responsiveness of Siri (Apple's speech recognition engine) by bringing some speech processing
back from the cloud and on to the local mobile computing platform. Microsoft, Nuance and others
are also tackling dialects and tonal languages.
Dictation, browsing and menu navigation are becoming readily available across PC and mobile
platforms. In fact, vendors are now developing systems that recognize dialects in addition to
language. Indeed, pattern-matching algorithms have given way to stochastic models (for example, hidden Markov models) that are now about to be replaced by a hierarchical approach of layered neural networks called "deep neural networks" (DNNs), demonstrating the kind of performance improvement that could bring speech recognition to the required productivity level.
Better noise filtering also has allowed significant improvements in speech recognition in the cabin of
the car, and this speech recognition technology is now available in midmarket vehicles.
User Advice: Speech recognition is still very susceptible to the system's immediate surroundings: environmental noise and distance between the user and the microphone dramatically affect performance. Furthermore, cloud-based systems hamper response time, affecting transcription
performance and, subsequently, adoption. Indeed, reliable speech recognition will remain elusive
until DNN algorithms are fully developed and processing tasks are more effectively partitioned
between local processing resources (able to process relatively simple tasks in real time) and virtual
resources (able to process relatively more complex tasks but only by introducing lag).
Thus, speech recognition should be deployed "on demand" for individual users who express
interest and motivation (for example, those with repetitive stress injuries). Users who are already
practiced in dictation will likely be most successful. Also, examine non-mission-critical applications,
in which a rough transcription is superior to nothing at all, such as voice mail transcription and
searching audio files. In addition, consider speech recognition and its related technology, text to
speech, for applications in which users must record notes as they perform detailed visual inspections (for example, radiology, dentistry and manufacturing quality assurance).
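For a non-mission-critical use such as the voice mail transcription mentioned above, a rough cloud-backed transcript can be produced with very little code. The sketch assumes the third-party SpeechRecognition Python package and a placeholder audio file.

```python
# pip install SpeechRecognition
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("voicemail.wav") as source:  # placeholder file name
    audio = recognizer.record(source)  # read the entire file into memory

try:
    # Sends audio to a cloud recognizer; note the response-time trade-off
    # discussed above.
    print("Rough transcript:", recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Speech was unintelligible")
except sr.RequestError as err:
    print("Cloud service unavailable:", err)
```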


For mobile devices, focus initial applications on selecting from lists of predefined items, such as city
names, company names or musical artists. This is where speech recognition has the strongest
value-add by avoiding scrolling embedded lists while maintaining a high level of accuracy.
Business Impact: Speech recognition for telephony and contact center applications enables
enterprises to automate call center functions, such as travel reservations, order status checking,
ticketing, stock trading, call routing, directory services, auto attendants and name dialing.
Additionally, it is used to enable workers to access and control communications systems, such as
telephony, voice mail, email and calendaring applications, using their voices. Mobile workers with
hands-busy applications, such as warehousing, can also benefit from speech data entry.
For some users, speech input can provide faster text entry for office, medical and legal dictation,
particularly in applications in which speech shortcuts can be used to insert commonly repeated text
segments (for example, standard contract clauses).
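The speech-shortcut pattern can be sketched in a few lines; the trigger phrases and boilerplate clauses below are invented for illustration, not taken from any product.

```python
# Sketch: spoken trigger phrases expand into commonly repeated text segments.
SHORTCUTS = {
    "insert confidentiality clause": (
        "The parties agree to hold all disclosed information in strict confidence."
    ),
    "insert signature block": "Sincerely,\nDr. A. Example, DDS",
}

def expand(transcript: str) -> str:
    # Replace each recognized trigger phrase with its stored text segment.
    for trigger, clause in SHORTCUTS.items():
        transcript = transcript.replace(trigger, clause)
    return transcript

print(expand("Per our discussion, insert confidentiality clause"))
```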
For mobile devices, applications include name dialing, controlling personal productivity tools,
accessing content (such as MP3 files) and using voice-mail-to-text services. Finally, carmakers
that enable control of infotainment and telematics systems through speech recognition address
the need for unencumbered driving.
Benefit Rating: Moderate
Market Penetration: 20% to 50% of target audience
Maturity: Early mainstream
Sample Vendors: Amazon; Apple; Google; IBM; LumenVox; Microsoft; Nuance; Sensory; Spansion;
Telisma
Recommended Reading: "The Three Key Components of Industrial Speech Recognition Solutions"
"Emerging Technology Analysis: Voice-to-Text on Mobile Devices"
"MarketScope for IVR Systems and Enterprise Voice Portals"

Appendixes


Figure 4. Hype Cycle for Emerging Technologies, 2013

[Figure omitted in this text version: the 2013 Hype Cycle chart plots expectations against time across the five phases (Innovation Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment and Plateau of Productivity). Technologies positioned on the curve, as of July 2013, include: Consumer 3D Printing, Gamification, Wearable User Interfaces, Complex-Event Processing, Big Data, Natural-Language Question Answering, Internet of Things, Speech-to-Speech Translation, Mobile Robots, 3D Scanners, Neurobusiness, Biochips, Autonomous Vehicles, Content Analytics, In-Memory Database Management Systems, Virtual Assistants, Prescriptive Analytics, Affective Computing, Electrovibration, Volumetric and Holographic Displays, Human Augmentation, Brain-Computer Interface, 3D Bioprinting, Quantified Self, Augmented Reality, Machine-to-Machine Communication Services, Mobile Health Monitoring, NFC, Predictive Analytics, Speech Recognition, Location Intelligence, Consumer Telematics, Mesh Networks: Sensor, Biometric Authentication Methods, Cloud Computing, Enterprise 3D Printing, Activity Streams, Quantum Computing, Gesture Control, In-Memory Analytics, Virtual Reality, Smart Dust and Bioacoustic Sensing. The legend indicates when each technology's plateau will be reached: less than 2 years; 2 to 5 years; 5 to 10 years; more than 10 years; or obsolete before plateau.]

Source: Gartner (July 2013)


Hype Cycle Phases, Benefit Ratings and Maturity Levels


Table 1. Hype Cycle Phases

Innovation Trigger: A breakthrough, public demonstration, product launch or other event generates significant press and industry interest.

Peak of Inflated Expectations: During this phase of overenthusiasm and unrealistic projections, a flurry of well-publicized activity by technology leaders results in some successes, but more failures, as the technology is pushed to its limits. The only enterprises making money are conference organizers and magazine publishers.

Trough of Disillusionment: Because the technology does not live up to its overinflated expectations, it rapidly becomes unfashionable. Media interest wanes, except for a few cautionary tales.

Slope of Enlightenment: Focused experimentation and solid hard work by an increasingly diverse range of organizations lead to a true understanding of the technology's applicability, risks and benefits. Commercial off-the-shelf methodologies and tools ease the development process.

Plateau of Productivity: The real-world benefits of the technology are demonstrated and accepted. Tools and methodologies are increasingly stable as they enter their second and third generations. Growing numbers of organizations feel comfortable with the reduced level of risk; the rapid growth phase of adoption begins. Approximately 20% of the technology's target audience has adopted or is adopting the technology as it enters this phase.

Years to Mainstream Adoption: The time required for the technology to reach the Plateau of Productivity.

Source: Gartner (July 2014)

Table 2. Benefit Ratings

Transformational: Enables new ways of doing business across industries that will result in major shifts in industry dynamics.

High: Enables new ways of performing horizontal or vertical processes that will result in significantly increased revenue or cost savings for an enterprise.

Moderate: Provides incremental improvements to established processes that will result in increased revenue or cost savings for an enterprise.

Low: Slightly improves processes (for example, improved user experience) that will be difficult to translate into increased revenue or cost savings.

Source: Gartner (July 2014)


Table 3. Maturity Levels

Embryonic. Status: In labs. Products/Vendors: None.

Emerging. Status: Commercialization by vendors; pilots and deployments by industry leaders. Products/Vendors: First generation; high price; much customization.

Adolescent. Status: Maturing technology capabilities and process understanding; uptake beyond early adopters. Products/Vendors: Second generation; less customization.

Early mainstream. Status: Proven technology; vendors, technology and adoption rapidly evolving. Products/Vendors: Third generation; more out of box; methodologies.

Mature mainstream. Status: Robust technology; not much evolution in vendors or technology. Products/Vendors: Several dominant vendors.

Legacy. Status: Not appropriate for new developments; cost of migration constrains replacement. Products/Vendors: Maintenance revenue focus.

Obsolete. Status: Rarely used. Products/Vendors: Used/resale market only.

Source: Gartner (July 2014)

Gartner Recommended Reading


Some documents may not be available as part of your current Gartner subscription.
"Understanding Gartner's Hype Cycles"
More on This Topic
This is part of an in-depth collection of research. See the collection: "Gartner's Hype Cycle Special Report for 2014"


GARTNER HEADQUARTERS
Corporate Headquarters
56 Top Gallant Road
Stamford, CT 06902-7700
USA
+1 203 964 0096
Regional Headquarters
AUSTRALIA
BRAZIL
JAPAN
UNITED KINGDOM

For a complete list of worldwide locations, visit http://www.gartner.com/technology/about.jsp

© 2014 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of Gartner, Inc. or its affiliates. This
publication may not be reproduced or distributed in any form without Gartner's prior written permission. If you are authorized to access
this publication, your use of it is subject to the Usage Guidelines for Gartner Services posted on gartner.com. The information contained
in this publication has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy,
completeness or adequacy of such information and shall have no liability for errors, omissions or inadequacies in such information. This
publication consists of the opinions of Gartner's research organization and should not be construed as statements of fact. The opinions
expressed herein are subject to change without notice. Although Gartner research may include a discussion of related legal issues,
Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner is a public company,
and its shareholders may include firms and funds that have financial interests in entities covered in Gartner research. Gartner's Board of
Directors may include senior managers of these firms or funds. Gartner research is produced independently by its research organization
without input or influence from these firms, funds or their managers. For further information on the independence and integrity of Gartner
research, see "Guiding Principles on Independence and Objectivity."
