Cloud RAN

Abstract

Mobile broadband is immensely important globally as a key socio-economic enabler, as evidenced by the continuing growth of data traffic on mobile networks. To meet this unabated growth in demand, cellular operators must increase their network capacity by using advanced wireless technologies and by adding more network elements such as cell sites and controllers.

According to growth estimation data, data traffic increases by 131 percent every year, while air interface capacity grows by 55 percent yearly. At the same time, ARPU is constantly decreasing.

Per UMTS Forum Report 44, total worldwide mobile traffic will reach more than 127 exabytes in 2020, 33 times more than the 2010 figure. Significantly, at least 80 percent of the traffic volume remains generated by users, leading to large variations in total mobile traffic in both time and space. Future mobile networks must be designed to cope with such variation and uneven traffic distribution, while at the same time maintaining permanent and extensive geographical coverage in order to provide continuity of service to customers. In 2020, daily traffic per mobile broadband subscription in a representative Western European country will stand at 294 MB, and at 503 MB for dongles (67 times greater than in 2010).

The cost of acquiring new spectrum, deploying new wireless carriers, and evolving network technologies (e.g., from GSM to W-CDMA to LTE), while adding more processing capacity, new radios, and antennas and managing the resulting heterogeneous network, is becoming economically unsustainable and leads to a vicious cycle of demand.

An increase in the number of base stations results in more power consumption, higher interference, and reduced coverage and capacity due to that interference. It also requires more radio network controllers. Radio Access Network (RAN) architecture therefore requires solutions in the following areas:

> Additional base stations and radio antennas without increasing the number of cell sites
> Reconfigurable base stations to support multiple technologies
> Resource aggregation and dynamic allocation
> Cooperative radio technology for coordinated multi-point transmission and reception
> More capacity and coverage with reduced interference
> Distributed antenna technology for increased coverage
> Controller software enhanced to run in a virtualization environment for lower costs and elastic capacity
> Overall reduction of Capex, Opex, and TCO

This white paper provides an overview of the distributed RAN architecture called Cloud RAN, which addresses solutions for the areas mentioned above. It also provides a more detailed analysis of the cloud radio network controller architecture.

Introduction

In a conventional cellular network, the antenna, RF equipment, digital processor, and baseband unit (BTS) sit in the cell site, as shown in the Conventional Cellular Network diagram below. This requires more power and real estate space, as well as additional directional antennas and large cell towers to support multi-frequency bands and new air interface technologies like LTE. Enhancing a conventional network to meet data traffic demand in a current wireless network is economically unsustainable.
Figure: Conventional Cellular Network (base stations (BTS) at each cell site in urban and rural zones, connected through the BSC and MSC to the Internet)

There is an immediate need to identify a solution that reduces the number of cell sites, effectively reuses resources, and employs reconfigurable basebands, multi-band radios, and distributed wideband antennas to support different air interface technologies.

Cloud RAN architecture is based on a distributed radio access network architecture consisting of the following network elements:

> Active antenna arrays
> Multi-band radio remote heads
> Centralized baseband units
> Metro cells
> Radio network controllers on cloud
> Common management server
> SON server for seamless management and optimal network usage

Figure 1: CRAN Access Technology Cloud (active antenna systems and remote radio heads at macro sites, connected over optical and coax links to a centralized baseband bank supporting 2G/2.5G, UMTS, HSPA, LTE eNB, and LTE-A, with RAN controllers on cloud, a SON server, and a common management server linking femto cells, WiFi gateways, and the core network to IMS/operator services)

Active Antenna Array

In order to support increasing bandwidth demand, operators need to enhance their networks to support multiple technologies, multiple frequency bands, and new air interface technologies. This requires new antennas to be installed, including multiple directional antennas to support MIMO, beamforming, Rx diversity, etc. This also increases the number of antennas in an already dense network, which in turn increases interference between different cells and reduces cell capacity. The end result is increased site costs.

In the active antenna array solution, each antenna element has a connection to a separate transceiver element. The antenna array can support multiple transceivers, which addresses the problem of installing multiple antennas to support multiple air interface technologies, MIMO, beamforming, Rx diversity, etc.

Each active antenna array has the transceiver (RF and digital component) hardware embedded with each antenna element inside the antenna array, rather than outside in a separate RF box (RRH) or in a conventional TRDU/TMA. This reduces the loss due to the RF connection between the antenna and the external RF equipment. With the built-in transceivers, the individual signals can be fed into different antenna elements to create focused vertical beams per user, carrier, technology, etc., which can control interference and increase cell capacity and coverage.
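To make the idea of per-element transceivers concrete, the short sketch below shows how per-user vertical beam weights can be computed for such an array. It is an illustrative numpy example only, not from this paper; the element count, spacing, and carrier frequency are assumed values.

```python
# Sketch: per-user vertical beam weights for an active antenna array.
# Assumptions (not from the paper): 8 vertical elements, half-wavelength
# spacing, 2.6 GHz carrier, one narrowband beam per user.
import numpy as np

C = 3e8
FREQ_HZ = 2.6e9               # assumed carrier frequency
WAVELENGTH = C / FREQ_HZ
N_ELEMENTS = 8                # one transceiver per element, as in an AAA
SPACING = WAVELENGTH / 2      # assumed element spacing

def steering_weights(tilt_deg: float) -> np.ndarray:
    """Phase weights that focus a vertical beam toward `tilt_deg`."""
    n = np.arange(N_ELEMENTS)
    phase = 2 * np.pi * SPACING * n * np.sin(np.radians(tilt_deg)) / WAVELENGTH
    return np.exp(-1j * phase) / np.sqrt(N_ELEMENTS)

def array_gain(weights: np.ndarray, tilt_deg: float) -> float:
    """Linear gain of the weighted array toward a given tilt angle."""
    a = steering_weights(tilt_deg) * np.sqrt(N_ELEMENTS)   # unnormalized steering vector
    return abs(np.vdot(a, weights)) ** 2

# One focused beam per user/carrier: user A at 8 degrees downtilt, user B at 2.
w_a, w_b = steering_weights(8.0), steering_weights(2.0)
print(f"beam A toward A: {array_gain(w_a, 8.0):.1f}x, toward B: {array_gain(w_a, 2.0):.1f}x")
```

Feeding each user's signal through its own weight vector is what lets a single array serve different users, carriers, or technologies with separately steered beams.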
Multi-band Radio Remote Heads

In conventional networks, the BTS/NodeB contains the radio (RF and digital components) and baseband units, connected to an antenna using coaxial cables.

The Open Base Station Architecture Initiative (OBSAI) and the Common Public Radio Interface (CPRI) standards introduced standardized interfaces separating the server part and the radio part of the base station, the latter of which is supported by Remote Radio Heads (RRH).
A separate RRH is required for each frequency band in order to support multiple frequency bands and multiple sectors in a given geographical area, so the number of RRHs required increases proportionally. In many macrocell deployments, the RRH is mounted at the top of the cell tower with the antenna to reduce RF loss. In denser networks, increasing the number of RRHs may not be feasible at every site, so RRHs may have to be deployed on high-rise buildings and similar locations. This increases the overall cost, RF loss, and maintenance costs.

Multi-band RRHs (MB-RRH) are supported by multiple vendors to address the issues mentioned above. An MB-RRH can support multiple frequency bands and multiple technologies like GSM, WCDMA, and LTE in combination with the RRH units. This reduces the number of RRHs required to support multiple frequency bands and different technologies, while reducing cell site costs, power consumption, and complexity.

Centralized Baseband Units

In typical macrocell deployments, the baseband unit is located at the base of the cell tower along with the radio and other digital equipment. The cost of deploying new baseband units along with radios, antennas, etc. to support additional carriers, spectral bandwidth, and different technologies, and of managing the resulting heterogeneous network, is becoming economically challenging and unsustainable.

The centralized baseband is built on the concept of Software Defined Radio (SDR), with distributed radio signal processing and baseband processing units that are software configurable and reduce the complexity of deploying BBUs at the cell site. Additional carriers, spectral bandwidth, new technologies, etc. can be seamlessly supported by stacking baseband units in the baseband pool and deploying remote MB-RRHs and AAAs, at comparatively lower cost and with easier maintenance.

The baseband and radio signal processing is distributed using the CPRI interface between the BBU and the remote radio equipment. The Common Public Radio Interface (CPRI) is an industry cooperation aimed at defining a publicly available specification for the interface between the Radio Equipment Control (REC) and the Radio Equipment (RE), which in our case are the BBU and the Remote Radio Head, respectively. The scope of the CPRI specification is restricted to the link interface only (layer 1 and layer 2), which is basically a point-to-point interface. The Open Base Station Architecture Initiative (OBSAI) was introduced to standardize interfaces separating the base-station server and the radio part of the base station. Figure 2 depicts a CRAN architecture utilizing the CPRI or OBSAI interface.

Key features of this architecture (Architecture A) are:

> Cells are distributed across processors and flexibly connected to radio units through high-bandwidth (order of Gbps) optical fiber links
> Board-level and link-level redundancy can be provided
> High-speed communication across sectors enables efficient inter-cell information sharing for cooperative/coordinated radio resource management, scheduling, and power control to optimize cell throughput and reduce interference
> Reduced need for hardware at antenna sites
> Utilizes optical links where already available, avoiding the laying of new links, which may make the infrastructure expensive

Figure 2: CRAN Architecture A, Utilizing CPRI/OBSAI Links over Fiber (Cloud RAN units hosting RRC, S1-AP, X2-AP, RRM, SON, per-cell Layer 2, and per-cell Layer 1 with CPRI/OBSAI engines)
The main disadvantage of this approach is the high-bandwidth link required between the radio equipment and the central unit. For example, CPRI supports line-bit-rate options ranging from 614 Mbps to 6.14 Gbps. Overlaying such high-bandwidth connections is a costly prerequisite and can be a big barrier to this solution becoming popular. To overcome this problem, the split between the radio equipment and the control unit can be moved higher up the network stack (i.e., from below Layer 1 to between Layer 1 and Layer 2); then, instead of sharing IQ samples, only the demodulated and decoded data and protocol information need to be shared over an IP-based link between the remote unit and the central unit. This considerably reduces the bandwidth requirement, to approximately 200 Mbps for a 2x2 MIMO, 20 MHz cell. Figure 3 depicts a CRAN architecture utilizing an IP link between the radio unit and the central unit.
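The difference between the two split points can be seen with a rough bit-rate estimate. The sketch below is illustrative only: the sample rate, IQ sample width, overhead factor, and peak user rate are assumed figures, not values taken from the CPRI specification or from this paper.

```python
# Sketch: rough fronthaul bit-rate comparison for the two split options.
# Assumed figures (illustrative, not normative): 30.72 Msps for a 20 MHz
# LTE carrier, 15-bit I + 15-bit Q samples, 2 antenna ports, and a 16/15
# factor standing in for control words and line-coding overhead.
SAMPLE_RATE_SPS = 30.72e6      # 20 MHz LTE carrier
IQ_BITS = 2 * 15               # bits per complex sample (assumed width)
ANTENNA_PORTS = 2              # 2x2 MIMO
OVERHEAD = 16 / 15             # assumed framing/control overhead

def iq_split_rate_bps() -> float:
    """Option A: raw IQ samples per antenna are carried to the BBU pool."""
    return SAMPLE_RATE_SPS * IQ_BITS * ANTENNA_PORTS * OVERHEAD

def upper_split_rate_bps(peak_user_rate_bps: float = 150e6,
                         protocol_overhead: float = 1.2) -> float:
    """Option B: only decoded data plus protocol info crosses the IP link."""
    return peak_user_rate_bps * protocol_overhead  # ~150 Mbps peak assumed

print(f"Layer 1 (IQ) split : {iq_split_rate_bps() / 1e9:.2f} Gbps per sector")
print(f"L1/L2 (IP) split   : {upper_split_rate_bps() / 1e6:.0f} Mbps per sector")
```

With the assumed figures the Layer 1 split needs roughly 2 Gbps per sector, well inside the CPRI line-rate range quoted above, while the higher split stays around the 200 Mbps mark, which is why option B can ride on ordinary IP links.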
Key features of Architecture Option B are:

> The Cloud RAN unit is connected with relatively low-bandwidth (order of 100 Mbps) IP links to the radio equipment site; IP connectivity should be through an operator-managed network so that there is strict control over latency and jitter
> The antenna site terminates the IP links and carries out Layer 1 processing according to air interface timing
> Layer 3 and Layer 2 are located in the Cloud RAN unit. To handle the impact of IP link latency on the strict 1 ms scheduling of LTE, modification in the MAC will be required. A portion of the MAC should also run in the baseband unit at the antenna site to control the time-critical L1 interface and relay messages between the Cloud MAC and the antenna Layer 1.
> High-speed communication across sectors enables efficient inter-cell information sharing for cooperative/coordinated radio resource management, scheduling, and power control to optimize cell throughput and reduce interference

The main advantage of option B is that it requires cheaper, lower-bandwidth IP links between the cell site and the central unit. However, the cell site will require more hardware compared with option A, because Layer 1 and some part of Layer 2 are executed at the cell site. In addition, the end-to-end latency increases due to IP link delay and variance characteristics.
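Whether the MAC can stay fully centralized therefore depends on the IP link's delay and jitter. The toy budget check below illustrates the trade-off; every figure in it is an assumption chosen for illustration, not a requirement from 3GPP or from this paper.

```python
# Sketch: a toy check of whether an operator-managed IP link leaves enough
# time for the cloud MAC to meet LTE's per-subframe scheduling. All figures
# are assumptions for illustration only.
SUBFRAME_MS = 1.0            # LTE scheduling granularity
SCHEDULING_BUDGET_MS = 3.0   # assumed time from measurement to grant on air

def budget_ok(one_way_delay_ms: float, jitter_ms: float,
              cloud_mac_ms: float = 0.5, site_l1_ms: float = 0.5) -> bool:
    """True if the uplink report plus downlink grant fit in the assumed budget."""
    worst_case = 2 * (one_way_delay_ms + jitter_ms) + cloud_mac_ms + site_l1_ms
    return worst_case <= SCHEDULING_BUDGET_MS

for delay, jitter in [(0.3, 0.1), (1.0, 0.5)]:
    print(f"delay={delay} ms, jitter={jitter} ms -> "
          f"{'fits' if budget_ok(delay, jitter) else 'needs MAC split at the site'}")
```

With tight, operator-managed links the budget holds; with looser links the check fails, which is the motivation for running a portion of the MAC at the antenna site as described in the bullet list above.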

Figure 3: CRAN Architecture B, IP Link between the Cloud RAN Unit and Antenna Site Equipment (Cloud RAN units hosting RRC, S1-AP, X2-AP, RRM, SON, and per-cell Layer 2, connected over IP links, subject to delay, to antenna sites running site management, partial MAC, and Layer 1)

BBU POOLING

The pooling of processing resources for multiple cell sites at a central location (utilizing architecture option A or B) has many benefits. Based on the capacity, coverage, and number of air interface technologies to be supported, additional BBUs can easily be added and remotely managed. The cell sites need to have only RRHs and antennas, which greatly reduces the space, power consumption, and management overheads of the cell site.

KEY BENEFITS OF BBU POOLING

Capex and Opex Reduction
The hardware can be pooled across multiple cell sites in order to reduce the initial capital costs, as well as regular running costs (electricity, site rental, etc.) and maintenance costs.
Load Aggregation and Balancing
Baseband processing for multiple cell sites is aggregated based on the bandwidth requirement, without increasing the number of cell sites. The BBU units can be dynamically distributed to different cell sites based on usage patterns.

Multiple Technologies Support
The BBU units can be dynamically configured to support different air interface technologies based on network load and service requirements.

High Availability
The BBU pool has a number of BBU units. On the failure of any single BBU, the other active BBUs can share the load of the failed BBU so that service recovers seamlessly. During multiple BBU failures, the active BBU units can be dynamically configured to share the traffic load of the cell sites supported by the BBU pool.

Cooperative Multi-point Operation (CoMP)
The BBUs connected to different cell sites are located in a centralized location, so cell site information related to signaling, traffic data, resource allocation, channel status, etc. can easily be shared between BBUs. This information can be used to optimize resource allocation, handovers, call handling, and scheduling for Inter-Cell Interference Coordination (ICIC), and to improve spectral efficiency. CoMP and ICIC are key requirements of LTE-A in the 3GPP Rel-11 specifications. Because the BBUs support both macrocells and small cells, coordinated multi-site processing helps optimize mobility and ICIC between heterogeneous networks.

SON Support
The shared information of the BBUs can be used by advanced SON features to optimize the various services. SON can dynamically configure the resources used for cell site processing, optimize handovers between cells, manage inter-RAT handovers, conduct cell-load balancing, and use hardware resources efficiently. During very low load conditions, some of the BBUs can be switched off to save energy and help achieve a green BTS.
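Taken together, these benefits reduce to a resource-management loop: place cells on a shared pool of BBUs, re-place them when a unit fails, and switch off whatever ends up idle. The sketch below is a simplified illustration of that loop; the class names and capacity units are invented for the example and do not describe any particular product.

```python
# Sketch: a simplified BBU-pool allocator illustrating load aggregation,
# failover, and switching idle units off. Capacity "units" are arbitrary.
from dataclasses import dataclass, field

@dataclass
class BBU:
    name: str
    capacity: int                                 # abstract processing units
    active: bool = True
    cells: dict = field(default_factory=dict)     # cell id -> load

    def load(self) -> int:
        return sum(self.cells.values())

class BBUPool:
    def __init__(self, bbus):
        self.bbus = list(bbus)

    def assign(self, cell: str, load: int) -> str:
        """Place a cell on the least-loaded active BBU with spare capacity."""
        candidates = [b for b in self.bbus if b.active and b.load() + load <= b.capacity]
        if not candidates:
            raise RuntimeError("pool exhausted; add a BBU to the pool")
        best = min(candidates, key=BBU.load)
        best.cells[cell] = load
        return best.name

    def fail(self, name: str) -> None:
        """On BBU failure, redistribute its cells to the surviving units."""
        failed = next(b for b in self.bbus if b.name == name)
        failed.active, orphans = False, dict(failed.cells)
        failed.cells.clear()
        for cell, load in orphans.items():
            self.assign(cell, load)

    def power_save(self) -> None:
        """Switch off BBUs that carry no cells (low-load hours)."""
        for b in self.bbus:
            if b.active and not b.cells:
                b.active = False

pool = BBUPool([BBU("bbu-1", 100), BBU("bbu-2", 100), BBU("bbu-3", 100)])
for cell, load in [("cell-a", 40), ("cell-b", 35), ("cell-c", 50)]:
    pool.assign(cell, load)
pool.fail("bbu-2")        # cells on bbu-2 are reassigned to bbu-1/bbu-3
pool.power_save()         # any now-empty BBU is switched off
```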

Metrocells

As mentioned before, adding more macrocells to support increased capacity and coverage is not an optimal solution. In an effort to reduce the load on the macrocells, and to provide higher capacity and greater coverage, operators are deploying offloading solutions in which traffic is offloaded from the macrocells to low-capacity, low-power small cells called metrocells.

The metrocells can be deployed on lamp posts, buildings, etc. and are connected to the operator core network through IP backhaul. These cells can be deployed in both indoor and outdoor environments.

This provides an economically viable way for the operator to increase cell density at lower cost, with efficient spectrum usage and less time taken to extend capacity and coverage.

Radio Network Controllers on Cloud

As defined by NIST, cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

The radio network controllers in the cloud RAN solution are built using this cloud-computing model to support GSM BSC, UMTS RNC, HeNB-GW, MME, and WiFi-GW functions with increased capacity, in addition to multiple technologies. The cloud computing model can also be extended to core network elements to support a flexible, open architecture with increased capacity, different technologies, effective reuse of resources, and high availability.

Traditionally, radio access network controllers like the BSC, RNC, H(e)NB-GW, etc. are built on specific, customized hardware. The controller application can run only on specific hardware and software solutions and is built to support an estimated capacity. The available resources are never used to their full capacity, which increases the TCO, time to market, and dependency on specific hardware and software vendor solutions.

Figure 4: Cloud Computing Service Models (Software as a Service, SaaS: end applications such as controller applications; Platform as a Service, PaaS: application platform or middleware as a service; Infrastructure as a Service, IaaS: cloud hardware, CPU, cores, disks, and fabric)
Cloud computing architecture defines three different service models, as shown in Figure 4, where COTS solutions can be used in the different service layers to avoid customized hardware and software solutions from specific vendors.

The radio network controller applications in the cloud computing environment still need all the software and hardware layers found in traditional telecom equipment. However, hardware virtualization, OS abstraction layers, and middleware layers are provided to the application through virtual service layers, so that it can remain independent of the underlying hardware and software components.

Cloud computing is in the very early stages of adoption in the telecom controller space. Running controller applications as SaaS on different vendors' PaaS and IaaS offerings requires a common interface supported by multiple vendors, and such an interface is still evolving. Standards bodies like NIST and ETSI are working to define standard interfaces for the different service layers.

Per NIST, interoperability and portability of customer workloads are generally more achievable in the IaaS service model, because the building blocks of IaaS offerings are relatively well-defined (e.g., network protocols, CPU instruction sets, legacy device interfaces, etc.).

The IaaS layer is supported by multiple vendors through their COTS virtualization solutions. A hypervisor, also called the virtual machine manager (VMM), provides hardware virtualization so that multiple operating systems are able to run concurrently on a host computer. The virtual hardware is called a virtual machine (VM), and the operating system it runs is called the guest. Each guest OS instance running on a VM acts as an individual server for the application. Figure 5 shows an overview of virtual servers.

A virtual machine is a software implementation of a computer that executes programs like a physical machine. Virtual machines are separated into two major categories based on their use and degree of correspondence to a real machine. A system virtual machine provides a complete system platform that supports the execution of a complete operating system (OS), while a process virtual machine is designed to run a single program and support a single process.

A system virtual machine (virtual hardware), which provides an abstraction of a simple x86 PC with private CPU, memory, network interface (NIC), and file system, is used for controller virtualization. Each VM is independent of the VMM and of the other VMs.

As the number of VMs increases, the complexity of I/O traffic and hardware handling in the VMM increases, and application handling slows down significantly compared with a non-virtualized environment.

The PCI-SIG has defined the SR-IOV (Single Root I/O Virtualization) standard for virtualizing I/O, in which a physical device implements many images of itself, one for each VM. Each VM communicates with its own set of I/O queues and can use the device directly, without the performance cost of going through the VMM, while isolation between the VMs is ensured.
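On a Linux host, SR-IOV virtual functions are typically managed through sysfs. The sketch below, which assumes a Linux host with an SR-IOV-capable NIC and root privileges (the interface name is a placeholder), enables a set of virtual functions so that each VM can be handed its own VF.

```python
# Sketch: enable SR-IOV virtual functions (VFs) on a Linux host so each VM
# can be assigned its own VF. Assumes an SR-IOV capable NIC and root access;
# the interface name "eth0" is a placeholder.
from pathlib import Path

def enable_vfs(interface: str, num_vfs: int) -> int:
    """Request `num_vfs` virtual functions via the standard sysfs knobs."""
    dev = Path(f"/sys/class/net/{interface}/device")
    total = int((dev / "sriov_totalvfs").read_text())   # VFs the device supports
    if num_vfs > total:
        raise ValueError(f"{interface} supports at most {total} VFs")
    (dev / "sriov_numvfs").write_text("0")               # reset before resizing
    (dev / "sriov_numvfs").write_text(str(num_vfs))
    # Each VF now appears as its own PCI function (virtfn0, virtfn1, ...)
    return len(list(dev.glob("virtfn*")))

if __name__ == "__main__":
    print(enable_vfs("eth0", 4), "virtual functions enabled")
```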

Figure 5: Virtual Servers (before: three separate servers, each running its own operating system, applications, and services; after: a single server hosting three different operating systems and their services on virtual machines)

VMware supports this technology in its ESXi VMM through a feature called VMDirectPath. VMDirectPath I/O allows a guest operating system on a virtual machine to directly access physical PCI and PCIe devices connected to the host. Each virtual machine can be connected to up to two PCI devices, and PCI devices connected to a host can be marked as available for pass-through in the host's advanced hardware configuration settings.

Intel and AMD provide hardware-based assistance for I/O virtualization that complements single-root I/O virtualization. Intel's name for this technology is VT-d, while AMD's version is AMD-Vi.

The controller applications in the cloud environment are based on a third-party IaaS layer, interfacing with the guest OS/virtual machine provided as IaaS in the service-layer hierarchy. All software layers above IaaS, such as the guest OS, middleware layers, controller-specific OAM, and the controller application itself, are provided by TEMs. The guest OS can be any standard OS, such as Linux, VxWorks, or Solaris, depending on the application architecture.

Virtual server/cluster management is part of the third-party IaaS solution. It provides the mechanisms to manage the virtualization environment, control the execution of the virtual machines, and load the associated applications. Some of the key functionalities supported by virtual machine management are:

> Centralized control and deep visibility into the virtual infrastructure (create, edit, start, stop VMs)
> Proactive management to track physical resource availability, configuration, and usage by VMs
> Distributed resource optimization
> High availability
> A scalable and extensible management platform
> Security

Multiple vendors support centralized control at different levels of the virtualization environment. VMware vCenter is one such solution supporting a scalable and extensible management platform, as shown in Figure 6.

The operator can host the controller application software on the operator's own private cloud or on a service provider's cloud (community or public).

There are multiple vendors providing the virtualization IaaS layer. Some of the key solutions are VMware, KVM, and the WR (Wind River) hypervisor.
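As a concrete illustration of the centralized create/start/stop control listed above, the sketch below uses the libvirt Python bindings, which are commonly used to manage KVM guests. The connection URI and domain name are placeholders, and the sketch stands in for whatever management interface a given IaaS vendor actually exposes.

```python
# Sketch: basic VM lifecycle control with the libvirt Python bindings
# (commonly used with KVM). "rnc-vm-1" and the URI are placeholders.
import libvirt

def list_domains(conn):
    """Centralized visibility: report each defined VM and whether it runs."""
    return [(dom.name(), bool(dom.isActive())) for dom in conn.listAllDomains()]

def ensure_running(conn, name: str) -> None:
    """Start a controller VM if it is not already active."""
    dom = conn.lookupByName(name)
    if not dom.isActive():
        dom.create()            # boots the VM from its stored definition

def graceful_stop(conn, name: str) -> None:
    """Ask the guest OS to shut down (ACPI), rather than powering off."""
    dom = conn.lookupByName(name)
    if dom.isActive():
        dom.shutdown()

conn = libvirt.open("qemu:///system")   # host running KVM/libvirt
try:
    print(list_domains(conn))
    ensure_running(conn, "rnc-vm-1")
finally:
    conn.close()
```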
Using a cloud computing environment for radio network controllers has the following advantages:

Hardware Independence
Controller software can run on COTS hardware available from different hardware vendors, with no binding to customized hardware solutions. Different applications can run on the same hardware, so available resources can be used on demand.

Software Independence
Application software can run on COTS virtual machines available from different vendors as IaaS. The application is independent of the actual hardware used, so it can run on different hardware with no application software changes and no dependency on proprietary supporting software.

Resource Pooling
Different hardware types can be pooled to run multiple instances of the application software to support increased capacity. Resources can be dynamically allocated, with different applications running on the same hardware.

High Availability
Using pooled resources to run controller applications takes care of single or multiple unit failures within the pool, while providing geo-redundancy, multi-tenancy, and elasticity.

Reduced CAPEX
Use of COTS hardware and software reduces TCO and time to market. Reuse of the available resources with dynamic allocation helps use their full capacity, reducing the number of resources required.

Reduced OPEX
Use of common hardware and software reduces the cost of managing different customized solutions. Resources can be used effectively depending on load conditions; based on demand, some resources can be switched off in order to reduce electricity and other infrastructure costs (e.g., cooling).

Elasticity and Best-in-Class Performance
The capacity of the system can change quickly according to need. The controller applications (RNC, BSC, etc.) run in virtual machines independent of the physical hardware. Third-party virtualization technology from different vendors can be used to host the application-specific OS, middleware, and applications.
An example of a radio controller application in a cloud environment is shown in the following diagram:

Figure 6: Controller Application over the IaaS Layer (BSC, RNC, and H(e)NB-GW applications, each with middleware and a guest OS on virtual hardware, running over a hypervisor on physical hardware such as servers or ATCA, with a COTS VM manager handling guest OS and VM disk images over COTS software and hardware: core OS, CPU, fabric, and I/O hardware)

Multiple applications can run on a single platform, with different VMs running different OSs in a multi-tenant model. In a multi-core environment, different applications can run on different cores, each with its associated VM, guest OS, middleware layer, and application. The different controller applications thus allow a common cloud computing architecture to dynamically use the available resources.

Common Management Server

As previously mentioned, operators use more than one RAT to support wireless data traffic demand. Converged solutions, such as AAAs, RRHs, multi-standard BBUs, and radio network controllers, are used to support multiple technologies. Management of these converged network elements requires a common management server capable of supporting the FCAPS features for GSM, UMTS, and LTE network nodes.

SON Functions

In the cloud RAN network architecture, each network element is capable of supporting self-configuration, self-optimization, and autonomous recovery. SON in this architecture is based on decentralized algorithms applied at each individual network element. The operator may support multiple technologies, such as GSM, WCDMA, and LTE, in the cloud RAN deployment. This requires network-level self-optimization to support automatic updates of network topology changes between the E-UTRAN, UTRAN, and GERAN networks.

Information related to network load, performance, etc. of the different wireless technologies is used by a centralized function to dynamically allocate shared resources to the different network elements in the cloud RAN and support load balancing. For example, when the GSM load is low but UMTS is at its peak, the shared network elements, such as the AAA and RRH, can be configured to support additional cells, frequency bands, etc. When the overall network load is low, some network elements can be switched off wherever the load can be handled by a minimum set of network elements.
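The load-driven reallocation described above can be pictured as a small control loop: a centralized SON function reads per-RAT load reports and shifts shared carriers on the common AAA/RRH toward the busier technology, parking them when everything is quiet. The sketch below is purely illustrative; the thresholds and the carrier model are assumptions, not taken from this paper or from any standard.

```python
# Sketch: a toy centralized SON loop that reallocates shared carriers on the
# common AAA/RRH between RATs based on reported load. Thresholds are assumed.
HIGH_LOAD, LOW_LOAD = 0.8, 0.2

def rebalance(load: dict, carriers: dict) -> dict:
    """Move one spare carrier toward any RAT above HIGH_LOAD.

    load:     per-RAT utilization, e.g. {"GSM": 0.15, "UMTS": 0.9, "LTE": 0.6}
    carriers: per-RAT carrier count on the shared radio, plus an "off" pool.
    """
    new = dict(carriers)
    for rat, util in sorted(load.items(), key=lambda kv: kv[1], reverse=True):
        if util >= HIGH_LOAD:
            # Borrow from the idle pool first, then from the quietest RAT.
            donors = ["off"] + [r for r, u in load.items() if u <= LOW_LOAD and new[r] > 1]
            donor = next((d for d in donors if new.get(d, 0) > 0), None)
            if donor:
                new[donor] -= 1
                new[rat] += 1
    # Energy saving: if every RAT is quiet, park one carrier from each RAT.
    if all(u <= LOW_LOAD for u in load.values()):
        for rat in load:
            if new[rat] > 1:
                new[rat] -= 1
                new["off"] += 1
    return new

print(rebalance({"GSM": 0.15, "UMTS": 0.90, "LTE": 0.60},
                {"GSM": 2, "UMTS": 2, "LTE": 3, "off": 1}))
# -> UMTS gains the idle carrier: {'GSM': 2, 'UMTS': 3, 'LTE': 3, 'off': 0}
```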

Conclusion and Aricent Value Proposition

As discussed in the previous sections, the complexity of enhancing traditional networks to support increasing broadband capacity and coverage is not economically viable. There is an immediate need to deploy distributed networks with centralized baseband units, RRHs, AAAs, and radio network controllers on the cloud, to reduce the complexity of introducing additional cell sites and adding additional antennas and radio components. Radio network controllers in a cloud environment using virtualization technology reduce the infrastructure cost of supporting multiple technologies and the complexity of managing multiple network elements.

In the 3rd Generation Partnership Project (3GPP) international standardization group meeting held in June 2012, energy saving was among the requirements discussed for Release 12, and deploying a cloud RAN architecture-based network can address these requirements. The NGMN group also initiated the "Centralised Processing, Collaborative Radio, Real-Time Cloud Computing, Clean RAN System (P-CRAN)" project [11] to address these issues.

Implementation of a cloud RAN solution can save up to 15 percent CAPEX and up to 50 percent OPEX over five to seven years compared with a traditional RAN deployment, per the China Mobile report [1]. According to the Alcatel-Lucent lightRadio economic analysis [2], these disruptive RAN architecture designs and innovative features can reduce overall TCO by at least 20 percent over five years for an existing high-capacity site in an urban area, with at least a 28 percent reduction for new sites.

Aricent is actively participating in and following emerging C-RAN architecture initiatives. Aricent eNodeB, EPC, and HeNB-GW enabling software is ready for C-RAN architecture.

eNodeB Framework

> RAN on the cloud must cater to variable capacity requirements and host multiple cells. Aricent Layer 3 and Layer 2, including the Scheduler, MAC, RLC, PDCP, and GTP-U, are scalable for multi-core architectures and support multiple form factors (femto, pico, micro) and different capacity requirements based on the deployment.
> A single instance of Aricent Layer 3 can handle multiple cells/sectors hosted on the cloud RAN equipment and can interface with cells/sectors hosted on other cloud RAN equipment over the X2 link.
> Aricent Layer 2 can handle one cell/sector per instance, and multiple instances of Layer 2 can be utilized to handle multiple cells/sectors.
> The eNodeB software is modified to handle the IP-link interface (architecture option B described previously) between the cell site unit and the central unit.

Additionally, Aricent is involved in multiple services projects related to solution architecture, implementation, and field support of C-RAN solutions. This includes work with Tier 1 OEMs in the areas of multi-RAT BTS, virtual common hardware for RNC/BSC solutions, etc. Aricent is well equipped to provide the software frameworks (eNodeB, EPC, etc.), necessary resources, management framework, and strong delivery process to assist customers with their own C-RAN solutions.

Figure: Universal SON Server (UniSON) with the EMS, a TR-069 interface, and the eNodeB SON client

REFERENCES

[1] China Mobile C-RAN report: http://labs.chinamobile.com/article_download.php?id=63069

[2] Alcatel-Lucent, lightRadio White Paper: Economic Analysis: http://www.alcatel-lucent.com/wps/DocumentStreamerServlet?LMSG_CABINET=Docs_and_Resource_Ctr&LMSG_CONTENT_FILE=White_Papers%2FlightRadio_WhitePaper_EconomicAnalysis.pdf&REFERRER=j2ee.www%20%7C%20%2Ffeatures%2Flight_radio%2Findex.html%20%7C%20lightRadio%3A%20Evolve%20your%20wireless%20broadband%20network%20%7C%20Alcatel-Lucent

[3] http://www.vmware.com/products/vcenter-server/overview.html

[4] http://www.vmware.com/products/vsphere/mid-size-and-enterprise-business/overview.html

[5] http://www.obsai.com/obsai/content/download/4977/41793/file/OBSAI_System_Spec_V2.0.pdf

[6] http://www.cpri.info/downloads/CPRI_v_5_0_2011-09-21.pdf

[7] http://csrc.nist.gov/publications/drafts/800-146/Draft-NIST-SP800-146.pdf

[8] http://collaborate.nist.gov/twiki-cloud-computing/pub/CloudComputing/RoadmapVolumeIIIWorkingDraft/NIST_cloud_roadmap_VIII_draft_110311.pdf

[9] http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf

[10] http://www.umts-forum.org/component/option,com_docman/task,doc_download/gid,2545/Itemid,213/

[11] NGMN Alliance, P-CRAN (Centralised RAN) project: http://www.ngmn.org/workprogramme/centralisedran.html

Engineering excellence. Sourced.

Aricent is the world’s #1 pure-play product engineering services and software firm. The
company has 20-plus years of experience co-creating ambitious products with the leading
networking, telecom, software, semiconductor, Internet and industrial companies. The
firm's 10,000-plus engineers focus exclusively on software-powered innovation for the
connected world.

frog, the global leader in innovation and design, based in San Francisco, is part of Aricent.

The company’s key investors are Kohlberg Kravis Roberts & Co. and Sequoia Capital.

info@aricent.com

© 2014 Aricent. All rights reserved.


All Aricent brand and product names are service marks, trademarks, or registered marks of Aricent in the United States and other countries.
