Interesting Technology Market Notes Ashwani Nov 2015

25 Nov 2015

Data center network design
moves from tree to leaf
Leaf-spine designs are pushing out the hierarchical tree data center
networking model, thanks to the popularity of Ethernet fabric
networks.

Explained: Leaf-spine data center network
architecture
The old data center network design should make like a tree and leaf.
For many years, data center networks have been built in layers that, when
diagrammed, suggest a hierarchical tree. As this hierarchy runs up against
limitations, a new model is taking its place.
In the hierarchical tree data center, the bottom of the tree is
the access layer, where hosts connect to the network.
The middle layer is the aggregation, or distribution, layer, to which the
access layer is redundantly connected. The aggregation layer provides
connectivity to adjacent access layer switches and data center rows, and in
turn to the top of the tree, known as the core.

The core layer provides routing services to other parts of the data center, as
well as to services outside of the data center such as the Internet,
geographically separated data centers and other remote locations.
This model scales reasonably well, but it is subject to bottlenecks if uplinks
between layers are oversubscribed. Further pressure comes from latency incurred as
traffic flows through each layer and from the blocking of redundant links
(assuming the use of the spanning tree protocol, STP).

Leaf-spine data center architectures
In modern data centers, an alternative to the core/aggregation/access layer
network topology has emerged known as leaf-spine. In a leaf-spine
architecture, a series of leaf switches form the access layer. These
switches are fully meshed to a series of spine switches.
The mesh ensures that access-layer switches are never more than one spine hop
away from one another, minimizing latency and the likelihood of bottlenecks
between access-layer switches. When networking vendors speak of
an Ethernet fabric, this is generally the sort of topology they have in mind.
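To make that property concrete, here is a minimal Python sketch (the switch names and counts are invented for illustration) that builds a full leaf-spine mesh and checks that every pair of leaf switches shares at least one spine, so leaf-to-leaf traffic is always ingress leaf, spine, egress leaf:

    from itertools import combinations

    def build_leaf_spine(num_leaves, num_spines):
        """Return an adjacency map for a full leaf-spine mesh."""
        leaves = [f"leaf{i}" for i in range(num_leaves)]
        spines = [f"spine{j}" for j in range(num_spines)]
        adj = {leaf: set(spines) for leaf in leaves}          # every leaf uplinks to every spine
        adj.update({spine: set(leaves) for spine in spines})  # every spine connects to every leaf
        return adj, leaves

    adj, leaves = build_leaf_spine(num_leaves=4, num_spines=2)
    for a, b in combinations(leaves, 2):
        shared_spines = adj[a] & adj[b]   # spines both leaves attach to
        assert shared_spines, f"{a} and {b} share no spine"
        # Leaf-to-leaf traffic is always ingress leaf -> spine -> egress leaf.
        print(f"{a} -> {sorted(shared_spines)[0]} -> {b}")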

Leaf-spine architectures can be layer 2 or layer 3, meaning that the links
between the leaf and spine layer could be either switched or routed. In
either design, all links are forwarding; i.e., none of the links are blocked,
since STP is replaced by other protocols.
In a layer 2 leaf-spine architecture, spanning tree is most often replaced
with either a version of Transparent Interconnection of Lots of Links (TRILL)
or shortest path bridging (SPB). Both TRILL and SPB learn where all hosts are
connected to the fabric and provide a loop-free path to their Ethernet MAC
addresses via a shortest path first computation.
Brocade's VCS fabric and Cisco's FabricPath are examples of proprietary
implementations of TRILL that could be used to build a layer 2 leaf-spine
topology. Avaya's Virtual Enterprise Network Architecture can also build a
layer 2 leaf-spine but instead implements standardized SPB.
In a layer 3 leaf-spine, each link is a routed link. Open Shortest Path First is
often used as the routing protocol to compute paths between leaf and spine
switches. A layer 3 leaf-spine works effectively when virtual local area
networks (VLANs) are isolated to individual leaf switches or when a network
overlay is employed.
Network overlays such as VXLAN are common in highly virtualized, multi-tenant
environments such as those found at infrastructure as a service
providers. Arista Networks is a proponent of layer 3 leaf-spine designs,
providing switches that can also act as VXLAN Tunnel Endpoints.
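As a rough illustration of what a VXLAN overlay costs on the wire, the arithmetic below (a sketch assuming the standard 50-byte IPv4 encapsulation overhead) computes the underlay MTU needed to carry a tenant's full 1500-byte IP MTU:

    # Relative to the tenant's 1500-byte IP MTU, VXLAN over IPv4 adds:
    # inner Ethernet header (14 B), VXLAN header (8 B), UDP header (8 B)
    # and outer IPv4 header (20 B) -- 50 bytes in total.
    INNER_ETH, VXLAN_HDR, UDP_HDR, OUTER_IPV4 = 14, 8, 8, 20
    overhead = INNER_ETH + VXLAN_HDR + UDP_HDR + OUTER_IPV4  # 50 bytes

    tenant_ip_mtu = 1500
    required_underlay_mtu = tenant_ip_mtu + overhead         # 1550
    print(f"VXLAN overhead: {overhead} bytes")
    print(f"Underlay MTU needed for a {tenant_ip_mtu}-byte tenant MTU: {required_underlay_mtu}")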

The merits of white box switching
Modern infrastructures are based on commodity, non-proprietary hardware.
People have used commodity servers for a long time, but now we are
seeing commodity switching gear too, on which we layer open network
operating systems. This is a promising area, writes Ethan Banks in "Why
White Box Switching?" An offshoot of software defined networking, "white
box switching might survive on its own merits -- even if SDN falls by the
wayside."
Modern infrastructures run on open source software, finds Ed Scannell in
"Opening Up to Open Source." The reasons for today's surge of open
source projects aren't the same as yesterday's -- whereas open source
used to be about saving money, now it's about exploiting "the latest Web-based
technologies for mobile, cloud and analytics platforms."
What makes one infrastructure "modern" and another not so much? That is
continually up for debate, but there are definitely some common themes.
Read on to learn more.

The case for a leaf-spine data
center topology
Three-layer network designs are last season's topology. The leaf-spine
model is coming in hot, but can the advantages make data
center designers forget about the weaknesses?

Three-layer designs are falling out of favor in modern data center networks,
despite their ubiquity and familiarity. What's taking over? Leaf-spine
topologies.
As organizations seek to maximize the utility and utilization of their
respective data centers, there's been increased scrutiny of mainstream
network topologies. "Topology" is the way in which network devices are
interconnected, forming a pathway that hosts follow to communicate with
each other.
The standard network data center topology was a three-layer architecture:
the access layer, where users connect to the network; the aggregation
layer, where access switches intersect; and the core, where aggregation
switches interconnect to each other and to networks outside of the data
center.

Figure: The traditional, three-layer network design (Ethan Banks)

The design of this model provides a predictable foundation for a data center
network. Physically scaling the three-layer model involves identifying port
density requirements and purchasing an appropriate number of switches for
each layer. Structured cabling requirements are also predictable, as
interconnecting between layers is done the same way across the data
center. Therefore, growing a three-layer network is as simple as ordering

more switches and running more cable against well-known capital and
operational cost numbers.

Why three-layer falls short
Yet there are many reasons that network architects explore new data center
topologies.
Perhaps the most significant is the change in data center traffic patterns. Most
network traffic moves along a north-south line -- hosts are communicating with
hosts from other segments of the network. North-south traffic flows down the
model for routing service, and then back up to reach its destination. Meanwhile,
hosts within the same network segment usually connect to the same switch,
keeping their traffic off the network interconnection points.
However, in modern data centers, changes to compute and storage infrastructure
have shifted the predominant network traffic pattern from north-south to east-west.
In east-west traffic flows, network segments are spread across multiple access
switches, requiring hosts to traverse network interconnection points. At least two
major trends have contributed to this east-west phenomenon: convergence and
virtualization.
Convergence: Storage traffic often shares the same physical network as application
traffic. Storage traffic occurs between hosts and arrays that are in the same network
segment, logically right next to each other.
Virtualization: As IT continues to virtualize physical hosts into virtual
machines (VM), the ability to move workloads easily has become a mainstream,
normative function. VMs move from physical host to physical host within a
network segment.
Running east-west traffic through a network data center topology that was
designed for north-south traffic causes oversubscription of interconnection links
between layers. If hosts on one access switch need to communicate at a high speed

with hosts attached to another access switch, the uplinks between the access layer
and aggregation become a potential -- and probable -- congestion point. Three-tier
network designs often exacerbate the connection issue. Because spanning-tree
blocks redundant links to prevent loops, access switches with dual uplinks are only
able to use one of the links for a given network segment.
Adding more bandwidth between the layers in the form of faster inter-switch links
helps overcome congestion in the three-layer model, but only to a point. The
problems with host-to-host east-west traffic don't occur one conversation at a time.
Instead, hosts talk to other hosts all over the data center at any given time, all the
time. So while adding bandwidth facilitates these conversations, it's only part of the
answer.

A new topology in town
The rest of the answer is to add switches to the layer that interconnects the access
switches, and then spread the uplinks from each access switch across all of them. This
topology is a leaf-spine. A leaf-spine design scales horizontally through the
addition of spine switches, which spanning-tree deployments with a traditional
three-layer design cannot do.
This is similar to the traditional three-layer design, just with more switches in the
spine layer. In a leaf-spine topology, all links are used to forward traffic, often
using modern spanning-tree protocol replacements such as Transparent
Interconnection of Lots of Links (TRILL) or Shortest Path Bridging (SPB). TRILL
and SPB provide forwarding across all available links, while still maintaining a
loop-free network topology, similar to routed networks.

Figure: A small leaf-spine network (Ethan Banks)

The advantages of leaf-spine
Leaf-spine topologies are now the de facto standard -- it's difficult to find a design
other than leaf-spine among vendors' various Ethernet fabric designs. There are
good reasons for this -- leaf-spine has several desirable characteristics that play into
the hands of network designers who need to optimize east-west traffic:
All east-west hosts are equidistant. Leaf-spine widens the access and aggregation
layers. A host can talk to another host on any other leaf switch and know that the
traffic will only traverse the ingress leaf switch, spine switch and egress leaf
switch. As a result, applications running over this network infrastructure will
behave predictably, which is a key feature for organizations running multi-tiered
Web applications, high-performance computing clusters or high-frequency trading.
Leaf-spine uses all interconnection links. The traditional three-layer design uses
spanning-tree, a loop prevention protocol. As mentioned earlier, spanning-tree
detects loops, and then blocks links forming the loop. This means that dual-homed
access switches only use one of their two uplinks. Modern alternatives such as SPB
and TRILL allow all links between leaf and spine to forward traffic, allowing the
network to scale as traffic grows.
It supports fixed configuration switches. Fixed configuration switches ship with
a specific number of ports, compared with chassis switches, which feature modular
slots that can be filled with line cards to meet port density requirements. Chassis
switches tend to be costly compared to fixed configuration switches. But chassis
switches are necessary in traditional three-layer topologies where large numbers of
switches from one layer connect to two switches at the next layer. Leaf-spine
allows for interconnections to be spread across a large number of spine switches,
obviating the need for massive chassis switches in some leaf-spine designs. While
chassis switches can be used in the spine layer, many organizations are finding
cost savings in deploying fixed-switch spines.
Leaf-spine is currently the favored design for data center topologies of almost any
size. It is predictable, scalable and solves the east-west traffic problem. Any
organization whose IT infrastructure is moving towards convergence and high
levels of virtualization should evaluate a leaf-spine network topology in their data
center.

The cons of leaf-spine
Leaf-spine isn't without shortcomings. One drawback is the high switch count
required to gain the needed scale. Leaf-spine topologies in the data center need to scale
to the point that they can support the physical hosts that connect to them. The
larger the number of leaf switches needed to uplink all of the physical hosts, the
wider the spine needs to be to accommodate them.
A spine can only extend to a certain point before either the spine switches are out
of ports and unable to interconnect more leaf switches, or the oversubscription rate
between the leaf and spine layers is unacceptable. In general, a 3:1
oversubscription rate between the leaf and spine layers is deemed acceptable. For
example, 48 hosts connecting to the leaf layer at 10 Gbps use a potential maximum
of 480 Gbps. If the leaf layer connects to the spine layer using four 40 Gbps uplinks,
the interconnect bandwidth is 160 Gbps, for an oversubscription ratio of 3:1.

Figure: Oversubscription between network layers (Ethan Banks)
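The same arithmetic can be written as a few lines of Python -- a worked version of the example above, not a sizing tool:

    def oversubscription(hosts, host_gbps, uplinks, uplink_gbps):
        """Ratio of southbound (host-facing) to northbound (spine-facing) bandwidth."""
        south = hosts * host_gbps        # potential maximum host traffic
        north = uplinks * uplink_gbps    # leaf-to-spine interconnect bandwidth
        return south / north

    # 48 hosts at 10 Gbps against four 40 Gbps uplinks: 480 / 160 = 3.0
    ratio = oversubscription(hosts=48, host_gbps=10, uplinks=4, uplink_gbps=40)
    print(f"Oversubscription ratio: {ratio:.0f}:1")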

Leaf-spine networks also have significant cabling requirements. The number of
cables required between the leaf and spine layer increases with the addition of a
spine switch. The wider the spine, the more interconnects are required. The
challenge for data center managers is structuring cabling plants to have sufficient
fiber optic strands to interconnect the layers. Also, interconnecting switches dozens
of meters apart requires expensive optical modules, adding to the overall cost of a
leaf-spine deployment. While there are budget-priced copper modules useful for
short distances, optical modules are necessary and a significant cost in modern data
centers.
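The cabling growth is easy to quantify: in a full mesh every leaf connects to every spine, so the interconnect count is simply leaves multiplied by spines. A minimal sketch with illustrative numbers:

    def interconnect_cables(leaves, spines):
        # Full mesh: each leaf has one uplink to every spine switch.
        return leaves * spines

    for spines in (2, 4, 8):
        print(f"32 leaves, {spines} spines -> {interconnect_cables(32, spines)} cables")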

Modern data center strategy:
Design, hardware and IT's
changing role
Newer data centers look far different than they did just a decade
ago, and strategies for design and style continue to shift under the
weight of big data, cloud computing, mobility and other technology
trends.

Introduction
Data center infrastructure now extends beyond the brick and mortar walls to public
clouds, where data that was once considered too sensitive to leave the building is
now hosted. Servers have been consolidated and fully virtualized, and new levels
of integration achieved with converged infrastructure systems. The data center has
to be highly scalable and systems need to be faster every year. IT professionals
within the data center have to be jacks of all trades. And of course, the business
side demands all of that on a minimal IT budget.
In this guide, we present all the tenets of a modern data center strategy, including
the latest in data center infrastructure, operations and management. We also cover
the changing role of IT, important IT skills of tomorrow and the tools IT pros need
to manage a hybrid environment.

1. Explore the future
Tomorrow's data center
While no one can predict what the future holds, we can plan for emerging
technologies in the data center. Here are a few data center strategies to watch.
Tip

Software-defined networking marks drastic data center change
Software-defined networking (SDN) promises a virtual network infrastructure that can be provisioned as easily
as setting up a new virtual server. But real-world SDNs are hard to find and require significant management and
resources. Continue Reading

Tip

Five data center upgrade strategies to modernize your facility
Organizations can extend the working life of their aging data center by renovating the facility and making these
changes -- some of which cost little to nothing. Continue Reading

News

Data center strategy for a hybrid environment
Companies now use the public cloud as an extension of their own data center, mixing public and private clouds
with colocation and on-premises IT to become as efficient as possible. Continue Reading

News

Meet tomorrow's computing demands with new technologies
Web-scale IT organizations reap efficiency, agility and performance benefits over typical enterprise operations.
It's time to change that. Continue Reading

News

Start with cloud, SDN and virtualization to modernize your data
center
Cloud computing, flash storage, software-defined networks (SDN), virtualization and new data center
management tools will help data center managers deliver the data their customers or end users need. Continue
Reading

Feature

Influences on IT's spending plans
TechTarget's data center survey gauges the influence that cloud, big data, mobile computing, virtualization and
more have on data center strategies. Continue Reading

2. An evolving workforce
The changing role of IT
As the technology inside data centers changes, so must the associated IT personnel.
Welcome the DevOps movement, and other trends that shape a new role for
backend IT.
News

DevOps movement boosts IT operations teams
In this Q&A, author John Allspaw, senior vice president of technical operations at Etsy.com, compares IT
operations pre- and post-DevOps and offers suggestions about simple things that IT operations teams can do to
boost their profile. Continue Reading

Tip

What's driving the DevOps movement?
The DevOps movement has been motivated by many things, but three things have driven it into the
mainstream. Continue Reading

Tip

Learn how to work with Chef
Chef speaks the developer's language, which takes some getting used to when you're fluent in IT ops. Continue
Reading

News

How Facebook uses Chef configuration management tools
One of the rock stars of the cloud and DevOps movement shares lessons learned from using Opscode's Chef
configuration management tools in its data center design strategy. Continue Reading

News

Essential DevOps tools besides Chef and Puppet
Here are 10 popular DevOps tools that will meld application development and deployment into a more
streamlined exercise. Continue Reading

Tip

Six hot new data center jobs
Learn how the growing popularity of technologies such as cloud computing has transformed the data center job
market and expanded IT skill requirements. Continue Reading

News

Sharing IT budget control
The day is coming when individuals who are not in IT will hold sway over large portions of the corporate IT
budget. Continue Reading

Feature

Prepare a software-defined data center team
Security and automation go hand-in-hand with software-defined data centers -- does your team have the skills to
match? Continue Reading

3. Cloud and convergence

The latest in data center infrastructure
Looking at emerging technology is important for the future of data centers, but
what about today's data center? These tips on cloud computing, converged
infrastructure and the data center show the current trends to follow.
Feature

Flashy innovations inside hyper-converged offerings
Converged and hyper-converged infrastructures are getting a boost from all-flash storage components. Continue
Reading

News

The many faces of converged infrastructure
IT vendors are riding the converged infrastructure wave into data centers by labeling their wares with the term,
but very few platforms are truly converged. Continue Reading

Tip

Cloud computing risks from a data center perspective
IT has three choices with public cloud: steering clear, maintaining its entire environment off-site in the cloud or
seeking the best of both worlds with the hybrid cloud model. Continue Reading

Tip

Become the cloud service provider your enterprise needs
The traditional data center needs to survive the cloud era -- by embracing it. Continue Reading

Tip

Build a better private cloud than AWS
Most private clouds fail on the first deployment. Take these tips to heart when designing a cloud move. Continue
Reading

4. Crunching numbers
Big data, data center applications

Big data has established its place in the enterprise. Staying on top of the latest
software and hardware changes ensures that your company will make the most of
the data deluge.
News

Academics teach commercial enterprises about big data
Big data analytics was once an area of technology reserved for scientists, but corporations now need the
technology to analyze huge volumes of data for a host of business reasons. Continue Reading

Feature

Big data = big changes in the data center
Brace yourself for big data. If it hasn't already hit your data center, it will soon and it may place new demands
on your IT infrastructure, operations and data center strategy. Continue Reading

Feature

Legacy applications in the cloud: What's ahead?
IT industry watchers predict that within about five years, legacy apps that are currently difficult to deploy to the
cloud will become better suited to a cloudy home. Continue Reading

News

How cloud apps impact data center infrastructure and management
Reducing servers results in smaller data center facility concerns and costs, but migrating apps to a SaaS model is
not without its own unique hurdles. Continue Reading

5. Take the temperature
Data center cooling
Agility, efficiency and mobility are three aspects of modern data centers, and the
facility is a big part of it. Learn how to monitor energy use, remain flexible and
save the environment.
Tip

Keep data center energy use in check with PUE metrics

Metrics tools that examine energy efficiency provide broad measurements that are helpful for enterprises that
want to minimize energy use. Continue Reading

Opinion

The big green payoff
Sustainable data center operations are imperative for the era of ubiquitous computing. On a practical level, green
facilities can also save money. Continue Reading

Feature

Q&A: How to keep up with massive Internet growth
Kevin Ressler, director of global product management for the enterprise networks division at TE Connectivity,
discusses the rise in employee device mobility and how data centers must evolve to meet user
demands. Continue Reading

Tip

Renew the data center
For the largest, most power-hungry data centers, renewables like wind and solar are picking up some of the
power load. Continue Reading

Tip

New technology, trends to cool off the data center
Stagnant strategies for data center cooling will keep energy bills climbing ever higher, but a more modern data
center strategy can bring them back down to earth. Continue Reading

Feature

An example of upgrading on-premises
When it came time for a new data center, VSE decided -- after much research -- to build rather than outsource,
and repurpose its old equipment for DR. Continue Reading

Reasons to upgrade a data center
network architecture

The total number of network-connected devices under management is
increasing significantly, which creates more data to store and process. So
what does this mean for your data center network architecture?
The TechTarget Networking Survey asked IT pros why they were upgrading
their data center network. Overall, 44% of respondents said the upgrade is
in response to increased applications and data. New information is likely
due in part to big data initiatives. In the TechTarget 2015 IT Priorities
Survey, 30% of respondents said they planned to implement big data
projects in the coming year. But with this upsurge of information, storage
and networking concerns are likely to arise.
A majority of respondents take to upgrades to handle the amount of data
coming in. While only 20% upgrade to move toward converged storage and
networking resources, it is likely that a large portion of those who upgrade
their data center network architecture to accommodate more data do so
with storage in mind.
"People attach an increasing amount of storage via the data network -Ethernet -- as opposed to traditional fiber channel storage area networks,"
said John Burke, CIO and principal research analyst at Nemertes Research
in Mokena, Ill. Data center managers look for switches that handle low
latency and zero-loss Ethernet loads.
Enterprises have the option to store big data in the cloud. Significant
percentages of the TechTarget Networking Survey respondents said the
data center network needs an upgrade to support private (26%) or hybrid
(21%) cloud. Cloud storage offers improved scalability and agility that
traditional in-house storage deployments can't match. In the future, more
enterprises might need a big data network to stream information to cloud-hosted
storage.

Only 15% of respondents in the TechTarget Networking Survey intend to
upgrade to implement network programmability or software-defined
networking (SDN). Of those IT pros, 44% said they will consider purchasing
from an SDN vendor and 41% will evaluate a traditional network hardware
vendor that offers an SDN product. Despite major vendors making a push
for SDN, a slightly higher percentage of respondents -- 46% -- don't plan to
consider an SDN-specific vendor at all, and will instead look to vendors offering SDN
technology from a third party. Respondents could choose more than one option,
implying most are willing to entertain traditional as well as SDN-specific
vendors for this data center network architecture change.

Their choice will likely come down to price and vendors' technological
offerings, but also existing relationships.

"People are willing to look at their traditional network vendors now doing
SDN because … a relationship has already been established and a trust
level found," Burke said. Purchasing SDN from a familiar vendor eliminates
the fear of a new company being acquired or disappearing.
Burke added that those looking to new SDN vendors are not concerned
with protecting existing relationships or trying to make SDN fit into a current
vendor's marketing or sales strategies.
"[New vendors] offer a way to make a dramatic change and to shift rapidly
to a network driven with and by orchestration tools and automation," he
said.

The Disaggregation of Networking, The Open Source Upstarts
And Legacy Vendors' Business

Recently I wrote about some announcements by upstart
networking vendor Cumulus. To summarize what Cumulus
does, it is important to explain the way networking
traditionally works. Existing networking vendors such as
Juniper and Cisco take generally proprietary software and
tightly couple it with their own hardware. This, or so these
companies say, gives customers the best performance and
ensures their networking works as expected -- software is
created to run perfectly on highly tuned hardware.
Companies like Cumulus are disputing this claim and helping
move towards a disaggregated networking paradigm where
organizations can buy any hardware (be it from the
traditional vendors or cheap commodity hardware suppliers)
and run open source software on top of that hardware. These
vendors point to cost savings, flexibility and the shift to one

operating system being able to work across different
hardware platforms as the reason that this disaggregated
approach will become the status quo in the future.
When I covered a partnership between Dell, Cumulus
and VMware around networking, I received a fairly lengthy
reply from Cisco’s PR folks disputing much of what I said. I
thought it worth revisiting the discussion and responding to
the different points Cisco makes. I’m not a networking
analyst, however mapping onto the networking world some of
the impacts that virtualization has had upon the server
market, can inform us of some possible outcomes. Starting at
the top:
1) Networks aren't servers. Server virtualization thrived because servers were grossly
underutilized. Networks are often oversubscribed and rarely underutilized. Customers don't
need better network utilization; they want infrastructure that responds to the needs of
applications. It's a different problem.

There is some validity to what Cisco says here but it has to be
said that hardware in the networking space is becoming less
and less differentiated today. Customers are often looking to
the webscale companies like Facebook and Google
and seeking to emulate their approach of delivering
complex applications on simple and/or commodity
infrastructure. It is not about utilization; it is
about being able to deploy and migrate workloads. The
networking has to match storage and compute and allow for
fast and flexible workload orchestration. It has to allow
hardware to be rapidly added or removed to allow for erratic
load profiles. As Dan Conde from another networking-involved
company, Midokura, said to me regarding Cumulus:
The key thing about Cumulus Linux is that it is not "Linux based". It IS Linux. This creates a
standard interface with which to program and configure the devices using conventional
Linux commands, and in theory enables you to load your own software on the devices.

Cumulus Linux switches are more of a server with an OS that has capabilities to process
network packets and control the network processor -- which can be a Broadcom
chip, for example. The programmability provides the hidden benefit for users -- you
can now configure and manipulate switches en masse -- using tools like Puppet, Chef or
CFEngine. So from the point of view of an SDN partner to Cumulus
Networks like VMware, it means that the network device is as programmable as
pure software components.
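Because a Cumulus switch presents itself as an ordinary Linux host, fleet-wide changes reduce to standard remote-execution patterns. A minimal Python sketch of that idea (the hostnames are invented, and a production setup would use Puppet, Chef or CFEngine rather than raw SSH):

    import subprocess

    # Hypothetical switch inventory -- in practice this would come from a CMDB
    # or the configuration-management tool's own node list.
    switches = ["leaf01", "leaf02", "spine01", "spine02"]

    def run_on_switch(host, command):
        """Run a shell command on a Linux-based switch over SSH."""
        return subprocess.run(
            ["ssh", host, command],
            capture_output=True, text=True, check=True,
        ).stdout

    for host in switches:
        # Any ordinary Linux command works, because the switch OS *is* Linux.
        print(host, run_on_switch(host, "uname -r").strip())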

Going back to the Cisco response to my post, the company
stated that:
2) The notion that disaggregating hardware from software saves customers money is a myth.
70-80% of the cost of running networks is NOT in the hardware. Yes, so-called white boxes
may be cheaper in terms of initial capital outlay, but with the Cumulus or NSX model
customers have to take on the cost and complexity of separately managing hardware and
software. They in effect become systems integrators. How many companies really have the
resources or the inclination to take that on? Oh, and by the way, there is some great analysis
in the market which estimates Cisco can achieve cost savings of 75% compared to white box
approaches when you consider the total cost of ownership.

I don’t want to pit one analysis against another but last year
I wrote about a Credit Suisse report that deeply investigated
what networking disaggregation would mean for
Cisco’s business generally, and networking business in
particular. Suffice it to say that the report pointed to huge
margin pressure upon Cisco that will come from networking
disaggregation. Add to that the customer benefit that users
can now reuse the same operational tools for networking that
they’re using for their compute and you have a buy-side and a
sell-side economic impact that is hard to argue against.
3) As a related point, separating hardware from software also complicates trouble-shooting.
Whether you have white boxes or Cisco switches, problems can occur anywhere in a network,
and with the dis-aggregated model, customers lose the ability to troubleshoot across their
entire network. And hardware matters when it comes to scale and performance. That’s
another huge limitation of the software-only model.

Cisco has a point here. With the aggregated model of
networking, customers have “one throat to choke”. One
vendor delivers both hardware and software and thus there is

no doubt who is to blame when something goes wrong. But
it's hard to argue this point as a continuing factor
as enterprise IT rapidly moves towards a distributed,
disaggregated and composable paradigm across the board.
Enterprise IT is becoming, by definition, a more distributed
operation. Any CIO worth his salt has thought about troubleshooting
within a far wider context than previously. And on to
the last point:
4) Lastly, SDN is not a dire threat to Cisco. We are embracing SDN – and leading in SDN. As
I said above, what customers really want is IT infrastructure that is more automatically
adaptive to the needs of applications. They don’t want software-defined anything for the sake
of it. We’ve embraced that challenge with our Application Centric Infrastructure strategy,
which – by enabling policy-led, automatic infrastructure configuration of both physical and
virtual networks – reduces the time it takes customers to deploy or migrate applications
from months to mouse clicks. That’s what customers really care about. Now, you may call
that more “Cisco spin” but we’ll let the results speak for themselves: after less than one
month of availability of our full ACI suite, we have 60 paying customers. It took some of our
more-hyped rivals half a decade to reach such numbers.

Clearly Cisco isn’t going away any time soon – enterprises
have too much invested in existing solutions to even
contemplate ripping them all out. Bear in mind the number of
mission critical applications still running on mainframes
around the place. The question isn’t whether SDN will destroy
Cisco, but rather whether the margin and competitive
changes that SDN brings will prove sufficiently corrosive to
Cisco (and its ilk) to cause some long-term
systemic implications.
It’s a fascinating progression to watch – expect more
fireworks, more argument and more heads to be raised above
the parapets on this one.

A disaggregated server proves
breaking up can be a good thing
Split up and go your separate ways for cost efficiency. With
disaggregation, systems separate CPU, memory and I/O for more
flexibility, no hard feelings.
The frequency with which components of data center systems break up
could put some Hollywood couples to shame.
While the "in" thing in system design is convergence, some forward-looking
technologists are aiming toward disaggregated servers. It's conscious
uncoupling, data-center style
Networking and storage are frequently purchased and configured
separately from servers. Disaggregating systems takes things a step further
and targets the processing, main memory and input/output (I/O)
subsystems -- "the three-piece suit" that makes up every system, said Dr.
Tom Bradicich, Hewlett-Packard vice president of engineering for servers.
Disaggregation is particularly attractive among hyperscale cloud service
providers, which see disaggregation as a way to achieve more flexible
systems and fewer underutilized resources.
"In the public cloud, you're playing a multi-billion dollar Tetris game," said
Mike Neil, Microsoft general manager for Enterprise Cloud. "You have all
these resources manifested as physical systems, and the challenge is to be
as efficient as possible with those resources."

CPU: Memory: I/O

In today's traditional servers, the ratios of CPU to memory to I/O are mostly
unchangeable. With disaggregated servers, those systems are separated into
discrete pools of resources that are mixed and matched to create differently sized
and shaped systems. Data center architects can then use an orchestration interface to
"compose" systems that are CPU-, memory- or I/O-intensive, depending on
workload demands, and then tear them down to recreate another system with a new
profile.
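A toy model of that compose-and-tear-down cycle might look like the sketch below (the resource pools, capacities and the compose() interface are all invented for illustration; real implementations sit behind vendor orchestration APIs):

    # Discrete pools of disaggregated resources (illustrative capacities).
    pool = {"cpu_cores": 512, "memory_gb": 4096, "io_gbps": 800}

    def compose(cpu_cores, memory_gb, io_gbps):
        """Carve a logical system out of the shared pools, or fail if exhausted."""
        req = {"cpu_cores": cpu_cores, "memory_gb": memory_gb, "io_gbps": io_gbps}
        if any(pool[k] < v for k, v in req.items()):
            raise RuntimeError("insufficient pooled resources")
        for k, v in req.items():
            pool[k] -= v
        return req

    def decompose(system):
        """Tear the system down and return its resources to the pools."""
        for k, v in system.items():
            pool[k] += v

    memory_heavy = compose(cpu_cores=16, memory_gb=1024, io_gbps=40)
    print("composed:", memory_heavy, "remaining:", pool)
    decompose(memory_heavy)   # resources become reusable for a new profile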
The push for disaggregated servers could trickle down to enterprises and high-performance
computing (HPC) environments, if the economics are right, said Kuba
Stolarski, IDC research manager for server, virtualization and workloads.
"The standard justification for disaggregated systems is the refresh or maintenance
story," Stolarski said. "Today, if the processor is due for a refresh, then the whole
box is going to go," even if the other components are still viable.
Everyone who spends a lot on servers cares about saving money, not just
hyperscale folk, said Todd Brannon, Cisco director for product marketing for
its Unified Computing product line. The CPU represents about two-thirds of the
total system cost, and the rest (memory, I/O subsystem) typically doesn't need
replacing as often as the processor. In a disaggregated system, "just replace the
processing cartridge and maintain the investment in everything else," Brannon
said.
But plenty of things could prevent disaggregated systems from excelling:
Fundamental laws of physics and engineering, a failure to get the required
technology costs down and management complexity.

Wanted: Superfast fabrics
Is the first disaggregated system already here?
Cisco announced the UCS M-Series Modular Servers, the first example of
a commercially available disaggregated system, said Brannon. The front of
the 2U M-Series chassis holds eight processing cartridges that contain
CPU and memory, connected to disaggregated storage and connectivity
resources via Cisco's Virtual Interface Card.
The holy grail of disaggregation is between CPU and main memory, so systems
vendors need to deliver better, faster bandwidth between those components.
"The biggest challenge of disaggregation is the interconnect," said HP's Dr.
Bradicich.
Today, the distance between processors and main memory is measured in inches,
"but for disaggregation to work, it needs to be measured in feet," he said. That is
easier said than done. "It all has to do with physics: the farther it is, the slower it
is."
One emerging interconnect technology commonly associated with disaggregation
is silicon photonics, which has three major advantages over existing interconnect
technologies: performance, weight and distance, said Jay Kyathsandra, Intel senior
product marketing manager in its Cloud Platforms Group, which is developing the
technology. Silicon photonics supports data transmissions of up to 1.3Tb per
second, weighs about a third as much as copper cables and can extend to 300
meters, according to the specifications.
But silicon photonics isn't necessarily synonymous with disaggregation, said
Kyathsandra. "The actual implementation will be a function of what the original
equipment manufacturer/original design manufacturer wants to do," he said. If
your system does not require the kinds of speeds it provides, "silicon photonics
may not be a part of [the ultimate implementation]."
Systems designers must weigh the advantages of silicon photonics over cost, said
Microsoft's Neil. Consider the tens of dollars it costs to connect a hard drive to a
system locally versus the hundreds or even thousands of dollars it costs to
implement network-attached storage or storage area network. In a disaggregated
system, it might be physically possible to separate memory and CPU at long

distances, but an expensive interconnect technology will eat into the potential
utilization benefits and derail the whole plan.
Systems designers might find that emerging Ethernet standards provide sufficient
performance and low enough latency to support disaggregation. Several systems
designed for hyperscale environments rely on 10GbE. When grouped with Quad Small
Form-factor Pluggable (QSFP) transceivers, Ethernet can go to 40Gb, said Kevin
Deierling, vice president of marketing at Mellanox, a supplier of high-performance
Ethernet and InfiniBand interconnect technology.
Meanwhile, work is underway on 25GbE, he said. That gets you to 100Gb -- the
speed of many InfiniBand fabrics required in HPC environments.
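Both jumps are lane aggregation: a QSFP-style module gangs four serial lanes into one logical link, so four 10 Gbps lanes give 40 Gbps and four 25 Gbps lanes give 100 Gbps. In sketch form:

    def aggregate_gbps(lane_gbps, lanes=4):
        # QSFP-style modules gang four serial lanes into one logical link.
        return lane_gbps * lanes

    print(f"4 x 10 Gbps lanes -> {aggregate_gbps(10)} Gbps")   # 40GbE
    print(f"4 x 25 Gbps lanes -> {aggregate_gbps(25)} Gbps")   # 100GbE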
Whatever happens, systems vendors aren't waiting for interconnect technologies to
be fully baked. HP's Moonshot chassis, for example, is currently equipped with
several fabrics that will connect future disaggregated system components, said Dr.
Bradicich. One Ethernet fabric is used today; the second, "proximal array" fabric
was revealed as part of the 64-bit ARM-based m400 servers announcement; and the
third, as-yet-unutilized fabric, called the 2D Torus Mesh, allows any one
Moonshot component to communicate directly with any of its neighbors, Bradicich
explained.
"The highways are laid down and paved. There are just no cars on it yet."

Running the show
To enable disaggregation, the software required to provision and deprovision
system resources must be modified.
Intel's overall disaggregation initiative is known as Rack Scale Architecture. On
the hardware side, that includes its work in the silicon photonics optical
interconnect, as well as a programmable network switch due out in 2015. A pod
management framework communicates with the system components via
hardware application programming interfaces, and provides a policy-based

orchestration framework to assemble and disassemble systems from the resource
pool, said Intel's Kyathsandra.
Intel demonstrated the system within a single rack, and will share the results with
the Open Compute Project for hardware designs and OpenStack for cloud
orchestration, to encourage openness and adoption. "You should be able to have
different racks from different vendors, and get the same hardware-level
information from each rack," Kyathsandra said. The goal is for different
orchestration layers to work together seamlessly. "It shouldn't matter whether you
are using VMware or OpenStack," he said.

Disaggregate in the face of danger
Disaggregation presents challenges for software development, management and
operating systems vendors too. "How do you reason across these pools of
resources?" said Microsoft's Neil.
Microsoft's strategy is to innovate in Azure, and then push that work back out via
the Open Compute Project for hardware, and Windows Server and System
Center for management. Microsoft also made a version of its Azure system,
the Microsoft Cloud Platform System, based on commercially available Dell
hardware.
"Our general goal is to take innovations that we've done in Azure, and drive [them]
out toward broader industry adoption," Neil said. Disaggregated hardware and
software designs will likely find their way to Open Compute Project and the
market, he said.
Disaggregated system designs may enter data centers much faster than may seem
possible, said Mellanox's Deierling.
"[It] will trickle down to the enterprise faster than people realize," he said.
Hyperscale vendors have a lot of engineers who are already doing this work, and
it's not a big leap for them to productize the work that they have been doing

internally for public consumption. "They're saying 'hey, we've already done the
heavy lifting. Let's bring this to the enterprise.'"


Ericsson Radio

With the introduction of the Ericsson Radio System we are also introducing the
industry's most energy-efficient and compact radio solution, the Radio 2217, which
maintains performance leadership at half the size and weight.
This is the smallest and lightest macro coverage radio in the new portfolio, with a
volume of approximately 10 liters and a weight of 12 kg. It is designed to support
LTE and WCDMA and Multi-standard mixed mode with up to 40 MHz of LTE and
up to 6 carriers of WCDMA, all within 40 MHz of instantaneous bandwidth
(IBW).
With 4-way Rx diversity becoming increasingly important to improving
uplink performance in many systems, the new 2Rx unit Radio 0208 provides an
easy and flexible way of equipping sites with hardware supporting this functionality.
Radio 0208 has a small form factor of 6 liters and a weight of 8 kg, and integrates
well with any other radio to constitute a 4Rx/2Tx system.
Another new addition to the Ericsson Radio System portfolio is the smallest, most
powerful outdoor microcell on the market, the Radio 2203. Its sleek, minimalistic
Scandinavian design complements any environment.
The radio equipment used for the RBS 6000 family supports multi standard
operation, which means it can operate on any of the standards, either in single
mode (one standard at a time) or in mixed mode (simultaneous operation on
more than one standard).
The radio equipment can be of two types:
 Radio units (RUs) for installation in macro cabinets
 Remote radios for main-remote configurations. These can be either
remote radio units (RRUs) or antenna-integrated radio units (AIR).

The RRUs are designed to be installed close to the antennas, and can be either
wall or pole mounted. There are different versions of RRUs that support macro or
micro configurations.
In the AIR units on the other hand, the radio unit and the antenna are combined
into one single unit and installed in the usual antenna location.
The radio equipment is multi-standard capable, which means that the different
units can operate on all standards: GSM, WCDMA and LTE. Two standards can
operate simultaneously.

M2M India Conclave 2014
Smartphone users in urban India will cross 104 million in 2014, compared to 51 million
users in 2013. India is 3G and 4G ready with a mobile subscriber base of over 950 million,
one of the fastest growing and largest telecom marketplaces in the world. Telcos in India
are excited and have realized the importance of the machine-to-machine (M2M) market, which
is growing worldwide at a fast pace. M2M technologies allow both wireless and wired
systems to communicate with other devices / systems of the same ability. Innovative
applications like smart cars, connected homes, smart metering, remote management and
industrial data collection would be the major revenue drivers of service providers
worldwide in the future.
M2M is touted as the next big business opportunity for telcos and vendors. According to
a joint study conducted by GSMA and Machina Research, the number of total connected
devices would grow from more than 9 billion today to 24 billion in 2020. According to
the latest industry reports, the India M2M modules market would reach US$98.38 million by 2016
at a CAGR (compound annual growth rate) of 33.81 percent. Cellular M2M modules were
expected to grow at 35.32 percent over the same period, from 2011 to 2016.

The M2M market in India has now started evolving and will be driven mainly by
Automotive & Commercial Telematics, Household Monitoring & Control, Financial
Services & Retail, Smart Homes & Smart Metering, Manufacturing, Transportation and
Logistics.
In the Indian market, the Automotive, Transport & Logistics industries lead in terms of M2M
adoption. However, Utilities will drive future market growth, as the Government of India is
increasingly taking serious initiatives to deploy smart energy meters to address the
concerns of increasing power theft and round-the-clock monitoring of power supply.
India's first convention on M2M, now in its successful 4th edition, the
forum will offer a world-class platform for understanding the business value proposition of
M2M and how M2M technologies and applications will help businesses fuel growth and
productivity in the Indian market.

Dual carrier HSPA: DC-HSPA, DC-HSDPA
Notes on the basics of dual carrier HSPA, DC-HSPA, which
utilises two carriers on the downlink: DC-HSDPA, or dual carrier HSDPA.
HSPA TUTORIAL INCLUDES
 HSPA Introduction
 HSDPA
 HSDPA channels
 HSDPA categories
 HSUPA
 HSUPA categories
 HSUPA channels
 Evolved HSPA / HSPA+
 Evolved HSPA MIMO
 Dual Carrier HSPA

To further improve HSPA performance, a scheme utilising two HSDPA carriers to increase the
peak data rates has been made available. The scheme, known under a variety of names and
acronyms (DC-HSPA, dual carrier HSPA, dual cell HSPA, DC-HSDPA, dual cell HSDPA),
also better utilises the available resources by multiplexing carriers in the CELL_DCH state.
DC-HSPA or DC-HSDPA enables better utilisation of the resources, especially under poor
channel conditions where signal to noise ratios may not be as high as normally needed for high
data rate links.

DC-HSPA / DC-HSDPA background
UMTS / W-CDMA was initially conceived as a circuit switched based system and was not well
suited to IP packet based data traffic. Once the basic UMTS system was released and
deployed, the need for better packet data capability became clear, especially with the rapidly
increasing trend towards Internet-style packet data services, which are particularly bursty in
nature.
The initial response to this was the development and introduction of HSDPA, followed by
HSUPA, to provide the combined HSPA service. These were defined in 3GPP Releases 5 & 6.
Later this was further developed and deployed in some areas to provide even higher data
transfer rates as HSPA+, which arrived in Release 7.
A further release, Release 8, detailed dual cell HSDPA, and a combination of
DC-HSDPA and MIMO was defined in Release 9.

DC-HSPA / DC-HSDPA basics
The concept behind DC-HSPA / DC-HSDPA is to provide the maximum efficiency and
performance for data transfers that are bursty in nature - utilising high levels of capacity for a
short time. As most of the traffic is in the downlink direction, dual carrier HSPA is applied to the
downlink - i.e. HSDPA elements, and therefore dual carrier HSPA is also known as DC-HSDPA.
The concept of packet data is that data is split into packets with a destination tag, and these
are sent over a common channel -- sharing the channel, as data traffic from one source is not
there all the time.
DC-HSDPA seeks to apply this principle to the multiple carriers that may be available to an
operator. Often UMTS licences are issued in paired spectrum of either 10 MHz or 15 MHz blocks
-- two or three carriers, for uplink and downlink.
Using UMTS, HSPA, or even HSPA+, these carriers operate independently, and depending upon
the usage, one carrier could be fully utilised while the other is underused. Coordination between
the carriers only takes place in terms of connection management, and the dynamic load is
not balanced. DC-HSDPA / DC-HSPA seeks to provide joint resource allocation and
optimisation across the carriers.
This joint resource allocation over multiple carriers requires dynamic allocation of resources to
achieve the higher peak data-rates per HSDPA user within a single Transmission Time Interval
(TTI), as well as enhancing the terminal capabilities. The use of DC-HSDPA is aimed at providing
a consistent level of performance across the cell, and particularly at the edges where MIMO is
not as effective.

Channels for DC-HSDPA
When implementing DC-HSDPA, the channels present within the system need to be modified to
enable the system to operate as required.

 HS-DPCCH: While it would have been possible to utilise two HS-DPCCHs, one on each
carrier, only one is used -- the feedback information being mapped to the single channel.
There are either 5 or 10 CQI (Channel Quality Indicator) bits: five are used
when only one channel is utilised, and ten when two are in use. The compound CQI is
made up from two independent CQIs, one for each channel (see the sketch after this
list). New channel coding schemes are defined for the overall HARQ feedback format.
 HS-SCCH: The HS-SCCH is transmitted on both the anchor, or primary, carrier as well
as the supplementary one, and the UE has to monitor up to four HS-SCCH codes on
each carrier. However the UE is only required to be able to receive up to one HS-SCCH
on the serving or main cell and one HS-SCCH on the secondary cell.
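As a rough model of the HS-DPCCH feedback rule described above -- five CQI bits for one active carrier, ten for two -- the sketch below packs two per-carrier CQIs into one compound report (the bit packing is invented for illustration and is not the 3GPP channel coding):

    def compound_cqi(cqi_primary, cqi_secondary=None):
        """Pack one or two 5-bit CQI values into the single HS-DPCCH report."""
        assert 0 <= cqi_primary < 32, "each CQI fits in 5 bits"
        if cqi_secondary is None:
            return cqi_primary, 5                 # single-carrier: 5 CQI bits
        assert 0 <= cqi_secondary < 32
        packed = (cqi_primary << 5) | cqi_secondary
        return packed, 10                         # dual-carrier: 10 CQI bits

    value, width = compound_cqi(22, 17)
    print(f"compound CQI = {value:0{width}b} ({width} bits)")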

DC-HSDPA signalling & scheduling
One of the key processes required within DC-HSDPA is scheduling the data to be
transmitted, as this has to be achieved across the two carriers. The scheduling algorithms
had to be developed in a manner that provided backwards compatibility for single carrier
transmissions while providing throughput improvements for dual carrier scenarios.
The queues for data to be transmitted are operated in a joint fashion to provide the optimum
flexibility in operation -- it enables the carrier with the least traffic queued to be used (not all UEs
will have the dual carrier facility, and therefore one carrier may be loaded more heavily than the
other), as in the sketch below.
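A toy version of that joint-queue decision (illustrative only; a real scheduler also weighs CQI, HARQ state and UE category):

    # Bytes currently queued on each downlink carrier (illustrative state).
    queued = {"carrier_1": 120_000, "carrier_2": 45_000}

    def pick_carrier(ue_dual_capable):
        """Send the next transport block on the least-loaded usable carrier."""
        if not ue_dual_capable:
            return "carrier_1"                    # single-carrier UEs stay on the anchor
        return min(queued, key=queued.get)        # dual-carrier UEs take the emptier pipe

    carrier = pick_carrier(ue_dual_capable=True)
    queued[carrier] += 12_000                     # enqueue a 12 kB burst
    print("scheduled on", carrier, "->", queued)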
One area which did require addressing was the operation of the MAC-ehs entity within the
Node B stack. Within HSPA this was designed to support HS-DSCH operation in more than one
cell served by the same Node B, and therefore extending this for dual carrier operation required
only minor changes.
Separate HARQ entities are required for each HS-DSCH. In this way the transmission is
effectively two separate transmissions over two separate HS-DSCHs - each one has its own
uplink and downlink signalling.
Each carrier has a transport block that uses a Transport Format Resource Combination (TFRC),
which is based on the HARQ and CQI feedback sent over the uplink HS-DPCCH. Any
retransmissions required by HARQ will use the same modulation and coding scheme as the first
transmission.
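The per-carrier HARQ behaviour reduces to two rules: one independent entity per HS-DSCH, and retransmissions pinned to the modulation and coding of the first attempt. A minimal sketch (the class and field names are invented):

    class HarqEntity:
        """One HARQ entity per HS-DSCH carrier; retransmissions keep the first MCS."""
        def __init__(self, carrier):
            self.carrier = carrier
            self.first_mcs = None

        def send(self, block, mcs):
            if self.first_mcs is None:
                self.first_mcs = mcs              # initial transmission fixes the MCS
            return (self.carrier, block, self.first_mcs)

        def retransmit(self, block):
            # HARQ retransmissions reuse the modulation/coding of the first attempt.
            return (self.carrier, block, self.first_mcs)

    harq = {c: HarqEntity(c) for c in ("primary", "secondary")}  # separate entities
    print(harq["primary"].send("TB#1", mcs="16-QAM, rate 3/4"))
    print(harq["primary"].retransmit("TB#1"))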

UE categories for DC-HSPA
UE categories were developed to enable the base stations to be able to quickly determine the
capabilities of different UEs. The numbers required extending for HSPA+ and DC-HSPA /
DC-HSDPA.

UE category | 3GPP release | Max HS-DSCH codes | Modulation | Max raw data rate (Mbps) | Comments
21          | Rel 8        | 15                | 16-QAM     | 23.4                     | DC-HSDPA
22          | Rel 8        | 15                | 16-QAM     | 28.0                     | DC-HSDPA
23          | Rel 8        | 15                | 64-QAM     | 35.3                     | DC-HSDPA
24          | Rel 8        | 15                | 64-QAM     | 42.2                     | DC-HSDPA
25          | Rel 9        | 15                | 16-QAM     | 46.7                     | DC-HSDPA + MIMO
26          | Rel 9        | 15                | 16-QAM     | 55.9                     | DC-HSDPA + MIMO
27          | Rel 9        | 15                | 64-QAM     | 70.6                     | DC-HSDPA + MIMO
28          | Rel 9        | 15                | 64-QAM     | 84.4                     | DC-HSDPA + MIMO

DC-HSUPA and Multicarrier HSPA
The concepts behind DC-HSDPA can be taken further in a number of areas to provide further
improvements in the performance of the overall HSPA+ system.
The first of these is to utilise a similar dual carrier system for the uplink. Using dual carrier
HSUPA, DC-HSUPA, would provide similar gains in the uplink as DC-HSDPA provides for the
downlink. The broad implementation would also be similar.
Another way in which performance of the system can be further pushed is to utilise multiple
carriers, beyond the two used in DC-HSPA. By aggregating further carriers, the improvements
gained with DC-HSPA can be extended, along with still higher peak data rates.

DC-HSUPA

Higher performance
for the uplink.
The popularity of smartphones has created significant demand
on uplink network capacity. Uplink traffic consists of user traffic
and application signaling. A prime example of uplink user traffic
is picture sharing, which generates significant uplink network
load particularly during special events, such as professional

sports. Instant messaging status updates, meanwhile, are an
example of uplink application signaling, which is sent
periodically or in response to user status changes and adds to
uplink loading.
In general, smartphone traffic is “bursty” which is particularly
suitable for multiplexing on a shared data pipe. It has been
demonstrated that significant capacity improvements can be
achieved for bursty smartphone data traffic by aggregating
multiple carriers into a single shared data pipe. DC-HSUPA is a
carrier aggregation technique for addressing the uplink
challenge using an operator's existing carrier resources.
The following diagram illustrates the DC-HSUPA concept:

In essence, DC-HSUPA combines two uplink carriers into
a larger data pipe with joint scheduling of uplink traffic
across the two carriers. It allows mobile devices to
make use of instantaneous spare capacity available on
either carrier, thus achieving multiplexing gain and load
balancing. The benefit is a significant efficiency
improvement which leads to higher system capacity.

The following graph shows the gains achieved by DC-HSUPA
over two standalone carriers. The vertical arrow shows that for
a given number of users, for instance eight users per sector,
the uplink user burst rate is almost doubled when DC-HSUPA is
enabled. This essentially reduces signaling and the data upload
time by half, which in turn improves the service response time.
For example, the latency of picture sharing through instant
messaging can be significantly reduced by enabling DC-HSUPA.

DC-HSUPA can also improve cell capacity for a given
user experience. The horizontal arrow in the graph
illustrates this concept. It shows that at the given user
experience level, DC-HSUPA can improve sector
capacity by 60%. Note that the improvements in user
experience and capacity are achieved without adding
new spectrum resources. It is a cost-effective solution
to support the increasing number of smartphone users
and traffic load.
The diagrams below illustrate application performance
improvements under DC-HSUPA as compared to Single Carrier
HSUPA (SC-HSUPA). Based on lab testing, large e-mail upload

time can be reduced by 48% when DC-HSUPA is enabled.
Similarly, webpage time-to-contents during uplink congestion
can be reduced by 35% when DC-HSUPA is enabled. In
addition to user experience improvements, DC-HSUPA also
results in more than 20% reduction in power consumption
compared to SC-HSUPA.

DUAL BAND DUAL CELL (DBDC-HSDPA)

Carrier aggregation
across spectrum
bands.
Dual Cell HSDPA (DC-HSDPA) aggregates two adjacent carriers
to offer a higher peak data rate and improve capacity and user
experience for bursty smartphone data traffic. Dual Band Dual
Cell HSDPA (DBDC-HSDPA) further enables aggregation
across two frequency bands. The following diagram illustrates
the DBDC-HSDPA concept.

DBDC-HSDPA was initially standardized in the 3GPP Release
9 to support the following band combinations:
Carrier 1            | Carrier 2
Band I (2100 MHz)    | Band VIII (900 MHz)
Band II (1900 MHz)   | Band IV (2100/1700 MHz)
Band I (2100 MHz)    | Band V (850 MHz)

It was further extended in 3GPP Release 10 to support the
following new band combinations:

Carrier 1            | Carrier 2
Band I (2100 MHz)    | Band XI (1450 MHz)
Band II (1900 MHz)   | Band V (850 MHz)

Under most of the above band combinations, DBDC-HSDPA
aggregates a high band carrier with a low band carrier. The
availability of the low band carrier significantly enhances cell
edge and indoor performance compared to DC-HSDPA with
two high band carriers. In addition, field test results have
demonstrated more than 20% cell edge throughput gain and a
significant reduction in device power consumption from DBDC-HSDPA compared to DC-HSDPA.
DUAL CELL HSDPA

Improving capacity
and performance by
aggregating downlink
carriers.

Dual Cell HSDPA (DC-HSDPA) aggregates two adjacent
downlink carriers to achieve higher downlink capacity and
better user experience. Furthermore, by combining two carriers
into a single data pipe, DC-HSDPA increases the peak
downlink data rate to 42Mbps. The following diagram illustrates
the DC-HSDPA concept.

DC-HSDPA was standardized in 3GPP Release 8. Today, more
than 150 commercial HSPA networks worldwide have deployed
DC-HSDPA.
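The 42 Mbps figure is simply two single-carrier 64-QAM HSPA+ pipes bonded together, matching category 24 (42.2 Mbps) in the UE-category table earlier. A one-line worked version:

    # Each 64-QAM HSPA+ carrier peaks at about 21.1 Mbps (single-carrier rate);
    # DC-HSDPA bonds two of them, matching UE category 24 (42.2 Mbps) above.
    PER_CARRIER_PEAK_MBPS = 21.1

    def dc_hsdpa_peak(carriers=2):
        return PER_CARRIER_PEAK_MBPS * carriers

    print(f"DC-HSDPA peak: ~{dc_hsdpa_peak():.1f} Mbps")  # ~42.2 Mbps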

3C/4C HSDPA
Significantly increases peak data
rate compared to HSDPA.
Three Carrier HSDPA (3C-HSDPA)
The use of smartphone applications requiring greater
bandwidth is becoming more prevalent, putting significant strain
on cellular network capacity and user experience. 3C-HSDPA is
a downlink carrier aggregation technique that addresses this
challenge by leveraging the characteristics of smartphone traffic.

Smartphone traffic is typically “bursty” which is particularly
suitable for multiplexing on a shared data pipe. 3C-HSDPA
combines three downlink HSPA+ carriers through joint traffic
scheduling to form a larger data pipe. As a result, it delivers
better system capacity and user experience compared to three
carriers operating independently.
The diagram below illustrates the high-level 3C-HSDPA concept:

The following graph shows the system capacity and user
experience gains from 3C-HSDPA compared to a combined
Dual-Carrier (DC) HSDPA and Single-Carrier (SC) HSDPA
deployment scenario. In this analysis, user experience is
measured by the user burst rate. It is defined as the ratio of the
downlink data burst size in bits to the total time it takes to
transmit the entire data burst to the user.
The vertical arrow shows that for a given number of users per
sector per carrier, in this case eight, the user burst rate is more
than 70% higher compared to the DC-HSDPA + SC-HSDPA
scenario. The horizontal arrow shows that for a given user
experience, 3C-HSDPA can more than double the carrier

capacity in a sector. Note that 3C-HSDPA achieves the
improvements without adding new spectrum resources.

Four Carrier HSDPA (4C-HSDPA)
3GPP R10 also enables the aggregation of four downlink
carriers in the same sector. More information to come.
