Service Function Chaining explained (Guest Post)

This post is courtesy of Elaheh T. Jahromi, PhD from Concordia and R&D intern at Ericsson Montreal. You can reach her on LinkedIn. Thank you!

Service Function Chaining (SFC – see RFC 7498) defines an ordered set of network service functions (such as NAT, firewalls, load balancers, …) that together realize a high-level, end-to-end service. In other words, SFC identifies which service functions a flow of packets should go through from its source to its destination.
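To make the idea concrete, here is a minimal sketch of a chain as an ordered list of service functions applied to a packet. The function names and packet fields are illustrative only, not part of any SFC standard:

```python
# A service chain is just an ordered list of functions a packet traverses.
# Hypothetical service functions for illustration:

def firewall(packet):
    # Drop packets whose destination port is blocked (e.g. telnet).
    if packet["dst_port"] in {23}:
        return None
    return packet

def nat(packet):
    # Rewrite the private source address to a public one.
    packet["src_ip"] = "203.0.113.10"
    return packet

def apply_chain(packet, chain):
    """Steer a packet through an ordered list of service functions."""
    for service_function in chain:
        packet = service_function(packet)
        if packet is None:  # dropped somewhere along the chain
            return None
    return packet

chain = [firewall, nat]  # the ordered chain for this flow
print(apply_chain({"src_ip": "10.0.0.5", "dst_port": 80}, chain))
```

The order matters: putting `nat` before `firewall` would change which address the firewall inspects, which is exactly why SFC speaks of an *ordered* set.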

In conventional networks, network services are “statically deployed/chained”. “Statically deployed/chained” stems from the fact that they are tightly coupled to dedicated/proprietary hardware and to the underlying network topology, resulting in rigidity and limitations for network service providers.

The substantial reliance of NFs on their underlying hardware brings about huge CAPEX for deployments, as well as OPEX, given that configuring the NFs requires labor-intensive manual effort. On top of that, because adding or removing NFs is a time-consuming process, these NFs have to be over-provisioned for peak hours, which leads to under-utilization of expensive resources at off-peak times.

NFV and SDN are complementary technologies that emerged to deal with the above-mentioned issues.

NFV aims at decoupling the network functions from the underlying hardware, eventually defining the NFs as stand-alone pieces of software called Virtual Network Functions (VNFs) that can be rapidly deployed and run on top of standard, general-purpose hardware, anywhere in the network.

SDN realizes dynamic SFC by decoupling the control and data planes in the network. Using SDN, the routing devices in the network are programmed on the fly to enable a specific function chain for specific traffic, so that the traffic is eventually steered through different pre-deployed NFs. For example, if service A requires NFs X, Y and Z, an SDN controller will populate the forwarding tables of the routing devices in such a way that packets belonging to service A traverse first X, then Y, then Z. A concrete example of SDN and dynamic SFC usage is illustrated in [1], which offers differentiated QoS for privileged traffic while regular traffic is steered through a best-effort service. The following illustration from the Cisco blog shows how their vSwitch implements SFC:
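A toy sketch of what the controller does in the X→Y→Z example above: emit one forwarding rule per hop so that matching traffic is handed to the next NF in the chain. The switch/NF names and rule format here are made up for illustration; a real controller would install OpenFlow-style entries instead:

```python
# Hypothetical controller logic: given a traffic match and an ordered
# list of NFs, produce one per-hop forwarding rule steering the flow
# through the whole chain and finally to its destination.

def build_flow_rules(service_match, nf_path):
    """Return one rule per hop of the service chain."""
    rules = []
    hops = nf_path + ["destination"]
    location = "ingress"
    for next_hop in hops:
        rules.append({
            "at": location,            # where the rule is installed
            "match": service_match,    # which traffic it applies to
            "forward_to": next_hop,    # next element of the chain
        })
        location = next_hop
    return rules

for rule in build_flow_rules({"service": "A"}, ["NF-X", "NF-Y", "NF-Z"]):
    print(rule)
```

Changing the chain for a flow is then just recomputing and pushing a new rule set, with no re-cabling and no redeployment of the NFs themselves, which is the whole point of dynamic SFC.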


The OpenDaylight project is an open source SDN project hosted by The Linux Foundation. It was announced in 2013 and followed by another collaborative project, OPNFV (Open Platform for NFV), in September 2014. In short, OPNFV is an open source project to realize VNF deployment, orchestration, management and chaining using a set of upstream open projects. As an example, OPNFV leverages OpenStack as an IaaS solution to provide the infrastructure on which VNFs are deployed and run. ODL is also integrated into OPNFV to enable dynamic chaining of VNFs.


[1] Callegati, Franco, et al. “Dynamic chaining of Virtual Network Functions in cloud-based edge networks.” Proceedings of the 1st IEEE Conference on Network Softwarization (NetSoft), 2015.

Boosting the NFV datapath with RHEL OpenStack Platform

Instead of explaining SR-IOV and DPDK myself, please visit Nir Yechiel’s article, and you’ll understand the multiple combinations of technologies around fast processing of network traffic in virtualized environments… Enjoy!

The Network Way - Nir Yechiel's blog

A post I wrote for the Red Hat Stack blog, trying to clarify what we are doing with RHEL OpenStack Platform to accelerate the datapath for NFV applications.

Read the full post here: Boosting the NFV datapath with RHEL OpenStack Platform


On my way to MWC!

I’m very happy to announce that next week I’ll be at the Mobile World Congress (#MWC2016) in home, sweet home Barcelona.

At Red Hat, we’ve prepared many exciting demos and announcements that will appear in our Telecommunications blog.

I’ll be presenting a Video on Demand transcoding solution demonstration (from Vantrix) that covers the challenges of 4K (UHD) Video delivery, and how OpenStack can help.

In this blog I’ll write about my daily discoveries, but if you want a quick chat, you can come visit me at the Red Hat booth at MWC: Hall 2, stand 2G230.

Furthermore, I’ll be writing about Mobile Edge Computing and how our open source solutions can be used to create the foundational infrastructure for the edge datacenter in 5G, IoT and C-RAN architectures.

And now a quick word of advice to the first-time travelers to Barcelona:

  • Instead of a regular coffee, order a Cortado (in Spanish) / Tallat (in Catalan). It’s our version of the Italian Macchiato, but better.
  • The most typical morning drink, though, is Cacaolat, a local chocolate milk invented almost a century ago. You can try it at Granja Viader near Ramblas/Liceu, the bar that created the drink and has been running since 1870. You can also order, in any bar, a freshly baked Donut (not your typical Dunkin’ Donuts, try one!) or a Bocata de Pernil Ibèric (a long-bread sandwich of Spanish cured ham) with the omnipresent Pa Amb Tomàquet (sounds like “pamtoomakat”), which is a baguette cut in half with tomato spread, oil, and salt or garlic on it… hmmm… delicious and crunchy!
  • Make sure you visit Plaça Espanya. From there you can go shopping or have dinner inside Las Arenas (a former Plaza de Toros, now converted into a mall) and watch the amazing Magic Fountain (every day at 8 and 9 pm, I think) on your way to the big museum on top of the hill.
  • And for God’s sake, don’t eat anywhere in Las Ramblas. Go to the Gòtic, east side of the Ramblas (fine cuisine), avoid the Raval, west side (more ethnic food), and remember: if there are pictures of the food at the door, you won’t eat well.
  • If you stay over the weekend, go see the views of the city from Park Güell, where its iconic dragon will meet you with his mouth wide open!



Quick Reality Check: Latency

Nowadays, any NFV-related improvement to existing technology involves latency: either reducing it or making it more deterministic (as in real-time systems).

In order to reduce latency, one first needs to realize how terribly slow some things are when compared to CPU speed. As you know, a CPU is basically an ultra-fast robot that executes simple operations (math, bit transformations, pushing/popping memory, etc.), taking its instructions from registers and its data from different memory locations: some very fast but small (the L1 cache), others slow but very big (RAM and hard drives, reached via system buses like SATA or PCIe).

Now let’s focus on memory: when the CPU needs to fetch some data to manipulate, it cannot proceed until the data arrives (though that waiting time may be spent executing other pieces of work). Once the data has been fetched (from RAM, for instance), execution resumes. If the data is already in L1, that’s a cache hit; otherwise it’s a miss, and the CPU has to wait until the data is fetched from L2, L3, RAM, and so on.

So in order to understand why it’s important to minimize the amount of data that has to be fetched from “distant” parts, like a PCIe device or a SATA hard drive, have a look at the time it takes to access those parts, including network devices. Nanoseconds don’t mean much to our little brains, but you’ll see how bad this is as soon as you use a more human scale, where an L1 cache hit takes 1 second and slower operations take up to years.

If L1 access is a second, then:

L1 cache reference : 0:00:01
Branch mispredict : 0:00:10
L2 cache reference : 0:00:14
Mutex lock/unlock : 0:00:50
Main memory reference : 0:03:20
Compress 1K bytes with Zippy : 1:40:00
Send 1K bytes over 1 Gbps network : 5:33:20
Read 4K randomly from SSD : 3 days, 11:20:00
Read 1 MB sequentially from memory : 5 days, 18:53:20
Round trip within same datacenter : 11 days, 13:46:40
Read 1 MB sequentially from SSD : 23 days, 3:33:20
Disk seek : 231 days, 11:33:20
Read 1 MB sequentially from disk : 462 days, 23:06:40
Send packet CA->Netherlands->CA : 3472 days, 5:20:00
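The humanized table above is just a rescaling of the well-known raw latency numbers so that the 0.5 ns L1 reference becomes exactly 1 second (a factor of 2×10⁹). A small sketch reproducing a few rows, with the raw nanosecond values as assumptions:

```python
# Rescale raw latencies (in ns) so that a 0.5 ns L1 hit = 1 "human" second.
from datetime import timedelta

LATENCIES_NS = {
    "L1 cache reference": 0.5,
    "Main memory reference": 100,
    "Round trip within same datacenter": 500_000,
    "Disk seek": 10_000_000,
    "Send packet CA->Netherlands->CA": 150_000_000,
}

SCALE = 1 / 0.5  # human seconds per nanosecond

for name, ns in LATENCIES_NS.items():
    print(f"{name}: {timedelta(seconds=ns * SCALE)}")
```

Running it prints, among others, `Main memory reference: 0:03:20` and `Send packet CA->Netherlands->CA: 3472 days, 5:20:00`, matching the table.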

In a future post, I’ll compare how DPDK and SR-IOV allow VNFs to have much faster access to network resources and to process data packets without even touching the RAM (everything is done in the L1 to L3 caches). Using those solutions, the latency of network processing can be reduced to near-bare-metal levels, in nanoseconds, instead of being stuck with the millisecond-like performance of regular software switches in virtual environments.
