
Universidade de Aveiro
Departamento de Eletrónica, Telecomunicações e Informática
2019

Tiago André Borges Vieira

Virtualização de equipamento de rede doméstico na cloud 5G

Virtualization of home networking equipment on 5G cloud


Dissertation presented to the Universidade de Aveiro in fulfilment of the requirements for the degree of Master in Electronics and Telecommunications Engineering, carried out under the scientific supervision of Daniel Nunes Corujo, Researcher (Professor Investigador Doutorado) at the Departamento de Eletrónica, Telecomunicações e Informática of the Universidade de Aveiro, and of Óscar Narciso Mortágua Pereira, Assistant Professor at the Departamento de Eletrónica, Telecomunicações e Informática of the Universidade de Aveiro.


o júri / the jury

presidente / president Professora Doutora Susana Isabel Barreto de Miranda Sargento

Full Professor, Universidade de Aveiro

vogais / examiners committee Doutor Pedro Miguel Naia Neves

Senior Consultant, Altice Labs

Doutor Daniel Nunes Corujo


agradecimentos / acknowledgements

I thank my supervisor, Doutor Daniel Nunes Corujo, for all the guidance and support provided.

I thank my colleagues at the Instituto de Telecomunicações, Flávio Meneses and Manuel Fernandes, for all the patience, tips and teachings they offered me throughout the development of this work.

Last but not least, I dedicate this work to my family, girlfriend and friends for all their unconditional support.

This dissertation was carried out within the scope of the research project PTDC/EEI-TEL/30685/2017 "5G-CONTACT - 5G CONtext-Aware Communications optimization", funded by FCT/MEC, and of the project POCI-01-0247-FEDER-024539 "5G", funded by FEDER (through POR LISBOA 2020 and COMPETE 2020), and was developed with the support of the Instituto de Telecomunicações UID/EEA/50008/2019.


Palavras-chave 5G, Virtualização, Cloud, vCPE, SDN, NFV, OSM, Docker.

Resumo Com o propósito de aumentar a flexibilidade e a agilidade na maneira como os operadores de rede fornecem os seus serviços, tem vindo a ser proposta a virtualização do Customer Premises Equipment (CPE). Esta dissertação estende essa vertente, propondo uma solução que permite a implantação de funções de rede de CPEs numa cadeia de containers de funções de rede virtuais (VNFs), criando instâncias de CPE virtuais (vCPEs) organizadas em clusters. Através de mecanismos baseados em Redes Definidas por Software (SDN) e Virtualização das Funções de Rede (NFV), os vCPEs podem ser migrados entre clusters, garantindo o equilíbrio dos seus recursos ou o cumprimento dos requisitos de uma transmissão de dados específica. Um protótipo foi testado como prova de conceito com a transmissão de um vídeo em Full e Ultra High-Definition, não mostrando qualquer impacto na qualidade de experiência dos utilizadores finais enquanto a migração decorre.


Keywords 5G, Virtualization, Cloud, vCPE, SDN, NFV, OSM, Docker.

Abstract In order to increase flexibility and agility in the way network operators deliver their services, the virtualization of Customer Premises Equipment (CPE) has been proposed. This dissertation extends this approach and proposes a solution that enables the deployment of CPEs' network functions in a chain of containerized virtual network functions (VNFs), creating virtual CPE instances (vCPEs) organized in clusters. Through Software Defined Networks (SDN) and Network Function Virtualization (NFV) mechanisms, vCPEs can be migrated among clusters to ensure the balancing of their resources, or to fulfill the requirements of a specific data transmission. A proof-of-concept prototype was tested with the transmission of a video in Full and Ultra High-Definition, showing no impact on the end-users' quality of experience while vCPE migration is taking place.


Contents

List of Figures iii

List of Tables v

List of Acronyms vii

1 Introduction 1

1.1 Motivation . . . 1

1.2 Objectives . . . 2

1.3 Contributions . . . 2

1.4 Document Structure . . . 3

2 Key Enablers and State of the Art 5

2.1 The Fifth Generation (5G) of Networks . . . 5

2.2 Software Defined Networks . . . 6

2.2.1 Architecture . . . 7

2.2.2 OpenFlow . . . 10

2.2.3 OpenFlow Forwarding Devices . . . 11

2.3 Virtualization Environment . . . 12

2.3.1 Containerization . . . 14

2.3.1.1 Comparing Virtualization . . . 15

2.3.2 Network Function Virtualization . . . 16

2.3.2.1 NFV Architecture . . . 18

2.4 Cloud Computing . . . 20

2.5 Virtualization of Home Network . . . 22


2.7 Chapter Considerations . . . 24

3 Scenario Description and Implementation 27

3.1 Problem Statement . . . 27

3.2 Framework Overview . . . 28

3.3 Scenario Description . . . 29

3.3.1 Network Elements . . . 29

3.3.1.1 OSM . . . 29

3.3.1.2 Cloud-VIM . . . 30

3.3.1.3 SDN Controller . . . 31

3.3.1.4 vCPE Cluster . . . 31

3.3.1.5 Physical CPE . . . 31

3.3.1.6 Virtual CPE . . . 32

3.4 Dynamic Instantiation Use Case . . . 32

3.4.1 vCPE instantiation . . . 33

3.4.2 vCPE migration . . . 35

3.5 Video-Stream Live Migration Use Case . . . 36

3.5.1 vCPE instantiation . . . 39

3.5.2 vCPE migration . . . 39

3.6 Chapter Considerations . . . 40

4 Proof-of-Concept Evaluation and Discussion 41

4.1 End-to-end delays . . . 41

4.2 Instantiation and Migration delays . . . 42

4.3 Throughput impact . . . 45

4.3.1 TCP & UDP . . . 46

4.3.2 Full HD & Ultra HD . . . 47

4.4 Chapter Considerations . . . 49

5 Final Remarks 51

5.1 Conclusions . . . 51

5.2 Future Work . . . 52

References 53


List of Figures

2.1 Different layers of traditional networking. . . 7

2.2 SDN architecture layers. . . 8

2.3 SDN main interfaces. . . 9

2.4 Main components of an OpenFlow Switch. . . 11

2.5 Linux OS running as guest inside of a Windows host machine. . . 13

2.6 Virtual Machines vs. Containers . . . 17

2.7 NFV Architecture . . . 18

2.8 Cloud computing models . . . 21

2.9 Virtualization of Home Network . . . 23

3.1 Framework of the proposed architecture . . . 28

3.2 Motivational scenario . . . 30

3.3 vCPE architecture . . . 32

3.4 vCPE linkage to end-node . . . 33

3.5 High-level signalling for vCPE instantiation and migration . . . 34

3.6 Scenario variation for video migration . . . 37

3.7 High-level signalling for vCPE instantiation and migration . . . 38

4.1 Overall instantiation and migration (use case 1) . . . 44

4.2 Overall instantiation and migration (use case 2) . . . 45

4.3 TCP & UDP throughput . . . 46

4.4 Migration impact on video streaming throughput . . . 48

4.5 Migration impact on video streaming throughput with imposed packet losses 49


List of Tables

2.1 Some aspects of VMs and containers [23] [24]. . . 16

4.1 End-to-end delays . . . 42

4.2 Instantiation and migration delays (use case 1) . . . 43

4.3 Instantiation and migration delays (use case 2) . . . 44


List of Acronyms

3GPP The 3rd Generation Partnership Project

4G Fourth Generation of Mobile Networks

5G Fifth Generation of Mobile Networks

API Application Programming Interface

ARP Address Resolution Protocol

BGP Border Gateway Protocol

CAPEX Capital Expenses

COE Container Orchestrator Engine

CPE Customer Premises Equipment

CPU Central Processing Unit

CVNF Containerized VNFs

DHCP Dynamic Host Configuration Protocol

DNS Domain Name System

EIGRP Enhanced Interior Gateway Routing Protocol

ETSI European Telecommunications Standards Institute

FHD Full High Definition

ForCES Forwarding and Control Element Separation

GUI Graphical User Interface

HTTP Hypertext Transfer Protocol


IaaS Infrastructure as a Service

IoT Internet of Things

IP Internet Protocol

ISG NFV Industry Specification Group for Network Functions Virtualization

ISP Internet Service Provider

MAC Media Access Control

MANO NFV Management and Orchestration

Mbps Megabits per second

NAT Network Address Translation

NFV Network Functions Virtualization

NFVI NFV Infrastructure

ONF Open Networking Foundation

OPEX Operational Expenses

OS Operating System

OSM Open Source MANO

OSPF Open Shortest Path First

OvS Open vSwitch

OvSDB Open vSwitch Database

PaaS Platform as a Service

QoE Quality of Experience

RG Residential Gateway

SaaS Software as a Service

SDN Software Defined Networks

SLA Service Level Agreement

STB Set-Top Box

TCP Transmission Control Protocol

TLS Transport Layer Security

UDP User Datagram Protocol

UHD Ultra High Definition


VM Virtual Machine


CHAPTER 1

Introduction

1.1

Motivation

The Fifth Generation of Mobile Networks (5G) is set to revolutionize existing telecommunications networks, empowering a digital era with a fully connected society. Different application requirements and different device types, delivering different services to users, must coexist in the same network environment. These conditions, allied with demands for different connection types and network characteristics, pose a major challenge to current networks.

Two of the prominent capabilities that 5G provides are virtualization and orchestration mechanisms, as a way to better run and manage the operator's network. Instead of traditional physical servers hosting the network functions, these functions are instantiated and served on a virtualization platform. This way, the network gains a new dimension in terms of configurability, scalability and elasticity.

However, this virtualization potential goes beyond the operator's functions, allowing the virtualization of home network equipment at the customer's premises, such as routers and set-top boxes, therefore referred to as Customer Premises Equipment (CPE). This type of equipment has software and hardware installed, usually with limited resources, that has to handle the usual networking capabilities (firewall, Network Address Translation (NAT), software for TV viewing, etc.). With the growth of the Internet of Things (IoT) and multiple devices connected to the same home router, its responsiveness may be affected.


This dissertation focuses on the elaboration and design of a framework where the usual network functions present in home network equipment are deployed on a virtualized platform in the operator's network, creating a virtual instance of the CPE: the vCPE.

1.2

Objectives

By employing technologies such as Software Defined Networks (SDN), Network Functions Virtualization (NFV) and Cloud Computing (covered in Chapter 2), and by decoupling the network functions of the physical CPE into a vCPE instance, the objectives of this dissertation can be focused on two main points:

• the implementation of a virtualized architecture of vCPEs;

• the evaluation of the implemented architecture in terms of feasibility and bandwidth demands, maintaining the quality of the services delivered by network operators.

1.3

Contributions

The results obtained in this dissertation contributed to two national projects. The first, Mobilizador 5G1, was funded by the Fundo Europeu de Desenvolvimento Regional (FEDER) through the Programa Operacional Regional de Lisboa (POR LISBOA 2020) and Programa Operacional Competitividade e Internacionalização (COMPETE 2020) programs of Portugal 2020 [Projeto 5G, nº 024539 (POCI-01-0247-FEDER-024539)]. The second, 5G-CONTACT - 5G CONtext-Aware Communications optimization, was funded by FCT/MEC and FEDER through the POR LISBOA 2020 and COMPETE 2020 programs. Both were developed with the support of the Instituto de Telecomunicações de Aveiro.

Also, with the goal of disseminating those results, two papers were submitted and accepted:

• "Dynamic Modular vCPE Orchestration in Platform as a Service Architectures", at the 2019 IEEE 8th International Conference on Cloud Networking (CloudNet) [1];

• "Traffic-aware Live Migration in Virtualized CPE Scenarios", at the NFV-SDN'19 MOBISLICE II workshop [2].

1 https://5go.pt


1.4

Document Structure

Besides this introductory chapter, the present document is composed of four more chapters, with the following content:

Chapter 2 - Key Enablers and State of the Art: provides an overview of the current status of vCPE deployments and a brief presentation of the technologies used throughout the practical work;

Chapter 3 - Scenario Description and Implementation: following the proposed objectives, all frameworks and implementations are explained in this chapter;

Chapter 4 - Proof of Concept Evaluation and Discussion: concrete performance results are presented and discussed;

Chapter 5 - Final Remarks: in the end, conclusions are drawn about the developed work and directions for future work are presented.


CHAPTER 2

Key Enablers and State of the Art

This chapter presents some fundamental concepts and emerging technologies that are essential for a better understanding of the framework implemented in this dissertation. It also features a section addressing previous studies and research related to the scope of this work.

2.1

The Fifth Generation (5G) of Networks

5G refers to the upcoming generation of mobile telecommunication standards defined by The 3rd Generation Partnership Project (3GPP), predicted to be launched in 2020 [3]. According to [4], the 5G network will not only be an upgrade of the current Fourth Generation of Mobile Networks (4G) in terms of speed; it aims to deeply evolve the entire network infrastructure, architecture and technologies to a new level. All over the world, mobile access to the Internet is becoming essential for doing business in all industries, and 5G aims to interconnect the world without limits by employing intelligent technologies that address these fundamental objectives [5]:

(i) Implementation of massive capacity and connectivity;

(ii) Support for an increasingly diverse set of services, applications and users - all with extremely diverging requirements;


(iii) Flexible and efficient use of all available spectrum for different network deployment scenarios.

By fulfilling the topics enumerated above, 5G will have an enormous capacity for service delivery, allowing connections between end users and the network so fast that the distance between connected people and connected machines appears to shrink virtually to near zero [5]. Also, it aims to provide very high data rates for a massive number of connected users while ensuring approximately zero latency. In addition to these characteristics, the 5G network is expected to improve efficiency while decreasing cost; it will be able to provide an IoT with billions of different devices simultaneously connected, and to integrate with previous and current cellular and Wi-Fi standards. All these aspects contribute to enhancing mobile communications in terms of higher peak bit rates, better coverage, a higher number of supported devices and more reliable communication [3] [6].

Behind this big transformation are some new emerging technologies, which are covered in the following sections.

2.2

Software Defined Networks

Today's computer networks are typically built from several networking devices, such as routers and switches, that are responsible for managing data traffic so that communication can be established. These network architectures can be divided into three planes, or layers, of functionality: the control, data, and management planes [7], depicted in Figure 2.1. The control plane is responsible for exchanging routing information and represents the routing protocols used. Building Address Resolution Protocol (ARP) tables, running routing protocols such as Open Shortest Path First (OSPF), Enhanced Interior Gateway Routing Protocol (EIGRP) and Border Gateway Protocol (BGP), or learning Media Access Control (MAC) addresses to build a switch MAC address table are some examples of tasks performed by the control plane. The data plane is the part of the networking devices responsible for forwarding data traffic. It takes care of matching Internet Protocol (IP) destinations in the routing table and MAC addresses for forwarding, relying on the information the control plane supplies. The management plane includes the software services used to control and access the network devices. Therefore, network policies are defined in the management plane, the control plane applies the policies to the networking devices, and the data plane executes them by forwarding data accordingly [7] [8].
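As a minimal illustration of this separation, the control-plane task of learning MAC addresses can be kept apart from the data-plane forwarding decision. This is a simplified sketch for illustration only; real switches also age out entries, handle VLANs, and so on:

```python
# Simplified sketch of a learning switch, separating the control-plane task
# (building the MAC address table) from the data-plane task (forwarding).
mac_table = {}  # MAC address -> switch port


def learn(src_mac, in_port):
    """Control plane: record which port a source MAC was seen on."""
    mac_table[src_mac] = in_port


def forward(dst_mac):
    """Data plane: pick the output port from the table, or flood if unknown."""
    return mac_table.get(dst_mac, "flood")


learn("aa:bb:cc:dd:ee:01", in_port=1)
print(forward("aa:bb:cc:dd:ee:01"))  # 1
print(forward("aa:bb:cc:dd:ee:99"))  # flood
```

In a traditional device both functions live in the same box; SDN, discussed next, moves the learning logic to a separate controller.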


Figure 2.1: Different layers of traditional networking [7].

In traditional IP networks, the control and data planes are linked together, embedded in the same networking devices, which makes IP networks difficult to manage. Because of this, operators need to configure network policies individually on each device, using manufacturer-specific commands [9]. This limitation is the main reason why traditional networks are seen as rigid and complex to manage and control, and it hinders innovation, leading to a phenomenon known as Internet ossification [7]. So, since emerging Internet applications and services are becoming increasingly complex and demanding, network researchers are confronted with the need to make the Internet able to evolve to address these new challenges [10].

Hence emerges the SDN networking paradigm, whose main feature is the separation of the data and control planes to make networks programmable. Instead of enforcing policies and running protocols, the network hardware devices are reduced to simple forwarding devices, since their switching and routing functionalities are moved to a centralized controller in the SDN architecture.

2.2.1

Architecture

An SDN architecture can be seen as a composition of different layers, each with its own specific functions. The following list explains the architectural components in Figure 2.2 [11] [12]:


Figure 2.2: SDN architecture layers [11].

Forwarding Plane - responsible for handling packets in the data path based on the instructions received from the control plane, and usually the termination point for control-plane services and applications. The forwarding plane is also widely referred to as the "data plane" or the "data path".

Operational Plane - relates to the network device's resources and state, e.g., whether the device is active or inactive, the number of ports available, the status of each port, the memory available, and so on. The operational plane is usually the termination point for management-plane services and applications.

Control Plane - responsible for making decisions on how packets should be forwarded by one or more network devices and for pushing such decisions down to the network devices for execution. As the control plane's main task is to populate the forwarding tables, it usually focuses mostly on the forwarding plane and less on the operational plane of the device. However, the control plane may be interested in operational-plane information, such as the current state of a particular port or its capabilities.

Management Plane - monitoring, configuring, and maintaining network devices, e.g., making decisions regarding the state of a network device, is the role of the management plane. In contrast to the control plane, it focuses mainly on the operational plane.

Application Plane - the plane where applications and services that define the network behavior reside. Applications that directly (or primarily) support the operation of the forwarding plane (such as routing processes within the control plane) are not considered part of the application plane.

SDN uses a centralized controller that has a global view of the network topology and direct control over the data plane elements. The SDN controller is responsible for feeding the switches in the data plane with information from its control plane [8]. It assumes the role of the "brain" of the network and can be a physical hardware device or run in a virtual machine, separately from the network equipment [9]. By centralizing network state in the control layer, the SDN controller gives network managers the flexibility to configure, manage, secure, and optimize network resources, and to promptly handle any change in the network topology or behavior [13].

To exchange information with the application and forwarding planes, the SDN controller uses two main interfaces: northbound and southbound, respectively (Figure 2.3).

Figure 2.3: SDN main interfaces [8].

The northbound interface connects the control plane to the application plane and is used to access the SDN controller itself. This allows the network administrator to access the SDN controller to configure it or to retrieve information from it. This can be done through a Graphical User Interface (GUI)2 or an Application Programming Interface (API)3, which allows other applications to access the SDN controller so they can: list information about all network devices in the network, show the topology of the entire network, show the status of a physical interface, configure IP addresses, and so on. RESTful4 APIs are widely used in northbound interfaces [8].
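The kind of northbound queries listed above can be illustrated with a small Python sketch, where REST-style paths are dispatched against an in-memory view of the network state. The endpoint paths and the topology data are invented for this example and do not correspond to any particular controller's API:

```python
# Hypothetical in-memory network state held by a controller.
TOPOLOGY = {
    "switches": ["s1", "s2"],
    "links": [("s1", "s2")],
    "ports": {"s1": {"eth0": "up", "eth1": "down"}, "s2": {"eth0": "up"}},
}


def northbound_get(path):
    """Dispatch a GET-style request path to the controller's network state."""
    if path == "/devices":
        return TOPOLOGY["switches"]
    if path == "/topology":
        return TOPOLOGY["links"]
    if path.startswith("/ports/"):
        switch = path.split("/")[-1]
        return TOPOLOGY["ports"].get(switch, {})
    return {"error": "not found"}


print(northbound_get("/devices"))   # ['s1', 's2']
print(northbound_get("/ports/s1"))  # {'eth0': 'up', 'eth1': 'down'}
```

A real northbound API would serve these paths over HTTP and return JSON, but the dispatch-on-path structure is the same idea.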

On the other hand, the southbound interface defines the connection between the SDN controller and the network switches that are part of the data plane. As it is a software interface, it often uses APIs to implement the controller-switch interactions. Such APIs are part of the main SDN standard protocols: NETCONF, OpenFlow and Forwarding and Control Element Separation (ForCES). The most popular is the OpenFlow protocol, which is described in the following section [8] [9].

2.2.2

OpenFlow

OpenFlow is an open standard protocol for SDN environments that defines how the traffic between the control plane and the data plane is handled. Originally designed by researchers at Stanford University and the University of California at Berkeley, the protocol has been maintained and standardized by the Open Networking Foundation (ONF) [9]. OpenFlow is being widely adopted by the networking community, since it is the only standardized SDN protocol that allows direct manipulation of the forwarding plane of network devices. The OpenFlow switch, the SDN controller and a secure communication channel are considered the fundamental elements of this protocol, and all must be OpenFlow-based. According to the ONF white paper [14], an OpenFlow-based SDN architecture can achieve the following benefits:

• Centralized control of multi-vendor environments: the controller software can control network devices from different vendors;

• Reduced complexity through automation of many management tasks that are performed manually today;

• Increased network reliability and security and more granular network control;

2 A GUI allows interaction between users and electronic devices through graphical icons and visual indicators.
3 An API is a particular set of rules (code) and specifications that software programs can follow to communicate with each other.
4 A RESTful API is a set of functions with which developers can perform requests and receive responses via the HTTP protocol.


• Better user experience, as the network can be adapted to dynamic user needs.

These benefits are possible because OpenFlow works on a per-flow basis, that is, it identifies network traffic based on pre-defined match rules that can be statically or dynamically programmed by the SDN control software [13].

2.2.3

OpenFlow Forwarding Devices

In a network, each OpenFlow switch connects to a controller through a secure channel, which is the interface linking the two devices. It is called a secure channel because the messages must be formatted according to the OpenFlow protocol and the channel is usually encrypted using Transport Layer Security (TLS), running over the Transmission Control Protocol (TCP).

Figure 2.4: Main components of an OpenFlow Switch [14].

As shown in Figure 2.4, an OpenFlow switch is composed of one or more flow tables and a group table, which perform lookups and packet forwarding. Each flow table contains a set of flow entries consisting of match fields, a stats field, and an action list with a set of instructions to apply to matching packets. When an OpenFlow switch receives a packet, its header fields are examined and compared to the related fields in the flow table entries. If an entry corresponds to the packet header, there is a match and the switch executes the instructions associated with that flow entry. If no match is found in a flow table then, depending on the configuration of the table-miss flow entry, the packet may be forwarded to the controller over the OpenFlow channel, dropped, or passed to the next flow table in a pipeline process. In each flow table, the flow entries are sorted from top to bottom according to their priority. There are several OpenFlow switches, one of the most popular being Open vSwitch (OvS)5 [14].
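The lookup behaviour just described can be sketched in a few lines of Python. This is a simplified model for illustration only: the field names and action strings are invented, and real switches match on many more header fields and support full pipelines:

```python
from dataclasses import dataclass


@dataclass
class FlowEntry:
    priority: int
    match: dict    # header fields this entry matches, e.g. {"ip_dst": "10.0.0.2"}
    actions: list  # instructions applied on a match, e.g. ["output:2"]
    packets: int = 0  # stats counter, updated on every match


def lookup(flow_table, packet):
    """Return the actions of the highest-priority matching entry.

    Entries are checked from highest to lowest priority; the table-miss
    behaviour modelled here is to send the packet to the controller
    (other configurations drop it or pass it to the next table).
    """
    for entry in sorted(flow_table, key=lambda e: e.priority, reverse=True):
        if all(packet.get(k) == v for k, v in entry.match.items()):
            entry.packets += 1
            return entry.actions
    return ["send-to-controller"]  # table-miss


table = [
    FlowEntry(priority=10, match={"ip_dst": "10.0.0.2"}, actions=["output:2"]),
    FlowEntry(priority=1,  match={},                     actions=["drop"]),
]

print(lookup(table, {"ip_dst": "10.0.0.2"}))  # ['output:2']
print(lookup(table, {"ip_dst": "10.0.0.9"}))  # ['drop'] (wildcard entry)
```

The empty-match entry acts as a wildcard, mirroring how a low-priority catch-all entry can implement a default policy; without it, unmatched packets would hit the table-miss path.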

Although SDN switch performance is dictated by packet forwarding speed and flow table capacity, the controller is clearly the most important element of the OpenFlow protocol. It supports the network applications and determines the rules to be stored and applied by the switches. These rules are set by adding, updating or deleting flow entries in the flow tables of the switches. Examples of the most popular controllers are OpenDayLight6, Floodlight7, Ryu8, POX9 and NOX10 [9] [14].

2.3

Virtualization Environment

The virtualization concept is very broad and, although traditionally attributed to operating systems, it can also be applied to applications, services, networks and more. Virtualization is a technology that enables the creation of a virtual instance, or Virtual Machine (VM), of a computer system, storage device or network device, running in a layer abstracted from the actual hardware. Usually, it refers to the possibility of running multiple Operating Systems (OSs) simultaneously on the same physical machine, commonly called the host, while the VMs are referred to as guests (Figure 2.5). Each VM can act independently and run different OSs or applications while sharing the resources of the host machine, since virtualization distributes the host's capabilities among users or environments. To the applications running inside a VM, it can appear as if they are on their own dedicated machine, whereas the OS, libraries, and other programs are unique to the virtualized system and unrelated to the host OS. This allows users to run applications meant for a different OS without having to switch computers or reboot the system into another OS [15] [16].

Virtualization is possible because of a software layer called the hypervisor, which separates the physical hardware from the virtual environments. Hypervisors take the physical resources, such as the Central Processing Unit (CPU), memory, Input/Output (I/O) interfaces and network traffic, and divide them so that the virtual environments can use them. When a virtual environment is running and a user or program issues an instruction that requires additional resources from the physical environment, the hypervisor relays the request to the physical system and caches the changes. Briefly, a hypervisor is a software layer that monitors and virtualizes the resources of a host machine according to the user requirements [15] [17].

5 http://www.openvswitch.org
6 https://www.opendaylight.org
7 http://www.projectfloodlight.org/floodlight/
8 https://osrg.github.io/ryu/
9 http://sdnhub.org/tutorials/pox/
10 https://github.com/noxrepo/nox

Figure 2.5: Linux OS running as a guest inside of a Windows host machine.

Its benefits are many: by creating multiple resources from a single computer or server, virtualization improves scalability, flexibility and control, while minimizing energy consumption, infrastructure costs and maintenance. It also allows for greater isolation by removing the dependency on a given hardware platform [16].

As indicated before, although the virtualization concept is traditionally assigned to OSs, it can also be applied to applications, services, networks and even more. Thus, there are several types that can be considered [17] [15]:

Data Virtualization: allows companies to deal with data dynamically, providing processing capabilities that combine data from multiple sources and transform it according to user needs. Data virtualization allows the data to be treated as a single source, delivering it in the format required by the user or application;

Desktop Virtualization: allows one centralized server to deploy simulated desktop environments to hundreds of physical machines at once. In contrast to traditional desktop environments, which are physically installed, configured, and updated on each machine, desktop virtualization allows admins to perform mass configurations, updates, and security checks on all virtual desktops;

Operating System Virtualization: OS virtualization makes it possible to deploy multiple operating systems on a single machine and run them side by side. It massively reduces hardware costs, since computers don't require such high out-of-the-box capabilities;

Storage Virtualization: gives the user the ability to aggregate the hardware storage space of several interconnected storage devices into a simulated single storage device that multiple users may access;

Application Virtualization: here, an application can run in an encapsulated form without depending on the operating system below. In addition, an application created for one OS can run on a completely different operating system;

Network Functions Virtualization (NFV): is designed to split and distribute the network's key functions, such as firewall, Domain Name System (DNS) or Dynamic Host Configuration Protocol (DHCP), into different and independent channels that are then assigned to a particular server or device. The idea of NFV is to mask the real complexity of the network by separating it into parts that are easy to manage. More details about this type of virtualization are discussed in Section 2.3.2.
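The idea of splitting key functions into independent, chainable parts can be sketched as follows, with each network function modelled as a plain Python callable over a packet represented as a dict. The function behaviours and addresses are invented for illustration and are far simpler than real firewall or NAT implementations:

```python
# Hypothetical sketch: each network function takes a packet (a dict of
# header fields) and returns it transformed, or None if it drops the packet.

def firewall(packet, blocked=frozenset({"10.0.0.66"})):
    """Drop traffic coming from blacklisted source addresses."""
    return None if packet["src"] in blocked else packet


def nat(packet, public_ip="203.0.113.1"):
    """Rewrite the private source address to the operator-facing one."""
    return dict(packet, src=public_ip)


def run_chain(packet, chain):
    """Pass the packet through the chained functions in order,
    stopping early if one of them drops it."""
    for vnf in chain:
        packet = vnf(packet)
        if packet is None:
            return None
    return packet


out = run_chain({"src": "192.168.1.10", "dst": "8.8.8.8"}, [firewall, nat])
print(out)  # {'src': '203.0.113.1', 'dst': '8.8.8.8'}
```

Because each function is independent, the chain can be reordered or individual functions replaced and scaled on different servers, which is precisely the management flexibility NFV is after.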

2.3.1

Containerization

In the 1970s, the idea of a container was introduced on Unix systems with the purpose of creating an isolated environment where services and applications could run without interfering with other processes. The term containerization derives from shipping containers, which are a method to store and ship any kind of cargo. Over the years, this virtualization concept has become more popular and, since around 2012, containers have gained great visibility, as they are currently transforming the way the IT industry does business [18] [19] [20].

Like VMs, containers simulate an operating system (OS) by providing the software with all the necessary components and configuration it needs to run, but in an easier and more lightweight way. A container is thus a unit of software that packages up the code and all its dependencies so that applications or microservices run uniformly regardless of the host environment. This ensures real software portability. While each VM has a full guest OS image besides the binaries and libraries needed by the applications, in a container each application runs as an isolated process in user space on the host OS, sharing the same kernel11, rather than being executed by a hypervisor. This leads containers to use less memory and have a faster start-up time. Therefore, containers are a much more portable and lightweight virtualization concept than VMs [21].

To build, orchestrate and deploy containers, several tools have emerged, Docker being the one of most interest for this work.

Docker, an open-source technology launched in 2013, is one of the most popular container solutions nowadays. A container is launched from a Docker image, an executable package with all the necessary settings, libraries, code and everything else needed to run the desired application. A Docker image becomes a container when it is built and run on the Docker Engine12.

Docker is designed to improve the productivity of developers, since running a container is fast and easy due to its small size. Developers can thus focus on developing their applications without concerns about the system they will be running on. Docker containers are extremely portable and maintain the same performance regardless of the host environment. When a container is launched, it keeps running in an isolated way on top of the OS's kernel. Other main advantages of Docker containers are scalability, rapid delivery and smart use of resources [22].
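To make this workflow concrete, the sketch below assembles a typical `docker run` command line as the Docker CLI expects it; the image name, container name and port mapping are purely illustrative and not part of this dissertation's testbed.

```python
def docker_run_cmd(image, name, detach=True, ports=None):
    """Build a `docker run` command line as an argument list.

    `image`, `name` and `ports` are caller-supplied illustrative values;
    nothing here is specific to the scenario described in this work.
    """
    cmd = ["docker", "run"]
    if detach:
        cmd.append("-d")  # run the container in the background
    cmd += ["--name", name]
    for host_port, container_port in (ports or []):
        cmd += ["-p", "%d:%d" % (host_port, container_port)]
    cmd.append(image)
    return cmd

# Example: a hypothetical web service exposed on host port 8080
print(" ".join(docker_run_cmd("alpine:3.9", "demo", ports=[(8080, 80)])))
# → docker run -d --name demo -p 8080:80 alpine:3.9
```

The same one-line invocation is what makes containers attractive for rapid delivery: the whole runtime environment is resolved from the image, not from the host.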

2.3.1.1 Comparing Virtualization

In this topic an overview of virtual machines and containers is presented, with some aspects of both exposed in Table 2.1 and their architectures in Figure 2.6.

11The kernel is the central module of a computer's operating system, with complete control over everything in the system.


Table 2.1: Some aspects of VMs and containers [23] [24].

Virtual Machines:
• Heavyweight solution with several GigaBytes;
• Fully isolated with its own services and hence more secure;
• Start up in minutes;
• Run on their own OS;
• Can run an OS that is different from the host machine's;
• Require more memory and hardware resources.

Containers:
• Lightweight solution with tens of MegaBytes;
• Sharing the same kernel resources implies possibly less security;
• Start up in milliseconds;
• Run in any environment regardless of the infrastructure;
• Need to use the same OS as the host;
• Take up less memory space since they do not need to maintain an OS.

Looking at the previous table, it makes sense to affirm that there isn't a single best solution for a virtualized environment. While VMs are better suited for certain purposes, containers are better for others. Depending on the user's intentions, containers may be a better fit than VMs and vice-versa. For example, VMs are a better idea for embedded systems since they run on their own OS, and containers are better suited to run web applications since they can run in any environment regardless of the infrastructure.

2.3.2 Network Function Virtualization

The concept of NFV originated from the telecommunication service providers' requirements to accelerate the deployment of new network services worldwide. Telecommunications networks are becoming overpopulated with a large and increasing variety of hardware devices, leading to an overload situation in the network. To overcome this, the NFV technology aims to transform the way network operators architect networks, evolving standard IT virtualization technology to consolidate many network equipment types onto industry-standard high volume servers.

Network operators believe that NFV is one of the key technology enablers for 5G. Thus, in November 2012, several industry and network organizations formed the Industry Specification Group for Network Functions Virtualization (ISG NFV), as part of the


Figure 2.6: Virtual Machines Vs Containers

European Telecommunications Standards Institute (ETSI)13. This is the lead group responsible for the development of requirements, architecture, standardization and other related issues for the virtualization of various functions within telecommunication networks.

The main goal of NFV is to implement and manage Virtual Network Functions (VNF), abstracting them from the hardware network devices. A VNF is the software instance in NFV technology, running the different processes of a network function in a virtualized environment such as a VM. By decoupling network functions from the physical network devices, they can easily be created, moved or migrated from one equipment to another in various locations in the network as required, without the need for installing new equipment. This way, NFV leads to significant reductions in Operational Expenses (OPEX) and Capital Expenses (CAPEX) through reduced equipment costs and reduced power consumption, and facilitates the deployment of new services with increased agility and speed [25].

Despite the benefits mentioned above, there are some technical challenges that ISG NFV has to take into account [26] [27]:

• ensure that virtualized network platforms will be simpler to operate than what exists today;

• maintain network stability without degradation during application load and relocation;

• need for a consistent management and orchestration architecture to leverage the flexibility of VNFs in a virtualization environment;

• accomplish high-performance VNFs which are portable among different hardware vendors and different virtualization support systems;

• migrate smoothly from the current network infrastructure to NFV-based solutions.

There are several use cases of NFV, including the virtualization of cellular base stations, the mobile core network and the home network. This dissertation will focus only on the virtualization of the home network, addressed in Section 2.5.

2.3.2.1 NFV Architecture

The NFV architecture (shown in Figure 2.7) has been structured by ETSI ISG NFV and is composed of three fundamental elements: the NFV Infrastructure (NFVI), the Virtualized Network Functions (VNFs) and Services, and the NFV Management and Orchestration (MANO), detailed next [25] [26] [27] [28] [29].

Figure 2.7: NFV Architecture [30]


a) NFV Infrastructure

It is the combination of both hardware and software resources, and the virtualization layer, on top of which VNFs are executed. Each NFVI block is implemented as a distributed set of NFVI nodes that can be deployed and controlled geographically to provide service high-availability and to support the latency objectives of the different use cases. As an important element in the NFVI domain, the virtualization layer abstracts and decouples the VNF software from the underlying hardware, thus ensuring a hardware independent lifecycle for the VNFs.

b) Virtualized Network Functions and Services

As mentioned before, a VNF is a software implementation of network functions (like firewall, DHCP, DNS, etc.) that is deployed in a virtual environment such as a VM. Each VNF can run on one single VM or be deployed over multiple VMs.

A service is what the Internet Service Provider (ISP) offers to the customer and is normally composed of one or more VNFs. From the users' perspective, the services should have the same or better performance running in VMs instead of on the physical equipment. Thus, the number, type, and ordering of the VNFs that make up a service are determined by its functional and behavioural specification.

c) Management and Orchestration (MANO)

As the name suggests, MANO is the block responsible for managing and orchestrating all the necessary physical and/or software resources in the NFV framework, thus mediating the relationship between the NFVI and the VNFs and services. MANO oversees the lifecycle of VNF instances, configuring them as well as the infrastructure these functions run on.

On one hand, VNFs can be composed together to reduce management complexity; on the other hand, they can be decomposed into smaller functional blocks for reusability and faster response time. This block is also responsible for defining which interfaces can be used for communication between the different components of NFV.


2.4 Cloud Computing

Cloud computing is an emerging online technology through which data and computing resources are stored in servers and typically provided as a subscription-based service to clients. It is an on-demand and automated service, that is, a customer should be able to request a service and receive it right away without human interaction [31]. Cloud computing provides a shared pool of computing resources that can be provisioned and released as and when users need them. In this computing model, personal or organizational users can access all of their documents and data from any device connected to the Internet [32]. The need for the user to be in the same physical location as the hardware that stores their data is thus removed.

Usually, literature works define cloud computing with the following characteristics [31] [32] [33]:

on-demand self service: a customer should be able to get and terminate a service automatically without requiring manual intervention;

broad network access: the service should be available using a variety of different devices (computers, tablets or smartphones), with an Internet connection being the only element necessary for full access to the subscribed services;

resource pooling: the cloud provider should dynamically assign the available resources to multiple consumers using technologies such as virtualization;

rapid elasticity: resources should be rapidly and elastically provisioned, released without manual intervention when no longer needed, and available for purchase in any quantity at any time;

measured service: the cloud provider must be capable of measuring and monitoring the resources used by a customer in order to implement a "pay-per-use" subscription model.

With the appearance of this technology, the cost of computation, application hosting, and content storage and delivery is significantly reduced [34]. However, there are some barriers to adopting cloud-based services, including issues of confidentiality, privacy, security and regulation [32].

Figure 2.8 outlines the different deployment and service models that compose cloud computing [35] [34] [36] [37]:


Figure 2.8: Cloud computing models

Deployment Models:

− Public Cloud is characterized by being owned and operated by a cloud service provider. The term "public" does not mean that a user's data is publicly visible; it refers to all customers sharing the same infrastructure pool, with limited configuration, security protections and availability variances. One of the advantages of a public cloud is that each individual client can subscribe to a cloud service under a "pay-per-use" model. Any subscriber can access its cloud space with an Internet connection;

− Private Cloud is used and controlled exclusively by the organization that owns its infrastructure. Usually, a private cloud is established for a specific group or organization and limits access to just that group. Compared to a public cloud, in a private cloud service, data and processes are managed without restrictions of network bandwidth, security exposures and legal requirements. In contrast, this model normally carries the high cost of creating and maintaining the cloud;

− Hybrid Cloud is the combination of both private and public cloud models. One can be assigned to critical processes and the other dedicated to secondary processes. This solution is not easy to achieve, especially because it is difficult to make these two cloud models cohabit and interconnect;

− Community Cloud is a model used by a set of organizations that normally have common interests. These interests can be specific security requirements, aspects of flexibility of use, or a common mission. All the members of the community share access to the data and applications in the cloud.

Service Models:

− Infrastructure as a Service (IaaS) deals with computational infrastructure, providing basic storage and computing capabilities. IaaS offers network infrastructure as a service, like VMs, routers, switches and storage. For example, when a customer creates a VM in the cloud, IaaS provides the capability of choosing which OS to use, the number of CPU cores and the amount of memory needed. Examples: Amazon Web Services (AWS), Cisco Metapod, Microsoft Azure, Google Compute Engine and OpenStack;

− Platform as a Service (PaaS) provides a development environment as a service, giving subscribers access to all the components they require to develop and run applications over the Internet. The user can control the applications running in the environment, but not the OS and network infrastructure on which they run. Examples: Heroku, Force.com, Google App Engine, Apache Stratos and OpenShift;

− Software as a Service (SaaS) represents a set of applications, accessible through a browser, that run in a cloud environment. With SaaS, the user pays for the application and just has to run it, while the cloud provider takes care of the installation and maintenance of the virtual servers. SaaS eliminates the need to have a physical copy of the software installed on users' devices by providing them access to the applications in the cloud. Examples: Google Apps, Dropbox, Salesforce, Cisco WebEx, Concur and GoToMeeting.

2.5 Virtualization of Home Network

Typically, the ISP offers home services through dedicated CPE devices, which include Residential Gateways (RG) for Internet access and Set-Top Boxes (STB) for multimedia services. With the current architecture (above the dashed line in Figure 2.9), the delivery of IPTV services is known to be complicated due to interactive stream control functions like rewind and fast-forward. Another complication is the gateway hardware becoming obsolete with service innovation, or when home users desire to change services but not the equipment. These complications are attenuated by the NFV technology, which facilitates the virtualization of the home network, more specifically of the network functions currently installed in the physical CPEs, such as firewall, DHCP server and NAT. These functions can be moved to and implemented in a virtual environment, creating a virtual counterpart of the physical CPE.

Through this virtualization, network and service operators only need to provide low-cost CPE devices with low maintenance requirements to customers, making them adaptable to new technologies and protocols. This solution (below the dashed line in Figure 2.9) presents numerous other advantages to network operators and end users. First, it introduces new services more smoothly by minimizing the dependency on the CPE functions. Second, it improves the Quality of Experience (QoE) by offering almost unlimited storage capacity and enabling access to all services from different locations and multiple devices like smartphones or tablets. Third, it enables the user to change its service without altering the hardware. Finally, it makes it possible to share some functionalities of RGs and STBs among customers [27] [38].

Figure 2.9: Virtualization of Home Network [27]

2.6 Background Work

With the technologies referred in the previous sections in mind, the academic and research community has been working on solutions capable of virtualizing home network equipment (i.e. CPE). In [39] a virtual CPE (vCPE) solution was developed, integrating both physical and virtual segments in a cloud environment with a chain of VNFs. The authors of [40] presented an evaluation of the practicality and performance of a vCPE prototype system based on open-source software. In [41], an automated cloud platform for NFV deployment based on SDN-NFV technologies is proposed; a VNF store exposed by the network operator is presented, where enterprise customers have a management system for the network functions associated with their own CPE. The performance impact and perception of a residential SDN using cloud-based control was measured in [42] with the help of 270 recruited residential users located in the United States. By measuring connection latency, the authors characterized the residential network connections impacted by a cloud-hosted OpenFlow controller. The outcome of this paper was that residential SDN and middleboxes are feasible for roughly 90% of US users.

Focusing on resource sharing and VNF migration, [43] compares several load balancing strategies in an implemented test-bed that requests online video content. However, there is too little information about the simulation procedures and their results. In [44] an algorithm was designed to achieve minimal cost by choosing the optimal place between the edge network and the cloud to deploy VNF instances. Here, the authors concentrated only on CPU and memory impact, not having a migration feature in their architecture. Finally, [45] presents a set of live migration models for VMs, handling various approaches to load balancing among data-centers with minimum downtime impact.

None of the papers reviewed above demonstrates a real use case, and this is where this dissertation differs. As explained and illustrated in the next chapters, the developed framework considers the dynamic instantiation and migration of the vCPE, taking into account the traffic behaviour during these processes.

2.7 Chapter Considerations

This chapter presented a brief overview of the most relevant topics approached in this dissertation. Some of the technologies presented here will play an important role in the coming generation of telecommunications networks (5G).

With the control plane centralized, SDN gives the network administrator the possibility of applying any change to the network topology instantly. NFV focuses on optimizing network services by decoupling the network functions, so it is fair to say that SDN and NFV work hand-in-hand. Cloud computing provides a new method of on-demand computing, since users do not need to own and maintain their own computing infrastructure, obtaining the computing resources for their computing tasks through the network instead. Virtualization technology introduces better resource management and removes usability dependencies on hardware by abstracting the resources needed to run software. Through virtualization, users' applications can run securely and in an isolated space, in either virtual machines or containers.


CHAPTER 3

Scenario Description and Implementation

This chapter aims to provide an overview of the proposed scenario as well as the details of all its elements and implementation. Two use cases of vCPE deployment within such scenario are also described.

3.1 Problem Statement

With the growth of the IoT extending its reach into households and enabling easy home automation, more and more devices request home router equipment to host their connections. Managing so many devices thus becomes complicated, as each device has different network requirements and typical routers do not take this into account. Putting this together with the issues referred in Section 2.5, network operators intend to manage the network differently by using SDN and NFV technologies.

This dissertation intends to bring a new vision to CPE virtualization by deploying a dynamic architecture where the virtual CPE instances are orchestrated by an SDN controller application. Two use cases are approached: one that dynamically instantiates CPEs in a cloud environment with associated containerized VNFs (Section 3.4), and another with the objective of delivering live video streams in a flexible and agile way using the same deployment strategy as the first case (Section 3.5).


CHAPTER 3. SCENARIO DESCRIPTION AND IMPLEMENTATION

3.2 Framework Overview

From the perspective of a network operator, such as an ISP, a virtualized platform is required to host network functions in order to achieve a new network dimension in terms of configurability, scalability and elasticity. Therefore, a framework is proposed where the vCPE is deployed in an infrastructure separate from the operator's, avoiding the need for network operators to own their own infrastructure. In this context, a PaaS is requested by the operator for deploying vCPEs under some pre-established Service Level Agreements (SLA)1.

Figure 3.1 illustrates the different layers of the proposed architecture for the vCPE deployment. The bottom layer is the resources layer, related to an IaaS implemented in an inner data-center running OpenStack Queens2 as the cloud infrastructure (i.e., Cloud-VIM), allowing the sharing of the data-center's hardware resources. In the middle layer, the VNFs are deployed in VMs that are orchestrated by the MANO; for this entity, Open Source MANO (OSM) release 53 was used. Finally, the top layer is composed of a Container Orchestrator Engine (COE) that orchestrates and manages a cluster of VMs for deploying Containerized VNFs (CVNF). The combination of the three layers results in a PaaS environment, with Docker Swarm4 used as the COE for cluster management.


Figure 3.1: Framework of the proposed architecture
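As a rough illustration of how such a Docker Swarm cluster is bootstrapped, the sketch below builds the standard manager/worker command sequence; the IP address and join token are hypothetical placeholders, not values from the actual deployment.

```python
def swarm_bootstrap_cmds(manager_ip, worker_token):
    """Build the Docker Swarm bootstrap commands for a cluster of VMs.

    `manager_ip` and `worker_token` are illustrative placeholders; in a
    real swarm the token is generated by `docker swarm init` itself.
    """
    init_cmd = "docker swarm init --advertise-addr %s" % manager_ip
    # each worker VM joins through the manager's swarm port (2377)
    join_cmd = "docker swarm join --token %s %s:2377" % (worker_token, manager_ip)
    return init_cmd, join_cmd

init_cmd, join_cmd = swarm_bootstrap_cmds("10.0.0.1", "SWMTKN-1-example")
print(init_cmd)  # run once on the master VM
print(join_cmd)  # run on each slave VM that enters the cluster
```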

1SLA is an agreement between the service provider and the client on some particular aspects of the service.

2https://www.openstack.org/software/queens/

3https://osm.etsi.org/wikipub/index.php/OSM_Release_FIVE
4https://docs.docker.com/engine/swarm/


With these entities instantiated, the vCPE can be created, hosting multiple chained CVNFs instantiated in a virtual environment. The framework also has the ability to scale the size of the cluster up or down in order to migrate vCPEs among VMs. This feature balances the workload of each VM, keeping the vCPEs' performance in terms of throughput and delay. In the following sections more details are given and all the implementation procedures explained.
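The scale-up/scale-down behaviour just described can be sketched as a simple threshold policy over per-node load metrics. The thresholds, node limits and metric below are illustrative assumptions, not the policy actually implemented in the framework.

```python
def scale_decision(node_loads, high=0.8, low=0.2, max_nodes=5, min_nodes=2):
    """Return 'up', 'down' or 'hold' for a cluster, given per-node CPU
    loads in [0, 1]. All thresholds are hypothetical, for illustration."""
    if not node_loads:
        return "up"
    avg = sum(node_loads) / len(node_loads)
    if avg > high and len(node_loads) < max_nodes:
        return "up"    # all nodes busy: add a slave VM to the swarm
    if avg < low and len(node_loads) > min_nodes:
        return "down"  # cluster mostly idle: remove a slave VM
    return "hold"

print(scale_decision([0.9, 0.85]))       # → up
print(scale_decision([0.05, 0.1, 0.1]))  # → down
```

A real implementation would feed this function with the metrics collected from the cluster, and migrate the vCPEs away from a node before removing it.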

3.3 Scenario Description

Figure 3.2 illustrates a general schematic of how a vCPE is dynamically instantiated in this scenario. In this proposal, a vCPE is a clone of a physical CPE (referred to from now on as pCPE) with regard to its network functions and features, and is defined as a chain of multiple CVNFs instantiated in a virtual environment. Since the network operator does not have control over the virtualized platform, the vCPEs are deployed over a PaaS offered by an infrastructure manager, who hosts the network operator when the instantiation of a network controller and a cluster of nodes (i.e., VMs) takes place (event 0 in Fig. 3.2). The VMs belonging to the cluster are where the CVNFs can be deployed by the infrastructure manager. A vCPE is instantiated when a pCPE powers up and connects to the network (event 1 in Fig. 3.2). This connection of a pCPE triggers the SDN controller to request from the service orchestrator the creation of a new vCPE (event 2 in Fig. 3.2). With this request in hand, the service orchestrator asks the VNF manager to instantiate a chain of CVNFs in one node of the cluster (event 3 in Fig. 3.2). When this deployment is completed, a REST notification message is sent to the SDN controller (event 4 in Fig. 3.2), which then creates the datapath between the pCPE and vCPE via a Layer 3 GRE tunnel (events 5 and 6 in Fig. 3.2). At this point, the pCPE is able to redirect its traffic to its correlative vCPE.

In the following section, more details about all the network elements involved will be given.

3.3.1 Network Elements

3.3.1.1 OSM

This block, running in a data-center (at Instituto de Telecomunicações in Aveiro) VM with 4 vCPUs and 8 GB of RAM, is responsible for the instantiation of all the VNFs in the IaaS environment. It has also the duty to configure and monitor all the VNFs deployed

Figure 3.2: Motivational scenario

and it is formed by the following inner blocks:

1. Service Orchestrator: uses the VNF manager to configure and monitor the VNFs and instantiate them through the Resource Orchestrator;

2. Resource Orchestrator: requests VNFs’ instantiation to the VIM and notifies the SDN controller when this procedure is concluded;

3. VNF Manager: configures the VNFs using network descriptors with the necessary parameters.

3.3.1.2 Cloud-VIM

As said before, the Cloud-VIM element provides an IaaS in an inner data-center, responsible for allocating the necessary VMs to build the whole scenario, corresponding to the OSM and the vCPE cluster. OpenStack Queens was used, from which the storage and resources of the whole framework can be controlled and managed.


3.3.1.3 SDN Controller

The SDN controller is the brain of the entire framework, since it is this element that configures the whole datapath between the pCPE and the vCPE. To ensure this pCPE-vCPE communication, the controller uses the Open vSwitch Database (OvSDB) protocol to create OvS bridges and add the necessary ports, and the OpenFlow protocol to add flow rules to them. Both physical and virtual CPEs use OvS5 software v2.9, acting as OpenFlow forwarding devices. An SDN controller application was developed based on the Ryu controller, using Python v2.76 as the programming language. In the Cloud-VIM there is a VM hosting the SDN code, running Ubuntu server 18.04 LTS with 2 vCPUs and 4 GB of RAM.
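While the controller performs these actions directly over OvSDB, the equivalent ovs-vsctl commands give a feel for the operations involved. The sketch below builds them for one side of the tunnel; the bridge name, port name and remote IP are hypothetical placeholders, not the testbed's actual values.

```python
def gre_setup_cmds(bridge, port, remote_ip):
    """Build the ovs-vsctl commands equivalent to the controller's OvSDB
    actions: create a bridge and attach a GRE tunnel port to it.

    All names and the remote IP are illustrative placeholders.
    """
    return [
        "ovs-vsctl add-br %s" % bridge,
        # the GRE interface encapsulates traffic towards the tunnel end-point
        "ovs-vsctl add-port %s %s -- set interface %s type=gre "
        "options:remote_ip=%s" % (bridge, port, port, remote_ip),
    ]

# pCPE side of the tunnel, pointing at the vCPE's node (addresses made up)
for cmd in gre_setup_cmds("br-pcpe", "gre0", "10.0.0.20"):
    print(cmd)
```

The mirror commands would be issued on the vCPE's node with the pCPE's IP as `remote_ip`, completing the Layer 3 datapath between the pair.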

3.3.1.4 vCPE Cluster

The vCPE cluster is formed by a set of nodes (i.e., VMs) instantiated in the OpenStack environment. Each of these VMs runs Ubuntu server 18.04 LTS with 1 vCPU and 1 GB of RAM. The instantiation of a node in the cluster is made through Docker Swarm, which has the ability to scale the number of nodes up or down and to collect some monitoring metrics with the Prometheus7 tool. Each node belonging to the cluster can host the multiple CVNFs that are requested by the SDN controller.

3.3.1.5 Physical CPE

As its name suggests, the pCPE is a physical node with wireless capabilities, implemented on an APU2C4 board running Ubuntu server 14.04 LTS with 4 GB of RAM. In this node, all the network configurations in the kernel were disabled, with the purpose of performing them in the cloud, namely in the corresponding vCPE. To enable this node to act as a wireless access point, there is a Docker container running hostapd8 that creates an IEEE 802.11n Wi-Fi network at 5 GHz. This Docker container also runs the isc-dhcp-relay software, which offloads the responsibility of handling the incoming DHCP Requests by forwarding them to the DHCP server placed on the vCPE.

5http://www.openvswitch.org

6https://www.python.org/download/releases/2.7/
7https://prometheus.io


3.3.1.6 Virtual CPE

As said before, a vCPE can be characterized as a chain of containerized VNFs (i.e., CVNFs) deployed over a PaaS. As shown in Figure 3.3, a vCPE is defined as two Docker containers, both based on Alpine Linux, linked to an OvS bridge. One of the CVNFs runs isc-dhcp-server9 to offer IP addresses, along with DNS information, in response to the DHCP Requests coming from the pCPE. The other one not only applies some firewall rules through the iptables software, but also masquerades the IP addresses of the packets coming from the private Wi-Fi network created at the pCPE.

One important feature of this element is that for each pCPE there is one and only one vCPE, which reproduces the network functions of its physical counterpart. This assures the total independence of each vCPE, assuming a unique relationship between each pCPE-vCPE pair.

Figure 3.3: vCPE architecture
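As a sketch of what runs inside the firewall/NAT CVNF, the snippet below assembles typical iptables rules for the masquerading and filtering just described. The subnet, interface name and the sample filter rule are hypothetical examples, not the rules actually deployed.

```python
def nat_rules(wifi_subnet, wan_iface):
    """Build illustrative iptables rules for the firewall/NAT CVNF.

    `wifi_subnet` and `wan_iface` are placeholders; the filter rule is
    just an example of the kind of policy such a container could apply.
    """
    return [
        # masquerade traffic coming from the pCPE's private Wi-Fi network
        "iptables -t nat -A POSTROUTING -s %s -o %s -j MASQUERADE"
        % (wifi_subnet, wan_iface),
        # an example firewall rule: drop forwarded telnet traffic
        "iptables -A FORWARD -p tcp --dport 23 -j DROP",
    ]

for rule in nat_rules("192.168.10.0/24", "eth1"):
    print(rule)
```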

An important consideration about these network elements is that not all of them were developed from scratch. The elements fully developed by the author of this dissertation were the SDN controller application, the pCPE and the vCPE, as well as its CVNFs. All the other structures of the framework were already available, having been developed by third parties and integrated in this work.

3.4 Dynamic Instantiation Use Case

The procedures for the dynamic instantiation of the scenario illustrated in Figure 3.2 are described in this section, having the high-level signalling of Figure 3.5 as reference. After the scenario is built up, communication is expected to occur between any network device that might connect to the Wi-Fi interface of the pCPE and an end-node VM created for testing purposes (Figure 3.4). The end-node is instantiated in OpenStack running Ubuntu server 18.04 LTS with 8 vCPUs and 16 GB of RAM, and is unrelated to the other VMs belonging to the vCPE cluster instantiated in the same OpenStack environment. Figure 3.4 also shows an OvS "output bridge" linked to all the vCPEs present in the cluster node. This "output bridge" is created by the SDN controller as soon as a node is created in the cluster, and assures the communication of all the packets coming from any vCPE towards the end-node, their destination. The end-node and the "output bridge" communicate through a private subnet created on OpenStack. In this use case, a multitude of vCPEs can be created in one single cluster VM as long as there are available resources on it.

9https://www.isc.org/dhcp/

It is assumed in Figure 3.5 that the cluster was already instantiated by OSM and is ready for vCPE deployment, with an "output bridge" created on each cluster node. At the beginning, the cluster is composed of a master VM, with a developed network agent application, and one slave VM. The vCPE cluster can have a maximum of five nodes: one master and four slaves.


Figure 3.4: vCPE linkage to end-node

3.4.1 vCPE instantiation

When a pCPE is powered up and gets a network connection, a message is sent to the controller via the OvSDB protocol (message #1 in Fig. 3.5). As the controller realizes, through its system_id, that the message came from a pCPE, it requests OSM to instantiate a new stack of CVNFs via the REST protocol (message #2 in Fig. 3.5). In its turn, OSM verifies the load of the cluster's nodes (message #3 in Fig. 3.5) and instantiates the required CVNFs (i.e., Docker containers) in the node with the most available resources at that moment.
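The "node with the most available resources" choice can be sketched as a simple scoring function over per-node free-resource metrics. The metric names and the equal weighting below are illustrative assumptions, not OSM's actual placement policy.

```python
def pick_node(nodes):
    """Pick the cluster node with the most available resources.

    `nodes` maps a node name to its free-resource metrics; the scoring
    (equal weight to free CPU and free memory fractions) is a purely
    illustrative assumption.
    """
    def free_score(metrics):
        return metrics["free_cpu"] + metrics["free_mem"]
    return max(nodes, key=lambda name: free_score(nodes[name]))

cluster = {
    "slave1": {"free_cpu": 0.2, "free_mem": 0.3},
    "slave2": {"free_cpu": 0.7, "free_mem": 0.6},
}
print(pick_node(cluster))  # → slave2
```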


Figure 3.5: High-level signalling for vCPE instantiation and migration. In order, the messages exchanged are:

CPE power up: (1) new connection (OVSDB); (2) stack instantiation request (REST); (3a/b) verifies the load of the cluster's nodes (SSH); (4) stack instantiation request (SSH); (5) stack instantiation requested (REST); (6) stack instantiated (REST); (7a/b) instantiate new OvS bridge (OVSDB); (8) attach vCPE to output bridge (OVSDB); (9a/b) create tunnel ports (OVSDB); (10) attached containers to OvS bridge (REST); (11a/b) update flow tables (OF).

CPE migration: (12) high load node alert (REST); (13) stack instantiation request (REST); (14a/b) verifies the load of the cluster's nodes (SSH); (15) stack instantiation request (SSH); (16) stack instantiation requested (REST); (17) stack instantiated (REST); (18a/b) instantiate new OvS bridge (OVSDB); (19) attach vCPE to output bridge (OVSDB); (20) create tunnel ports (OVSDB); (21) attached containers to OvS bridge (REST); (22) update tunnel end-points (OVSDB); (23a/b) update flow tables (OF); (24) delete vCPE's bridge from old node (OVSDB); (25) delete old ports from old output bridge (OVSDB); (26a) stack remove request (REST); (26b) stack remove (SSH); (26c) stack remove requested (REST); (26d) stack remove completed (REST).


As soon as the containers are up and running, a REST message is sent to the controller (message #6 in Fig. 3.5) with the name, the network interfaces, and the IP and MAC addresses of the VM that hosts the vCPE. Then, via OvSDB, the controller instantiates new OvS bridges (message #7 in Fig. 3.5) both in the pCPE and in the VM referred to in the previous message, as part of the vCPE architecture shown in Figure 3.3. Next, the controller links the vCPE's OvS bridge to the output bridge by adding patch ports between them (message #8 in Fig. 3.5). Afterwards, in message #9 in Fig. 3.5, tunnel ports are created in the pCPE's and vCPE's OvS bridges, with the proper end-point IPs defined. At this point, the only step missing for a fully assembled vCPE is the attachment of the Docker containers created in message #2 in Fig. 3.5 to the vCPE's OvS bridge. Via REST, the controller executes the ovs-docker command in the vCPE's node, adding to the vCPE's OvS bridge a port connected to each Docker container. Finally, the last step of the vCPE instantiation is to update the flow tables of all OvS bridges, setting the rules that incoming packets must follow for end-to-end communication.
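The bridge, patch-port, tunnel and container-attachment steps above (messages #7-#10) can be sketched as the sequence of Open vSwitch commands the controller would issue. The helper below only builds the command strings; bridge names, port names and the choice of VXLAN tunnels are illustrative assumptions, and the actual controller issues the equivalent operations via OvSDB/REST rather than a shell.

```python
# Sketch: build the OvS command sequence for one vCPE (messages #7-#10 in
# Fig. 3.5). Names (vcpe1-br, out-br, eth1, vxlan0) are placeholders.
def vcpe_setup_commands(bridge, output_bridge, tunnel_remote_ip, containers):
    patch_a, patch_b = f"{bridge}-to-out", f"out-to-{bridge}"
    cmds = [
        # message #7: new OvS bridge on the vCPE's node
        f"ovs-vsctl add-br {bridge}",
        # message #8: patch ports linking the vCPE bridge to the output bridge
        f"ovs-vsctl add-port {bridge} {patch_a} -- set interface {patch_a} "
        f"type=patch options:peer={patch_b}",
        f"ovs-vsctl add-port {output_bridge} {patch_b} -- set interface {patch_b} "
        f"type=patch options:peer={patch_a}",
        # message #9: tunnel port towards the pCPE end-point
        f"ovs-vsctl add-port {bridge} vxlan0 -- set interface vxlan0 "
        f"type=vxlan options:remote_ip={tunnel_remote_ip}",
    ]
    # message #10: attach each CVNF container to the vCPE bridge
    cmds += [f"ovs-docker add-port {bridge} eth1 {c}" for c in containers]
    return cmds

for cmd in vcpe_setup_commands("vcpe1-br", "out-br", "10.0.0.2", ["fw", "dhcp"]):
    print(cmd)
```

A symmetric tunnel port would be created on the pCPE's bridge, and the flow-table rules (message #11) would then be pushed via OpenFlow.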

Once all these SDN mechanisms are completed, the vCPE is able to carry all the traffic between the pCPE and the end-node, passing through the CVNFs belonging to the vCPE.

3.4.2 vCPE migration

A vCPE migration is the process of moving a vCPE already established in one node to another node of the cluster without affecting the current traffic. The migration is triggered when the CPU workload of the respective node crosses a pre-established threshold. In this case, when the VM reaches 85% workload, the network agent application running in the cluster's node sends a high-load alert message to the SDN controller (message #12 in Fig. 3.5). In turn, the controller verifies which vCPEs are present in the overloaded node and, depending on the SLA defined at its creation, migrates the one with the highest priority. If the workload remains above the threshold for 5 minutes, the network agent alerts OSM, which in turn instantiates a new VM and adds it to the cluster. Likewise, if the workload decreases below 30% and remains there for 5 minutes, OSM is alerted and removes the node with the least load, after migrating the vCPEs present in it. The workload level is a set of collected metrics such as CPU usage, memory and I/O activity. By measuring the workload on each cluster node, the deployment of a new vCPE is always assured, with new nodes added to the cluster when needed. This way it is possible to maintain the SLA defined upon vCPE instantiation.
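The threshold logic described above can be sketched as a single decision function of the network agent. The thresholds (85%, 30%) and the 5-minute sustain window come from the text; the function name, the action labels and the single aggregated load value are illustrative assumptions.

```python
# Sketch of the network agent's decision logic: >85% triggers a vCPE
# migration alert; >85% sustained for 5 minutes requests a new cluster node;
# <30% sustained for 5 minutes requests removal of the least-loaded node.
HIGH_THRESHOLD = 85.0   # % workload that triggers migration (message #12)
LOW_THRESHOLD = 30.0    # % workload below which a node becomes removable
SUSTAIN_SECONDS = 300   # 5-minute window before scaling the cluster

def agent_decision(load, seconds_in_state):
    """load: aggregated workload (%); seconds_in_state: time at that level."""
    if load > HIGH_THRESHOLD:
        if seconds_in_state >= SUSTAIN_SECONDS:
            return "scale_out"      # alert OSM to add a new VM to the cluster
        return "migrate_vcpe"       # alert the SDN controller (message #12)
    if load < LOW_THRESHOLD and seconds_in_state >= SUSTAIN_SECONDS:
        return "scale_in"           # OSM removes the least-loaded node
    return "none"

print(agent_decision(90.0, 10))    # migrate_vcpe
print(agent_decision(90.0, 360))   # scale_out
print(agent_decision(20.0, 400))   # scale_in
print(agent_decision(50.0, 999))   # none
```

In the thesis scenario the controller additionally picks which vCPE to migrate according to the SLA priority defined at its creation; that selection step is omitted here.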
