
Universidade de Aveiro
Departamento de Eletrónica, Telecomunicações e Informática
2020

João Pedro Almeida Maia

Métodos Offloading em redes MEC com recurso a tecnologias SDN

Offloading methods on MEC networks using SDN technologies


If you hide your ignorance, no one will hit you and you'll never learn.

— Ray Bradbury




Thesis presented to the Universidade de Aveiro in fulfilment of the requirements for the degree of Master in Computer and Telematics Engineering, carried out under the scientific supervision of Doctor Daniel Nunes Corujo, Post-Doctoral Researcher at the Departamento de Eletrónica, Telecomunicações e Informática of the Universidade de Aveiro, and Doctor Rui Luís Andrade Aguiar, Full Professor at the Departamento de Eletrónica, Telecomunicações e Informática of the Universidade de Aveiro.


I dedicate this work to my parents, for their enormous support throughout all these years.


the jury

president: Professor Doutor António José Ribeiro Neves
Assistant Professor at the Departamento de Eletrónica, Telecomunicações e Informática of the Universidade de Aveiro

examiners: Professor Doutor Bruno Miguel de Oliveira Sousa
Assistant Professor at the Departamento de Engenharia Informática of the Faculdade de Ciências e Tecnologia of the Universidade de Coimbra

Doutor Daniel Nunes Corujo
Post-Doctoral Researcher at the Departamento de Eletrónica, Telecomunicações e Informática of the Universidade de Aveiro


acknowledgements

First, I thank professors Doutor Daniel Corujo and Doutor Rui Aguiar for supervising, reviewing, and suggesting this thesis. I also thank researchers Rui Silva and Daniel Santos for their advice and suggestions throughout the development of this thesis.

I thank my parents for all their support and motivation throughout all these years of university, and for never letting me give up on achieving my goals.

Finally, I thank all my friends for all the fun moments we shared, for all the laughs we had, and for all the advice they gave me.

This Master's dissertation was carried out within the scope of the research project PTDC/EEI-TEL/30685/2017 "5G-CONTACT - 5G CONtext-Aware Communications optimization", funded by FCT/MEC, and of the project POCI-01-0247-FEDER-024539 "5G", funded by FEDER (through POR LISBOA 2020 and COMPETE 2020), and was developed with the support of Instituto de Telecomunicações UID/EEA/50008/2019.


Keywords SDN, MEC, NFV, Cloud, 5G Network, Offloading.

Resumo As 5th-Generation (5G) mobile networks begin to appear, new technologies emerge with them. The Internet of Things (IoT) is one of the main reasons for the creation of this new generation, since IoT devices demand large bandwidth and fast connections; these new technologies aim to benefit 5G networks and thus improve network speeds and offer services to a larger number of users. IoT devices also lack the resources needed to execute complex tasks, making it necessary to transfer these complex tasks to more powerful systems located in cloud datacenters. Even so, transferring these tasks to such datacenters may not be effective due to communication delays. Environments that use Multi-Access Edge Computing (MEC) can reduce these communication delays by moving cloud datacenter resources closer to the users. By applying Software Defined Networking (SDN) and Network Function Virtualization (NFV) techniques, through the use of a controller and of services running as Virtualized Network Functions (VNFs), it is possible to redirect traffic destined for the datacenters to the local MEC server and provide faster services.

In this thesis, a MEC architecture was implemented, integrating an SDN controller and providing task-offloading services as VNFs. Using these services, together with an application that measures network speed, the architecture proved able to support fast response times. This thesis also presents a study on the impact of containers and Virtual Machines (VMs) on network speed, using the previous applications.


Keywords SDN, MEC, NFV, Cloud, 5G Network, Offloading.

Abstract As 5th-Generation (5G) mobile networks start to appear, new technologies are also emerging. Since massive Internet of Things (IoT) is one of the main reasons for the creation of this new generation of mobile networks, given that IoT deployments require massive bandwidth and fast connections, these new technologies aim to leverage 5G networks and thus improve network speeds as well as deliver services to a wider range of customers. IoT devices also lack the computational resources needed to execute complex tasks, hence they need to offload their tasks to more capable systems located at cloud datacenters. However, offloading to these datacenters may not be effective due to the communication delay. Multi-Access Edge Computing (MEC) environments can reduce this delay by relocating cloud resources closer to the end-users. By also applying Software Defined Networking (SDN) and Network Function Virtualization (NFV) techniques, through the use of a controller and the hosting of services as Virtualized Network Functions (VNFs), they can redirect traffic directed to the cloud datacenters to the local MEC server and provide faster services.

In this thesis, a MEC architecture integrating an SDN controller and providing offloading services as VNFs was implemented. By using these offloading services along with a speed test application during tests, the architecture proved to support fast response times. This thesis also presents a study on how containers and Virtual Machines (VMs) affect network speed, by hosting the previous applications.


Contents

Contents
List of Figures
List of Tables
Glossary

1 Introduction
1.1 Motivation
1.2 Objectives
1.3 Contributions
1.4 Structure

2 State of the Art
2.1 5G Networks
2.2 Virtualization
2.2.1 Containers
2.2.1.1 Differences between containers and VMs
2.2.1.2 Docker
2.2.1.3 Kubernetes
2.2.2 Network Function Virtualization
2.2.2.1 Network Function Virtualization Architecture
2.2.2.2 Network Function Virtualization and Software Defined Networking relationship
2.3 Cloud Computing
2.3.1 OpenStack
2.4 Software Defined Networking
2.4.1 Software Defined Networking Architecture
2.4.3 Software Defined Networking in 5G networks
2.4.4 OpenFlow
2.4.4.1 OpenFlow Architecture
2.4.4.2 OpenFlow Pipeline
2.4.5 Open vSwitch
2.4.6 Floodlight
2.4.6.1 Floodlight Architecture
2.5 Multi-Access Edge Computing
2.5.1 Multi-Access Edge Computing Architecture
2.5.2 Offloading in MEC environments
2.5.3 MEC and other edge paradigms relationship
2.5.4 MEC, SDN, and NFV relationship
2.6 Related Work
2.6.1 Integration of MEC, SDN, and NFV
2.6.2 Containers in MEC environments using SDN controllers
2.6.3 Offloading applications in MEC environments
2.7 Summary

3 Proposed Solution
3.1 Requirements
3.2 Stakeholders
3.2.1 Network Operators
3.2.2 Service Providers
3.2.3 End Users
3.3 High-level Architecture
3.4 Low-level Architecture
3.4.1 User
3.4.2 Access Point
3.4.3 MEC host
3.4.4 Azure Cloud
3.5 Workflow
3.5.1 Traffic to MEC VM
3.5.2 Traffic to K8s VM
3.6 MEC Applications
3.6.1 Speed Test Application
3.6.2 MEC Application Example Using Different Offloading Methods
3.6.2.1 Full Offloading Application
3.7 Summary

4 Tests and Results
4.1 Tests Setup
4.1.1 Speed Tests
4.1.2 Offloading Tests
4.2 Speed Tests Results
4.3 Full Offloading Application Tests Results
4.4 Partial Offloading Application Tests Results
4.5 Comparisons between the two offloading applications
4.6 Conclusions

5 Conclusion
5.1 Final Remarks
5.2 Future Work


List of Figures

2.1 Differences between containers and VMs [11]
2.2 Docker architecture [18]
2.3 Kubernetes architecture
2.4 European Telecommunication Standards Institute (ETSI) NFV architecture [33]
2.5 Representation of a Cloud Computing architecture
2.6 OpenStack architecture [38]
2.7 SDN architecture [44]
2.8 SDN architecture with management [44]
2.9 OpenFlow version 1.3.1 switch architecture
2.10 Fields of a flow entry [51]
2.11 OpenFlow Pipeline [45]
2.12 Flowchart of a packet flow [51]
2.13 OvS basic architecture
2.14 Floodlight architecture [62]
2.15 MEC architecture [71]
2.16 Example of a MEC, SDN and NFV integration [5]
2.17 High-level schematic of the framework [77]
2.18 Edge Computing architecture using OpenStack and Kubernetes [24]
2.19 The proposed predictive offloading solution in [82]

3.1 The proposed solution's high-level architecture
3.2 The proposed solution's architecture
3.3 Workflow of traffic redirection to MEC VM
3.4 Workflow of traffic redirection to K8s VM
3.5 Workflow of the speed test application
3.6 Workflow of Full Offloading Application
3.7 Workflow of application 2

4.2 Speed test average RTT of all cases
4.3 Full Offloading Application test plots
4.4 Plots of the first group of Partial Offloading Application tests
4.5 Plots of the second group of Partial Offloading Application tests
4.6 Plots of the third group of Partial Offloading Application tests
4.7 Plots of all Partial Offloading Application tests
4.8 Plots of both applications tests


List of Tables

2.1 Summary of related work
3.1 Summary of the applications/modules
4.1 ST-1 speed test results
4.2 MEC VM speed test results
4.3 K8s VM speed test results
4.4 Full Offloading Application tests results
4.5 Results from the first group of Partial Offloading Application tests
4.6 Results from the second group of Partial Offloading Application tests
4.7 Results from the third group of Partial Offloading Application tests
4.8 PO-4 test results


Glossary

3GPP 3rd Generation Partnership Project
4G 4th-Generation of Mobile Networks
5G 5th-Generation of Mobile Networks
5GMF Fifth Generation Mobile Communications Promotion Forum
5G NSA 5G Non-Standalone
5G-PPP 5G Public-Private Partnership Program
ACL Access Control List
A-CPI Application-Controller Plane Interface
AF Application Function
AP Access Point
API Application Programming Interface
ASCII American Standard Code for Information Interchange
AWS Amazon Web Services
BDDP Broadcast Domain Discovery Protocol
BSS Business Support System
CaaS Containers-as-a-Service
CAPEX Capital Expenditure
CDN Content Delivery Network
CFS Customer Facing Service
CI/CD Continuous Integration/Continuous Deployment
CN Core Network
CORD Central Office Re-architected as a Datacenter
COTS Commercial off-the-shelf
CPU Central Processing Unit
CRUD Create, Read, Update or Delete
D2D Device-to-Device
D-CPI Data-Controller Plane Interface
DDoS Distributed DoS
DN Data Network
DNS Domain Name System
DoS Denial-of-Service
DTN Delay Tolerant Network
EM Element Management
ETSI European Telecommunication Standards Institute
FCAPS Fault, Configuration, Accounting, Performance and Security
Gbps Gigabits per second
GHz Gigahertz
GUI Graphical User Interface
HTTP Hypertext Transfer Protocol
IaaS Infrastructure-as-a-Service
IBM International Business Machines Corporation
ICN Information Centric Network
IEEE Institute of Electrical and Electronics Engineers
IETF Internet Engineering Task Force
IETF SFC WG IETF Service Function Chaining Working Group
IP Internet Protocol
IoT Internet of Things
IoV Internet of Vehicles
IRTF Internet Research Task Force
IRTF NFVRG IRTF NFV Research Group
ISG Industry Specification Group
ITU International Telecommunications Union
K8s Kubernetes
KVM Kernel Virtual Machine
LADN Local Area Data Network
LAN Local Area Network
LLDP Link Layer Discovery Protocol
LoRa Long Range
LTE Long Term Evolution
LXC Linux Container
M2M Machine-to-Machine
MAC Media Access Control
MANO Management and Orchestration
MCC Mobile Cloud Computing
MEC Multi-Access Edge Computing
MEO Multi-Access Edge Orchestrator
mmWave Millimeter Wave
MPLS Multiprotocol Label Switching
NASA National Aeronautics and Space Administration
NAT Network Address Translation
NBI Northbound Interface
NEF Network Exposure Function
NF Network Function
NFV Network Function Virtualization
NFVI NFV Infrastructure
NFVIaaS NFVI as a Service
NFV MANO NFV Management and Orchestration
NFVO NFV Orchestrator
NGMN Next Generation Mobile Networks
NIST National Institute of Standards and Technology
ONF Open Networking Foundation
ONOS Open Network Operating System
OPEX Operating Expenditure
OS Operating System
OSM Open Source MANO
OSS Operations Support System
OvS Open vSwitch
PaaS Platform as a Service
QoE Quality of Experience
QoS Quality of Service
RACS Radio Applications Cloud Servers
RAM Random Access Memory
(R)AN (Radio) Access Network
REST Representational State Transfer
RTT Round-Trip Time
SaaS Software-as-a-Service
SCF Small Cell Forum
SDEC Software Defined Mobile Edge Computing
SDK Software Development Kit
SDN Software Defined Networking
SecaaS Security as a Service
SLA Service Level Agreement
SSH Secure Shell
TCP Transmission Control Protocol
TLS Transport Layer Security
UE User Equipment
UI User Interface
UPF User Plane Function
vCDN Virtual Content Delivery Network
VIM Virtualization Infrastructure Manager
VLAN Virtual LAN
VM Virtual Machine
VNF Virtualized Network Function
VNFM VNF Manager
VPN Virtual Private Network
vswitch virtual switch


CHAPTER 1

Introduction

1.1 Motivation

Nowadays, millions of people use mobile devices to access a variety of services such as video provisioning, online gaming, food ordering, and many others. According to Cisco's global mobile data traffic forecast [1], the number of mobile devices reached 8.6 billion in 2017, and global mobile data traffic grew by 71% that year. Cisco1 expects mobile traffic to keep growing, representing 20% of total Internet Protocol (IP) traffic in 2022, with smartphones accounting for over 90% of mobile data traffic.

The consequent evolution in network technology enabled this massive growth in mobile devices and, consequently, in mobile traffic. In 2017, the 4th-Generation of Mobile Networks (4G) accounted for 72% of mobile traffic, and predictions forecast that in 2022, 54.3% of total mobile connections will be 4G connections. Current 4G networks can support the increasing demand, although they face several difficulties and suffer from the mobile data surge, since they are limited in how much traffic they can handle. Therefore, a new generation of networks is being developed, designated the 5th-Generation of Mobile Networks (5G).

5G networks started appearing this year2 3. In 2022, Cisco estimates that 5G networks will handle over 400 million connections, holding over 10% of total mobile traffic. In addition, they will offer low latency, high bandwidth, wide coverage, and better Quality of Experience (QoE) to users while connecting millions of devices such as smartphones, tablets, laptops, and Internet of Things (IoT) equipment.

Due to the IoT vision, smart end-user devices capable of connecting to the Internet and each other expanded very quickly. Agriculture, transportation, Smart Cities, and Smart Houses have increasingly been deploying these new devices in order to manage resources more efficiently. IoT devices often rely on Machine-to-Machine (M2M) connections, i.e.,

1 https://www.cisco.com/
2 https://about.att.com/story/2019/5g_in_nyc.html
3


direct connections between devices through one or more communication channels, which are estimated to represent 31% of the total connections in 2022.

Even though they exchange information with each other, most IoT devices require the use of cloud services hosted in a remote datacenter. With the increasing number of these devices and of mobile devices, datacenters will not be able to handle the enormous volume of traffic and process all demands simultaneously. This increases the response time, which can be harmful in several cases: for instance, healthcare equipment requires a fast connection to medical facilities so that it can alert specialists as quickly as possible when an emergency occurs.

Other cases are fleet management and online gaming, although these are not as critical as the previous example. In fleet management, if communication times are lengthy, company assets may work inefficiently, suffer damage, or, in the worst-case scenario, be lost. Examples such as drone deliveries depend on location awareness to deliver the product to the correct customer in a short period. Drones need to communicate their status constantly, reporting their availability and current location. With high delays, drones can lose track of their route, deliver a product to the wrong destination, or take too long to respond to a delivery request.

For online gaming, players require fast connections to servers hosting the game. In case a response takes too long to arrive at its destination, it will cause "lag", and a player will not be able to interact with the game appropriately. Companies have been looking into this issue and have started deploying fast connections in the internal network of the servers. Yet, a very high number of players connected to the servers can overload them4. This problem increases when these games use augmented reality or virtual reality, which requires location awareness and user context.

In some cases, the delay does not originate from the connections but from the time spent computing an application. If a system does not have sufficient capacity to execute a complex task (e.g., face recognition), it will take a great amount of time to complete it. However, the client can dispatch the task to a more powerful machine (for instance, a public cloud server), which can complete it in a shorter duration, i.e., offload the task. Nonetheless, offloading to a remote public cloud server increases latency, delaying the result or response and preventing the server from executing the task within the desired time. An example of this issue is face recognition in video surveillance, which needs to process multiple images and identify a person in a short time. Since the cameras lack the processing power, remote servers perform the image processing and identification.

A solution for this delay is the implementation of Multi-Access Edge Computing (MEC) servers with the aid of Software Defined Networking (SDN) technology in a network. MEC is an architectural shift that moves cloud services closer to the end-users by placing cloud computing resources at the edge of the network. Hence, users will experience faster services hosted on nearby servers designated as MEC servers. SDN is the separation of the control plane from the data plane, establishing a central controller that oversees the network

4 https://business.financialpost.com/technology/gaming/update-1-nintendos-mario-mobile-game-suffers-launch-day-server-overload


and handles control operations. By deploying an SDN controller within a MEC server, the controller is able to manage the traffic and operations within the server more effectively, and services respond faster.
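As an illustration of how such redirection could be configured, the sketch below builds a static flow entry that rewrites cloud-bound traffic toward the local MEC server and pushes it to the controller over REST. This is a hypothetical sketch, not the thesis implementation: the endpoint path and field names assume Floodlight's Static Flow Pusher module (they vary across controller versions), and the DPID, IP addresses, and port number are placeholders.

```python
import json
import urllib.request

def build_redirect_flow(dpid, cloud_ip, mec_ip, out_port):
    """Static flow entry: rewrite cloud-bound IPv4 traffic toward the MEC host.

    Field and action names follow Floodlight's Static Flow Pusher
    conventions; check the REST documentation of your controller version.
    """
    return {
        "switch": dpid,                  # DPID of the OvS bridge (placeholder)
        "name": "redirect-cloud-to-mec",
        "priority": "32768",
        "eth_type": "0x0800",            # match IPv4 packets...
        "ipv4_dst": cloud_ip,            # ...destined for the cloud datacenter
        "active": "true",
        # rewrite the destination IP and forward out the MEC-facing port
        "actions": "set_ipv4_dst=%s,output=%s" % (mec_ip, out_port),
    }

def push_flow(entry, controller="127.0.0.1:8080"):
    """POST the entry to the controller's static flow pusher endpoint."""
    req = urllib.request.Request(
        "http://%s/wm/staticflowpusher/json" % controller,
        data=json.dumps(entry).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Example entry (DPID and addresses are placeholders):
flow = build_redirect_flow("00:00:00:00:00:00:00:01",
                           cloud_ip="52.10.0.5", mec_ip="10.0.0.2", out_port=2)
# push_flow(flow)  # requires a running controller instance
```

The reverse direction (rewriting the MEC server's replies so they appear to come from the cloud address) would need a symmetric entry.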

These services are deployed through virtualization of resources, enabling the provision of a great number of services in the MEC server. Moreover, it is possible to virtualize the services by hosting them in a container or Virtual Machine (VM) and to move them from one server to another, enabling mobility. Both SDN and MEC benefit from this approach, designated Network Function Virtualization (NFV).

1.2 Objectives

The main aim of this thesis is to create and assess a MEC network that integrates an SDN controller, provides Virtualized Network Functions (VNFs) as services, delivers fast communications, and offers efficient offloading services. A MEC server connected to a VM in the cloud computing service Microsoft Azure5 was created by resorting to the cloud Operating System (OS) OpenStack6. The MEC server follows the MEC and 5G specifications designed by ETSI and the 3rd Generation Partnership Project (3GPP), respectively, considering interfaces related to MEC host management.

The MEC server hosts two VMs, one of which runs the container-orchestration software Kubernetes (K8s)7 to deploy Docker8 containers hosting the same applications that run in the Azure Cloud. By resorting to these applications, it is possible to study the speed of the network and compare the performance of hosting cloud applications on VMs and on containers.
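A network speed probe of this kind can be sketched as a TCP echo round-trip measurement: the client timestamps a message, waits for the server to echo it back, and averages the observed Round-Trip Time (RTT). This is a hypothetical minimal version, not the speed test application developed in the thesis; the port and message format are assumptions.

```python
import socket
import statistics
import threading
import time

def start_echo_server(port=5006):
    """Minimal TCP echo server used as the measurement endpoint."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)

    def handle():
        conn, _ = srv.accept()
        while True:
            data = conn.recv(1024)
            if not data:                 # client closed the connection
                break
            conn.sendall(data)           # echo the probe back immediately
        conn.close()
        srv.close()

    threading.Thread(target=handle, daemon=True).start()

def measure_rtt(host="127.0.0.1", port=5006, samples=10):
    """Average round-trip time, in milliseconds, over one persistent connection."""
    rtts = []
    with socket.create_connection((host, port)) as s:
        for _ in range(samples):
            start = time.perf_counter()
            s.sendall(b"ping")
            s.recv(1024)                 # block until the echo arrives
            rtts.append((time.perf_counter() - start) * 1000.0)
    return statistics.mean(rtts)
```

Running the probe against an endpoint in the MEC VM, the K8s VM, or the Azure Cloud gives directly comparable RTT figures for the three hosting options.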

For this thesis, three applications were developed using version 3.6 of Python: one that analyzes the connection speed between two entities, and two that use offloading mechanisms to test container and VM efficiency. The two offloading applications use different methods: one transmits a file containing code, while the other requests the server to execute it.
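The first method (transmitting a file containing code) can be sketched as a minimal client/server pair over TCP sockets: the client sends Python source, and the server writes it to a file, executes it, and returns the task's output. This is a hypothetical illustration, not the thesis implementation; the port, the framing (a half-close marks the end of the payload), and the execution model are assumptions, and executing received code like this is unsafe outside a controlled experiment.

```python
import socket
import subprocess
import sys
import tempfile
import threading

def start_offload_server(port=5001):
    """Accept one connection, run the received Python file, return its stdout."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)

    def handle():
        conn, _ = srv.accept()
        code = b""
        while True:                      # read until the client half-closes
            chunk = conn.recv(4096)
            if not chunk:
                break
            code += chunk
        with tempfile.NamedTemporaryFile("wb", suffix=".py", delete=False) as f:
            f.write(code)                # persist the offloaded task
            path = f.name
        # WARNING: executes untrusted code; for illustration only
        out = subprocess.run([sys.executable, path],
                             stdout=subprocess.PIPE).stdout
        conn.sendall(out)                # return the task's output
        conn.close()
        srv.close()

    t = threading.Thread(target=handle, daemon=True)
    t.start()
    return t

def offload_task(code, host="127.0.0.1", port=5001):
    """Send a Python source payload for remote execution; return its output."""
    with socket.create_connection((host, port)) as s:
        s.sendall(code)
        s.shutdown(socket.SHUT_WR)       # signal end of payload
        result = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            result += chunk
    return result

if __name__ == "__main__":
    start_offload_server()
    print(offload_task(b"print(6 * 7)"))   # the task runs on the "server"
```

The second method would instead send a short request naming a task already deployed on the server, rather than the code itself.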

1.3 Contributions

The work in this thesis explores the use of offloading applications in a MEC platform integrated with an SDN controller in order to provide a more efficient service to mobile users. This resulted in the submission of the paper "Assessment of Communication and Computation Improvement in MEC offloading using SDN and NFV", authored by João Maia, Daniel Corujo, and Rui L. Aguiar, to the Institute of Electrical and Electronics Engineers (IEEE) International Conference on Network Softwarization (IEEE NetSoft 2020), which will take place between June 29 and July 3, 2020.

5 https://azure.microsoft.com/en-us/
6 https://www.openstack.org/
7 https://kubernetes.io/
8 https://www.docker.com/


1.4 Structure

The remainder of this thesis is organized in the following way:

• Chapter 2 - State of the Art: presents the background and the state of the art of the technologies used in this thesis;

• Chapter 3 - Proposed Solution: presents the solution to the problem described in Section 1.1, along with an explanation of the developed applications;

• Chapter 4 - Tests and Results: displays the results obtained from testing the architecture and the conclusions drawn from observing these results;

• Chapter 5 - Conclusion: presents the final remarks of this thesis and directions for future work.


CHAPTER 2

State of the Art

This chapter describes the key concepts and technologies used in this thesis, as well as related work.

2.1 5G Networks

5G represents the next generation of mobile networks and cellular technology: a paradigm shift that will afford greater network speed, lower latency, and higher flexibility, while focusing on energy efficiency and supporting always-on capabilities through high carrier frequencies, massive bandwidths, dense deployments of base stations and devices, and large numbers of antennas.

The goals 5G aims to achieve are the provisioning of very high data rates, ranging from 1 to 10 Gigabits per second (Gbps) (10 times greater than current Long Term Evolution (LTE) networks), with a low round-trip latency of 1 millisecond, a perceived availability of 99.99999%, and a 90% reduction in energy consumption [2], all while connecting an increasing number of mobile devices and providing high Quality of Service (QoS) to users.

According to Hakiri et al. [3], 5G networks will not focus on routing and switching technologies. Instead, they will be open and flexible, allowing them to evolve faster compared to traditional networks. Also, they will be autonomous, with the ability to adapt to the user’s requirements and converge the network communications over multi-technology networks.

5G will create a great number of simultaneous connections bridging users with other users, sensors, cars, or any device, but more importantly, connecting devices with other devices through embedded networking interfaces, thus enabling the deployment of the IoT vision, connecting millions of devices simultaneously, and shifting the current vision from human-centric interactions to M2M interactions between autonomous devices [2].

As stated in [4], one distinctive feature of 5G networks is the application of a more software-centric approach, promoting automation and service versatility by making use of virtualization (see section 2.2) and cloud technologies. In other words, the networks will be able to manage


networking functionality and separate the software from the hardware, allowing operators to purchase different applications from various vendors and execute these applications in the same hardware equipment, obtained from a different vendor. This disaggregation and decomposition of various components and functions in the network offers the following benefits [4]:

• More agile and faster time to market of new functions, features, and services;

• Faster innovation and reduced costs by providing an open multi-vendor ecosystem;

• Faster innovation in hardware and software;

• Optimization of the performance of decomposed functions by placing them in the best locations;

• Better independent scaling of each function;

• Move the user plane functions to the edge of the network, while control functions reside in a central network;

To fulfill the previously mentioned goals, 5G networks must deliver better QoS, QoE, and security than current 4G networks. M. Agiwal et al. [2] state that the focus of 5G is the user experience; therefore, QoE will be the focal point instead of QoS. QoE classifies the user's perceived satisfaction with a product or service and focuses on metrics such as interactivity, the feel of the product, and its ability to serve its purposes, being more subjective and difficult to measure than QoS. Even though the focus is on QoE, 5G does not neglect QoS: 5G is expected to reduce latency and resource-sharing constraints, and to implement QoS-based architectures on both the client and server sides, with service failure and degradation prediction along with optimal provisioning. However, QoS does not directly determine QoE, since higher QoS does not necessarily translate into higher QoE, which depends more on parameters such as buffering, startup time, bitrate, and the number of bitrate switches.

As 5G is still in its development phase, there is a need to create standards to guide operators and developers in building and implementing their 5G networks. Many research institutions, mobile operators, network equipment vendors, and international organizations have formed various standardization bodies for this purpose [5]:

• 3rd Generation Partnership Project (3GPP)1: produces releases that provide developers with a stable platform for implementing features in their networks [6]. Its 5G specifications are planned for Release 15 onwards, with Release 17 currently in development;

• 5G Forum: leads the development of key candidate next-generation communications technologies in Korea;

• 5G Americas2: an organization composed of leading telecommunications service providers and manufacturers, fostering the advancement and full capabilities of LTE wireless technology and its evolution to 5G;

• Fifth Generation Mobile Communications Promotion Forum (5GMF)3: located in Japan, it aims to conduct research and development concerning 5G mobile communications systems;

1 https://www.3gpp.org/
2 https://www.5gamericas.org/
3 https://5gmf.jp/en/


• 5G Public-Private Partnership Program (5G-PPP)4: initiated by the European Commission, it focuses on developing the next generation of network technologies, accounting for key societal challenges and their networking requirements;

• IMT-2020 (5G) Promotion Group: a Chinese initiative, analyzes the main technical scenarios, challenges and key enabling technologies for 5G;

• European Telecommunication Standards Institute (ETSI) 5: initiated various Industry Specification Groups (ISGs) to define standards in numerous 5G technologies such as MEC, NFV, and others;

• Institute of Electrical and Electronics Engineers (IEEE) 6: works on standardization of future 5G systems;

• International Telecommunications Union (ITU) 7: defines the framework and overall goals of future 5G systems;

• Open Networking Foundation (ONF)8: promotes the adoption of SDN through standards development and is responsible for the OpenFlow standard;

• Next Generation Mobile Networks (NGMN)9: an industry alliance that develops requirements for 5G mobile broadband technologies, focusing primarily on the needs of mobile network operators;

• Small Cell Forum (SCF) 10: an industry alliance as well. It aims to drive the wide-scale adoption of small cells and to influence and deliver technical inputs that inform and enhance the standards process;

5G has a wide range of applications and use cases. Private and public sectors such as energy, agriculture, city management, health care, manufacturing, and transportation will improve their software services with new options for deploying network solutions, exploiting the massive bandwidth expected from Millimeter Wave (mmWave) communications [2]. Some examples of applications are Device-to-Device (D2D) communication, M2M communication, massive IoT, health care, and wearables, which can boost productivity and offer a better experience to users. The authors in [2] state that through the use of IoT, D2D, and M2M communications it is possible to implement advanced vehicular communications, also known as the Internet of Vehicles (IoV), for autonomous driving vehicles. IoV comprises interconnected vehicles providing robust traffic management and reduced collision probabilities by exploiting roadside cooperative and non-cooperative relay nodes.

Some 5G services are already available in a few areas of various countries, even though these are early-generation services designated as 5G Non-Standalone (5G NSA); 5G will be introduced gradually in 2020 and is expected to become accessible at large scale by 2022 [7].

4 https://5g-ppp.eu/
5 https://www.etsi.org/
6 https://www.ieee.org/
7 https://www.itu.int/en/Pages/default.aspx
8 https://www.opennetworking.org/
9 https://www.ngmn.org/home.html
10 https://www.smallcellforum.org/


2.2 Virtualization

Virtualization is a major characteristic of the upcoming 5G networks, since it allows them to handle the increasing number of devices in virtualized environments. [8] defines virtualization as the process of running a virtual instance of a computer system in a layer abstracted from the actual hardware. [9] gives another definition: a set of technologies that allows the creation of multiple simulated environments or dedicated resources from a single physical hardware system.

Virtualization is divided into five types [10]:

1. OS Virtualization: the most well-known use of virtualization, consisting of an OS hosting several “guest” OSs, sharing resources from the same machine;

2. Data Virtualization: consolidation of data spread in different locations into a single source, transforming data according to user needs;

3. Desktop Virtualization: usually mistaken for OS Virtualization, it deploys simulated desktop environments to hundreds of physical machines from a central administrator;

4. Server Virtualization: virtualizing a server allows it to perform more functions, partitioning it so that the components can serve multiple functions;

5. Network Function Virtualization (NFV): it is the separation of network key functions from proprietary hardware and distributed among different environments. This topic is further discussed in subsection 2.2.2;

Taleb et al. [11] stated that virtualization grants the possibility of a single physical machine to host several OSs, accessing them without switching devices or rebooting. Each OS seems to be running on a single dedicated machine when in reality, it is running on a VM, an abstraction of the physical hardware stack, requiring a full OS image, supplementary binaries, as well as libraries for hosting applications and services.

Managing all hosted VMs in one machine is the hypervisor, software which manages the resources of the hosting machine (Central Processing Unit (CPU), memory, storage, network interfaces, and other hardware or software resources) and is capable of creating or removing VMs. Hypervisors are classified into two types [8]: bare metal, which run guest VMs directly on the system hardware (examples are Xen Project 11, Linux Kernel Virtual Machine (KVM) 12, and Oracle VM Server for x86 13), and "hosted" hypervisors, which behave like a normal application that can be started and stopped (examples are VirtualBox 14, VMWare Workstation Player 15, and QEMU 16).

By resorting to virtualization, network providers benefit from scalability, flexibility, reduced costs, increased resource availability and performance, easier management, decreased downtime [12], and decoupling from the hardware. Besides, VMs can serve as test environments for new applications, since they provide isolation from other VMs and the host OS; in other words, the malfunctioning or jeopardizing of a VM will not affect the remaining VMs and the hosting device.

11 https://xenproject.org/
12 https://www.linux-kvm.org/page/Main_Page
13 https://www.oracle.com/virtualization/vm-server-for-x86/
14 https://www.virtualbox.org/
15 https://www.vmware.com/products/workstation-player.html
16 https://www.qemu.org/

2.2.1 Containers

A portable, lightweight, and high-performance alternative to VMs are containers, which can be seen as lightweight VMs designed to run a single application. They are a set of isolated processes that share the same Linux kernel, packing code with its dependencies in a computationally light form, enabling fast deployment, easy instantiation, and seamless portability [8] [13] [14] [15].

Containers are created from images comprising the application, necessary libraries as well as binaries to run the application, and a virtual network interface, which is not exposed to the outside networks by default [16]. According to Bernstein [17], containers share the same host OS, identical to VMs, but they do not require a hypervisor. However, container engines, such as Docker (see subsection 2.2.1.2), can replace hypervisors and manage, create, remove, deploy or modify containers.

IT environments can apply containers to fulfill several needs. For example, the creation of a cloud-native development style, use of DevOps and Continuous Integration/Continuous Deployment (CI/CD), and deployment of microservice architectures with each microservice hosted in its container, to name a few.

2.2.1.1 Differences between containers and VMs

Figure 2.1: Differences between containers and VMs [11]. (a) Architectural differences (left: VM, right: container); (b) Qualitative differences.

Taleb et al. [11] described the differences between containers and VMs, with a summarization displayed in figure 2.1b. They stated that containers distinguish themselves from VMs by applying abstraction at the OS level, while VMs abstract the physical layer, as can be seen in figure 2.1a. Because the abstraction takes place at the physical level, VMs require a full virtualized hardware stack, thus being heavy-weight and less efficient than native OSs. In contrast, containers do not require the virtualized hardware stack, causing them to have a similar CPU, memory, and storage performance to the host OS.


However, containers were designed to run a single application, while VMs can run several heavy applications simultaneously. Despite their lightness and increased performance, containers are less secure than VMs, for VMs provide full isolation while containers only provide isolation at the process level, hence being more susceptible to attacks.

2.2.1.2 Docker

When discussing containers, the most common technology mentioned is Docker. Docker is a container engine, i.e., it is a container virtualization technology that creates, tests, and deploys applications using containers by relying on copy-on-write, namespaces, and control groups (also known as cgroups) to isolate processes and abstract the underlying layer [16] [17] [18] [19].

Docker was launched in 2013 as an open-source container engine [14] and resorts to a simple command-line interface to manage its containers, build layered images, run the server daemon, and access a library of pre-built container images [15]. It was initially built on top of the Linux Container (LXC) technology 17, although it has since moved away from this technology and now builds its containers from layered images [19].

As described by Docker [18], each container is built from a layered image, which is a read-only template with instructions. Each image is created from a Dockerfile, a text document that contains all the commands a user would call on the command line to assemble a Docker image [20]. Each command in the Dockerfile creates a layer, and when rebuilding an image, only the altered layers are rebuilt.
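As an illustration of this layered mechanism, the hypothetical Dockerfile below sketches an image for a small Python web service; the file names (requirements.txt, app.py) and base image tag are placeholders, not part of any project in this work. Because each instruction yields one layer, changing only the application code at the end leaves the earlier dependency layers cached.

```dockerfile
# Hypothetical image for a small Python web service;
# each instruction below produces one read-only layer.
FROM python:3.8-slim                  # base layer: minimal OS + Python runtime
WORKDIR /app                          # layer: set the working directory
COPY requirements.txt .               # layer: dependency manifest only
RUN pip install -r requirements.txt   # layer: installed libraries
COPY . .                              # layer: application code
EXPOSE 8080                           # metadata: port the service listens on
CMD ["python", "app.py"]              # default command run at container start
```

Placing the dependency installation before the application code is a common ordering choice: rebuilding after a code change only re-executes the final `COPY` layer.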

Figure 2.2: Docker architecture [18]

The Docker architecture, displayed in figure 2.2, comprises three components that cooperate to build, run, and manage containers [18]:

• Client: the main way to interact with the Docker software, through the 'docker' command, which in turn resorts to the Docker Application Programming Interface (API) to communicate with the daemon. The client and the daemon can be located in the same system or in separate hosts, communicating through a Representational State Transfer (REST) API;

17


• Daemon: software that listens for requests sent by the client and manages the containers, images, networks, and volumes;

• Registry: a repository that stores the pre-built Docker images, granting a wide range of options to users. The default repository is the official public repository supported by Docker, designated as Docker Hub18;

Docker containers enable modularity and fast deployment on any platform. Docker uses layers and image version control, as well as rollback capability, to add flexibility to image creation [19]. Besides, Docker containers have performance nearly identical to native applications, differing by 2% at high concurrency [21], and do not require a boot process, thus starting almost instantaneously [13].

Docker is great at managing single containers and offers the option to manage several containers simultaneously [18], but struggles when managing a high number of containers. In that case, the best option is to use container orchestrators designed to orchestrate large numbers of containers across several hosts or in one host, such as Kubernetes 19 (see subsection 2.2.1.3) and Docker Swarm 20.

2.2.1.3 Kubernetes

In production environments, it may be required to manage a large number of containers. In these cases, Docker alone is not the best solution, and it is necessary to use technologies that automate operations such as deployment and scaling of containers. Kubernetes is one solution for these cases.

Kubernetes, or K8s for short, is a portable open-source platform, originally developed by Google based on their project Borg [22] and open-sourced in 2014 [23]. For this reason, it is seen as a very reliable and well-tested technology that manages clusters of hosts running containers provided as services, by automating operations such as deploying, scaling, restarting, and rolling back containers across multiple server hosts. According to Google, Kubernetes is "the decoupling of application containers from the details of the systems on which they run", thus simplifying application development and datacenter operations [17].

To manage a very high number of containers, Kubernetes has several features, such as [23]:

• Service Discovery, by exposing a container’s IP address or its Domain Name System (DNS) name;

• Load Balancing, distributing the network traffic in case the access to the container is congested;

• Storage Orchestration, allowing users to use its storage system;

• Automated Rollouts/Rollbacks, allowing the management of the state of containers;

• Automatic Bin Packing, deploying containers in the best location;

• Self-healing, by restarting/replacing failed/faulty containers;

• Management of secrets as well as configurations of containers;
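The load-balancing feature above can be illustrated with a toy round-robin policy: requests addressed to one service are spread in turn over the replicas backing it. The class name and backend addresses below are invented for illustration and do not reflect the actual Kubernetes implementation.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Toy sketch of round-robin load balancing over service replicas."""

    def __init__(self, backends):
        # cycle() repeats the backend list endlessly, in order
        self._backends = cycle(backends)

    def pick(self):
        """Return the next backend in rotation."""
        return next(self._backends)

balancer = RoundRobinBalancer(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])
print([balancer.pick() for _ in range(4)])
# → ['10.0.0.1:8080', '10.0.0.2:8080', '10.0.0.3:8080', '10.0.0.1:8080']
```

Real balancers may also weight replicas or track active connections; round-robin is simply the most common default policy.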

18 https://www.docker.com/products/docker-hub 19 https://kubernetes.io/ 20 https://www.docker.com/products/orchestration


Because of its modular programming, service discovery mechanism, self-healing mechanisms, capability of scaling applications fast, and ability to run applications independently of the underlying system, Kubernetes is well-suited for deploying microservice architectures, with each node, or pod, hosting a service and communicating with the others over an internal network [24].
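The self-healing behavior mentioned above follows a reconciliation pattern: a controller compares the desired state with the observed state and acts until they match. The sketch below is a deliberately simplified stand-in for that loop; the function and pod names are invented.

```python
def reconcile(desired: int, observed: list) -> list:
    """Toy reconciliation loop: make the replica count match the desired one."""
    actual = list(observed)
    while len(actual) < desired:          # scale out: replace failed pods
        actual.append(f"pod-{len(actual)}")
    while len(actual) > desired:          # scale in: remove surplus pods
        actual.pop()
    return actual

# Two pods crashed out of a desired three: the loop restores the count.
print(reconcile(3, ["pod-0"]))   # → ['pod-0', 'pod-1', 'pod-2']
```

In Kubernetes this comparison runs continuously inside controllers, which is why failed containers reappear without operator intervention.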

The Kubernetes architecture includes a master and several nodes, as illustrated in figure 2.3. The nodes are machines running applications, each managed by a single master, a node that hosts a collection of processes that collectively manage the cluster state [25].

Figure 2.3: Kubernetes architecture

According to [26], each node runs three processes that maintain the running pods (the basic execution unit of Kubernetes, which represents processes running on a cluster and can run one or more containers [27]) and provide a Kubernetes runtime environment:

1. a kubelet that ensures containers are running in a pod;

2. a kube-proxy that runs a network proxy;

3. a container runtime, which is the software running the containers (e.g., Docker);

Typically, nodes are comprised of several pods divided into several deployments, logical components that provide declarative updates for pods and replicas [28].

To expose an application that is running in a set of pods to external networks as a network service, Kubernetes uses services (abstractions that define a logical set of pods [29]), which are divided into four types:

1. NodePort: exposes the application on a port of the node;

2. ClusterIp: exposes the application on a cluster-internal IP;

3. LoadBalancer: exposes the service using a cloud provider’s load balancer;

4. ExternalName: maps the contents of the application to a DNS name;
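As a sketch of the first type, the manifest below declares a NodePort service that exposes every pod carrying a given label on a fixed port of each node; the service name, label, and port numbers are all invented for illustration.

```yaml
# Hypothetical NodePort Service: exposes every pod labelled app=web
# on port 30080 of each node, forwarding to the pods' port 8080.
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web          # pods matched by this label back the service
  ports:
    - port: 80          # cluster-internal port of the service
      targetPort: 8080  # port the pods listen on
      nodePort: 30080   # port opened on every node
```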

The master node comprises additional processes, completing the Kubernetes environment and allowing the management of clusters. One of these processes is the kube-apiserver, which exposes an API to the outside network as well as to the nodes and works with the remaining processes to instruct the nodes. Meanwhile, the kube-scheduler schedules pods to available nodes, while the kube-controller-manager runs several controllers, each managing nodes, replication operations, endpoints, and accounts. The master node also includes a dataset that stores all cluster data, named etcd.

The master node is also capable of interacting with the cloud through a controller, designated cloud-controller-manager, that interacts with the underlying cloud providers, enabling the control of nodes deployed in the cloud. Moreover, the master node provides a command-line tool, named kubectl, to run commands in the Kubernetes cluster, allowing the user to deploy applications, view logs, manage resources, and execute other operations.

2.2.2 Network Function Virtualization

As stated before, Network Function Virtualization (NFV) is one of the main technologies in 5G networks. As described by ETSI in [30], NFV is the implementation of Network Functions (NFs) in software that can run on a range of industry-standard server hardware. Virtualized Network Functions (VNFs) can be moved into several locations in the network (for example, high volume servers, switches, and end-user devices) without installing new equipment.

The main benefit of NFV is the decoupling of NFs and services from proprietary hardware appliances. This decoupling provides flexibility to operators by allowing them to deploy NFs where necessary, thus reducing Capital Expenditure (CAPEX) and Operating Expenditure (OPEX), since they can re-use the network infrastructure and eliminate the difficulty of accommodating more hardware in the network. When introducing a new NF bound to specific hardware, new hardware needs to be installed, increasing costs, leading to a very long product cycle, and increasing the dependency on specialized hardware [30] [31]. These issues are mitigated or even eliminated with the use of the NFV approach.

However, implementing the NFV approach in a network has its challenges, with one of them being performance. VNFs must have identical performance to NFs running in specific hardware while being portable [31], yet resorting to the virtualization technology makes this a challenge. Another challenge is the automation of functions, for without automating processes, VNFs cannot scale efficiently [30].


The specifications document ETSI GR NFV 001 [32] showcases several use cases for the NFV. Some of these use cases are the virtualization of mobile base stations, home environments, and IoT, as well as the provision of services such as Virtual Content Delivery Network (vCDN), Security as a Service (SecaaS), and application testing.

The authors in [32] also mention NFVI as a Service (NFVIaaS), which consists of providing cloud services deployed as VNFs in an NFV Infrastructure (NFVI) from another service provider; in other words, it provides an environment to deploy VNFs as a service. Two other service models are required to support this one: Infrastructure-as-a-Service (IaaS) and Networking-as-a-Service (NaaS) (both described in section 2.3). These provide the physical resources (such as networking, storage, and computing resources), while the NFVIaaS provides the virtualization resources.

To help operators deploy and develop NFV technology, the following organizations are working on standards to guide them [31]:

• ETSI: besides the description given in section 2.1, ETSI is leading the industry in NFV standardization, focusing on the architectural framework, infrastructure description, NFV Management and Orchestration (NFV MANO), security, resilience, and service quality metrics;

• IETF Service Function Chaining Working Group (IETF SFC WG) 21: the goals of the IETF SFC WG are to develop an architecture for service function chaining, including the necessary protocols and extensions, as well as to propose a new approach to service delivery and operation, while working on the management and security implications of these developments;

• IRTF NFV Research Group (IRTF NFVRG)22: an organization promoting research on NFV by organizing research activities in both academia and industry;

• Broadband Forum 23: an industry consortium dedicated to developing broadband network specifications and studying how NFV can implement the multi-service broadband network;

2.2.2.1 Network Function Virtualization Architecture

The NFV architecture, depicted in figure 2.4, is mainly composed of three key elements [33]:

• NFV Infrastructure (NFVI): all the hardware and software resources that build the environment where VNFs are deployed, managed, and executed. These resources can be physical or virtual, with the latter abstracting computing, storage, and network resources through a virtualization layer [31]. Besides, the NFVI can be distributed across several locations (in that case, the network providing connection among the different locations is considered to be part of the infrastructure);

21 https://datatracker.ietf.org/wg/sfc/about/
22 https://irtf.org/concluded/nfvrg
23 https://www.broadband-forum.org/

Figure 2.4: ETSI NFV architecture [33]

– Hardware Resources: the computing, storage, and network resources that provide processing, storage, and connectivity to VNFs through the virtualization layer. The computing resources are assumed to be Commercial off-the-shelf (COTS), i.e., not vendor-locked;

– Virtualization Layer : abstracts the hardware resources and decouples VNFs from the hardware, thus they can be deployed in different physical hardware resources. The virtualization layer can resort to hypervisors and VMs, although it can also use an OS running in a non-virtualized server;

• Virtualized Network Functions (VNFs): the virtualization of NFs in a legacy non-virtualized network. A single VNF comprises several components and can be deployed in multiple VMs [31]. Each VNF is connected to one Element Management (EM) that executes typical management functionality;

• NFV Management and Orchestration (NFV MANO): responsible for the orchestration and lifecycle management of all resources that support the infrastructure virtualization and handles the lifecycle management of VNFs. This component comprises several elements:

– Virtualization Infrastructure Manager (VIM): comprised of functionalities used to control and manage the interaction of a VNF with computing, storage, network, and virtualized resources;


– NFV Orchestrator : responsible for orchestrating and managing the NFVI and the software resources, as well as realizing network services on NFVI;

– VNF Manager : handles the VNF lifecycle management, being able to serve one or multiple VNFs simultaneously;

– Service, VNF and Infrastructure Description: dataset that provides information concerning NFVI information models, VNF deployment templates, service-related information, and VNF Forwarding Graph. The templates and models are used within the NFV MANO, handling the information contained and exposing subsets of that information to applicable functional blocks;
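The lifecycle management performed by the VNF Manager can be pictured as moving a VNF instance through a small set of states. The toy class below is only a conceptual sketch; the class, method, and state names are invented and are not part of the ETSI NFV MANO specification.

```python
class VNF:
    """Toy model of a VNF instance whose lifecycle a VNF Manager drives."""

    def __init__(self, name: str):
        self.name = name
        self.state = "NULL"          # no resources allocated yet

    def instantiate(self):
        """Allocate virtualized resources for the VNF."""
        self.state = "INSTANTIATED"

    def start(self):
        """Begin serving traffic; only valid after instantiation."""
        if self.state != "INSTANTIATED":
            raise RuntimeError("VNF must be instantiated before starting")
        self.state = "STARTED"

    def terminate(self):
        """Release the VNF's resources."""
        self.state = "TERMINATED"

vnf = VNF("virtual-firewall")
vnf.instantiate()
vnf.start()
print(vnf.state)   # → STARTED
```

A real VNF Manager handles many such instances at once, plus scaling and healing transitions between these states.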

2.2.2.2 Network Function Virtualization and Software Defined Networking relationship

NFV is complementary to another 5G enabler technology, Software Defined Networking (see section 2.4). Both technologies encourage the standardization of network hardware and the use of open software, while leveraging automation and virtualization to fulfill their functions [31]. Nonetheless, the two do not depend on each other, since each one can perform its functions on the deployed networks efficiently.

According to [34], SDN can benefit NFV by providing programmable connectivity between each VNF, where each connection can be configured to one or more VNFs needs. Meanwhile, NFV can virtualize an SDN controller (described in section 2.4) and run it on the best location according to the network needs.

Mijumbi, Rashid, et al. [31] stated that SDN focuses on the decoupling of the control plane from the data plane, thus requiring a new network topology to separate both planes, while NFV separates NFs from proprietary hardware, hence VNFs can be deployed on existing networks.

2.3 Cloud Computing

Nowadays, most enterprises use cloud computing to provide services to clients, store information, execute industry-related tasks, and much more. The National Institute of Standards and Technology (NIST) defines cloud computing as "a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction" [35].

Typically, cloud computing platforms comprise several physical machines forming a single logical entity shared among different players, with each player carrying out distinct isolated tasks [11]. This single logical entity is managed by an infrastructure provider that manages the platform and leases resources according to a usage-based pricing model [36].

According to the NIST in [35], a cloud computing platform must possess essential characteristics to offer services to its users. These characteristics are on-demand self-service, to provide resources automatically without requiring human interaction; rapid elasticity of resources, to scale rapidly outward and inward according to demand; resource pooling, to serve the users’ needs; measured service, to control its usage; and broad network access, to access the services through standard mechanisms.
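Rapid elasticity can be reduced to a simple sizing rule: keep just enough instances so that their aggregate capacity covers the current demand. The function below is a toy illustration of that rule; the capacity and load figures are invented for the example.

```python
import math

def instances_needed(total_load: float, capacity_per_instance: float) -> int:
    """Smallest instance count whose aggregate capacity covers the load."""
    return max(1, math.ceil(total_load / capacity_per_instance))

print(instances_needed(350, 100))  # → 4   (scale outward under high demand)
print(instances_needed(80, 100))   # → 1   (scale inward when demand drops)
```

Production autoscalers add headroom and cool-down periods on top of this rule to avoid oscillating between sizes.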

Depending on its accessibility, cloud environments are frequently categorized into four types [35]:

• Private Cloud: the infrastructure is maintained for the exclusive use of a single entity and may exist on or off-premises. Typically, this type of cloud is behind a corporate firewall, ensuring security and dedicated access [11];

• Public Cloud: the infrastructure is open for use by the general public and managed by a company or academic organization, with the infrastructure located at the manager’s premises. However, this type of cloud lacks fine-grained control over data, network, and security settings [36];

• Hybrid Cloud: the infrastructure is a combination of two or more cloud infrastructures bound by standardized or proprietary technology. Commonly, they combine private and public clouds to address the limitations of these two types, even though they require a careful division of several public and private cloud components [36];

• Community Cloud: the infrastructure is for the exclusive use of a community of organizations that share the same concerns and is maintained by one or more organizations in the community or a third party;

Cloud computing provides several services on an on-demand basis, with each service fulfilling a client’s needs. Depending on their requirements, clients can choose from three service models [35]:

• Software-as-a-Service (SaaS): the most common service model, it provides access to applications hosted in a cloud infrastructure, accessible by several devices through a web browser or a program interface. For this service model, it is not required to manage the underlying infrastructure, with this responsibility falling upon the cloud providers;

• Platform as a Service (PaaS): the consumer is granted the management capability of a platform by the cloud provider. The consumer can deploy their applications without concern for the underlying infrastructure, controlling only the application-hosting environment;

• Infrastructure-as-a-Service (IaaS): the consumer is granted the capability to provision processing, storage, networking, and other computing resources, allowing them to deploy VMs, firewalls, routers, and load balancers. The consumer does not manage the physical infrastructure, it being the cloud provider’s duty to do so;

Usually, the services supplied by cloud providers fall into one of the aforementioned service models. There are other service models, such as Containers-as-a-Service (CaaS), Security as a Service (SecaaS), Networking-as-a-Service (NaaS), and NFVI as a Service (NFVIaaS), that fulfill specific needs of consumers.

According to [36], cloud computing is compelling to business owners because of the following advantages: no up-front investment, for it uses a usage-based pricing model and service providers do not need to invest in infrastructure; low operating costs for service providers, since the infrastructure is managed by the cloud provider; high scalability, due to the large pool of resources available; easy access through the Internet; and reduced business risks as well as maintenance expenses, for the infrastructure is managed by well-equipped staff.

Despite its benefits, cloud computing is unable to meet certain requirements, such as low latency and jitter. It also lacks context awareness and mobility support, which are crucial for several applications like vehicular networks and augmented reality [37]. Moreover, it suffers from network congestion when traffic is very high and from network bottlenecks, which could be fatal for time-sensitive applications [13].

Cloud computing is widely used for many services due to its gains and benefits. Examples like Amazon Web Services (AWS) 24, Microsoft Windows Azure 25, and Google Cloud 26 are widely used by a large number of organizations. Facebook, YouTube, and Instagram are just some examples of web-based applications hosted in cloud infrastructures that are managed by the companies themselves or by third parties and accessed by an extensive number of users.

Zhang, Qi, et al. [36] state that cloud architectures are typically deployed in datacenters. However, to support them, the network architecture should include important features, such as free VM migration, scalability of servers into a large quantity, resiliency, since failures are common at large scales, and uniform high capacity in the network.

Figure 2.5: Representation of a Cloud Computing architecture (layers: Hardware, Infrastructure/Virtualization, Platform, and Application, mapped to the IaaS, PaaS, and SaaS service models, with examples such as Amazon EC2, OpenStack, Microsoft Azure, Google AppEngine, Youtube, and Facebook)

Frequently, cloud computing architectures are divided into four layers loosely coupled from each other, as illustrated in figure 2.5 [36]:

• Hardware Layer : this layer manages the physical resources. Hardware configuration, fault-tolerance, traffic management, power management, and cooling resource management are the most common issues in this layer;

• Infrastructure Layer : this layer, also known as the virtualization layer, creates a pool of storage and computing resources, which are divided by virtualization technologies (see section 2.2), making it an essential layer;

• Platform Layer : this layer comprises OSs and application frameworks designed to minimize the burden of deploying applications directly into VMs;

• Application Layer : this layer is where the actual cloud applications reside;

24 https://aws.amazon.com/
25 https://azure.microsoft.com/en-us/
26

2.3.1 OpenStack

The solution proposed in this thesis uses the OpenStack software, a cloud OS that controls large pools of computing, storage, and networking resources throughout a datacenter, all managed and provisioned through APIs with common authentication mechanisms [38]. Chapter 3 details the implementation of an OpenStack project in the solution.

According to [39], the OpenStack platform was developed by the National Aeronautics and Space Administration (NASA), with all code written in Python and licensed under the Apache 2 license, making it an open-source cloud platform. Its main characteristics are its capability to scale up to 1 million physical machines and up to 60 million VMs, its open-source nature, meaning the code can be adapted to specific needs, and its ability to support most virtualization technologies on the market.

Cloud operators and user applications benefit from OpenStack, for it provides a flexible open-source cloud infrastructure that eases the deployment of VMs over existing resources, simplifies testing due to its modularity, and provides features that follow emerging open standards [39].

The OpenStack platform provides IaaS functionality by creating VMs on top of existing resources, with additional components providing orchestration, management, and other types of services to manage and maintain the existing VMs [38]. For each OpenStack version, there is a set of design goals, specified in [40], that the platform must fulfill:

• Basic Physical Datacenter Management: OpenStack does not assume the existence of a datacenter, so it provides the tools to operate one and make its resources available to consumers;

• Play Well with Others: by adding layers of abstraction between the platform and the end-user applications, OpenStack allows the integration of third-party open-source projects;

• Hardware Virtualisation: OpenStack provides vendor-independent APIs, giving consumers software-defined control to allocate resources in a multi-tenant environment;

• Infinite, Continuous Scaling: OpenStack provides interfaces allowing application developers to scale their applications efficiently from very small workloads to very large ones, without re-architecting their applications;

• Built-in Reliability and Durability: by providing primitives, such as reliable delivery of messages and durable storage, OpenStack allows developers to build reliable applications on top of these resources, making the application reliable;

• Customizable Integration: OpenStack does not impose any particular deployment model or architecture on applications since it allows the services to be wired together through public APIs;

• Abstract Specialised Operations: OpenStack abstracts the management of certain com-ponents behind an API, formalizing the communication between comcom-ponents;

• Graphical User Interface (GUI): a GUI makes it easier to get a broad overview of the state of cloud resources and to visualize relationships between them, and it is an easier way for new users to approach cloud platforms;


OpenStack is divided into several services that allow the insertion of components depending on a user’s needs [38]. Each service focuses on a single functionality (e.g., networking, computing, orchestration) and provides APIs to access the infrastructure resources. Figure 2.6 depicts the various services that compose the OpenStack platform.

Figure 2.6: Openstack architecture [38]

The most important OpenStack services are [41]:

• NOVA: Compute Service, i.e., provides computing resources;

• IRONIC : Bare Metal Provisioning Service, i.e., provides bare-metal resources;

• SWIFT : Object Store, i.e., stores data as objects;

• CINDER: Block Storage Service, i.e., virtualizes the management of block storage devices;

• NEUTRON : Networking, i.e., delivers NaaS through SDN;

• KEYSTONE : Identity Service, i.e., provides API client authentication, service discovery and distributed multi-tenant authorization;

• PLACEMENT : Placement Service, i.e., provides a Hypertext Transfer Protocol (HTTP) API for tracking cloud resource inventories and usages;

• GLANCE : Image Service, i.e., discovers, registers and retrieves VM images;

• HEAT : Orchestration Service, i.e., manages the infrastructure resources;

• HORIZON : the canonical implementation of OpenStack’s Dashboard;

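As an illustration of how these services expose their functionality through public APIs, the sketch below builds the JSON body of a Keystone v3 password-authentication request, the first step in using any of the services above. The user, password, and project names are placeholders, not taken from any particular deployment.

```python
import json

def keystone_auth_body(username, password, project):
    """Build a Keystone v3 password-authentication request body.

    All names passed in are placeholders for illustration only.
    """
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": "Default"},
                        "password": password,
                    }
                },
            },
            # Scoping the token to a project grants access to that
            # project's resources (e.g., NOVA servers, NEUTRON networks).
            "scope": {
                "project": {
                    "name": project,
                    "domain": {"name": "Default"},
                }
            },
        }
    }

body = keystone_auth_body("demo", "secret", "demo-project")
print(json.dumps(body, indent=2))
```

This document would be sent with an HTTP POST to the Keystone endpoint (typically /v3/auth/tokens); the token returned in the response then authenticates requests to the remaining services.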
2.4 Software Defined Networking

Legacy networks are known for being vendor-locked, complicating the addition of new hardware and protocols, as well as the network configuration. The SDN approach eases these processes, since it is "the physical separation of the network control plane from the forwarding plane and where a control plane controls several devices" [42], hence removing the dependency on proprietary software.

SDN aims to supply open interfaces that enable the development of software able to control network connectivity and the flow of network traffic, along with the inspection and modification of traffic in the network [43] [44]. By separating the control plane from the data plane, the switches become bare-metal devices that only forward packets, controlled via software programs by a logically centralized intelligence known as the SDN controller.

The technology and operational concerns that led to the development of the SDN approach are listed in [45]: the automation of functions, dynamic management of resources, orchestration of multiple network appliances, provision of multi-tenancy support, use of open APIs, ability to configure devices in real time, integration of security devices and resource management services within the network fabric, ability to incorporate innovative traffic engineering solutions, use of network virtualization, and real-time monitoring of the network.

The development, standardization, and commercialization of SDN are in the hands of the ONF27, a non-profit operator-led consortium [46]. The ONF is involved in several SDN projects, such as Central Office Re-architected as a Datacenter (CORD)28, Mininet29, and Open Network Operating System (ONOS)30.

Because the network intelligence is located in the SDN controller, the network devices on the data plane are simplified: their function is reduced to forwarding packets according to the controller’s instructions. Hence, the data plane becomes less complex, enhancing its performance and allowing the use of general-purpose hardware, thus eliminating vendor dependencies [5]. Since the switching devices are simplified and turned into low-cost solutions, the cost of service delivery is reduced [47].
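The reduced role of a data-plane device can be sketched as a simple match-action flow table: the controller installs rules, and the switch merely matches incoming packets against them and applies the corresponding action. The field names and action strings below are illustrative, not tied to any concrete OpenFlow version.

```python
# Minimal sketch of an SDN switch's match-action flow table.
# Rules are installed by the controller; the switch only matches and forwards.
class FlowTable:
    def __init__(self):
        # Each rule: (match dict, action, priority); higher priority wins.
        self.rules = []

    def install(self, match, action, priority=0):
        """Called, conceptually, by the controller via the southbound API."""
        self.rules.append((match, action, priority))
        self.rules.sort(key=lambda rule: rule[2], reverse=True)

    def lookup(self, packet):
        """Return the action of the highest-priority matching rule."""
        for match, action, _ in self.rules:
            if all(packet.get(field) == value for field, value in match.items()):
                return action
        # No rule matched: in OpenFlow this would trigger a packet-in
        # message to the controller.
        return "send-to-controller"


table = FlowTable()
table.install({"dst_ip": "10.0.0.2"}, "output:port2", priority=10)
table.install({}, "drop", priority=0)  # wildcard catch-all

print(table.lookup({"dst_ip": "10.0.0.2"}))  # matches the specific rule
print(table.lookup({"dst_ip": "10.0.0.9"}))  # falls through to the wildcard
```

The table-miss case shows where the controller re-enters the loop: unmatched packets are sent to it, and the rule it installs in response determines how subsequent packets of that flow are handled without further controller involvement.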

Nevertheless, SDN presents challenges that remain unresolved, such as the lack of a single standardized Northbound Interface (NBI), the potentially disruptive shift from traditional networks to SDN, interoperability issues with legacy network devices, and the demand for larger memory space and higher processing speed, since switching devices need to store a great number of instructions and rapidly process packets [46].

Google has designed and deployed a software-defined Wide Area Network (WAN) connecting its datacenters across the planet, operating it for three years. According to Google [48], the project, called B4, resorted to several technologies such as OpenFlow and Quagga31. It was designed to withstand massive bandwidth requirements and elastic traffic demand, and provided full control over the edge servers and network. The project displayed higher service traffic and a higher growth rate than Google’s public-facing WAN, improved fault tolerance, and cost-effective deployment of WAN bandwidth.

27 https://www.opennetworking.org/
28 https://opencord.org/
29 http://mininet.org/
30 https://onosproject.org/
31 http://www.nongnu.org/quagga/
