
Service building in dense service provider environments


Academic year: 2021



1 Introduction 1

1.1 Motivation . . . 1

1.2 Thesis Structure . . . 4

1.3 Contributions . . . 5

1.4 Methodology . . . 6

2 Service Providers 7

2.1 The Concept Of SP: From Silos to Opportunistic Meshes . . . 7

2.1.1 Three Epochs . . . 8

Early-90s: Global, Heterogeneous, Open . . . 8

Late-2000s: User, Any(-where,-time), Scale . . . 10

2.1.2 Service Building In Dense Service Provisioning Environments . . . 11

2.2 Next-Generation Networks . . . 11

End-to-End Transport . . . 12

2.3 Literature Review . . . 14

2.3.1 End-to-End Resource Management . . . 15

2.3.2 Interdomain Topology . . . 19

2.3.3 Content-Delivery Networks . . . 21


2.4.2 Beyond Monolithic Models: Dense SP Composition . . . 29

Fragmented vs Monolithic Scenarios . . . 30

Two Use-Cases Of SP Topologies . . . 30

SP Topologies Using A Tecno-Economical Model . . . 31

3 Interdomain Topology 37

3.1 Introduction . . . 37

3.2 Network Model and Results . . . 38

3.2.1 Network Model . . . 39

Derivation Synopsis . . . 40

Network Model and Notation . . . 41

3.2.2 Model Outcomes . . . 43

Feasible Single-Pair, Directed Topologies . . . 44

Feasible End-to-end Topologies . . . 45

3.2.3 Topological Implications And Peering Models . . . 46

Peering Models . . . 47

A Graph Reduction Interpretation of Results . . . 47

Other Internet Properties . . . 48

3.3 Comparison with the Current Internet . . . 49

3.3.1 Methodology . . . 50

AS Classification Criteria . . . 50

Limitations Of The Analysis . . . 51

3.3.2 Routes From Topology . . . 52


4.1 End-To-End Inter-Domain Resource Management . . . 57

4.1.1 QoS and Supply Chains . . . 58

Two Realistic Use-Cases Yet Largely Undeployable . . . 59

QoS: The Missing Link Of The Chain . . . 60

Supplying What And How? . . . 60

4.1.2 Inter-Domain QoS: (Still) An Open Problem . . . 61

Intra-Domain QoS . . . 61

SLA Auditing . . . 62

Service Set . . . 62

4.1.3 Structure of this Chapter . . . 63

4.2 A Federation Plane . . . 63

4.2.1 Introduction . . . 63

4.2.2 Problem Discussion . . . 64

4.2.3 System View . . . 65

4.2.4 Domain Architecture . . . 66

Domain Model . . . 67

Domain Profiles . . . 69

Interdomain (Transport-) Service Discovery . . . 70

4.2.5 Offline Service Building . . . 70

4.2.6 Real-Time Operations and QoS Enforcement . . . 71

4.2.7 Evaluation of the Feedback Mechanism . . . 72

Efficiency of the congestion control . . . 73


4.3.1 Introduction . . . 75

The Need for Global Service Discovery . . . 76

4.3.2 Pointer Advertisement Strategies . . . 77

Only-Neighbors Strategy . . . 78

All-Listed Strategy . . . 79

Relay Strategy . . . 82

4.3.3 Integrated Comparison . . . 83

Results From Simulation . . . 84

4.4 Service Building and Discovery . . . 87

4.4.1 Introduction . . . 87

4.4.2 Terminology and Definitions . . . 88

Service Building Success . . . 89

Service Distribution . . . 90

Strictness of Requests . . . 91

4.4.3 Fast and Distributed Service Building . . . 91

4.4.4 Service Composition over Internet Topologies . . . 93

Illustrative Analytical Results . . . 93

Internet-alike Topologies . . . 95

4.5 Reaching Consensus . . . 98

4.5.1 Introduction . . . 98

4.5.2 A Framework For Service Selection . . . 98

4.5.3 Requirements and Algorithms For A Design . . . 102


5 CDN Peering 111

5.1 Introduction . . . 111

5.2 Interconnection of CDNs: The Efficiency Perspective . . . 112

5.2.1 Definitions . . . 113

5.2.2 Assumptions . . . 113

5.2.3 Problem Statement . . . 114

5.3 Methodology . . . 114

5.4 Experiments And Results . . . 115

5.4.1 Absolute cost with varying number of surrogates . . . 115

5.4.2 Inefficiency with varying number of surrogates . . . 117

5.4.3 With varying size of the underlying topology. . . 117

5.5 Conclusions . . . 117

6 Conclusions 121

6.1 Conclusions . . . 121

6.2 Future Work . . . 122

6.3 The Big Picture . . . 124


1.1 The operational context of a Service Provider. . . 3

2.1 Chain of value. . . 9

2.2 De-verticalization of networks. . . 10

2.3 Reference architecture of ITU-T NGN. . . 12

2.4 SP/NGN composition. . . 13

2.5 Components of a CDN. . . 22

2.6 CDNi preliminary architecture. . . 23

2.7 DAIDALOS interdomain architecture. . . 25

2.8 Daidalos access network. . . 27

2.9 Daidalos (SIP) service request with QoS. . . 28

2.10 SP composition. . . 31

2.11 Voice call in a fragmented scenario. . . 32

2.12 SP orchestration. . . 33

2.13 SP topologies. . . 35

3.1 Cascades of bilateral traffic agreements with reverse price offers. . . 39

3.2 Price cascading. . . 41

3.3 Example of an unfeasible Graph. . . 44


3.7 Feasible topologies after relaxing the equilibrium condition (sib means a sibling relation). 46

3.8 Cascading vs Peering For Three Levels. . . 47

3.9 Illustration of graph reduction: from the physical graph (left), to a business graph (middle), to a forwarding graph (right). Nodes belonging to the same tier have the same colours and sizes. 48

3.10 Link "cuts" for the edge degree distribution of the Internet using log (top) and linear (bottom) scales. . . . 50

3.11 Route Structures From Topology. . . 54

3.12 Route Violation Proportion. . . 55

3.13 Routing Violations From BGP Announcements: performance of criteria per data set (top) and averages (bottom). . . . 55

3.14 BGP Routes Inflexions. . . 56

4.1 QoS mapped to a conventional product supply chain. . . 61

4.2 Illustration of peering models: cascaded (left) and meshed (right). . . 65

4.3 Timescales of operations. . . 67

4.4 Domain model. . . 67

4.5 Border router model. (Lgr: Logger; Mkr: Marker; S: scheduler) . . . 68

4.6 Access router model. (Shap: shaper; Mkr: marker; S: scheduler) . . . 69

4.7 Offline service building example from the perspective of domain 1 and for destinations belonging to domain 9. step 1: interface numbering; step 2: elimination of domains that do not provide BGP connectivity to domain 9; step 3: service matching; step 4: selection of the least-cost path. . . 71

4.8 Real-time congestion control. . . 72

4.9 Network topology used in simulations. In domains A, B, C, D and E (bottlenecks) an illustration of queues is also depicted. . . 73

4.10 Ratio of conformant packets with increasing load. . . 74


4.15 Only-Neighbor Strategy Example. Shown: the physical topology. left: two domains updating peers. right: three ways of updating the lower-right domain after an update. . . 78

4.16 "Only-Neighbors" Strategy: example with 5 domains. Symbol ∀ means all domains are known; an additional (*) means no updates will be sent by that domain. . . 79

4.17 "All-Listed" Strategy: example with 6 domains. For iteration n, it is shown, for each domain, the locally known domains at instant t = nTq+, and the number of transactions the domain sends before t = (n + 1)Tq+. . . 80

4.18 "All-Listed" Generalization Diagram. . . 81

4.19 Transactions for N=23 in the "All-Listed" Strategy. . . 81

4.20 Relay Strategy. . . 82

4.21 3-level Relay Hierarchy . . . 83

4.22 Numerical Comparison of the Layered Scheme. . . 84

4.23 Only-Neighbors vs All-Listed Strategies (dashed). . . 85

4.24 String Topologies Evaluation by Simulation. . . 85

4.25 Scheme comparison - per domain overhead. . . 86

4.26 Scheme comparison - impact of edge degree. . . 86

4.27 Impact of increasing number of relays. . . 87

4.28 Service Building Example. S seeks a service path to D that can support more than 16 kbps and less than 50 ms one-way-delay. . . 90

4.29 Service Probability Density: η = p1/p0. . . 90

4.30 Example of Service Graph Discovery. . . 92

4.31 String analysis. left: Convolution of two uniform distributions. right: results (squares: additive; circles: convex). . . 94

4.32 Probability of Finding a Service Path with Increasing Strictness for various edge degrees. left: N=100; m=2, 3, 4; η = 1; right: N=100,200; m=3; η = 1. . . . 95


metrics overriding BGP, additive metrics using BGP AS-distance rule, additive metrics overriding BGP. right: with varying popularity. . . . 96

4.35 Transaction Count. . . 97

4.36 Illustration of the approach taken: local service functions sum to produce a global service function. . . 99

4.37 An example of service selection when two domains propose one 2-dimensional service each. 100

4.38 Selecting a set of services by consensus in three phases. . . 104

4.39 Convergence speed varying with topology sizes (left) and service error versus convergence speed for two modes ("only neighbours" and "all domains") (right). . . . 107

4.40 Quality of consensus evaluation. left: different service effort functions; right: with divergent user demand. . . 108

4.41 Quality of consensus evaluation: increasing spread of user demand. . . 108

5.1 Simplified architecture of CDN interconnection. . . 112

5.2 Example illustrating the experimental approach. A topology is partitioned in two (gray and white nodes) and the partitions are independently provisioned. . . 113

5.3 Cost, in terms of topological distance, for each topology size, with varying surrogate number (fraction to total nodes) and partition fractions. . . 116

5.4 Inefficiency cost related to the optimal provisioning cost for each topology size, with varying surrogate number (fraction to total nodes) and partition fractions. . . 118

5.5 Inefficiency cost for two sets of topologies with different link density, m = 2 and m = 5. . 119

5.6 Inefficiency cost with varying topology sizes. In the three plots, and from top to bottom, it is shown M = 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8. . . 119


2.1 Impact of Daidalos Federation Classes on Selected Cases (NP: Not Possible). . . 29

3.1 Symbols used. . . 40

3.2 Distribution Of Route Lengths. . . 52

4.1 Comparison of the four strategies. . . 84

4.2 Notation and symbols. . . 89

4.3 QoS metrics and composition examples . . . 89

4.4 Practical example of service functions. . . 107


3GPP 3rd Generation Partnership Project
3P Third Party
AF Assured Forwarding
AP Access Point
ARPU Average Revenue Per User
AR Access Router
AS Autonomous System
BGP Border Gateway Protocol
c/p customer/provider
CAPEX Capital Expenditure
CDNi CDN Interconnection
CDN Content Delivery Network
CDR Call Detail Record
CN Correspondent Node
CQoSB Core QoS Broker
CS Circuit Switching
CoA Care of Address
DHCP Dynamic Host Configuration Protocol
DNS Domain Name System


EF Expedited Forwarding
EU European Union
FTP File Transfer Protocol
FWHM Full Width at Half Maximum
GBR Guaranteed Bit Rate
GII Global Information Infrastructure
GWKS Globally Well-Known Services
IETF Internet Engineering Task Force
IGMP Internet Group Management Protocol
IMS IP Multimedia Subsystem
IM Instant Messaging
INP Internet Network Provider
IPPM Internet Protocol Performance Metrics
IPS IP Sphere
IP Internet Protocol
ITU International Telecommunication Union
ITU-T International Telecommunication Union – Telecommunication Standardization Sector
IdF Interdomain Function
LAN Local Area Network
MBGP Multiprotocol BGP
MBR Maximum Bit Rate
MNO Mobile Network Operator
MPLS Multiprotocol Label Switching
MT Mobile Terminal


NGN Next Generation Network
NO Network Operator
NSIS Next Steps In Signalling
NSLP NSIS Signalling Layer Protocol
NaaS Network As A Service
OPEX Operational Expenditure
OSI Open Systems Interconnection
OTT Over-The-Top
PDB Per-Domain Behaviour
PDP Packet Data Protocol
PHB Per-Hop Behaviour
POTS Plain Old Telephone System
PSTN Public Switched Telephone Network
PS Packet Switching
QoSB QoS Broker
QoS Quality of Service
RFC Request For Comments
RSVP Resource ReSerVation Protocol
RTT Round Trip Time
SDO Standards Development Organization
SLA Service Level Agreement
SMTP Simple Mail Transfer Protocol
SPM Service Provider Management


UE User Equipment
VAS Value-Added Service
VD Visited Domain
VOIP Video Over IP
VPN Virtual Private Network
VfR Valley-free Routing
VoD Video On Demand
VoIP Voice over IP
WKSS Well-Known Services Scenario
WLAN Wireless LAN
Wi-Fi Wireless Fidelity
WiFi Wireless Fidelity
XML Extensible Markup Language
ZQoSB Zone QoS Broker


ATNOG group.

To Faroleiros, all colleagues from the Network and Architectures group, with whom I shared a space, a relaxed atmosphere and a couple of beers; to Prof. Susana Sargento for guidance and fruitful discussions in a large part of this work; and a special mention to Dr. Eduardo Estanqueiro Rocha: as we shared almost the same timings regarding the PhD, I thank him for the friendship, the support and the remote extra hand.

To Prof. Peter Steenkiste, with his sharp, practical, focused experience of research in computer science but also, and there is no PhD without them, of paper engineering and perseverance.

A special word to my supervisor, Prof. Rui Aguiar, with his amazing ability to guide without a trace of micro-management, and with whom, more than learning all I know about moving bits, I learned a particular way of thinking and questioning. Raising good questions is, indeed, the best way to find answers. I will soon understand whether it is a blessing or a curse. At least it is fun. When you already know the answer. Or when it is really not that important. Because there is an even better question.

To Mother and Sister,

To Xica Miau,

To Ijsac and its mysterious ways,

To Cabernet-Sauvignon at EUR 2.29 from Andalusia,

To Adeline Haverland for all the support and without whom this thesis would not be possible,

To Rhianon Gale, for giving me a reason to finish the PhD as fast as possible,

and to everybody else I may shamefully be forgetting. It has been a long while since 2006.


1

Introduction

This preparatory Chapter positions this thesis with respect to the (living) background of (inter-)network engineering, briefly presents its structure, gives a summary of the contributions of the underlying Ph.D. work and, finally, explains the methodology and tools used to arrive at its results.

1.1. Motivation

By now, the Internet can hardly be called a new technology: it has been over 40 years since the Internet began with RFC 1 [1] and ARPANET, 30 years since e-mail was first specified, 20 years since the world-wide-web – an expression that nobody uses anymore – was sown at CERN by Tim Berners-Lee, 10 years since VoIP began replacing POTS (Plain Old Telephone System), 5 years since the cloud paradigm began drawing attention, 3 years since Blockbuster saw steeply declining lines because people cannot be bothered to move from a couch to rent a video when a download will not only do but will also be cheaper. On the other hand, the author still remembers a laptop (or something that looked like one) costing €7500, at current prices, some 20 years ago; nowadays, he owns a smartphone, absolutely not top of the line, with far more computational power, fancy buttons (or lack thereof), and all sorts of invisible links to the outside world, for about €100.

From another direction, in 3 decades, networks converged in a process that is not yet complete because of market and regulatory forces – absolutely nothing to do with technology: Skype works adequately, most of the time at least, on a smartphone, and it is used to make international calls for a small fraction of the cost of using regular phones (which nowadays means a mobile set and a cellular operator). E-mail, IM (Instant Messaging), immersive teleconferencing, moving very large files, etc., are now services that 30 years ago would either not be possible or existed only in the imagination of a few. It needed the right technology, O(N²) critical mass1 and the right market players, most notably ISPs (Internet Service Providers).


There are, of course, physical limits to be pushed by technology, e.g., faster links or coping with exafloods2. But, risking excessive optimism, these are all maintenance issues, not fundamental problems. Note that the author also calls the architectural problems of the current Internet "maintenance". The Internet just works, and what does not work is not missed that much3.

So the Internet, with everything around it, from bits running inside access links to diskless operating systems and single-sign-on conveniences, is not a new technology. It has even largely passed the growing-pains period, with the exception of a small number of issues4. It became a commodity. Currently, from an academic perspective, the Internet consists mainly of commercially-driven problems. Such problems live between the following: (i) predicting users' wishes and behaviours, or pushing new services into public acceptance; (ii) monetizing new services mainly by capturing other business areas, e.g., Blockbuster and Netflix, in order to drive up the ARPU (Average Revenue Per User) of classical ISPs or, broadly speaking, dot-com enterprises.

The Internet nowadays, thus, covers much more than simply connecting remote computers. It became a network of services in which people and machines are immersed, from social networks such as Facebook to watching at home a movie physically stored remotely, perhaps across an ocean. Underpinning the virtually infinite possibilities of end-user services enabled by always-on and global connectivity are Service Providers (SPs), the real organizations that provide services. They are, in the end, the vehicle that brings the package to front doors. This also means that the Internet, as an academic niche or product, is long gone.

Still today, the word ISP is mostly associated with the company that gives the user the means to connect to the Internet – which means that the underlying service is roughly connectivity. This thesis will somewhat retain this definition but will call Service Provider any entity that is able to provide any type of service, regardless of its completeness, monetary cost, usefulness, etc. In this sense, Facebook provides a social networking service, the SMTP/POP server at a university provides services to send/receive/store e-mail, an OpenID provider mediates other services' authentication steps, a Mobile IP Home Agent manages users' connectivity, etc. Furthermore, services can be created by orchestrating other services, more atomic or not.

However, Service Providers have a problem that can be summarized as follows: someone has to do it. The importance of the organization behind a service is frequently underestimated or neglected, which renders any solution unfit to be deployed in the real world. Besides a history and its own culture, every organization has a strategy, refined into practical policies and translated into day-to-day rules, which are further implemented in business processes. Perhaps the most dramatic of all policies is secrecy at all costs: the internals of a service provider can never be assumed to be known, and this fact has very few exceptions. Secrecy and self-administration are natural: organizations, be they for-profit or

2This is the common name given to the exponential growth of traffic as seen in backbones in the last few years. It is yet to be proven that this is a real problem.

3The author realises that this seemingly simple sentence is not really so.
4A good example is the practical impossibility of a quick transition to IPv6.


not, have their own goals, and technology serves business goals, never the opposite – Technology and Engineering are always means to an end and never the end in itself5.

Figure 1.1: The operational context of a Service Provider.

In a single sentence, this thesis discusses the impact of business boundaries on service offerings. The central question, which will be applied to the fields covered by this thesis, is the following: what changes when the same service is delivered not by one, but by more than one SP?

Consider two examples: GSM mobility and a CDN (Content Delivery Network). GSM mobility takes a hierarchy-like approach: always attempt handover between Base Stations controlled by the same BSC (Base Station Controller). If that is not possible, hand over between BSCs. Above this level, one would need to hand over between operators, as would be the case at a country border. Usually, this is not possible and the call is simply dropped. Note that the problem is not the technological challenge, since this would be little more than signalling between two network elements transferring information and collaborating in the reattachment of the mobile terminal. The challenge is administrative, since it would require GSM operators to cooperate at a level they would not allow given the GSM business case. The CDN example is even simpler to explain, and a full Chapter will be devoted to its analysis. Provisioning a CDN relies, roughly, on the p-median mathematical problem: given user demand and the content to be served, where should a set of servers be placed so that the consumption of network resources is minimized? Splitting a network in two halves and optimizing each half independently can never do better, and generally does worse, than optimizing the whole.
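The CDN half of this argument can be made concrete with a brute-force toy (a hypothetical 6-node line topology with unit-length links, not the experiments of Chapter 5): a globally provisioned pair of surrogates is compared against two independently provisioned partitions.

```python
from itertools import combinations

def pmedian_cost(dist, clients, facilities):
    """Total distance from each client to its nearest chosen facility."""
    return sum(min(dist[c][f] for f in facilities) for c in clients)

def best_placement(dist, clients, candidates, p):
    """Brute-force p-median: try every p-subset of candidate sites."""
    return min(
        (pmedian_cost(dist, clients, subset), subset)
        for subset in combinations(candidates, p)
    )

# Toy example: 6 nodes on a line; distance between nodes i and j is |i - j|.
nodes = list(range(6))
dist = [[abs(i - j) for j in nodes] for i in nodes]

# Global CDN: one operator places 2 surrogates anywhere for all clients.
global_cost, _ = best_placement(dist, nodes, nodes, 2)

# Two peering CDNs: the line is split unevenly into {0,1} and {2,3,4,5},
# and each partition places 1 surrogate for its own clients only.
left_cost, _ = best_placement(dist, [0, 1], [0, 1], 1)
right_cost, _ = best_placement(dist, [2, 3, 4, 5], [2, 3, 4, 5], 1)

print(global_cost, left_cost + right_cost)  # global 4 vs partitioned 5
```

Here the global optimum places surrogates at nodes 1 and 4 (total distance 4), while the uneven split forces a total of 5; a symmetric split would happen to tie, which is exactly the point: partitioned provisioning is at best equal to, and in general worse than, global provisioning.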

Necessarily, this thesis provides only a partial discussion, given the many borders of the problem. As shown in Figure 1.1, a Service Provider is bordered by the limits of Technology, a Legal/Political/Regulatory context, an Economic/Business vision, and the Users (necessarily Consumers, but not only).

It should now be clear that such a transversal perspective on SPs could be the subject of many Ph.D. plans. It is therefore of the utmost importance to scope the work described here. The services under the spotlight in this thesis are IP transport services, i.e., delivering IP packets from source to destination on time: Routing, Quality-of-Service, Mobility, and Large-scale content delivery (coupled with, to a lesser extent, Group Communications). From the perspective of inter-SP service building, such services become,


respectively, peering between Autonomous Systems, interdomain QoS (Quality of Service)6, interdomain mobility, and CDN peering (with Real-Time Group Communications).

1.2. Thesis Structure

After the present introductory Chapter, Chapter 2 will discuss, from an architectural and abstract perspective, the problem of inter-Service-Provider coordination, with a focus on IP transport services. It will overview standard service architectures and technologies, particularly those from the ITU-T and the IETF, and review related literature.

Chapter 3, using analysis and game-theory concepts, gives a rationale for why the Internet shows, at the interdomain routing level, a hierarchical and tiered topology. It will be argued that this is not (only) a consequence of technology (e.g., to improve scalability) but that business relationships between IP carriers shape the underlying technical architecture. This discussion thus provides a strong case for the main topic of this thesis, which is to show the implications of inter-SP relations on the network architecture, with business relations taking precedence, particularly in scenarios with multiple players.

Chapter 4 contributes a proposal for e2e resource management. It argues that, in the sense of e2e flow control at the IP layer, resource management is, in fact, a minor requirement when compared to business requirements, and that this explains why, in effect, operators do not provide it or build business cases around it – for example, QoS-assured services – except at a very high level and in specific business/premium cases. Having argued in previous chapters for the precedence of business over architecture, this Chapter will further argue that, if QoS at the flow level is to be a business case taken seriously by carriers, the missing piece of the puzzle is the service definition. An interdomain QoS architecture will be proposed that tackles this specific topic, here called interdomain service alignment. In a nutshell, if true end-to-end QoS is desired, all ASes (or a large percentage of them), by regions, must support the same service, on penalty of breaking e2e assurances. The proposed solution seeks a distributed framework for consensus, balancing business policies with user preferences, by which operators agree on a common set of deployed services, necessarily small given the scale of operations. The proposal uses several techniques suitable for the interdomain environment, such as gossip protocols and multi-dimensional objective functions.

Chapter 5 brings back analysis and centres on Content Delivery Networks (CDNs). The CDN is a concept that started in industry and that academia has been using as a pillar to re-think the Internet architecture, e.g., as Content-Centric Networks or Name-Oriented Networking. The discussion will cover current CDN architectures, with a focus on the (re-)emerging topic of CDN peering. A key contribution of this Chapter is to quantify the loss of efficiency of CDN peering when compared to the case of a global CDN centrally

6Writing interdomain instead of inter-domain sometimes raises complaints, which the author understands. Since this word became so common, it earned a meaning of its own, just like nobody now writes Inter-Net or multi-media. This thesis will, then, predominantly use interdomain.


managed and provisioned. The inter-SP component will thus be given a definitive number to make the case of this thesis.

Chapter 6 closes this thesis by reviewing the work done and pointing, at a high level, to future directions, including pinpointing high-level gaps in the contributions of this thesis that should be filled in the future.
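Several mechanisms in this structure, notably Chapter 4's consensus framework, build on gossip protocols. The sketch below is a generic push-pull averaging gossip, shown only to illustrate that primitive; it is an assumed, simplified variant for illustration (all names are hypothetical), not the service-selection algorithm proposed in Chapter 4.

```python
import random

def gossip_average(values, rounds=50, seed=1):
    """Push-pull averaging gossip: each round, every node pairs with a
    random peer and both adopt the mean of their two values. All nodes
    converge to the global average with no central coordinator."""
    rng = random.Random(seed)
    v = list(values)
    n = len(v)
    for _ in range(rounds):
        for i in range(n):
            j = rng.randrange(n)
            mean = (v[i] + v[j]) / 2
            v[i] = v[j] = mean
    return v

# Five domains each start with a local preference (e.g. a proposed
# delay bound, in ms); gossip drives them toward a shared value.
prefs = [10.0, 40.0, 25.0, 60.0, 15.0]
result = gossip_average(prefs)
print(result)  # all five values converge close to 30.0, the mean of prefs
```

Each pairwise exchange preserves the global sum, so every domain converges to the network-wide average using only local, pairwise interactions – the property that makes gossip attractive at interdomain scale.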

1.3. Contributions

This thesis results from diverse work in the area of interdomain networking, and several publications were produced. The most relevant are the following:

Vitor Jesus, Susana Sargento, Rui L Aguiar, A Scalable and Business-Oriented Framework for Inter-Domain Quality-of-Service, Intl Conference on Wireless Information Networks and Systems (Winsys), July 2007, Barcelona

Vitalis Ozianiy, Vitor Jesus, Susana Sargento, Rui L. Aguiar, Neco Ventura, Virtual Network Capacity Expansion Through Service Outsourcing, IEEE Wireless Communications & Networking Conference (WCNC), Las Vegas, USA, April 2008

Vitor Jesus, Rui L. Aguiar, Peter Steenkiste, A Stateless Architectural Approach to Inter-domain QoS, IEEE Symposium on Computers and Communications (ISCC), Marrakech, Morocco, July 2008

Vitor Jesus, Rui L. Aguiar, Peter Steenkiste, Supporting Dynamic Inter-Domain Network Composition: Domain Discovery, IEEE International Conference on Communications (ICC), Dresden, Germany, June 2009

Vitor Jesus, Rui L. Aguiar, Peter Steenkiste, Linking Interdomain Business Models to the Current Internet Topology, 3rd IFIP/IEEE Intl Workshop on Bandwidth on Demand and Federation Economics (BoD), Osaka, Japan, April 2010

Vitor Jesus, Rui L. Aguiar, Peter Steenkiste, Discovery and Composition of Per-Domain Behaviours - a Service Abstraction Approach, IEEE International Conference on Communications (ICC), Cape Town, South Africa, May 2010

Vitor Jesus, Rui L. Aguiar, Peter Steenkiste, Topological Implications of Cascading Interdomain Bilateral Traffic Agreements, IEEE Journal on Selected Areas in Communications (JSAC), Special Issue on "Measurement of Internet Topologies", v.29, n.9, October 2011


Consensus, International Conference on Computing, Networking and Communications (ICNC), Maui, Hawaii, USA, February 2012

Vitor Jesus, Rui L. Aguiar, Figures of Merit For The Placement (In)Efficiency Of Interconnected CDNs, IEEE Symposium on Computers and Communications (ISCC), Cappadocia, Turkey, July 2012

1.4. Methodology

This thesis used a combination of tools to arrive at its results. Whereas some results were obtained from prototypes and testbeds, others were obtained strictly through mathematical analysis, as is the case of the work on interdomain SLA cascading in Chapter 3, which used concepts of Game Theory. Computational tools, both on the system administration side (Linux utilities such as awk, sed, scripting, etc.) and on the software development side (C/C++ and some Java), were also used. Simulations, and particularly the ns-2 simulator, were also an important tool.


2

Service Providers

This Chapter sets the context of this thesis by reviewing the concept of Service Provider as seen in society in general, including Standard Development Organizations and academic literature. It will finish with a practical discussion on designing interdomain services and on scenarios where Service Providers become highly specialized and a single end-user service is the result of complex multi-party coordination. The focus is on end-to-end transport services.

2.1. The Concept Of SP: From Silos to Opportunistic Meshes

Practically speaking, the notion of Service Provider can be seen as having changed twice: around the mid-90s and in the late 2000s. The difference between the first and the second epoch is very clear; it is less clear, however, what the current epoch is. Some of its drivers, at least, are well-known. This thesis fully builds on the first change of paradigm while heavily drawing on at least some of the requirements of the third paradigm.

The mid-90s saw the massification of the Internet and the profound influence of the political atmosphere of the 80s. Until the late 80s, SPs were mainly vertical, and each network or SP provided one well-defined service. Telephony and TV are probably the best known. Services were designed from the bottom up, with specific end-user requirements in mind: a phone call is everything but vague. From the -48V that powered copper lines to the DTMF tones controlling a voice weather application, SPs were highly homogeneous and specific. Around the 90s, everything changed, when the Internet proved the value of separating transport and service from applications, and the potential of an open network where virtually anyone could plug in and offer any kind of service able to be carried over a generic packet-oriented network. Conventional Telecommunications SDOs, most notably the ITU, took some time to embrace this new paradigm. However, when they did, they were able to combine their top-down design impetus with the new societal and technical background.

By the second half of the 2000s, a second change began to be acknowledged, although it is still under way. Technological advances, such as the proliferation of fast and broad wireless links, multi-technology and


powerful terminals, or social networks, are indeed producing changes. Despite industry efforts, it is not clear whether this is a new paradigm or just old requirements and services that are only now feasible, e.g., cloud computing. It is, nevertheless, a substantial increase of freedom in requirements, as illustrated by the Always-Best-Connected principles [2], where resources and computing power are (relatively) abundant.

2.1.1 Three Epochs

If three key words had to be chosen to characterize each transition, with all the limitations incurred, for the first change they might be "global, heterogeneous, open"; for the second, "user, any(-where,-time), scale". This selection of keywords further shows that paradigm shifts are more of a generalizing nature than a disruptive one: previous requirements become a special case of a more general set of requirements.

Early-90s: Global, Heterogeneous, Open

Global does not only mean independence of location, whether for reception or transmission of data; it also means anyone, both in a supplementary sense, as with two Telephone Operators in different countries, and in a complementary one, as in combining a TV Broadcaster with an Internet Data Carrier.

The second key word is heterogeneity – of services, of technologies, of entities, of applications, of business roles, of requirements, of users, of terminals, of location, etc., in a never-ending list. The word convergence then comes to mind along with, to some extent, mobility which, if coupled with seamless, takes us seamlessly1 to the next paradigm. Heterogeneity is the main driver of partitioning networks in strata, in an attempt to group functionalities in families and exploit commonalities while remaining flexible enough to support a broad set of requirements. Generalizing the concept of network layers, strata arise, with three being the most common number: the transmission stratum (up to, roughly, the IP layer), the service stratum and the application stratum. Other strata will be discussed in this thesis, with an administrative stratum taking high relevance. The stratification of networks is also the answer to coping with the heterogeneity of applications. As networks evolve from a bottom-up, single-service approach, resources become abstractions to services and services become abstractions for requirements, until a unified view of networks around data arises. The ITU, in the GII initiative, leading to NGN, presented later, uses the term 'information' – GII stands for Global Information Infrastructure. To some extent, the ITU missed the point by using the term "information", even if it means "audio, text, data, image, video, etc." [3]. Data, or perhaps interaction, would probably be better words.

The third key word has deep roots in the societal changes that occurred in the 80s, including the political background in the western world2. A discussion of the political changes during that period lies outside this thesis, but it can be said that a belief in de-regulation and openness started to become dominant. Combining this frame of mind with the focus on heterogeneity, one obtains a de-verticalization of the telecom

1Pun intended.


Figure 2.1: Chain of value.

business: networks become as open as possible and the eco-system becomes dominated by many parties freely interacting. The notion of a value chain, from the famous mid-80s book of Michael Porter [4], became central in networks: each party adds value in a value chain that ends in the end-user but has no clear beginning or producer – see Figure 2.1. Business is run by value, not exactly cash flows, and money is, in this sense, a mere approximation of value in a competitive market. Openness also means, then, the promotion of free competition and lower entry barriers so that, to create true value quickly, many parties need to cooperate and, quite often, at the same time, compete. The idea of a Quality-of-Service value chain will be explored in Chapter 4, where, besides other conclusions, it will become clearer how complex inter-party coordination is and that, in the end, it has roots in openness. Note that, without openness, efficient and high-value value chains cannot exist, since markets work by trial-and-error. Hence, openness was, and still is, central in public policies and in a large part of the economic literature. Interestingly, SDOs found themselves in the middle of a major source of inconsistency: openness and de-regulation easily go against strict standards, unless they begin standardizing everything that moves and is still. When it comes to physical interoperability, there is no way to escape this and the standards explosion is unavoidable. However, as one goes up in an architecture, the key is to standardize not applications but frameworks able to support widely varying requirements, and to standardize only well-confined technologies, such as protocols, while leaving their use open.

Coupled with openness, another concept that also directly links to network stratification is de-verticalization. In order to support abstraction (in terms of standardizing cooperation mechanisms and not services) and to cope with openness while creating the conditions for fast creation of value, networks cannot be vertical – or, as will be called later, monolithic, in opposition to fragmented SP models. As Figure 2.2 shows, networks were deconstructed so that new configurations are possible for new services. SPs are then supposed to become, in one way or another and at different scales, highly specialized – somewhat fulfilling the vision of Adam Smith's division of labour – so that, when recombined, dynamically and according to the business context, new high-value services can be quickly created and provisioned.


Figure 2.2: De-verticalization of networks.

Late-2000s: User, Any(-where,-time), Scale

When the Internet became a commodity, it became the general-purpose global network carrying anything that can be digitalized, and new requirements were set. This may not exactly be a paradigm shift, since most of the visible changes concern scale and/or enablers of long-foreseeable services. For example, large-scale video streaming is not exactly disruptive – it only now became feasible, with homes connected by broadband links and content brokers using content delivery networks. Note, however, that scale arises from new business models. Some ten years ago, Blockbuster was a big name in video rental and Netflix's digital distribution was only starting. Ten years later, Blockbuster has ceased to exist and Netflix, one of the biggest traffic sources in North America, is talking about discontinuing the delivery of movies by mail. Interestingly, whereas one of the most spoken sentences ten years ago was "content is king", showing a really narrow vision of the Internet, one can say that content is king, indeed, but only now, at least judging from the scale of live traffic. Currently, what matters seems to be unlimited interaction, from social networks to true cloud computing where the effort lies in an invisible backend. An anywhere/anytime mode of use is also starting to be the rule, with ubiquitous (and problematic for operators) broadband links and small powerful terminals. In fact, users are now mainly driving the research agenda [5] and SPs or vendors are finding it increasingly hard to set their own agendas and lead consumer adoption. The failure of 3D TV, users "cutting the cord" and watching TV online, conventional wireless SPs being late in provisioning capacity because users are one step ahead (as the battle between AT&T and iPhones in New York shows) are all examples that users are driving requirements.

This state of things empowers the user in such a way that true technical limitations are frequently forgotten. Users are, in fact, getting used to having anything at any time. Besides pushing the limits of networks, this is starting to reconfigure service topologies – from a structured environment and managed services, one is facing probabilistic service deployments and a mesh of SPs. For example, true mobility dramatically complicates radio planning and the best an SP can do is to try to predict the shape of the time


function of the resource demand. As will be discussed later, the first approach to multi-party services is the simple, scalable and naturally occurring hierarchy: small SPs aggregate to medium-sized SPs, which aggregate to large SPs and so on, in a chain that typically has no more than 3-5 levels. Destroying this hierarchy, one is left with a short-lived mesh of SPs actively and dynamically composing their services on behalf of the user – and a progressively unpredictable one.

2.1.2 Service Building In Dense Service Provisioning Environments

This thesis builds on the previously described context: globalization, heterogeneity, openness, user-centricity, anywhere/anytime fast service provisioning and unprecedented scale. Whereas for some chapters the current state of the art is an enabler, for others it offers only unaccomplished visions, visibly underway in some areas but clearly absent from others.

The remainder of this Chapter starts by briefing the reader on Next-Generation Networks and end-to-end transport services as seen by SDOs, most notably ITU-T. Then, related academic work is reviewed.

2.2 Next-Generation Networks

The never-expiring term Next-Generation Network (NGN) is mostly associated with ITU-T since, at the beginning of the 2000s, the NGN initiative was triggered with the Y.2000 specifications [6]. A main driver was the increasing popularity of the Internet as a general-purpose network, at a moment when it was mature enough to allow ITU not only to combine its standards and perspective on telecommunications services, inherited from the Global Information Infrastructure standardization initiative [3], but also to think ahead and set a telecommunications agenda that is still quite up-to-date. NGN gives strong emphasis to open competition and to service/network convergence [7], thus aligning/updating its standards with current market trends. Furthermore, being a vertical standardization body, it covers and stitches together areas, at least from an architectural perspective, that IETF, the main source of IP-related standards, leaves open, such as – as will be covered in the next sections – end-to-end QoS, generalized mobility and open service interoperability/creation.

Inherited from the GII concept [6], NGNs start by decoupling transport (e.g., QoS), services (e.g., VoIP) and Applications (e.g., 3rd-Party (3P) Value-Added Services (VAS)), while fully adopting the IP paradigm – see Figure 2.3. Three strata3 are defined so as to fully decouple packet transport, services and applications. Such decoupling of functions is also driven by service openness, in order to focus the architecture on the ease of service creation, deployment and management, not only vertically (e.g., by non-telco 3rd parties, including end-users) but also horizontally (e.g., inter-SP cooperation).

3Technically, Recommendations use only two, Transport and Service. The author decided to promote the Application


Figure 2.3: Reference architecture of ITU-T NGN.

As seen in Figure 2.3, quite important for this thesis are the native interconnection interfaces that cover, besides the Transport stratum, the service components: on the horizontal axis, all components of the Service stratum (the Network-Network Interface, NNI, and the Service Network Interface, SNI, a later addition) and, on the vertical axis, Applications (Application Network Interface, ANI) that 3P (and end-users) can use to access the infrastructure in partnership with the SP. The NNI/SNI, in particular, are the means by which NGNs can be partitioned into separate administrative domains. Another interesting idea in NGNs, increasingly strengthened in later additions, is that an NGN need not only serve end users but can also act as an intermediate point, as would be the case of multiple NGNs performing media transcoding between two end-users served by two different domains. The notion of domain gained further granularity by allowing administrative sub-units of customer/access/core, and by differentiating the NNI from the Internal-NNI (INNI), as would be the case of a WiFi micro-operator only operating a small access network – see Figure 2.4. In general, an NGN natively considers inter-SP interfaces and both vertical and horizontal decomposition, inherited from the GII framework.

The following section specifically reviews ITU-T’s approach to end-to-end Transport as it was used as a starting point in this work.

End-to-End Transport

The transport stratum is divided into two parts: the Transport Functions, composed of the functional elements that directly handle traffic (routers, bridges, wireless access points, middleboxes, etc.), and


Figure 2.4: SP/NGN composition.

the Transport Control Functions, which control the functional elements based on several types of inputs. Also included in the Transport stratum are the Mobility Management Functions. Note that transport entities also handle in-band signalling, such as packets of the RSVP [8] protocol (for on-path resource reservations), or media transcoding needed by some applications.

The Transport stratum is under the full control of a Transport Control for which, in terms of QoS, the central element is the Resource and Admission Control Function (RACF). This function, basically an IETF Bandwidth Broker (later discussed), consists of two sub-modules: a Policy Decision module, interfacing the Service stratum and the Transport stratum and managing a Resource Control module, which directly controls transport functional entities. Quite important to inter-domain operations is the interface of the Policy Decision module, which is part of the NNI and can be used to request resources from other administrative domains, as in an e2e, multi-domain QoS-enabled path (the Ri interface combined with the Rs interface [9]).

Interdomain resource reservations and, to a large extent, multicast routes can be handled in two general ways: dedicated Resource Management Elements can communicate directly; or – probably the preferred way for operators, given that it avoids interacting directly with the infrastructure – resources can be reserved using the Service stratum, totally bypassing the Transport stratum. The problem, though, is that this leaves open what to do if, in between the two NGNs, there is a network that does not support QoS in this way or has a totally different QoS model (including none).

Coping with inter-network heterogeneity at the physical layers is the object of a large body of work in NGN, most of which is defined in Recommendations Y.1000-Y.1999 (Internet protocol aspects). Not only is a large set of adaptation procedures defined (e.g., adapting different technologies to a common set of services), but a set of QoS classes is also defined [10]. Such service classes are, at first glance, quite generic, even if only a few are defined, suggesting the coverage of the most popular services such as VoIP, web browsing or video-on-demand. Classes are defined as an object whose components are traffic parameters such as delay and packet loss, applicable from UNI to UNI. Parameters are detailed in Recommendation Y.1540 [11], which


also defines how to compose QoS parameters across network domains and how to engineer e2e paths4. With respect to this, Recommendation Y.1542 presents a number of methods for determining e2e QoS budgets and avoiding impairments.
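The composition of QoS parameters across domains, as just described, can be sketched numerically. Below is a minimal Python sketch under the usual simplifications – mean one-way delays add along the path, and per-domain packet losses are independent; all per-domain figures are hypothetical, and jitter composition, which is not simply additive, is omitted:

```python
from functools import reduce

def compose_e2e(segments):
    """Compose per-domain impairments into e2e values.

    Each segment carries a mean one-way delay (ms) and a packet loss
    ratio.  Delays add; the e2e loss assumes losses are independent
    across domains (a simplification of Y.1540-style composition).
    """
    e2e_delay = sum(s["delay_ms"] for s in segments)
    p_survive = reduce(lambda acc, s: acc * (1.0 - s["loss"]), segments, 1.0)
    return {"delay_ms": e2e_delay, "loss": 1.0 - p_survive}

path = [
    {"delay_ms": 20.0, "loss": 0.001},   # access domain
    {"delay_ms": 45.0, "loss": 0.0005},  # transit domain
    {"delay_ms": 15.0, "loss": 0.001},   # remote access domain
]
budget = compose_e2e(path)
print(budget["delay_ms"], round(budget["loss"], 6))  # 80.0 ms, loss ~0.0025
```

Checking such a composed value against an e2e SLA bound is then a direct comparison; budget-apportionment methods essentially run this computation in reverse, splitting an e2e objective into per-domain shares.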

The limitations of the proposed methods are, however, noticeable since, for example, topology discovery – most notably, of the service topology5 – is not described and is mainly set to be beyond the scope of the Recommendation. It is, though, proposed to use BGP, the de-facto inter-domain routing protocol, to discover, e.g., QoS parameters but, as will be discussed later in this thesis, that is a method that leaves the hardest problems of interdomain QoS open. Recommendation Y.1543 [13] covers, from a high-level perspective, such problems by defining a measurement framework, tightly coupled to the former IETF group IPPM [14], that, among others, sets SLA management as a requirement (e.g., "be sufficiently accurate to enable SLAs with financial penalties to be administered"). However, only a high-level architecture and a set of requirements are defined and the author argues that the most important problem is left open. Such a problem could be stated as how to perform binding measurements over administratively closed domains and an unknown and highly dynamic topology.

2.3 Literature Review

If services are decomposed into the three NGN strata – network, end-user services and applications – one finds that, going down the strata, it becomes increasingly harder to find consistent and general frameworks for inter-SP service provisioning – in other words, federation solutions for the network stratum. At the network stratum, such frameworks are virtually non-existent and the best one can find are local solutions for specific parts of a service. Inter-SP/interdomain resource management, or QoS, is a paradigmatic case, but the same applies to other subsystems such as interdomain mobility or interdomain multicast. This thesis will not argue the feasibility of such a general framework either way (whether it is feasible or not); nevertheless, this Chapter shall discuss the concept of SP composition at the network level, with a focus on e2e resource management.

Before anything else, the difficulty of reviewing the literature on these topics should be noted. It stems from the fact that concepts from several adjacent areas are applicable but, nevertheless, not designed for the operator scenarios of this thesis. For example, grids are networks of heterogeneous and administratively independent nodes that jointly provide a service such as distributed processing (see, for example, [15]). Dealing with independent nodes is an important component in inter-SP services; however, it is not the only one (support for business models is another). Therefore, the reviewed literature will focus on operator-centred scenarios and on transport-related functions.

4This is a starting point of the contribution presented in Section 4.4 and published in [165]

5As will be discussed later, this thesis will define service topology as an e2e topology, most likely spanning multiple


As has been discussed before, even at the service stratum the problem is highly complex, mainly because it crosses the borders of technology into regulation and business models. Except for support areas that mostly touch the Application stratum, such as Identity federation [16], which are out of the scope of this thesis, the problem is typically approached from two angles: resource-abstraction architectures, notably Service-oriented Architectures, and specific parallel industry solutions with patchy hooks to the network plane, such as QoS by application-layer redirection to a Content Delivery Network. Charging for services has, though, efficient solutions, including for complex many-service composition [17], which should not come as a surprise since it is an indispensable subsystem of any commercial telecom service. After these two general approaches, a literature review of specific network-based approaches will be done.

Trying to establish a taxonomy of interdomain service provisioning, with some focus on inter-domain QoS but also applicable to mobility and multicast, there are three main approaches to the problem: standards-based, network-based and based on Service-oriented Architectures (SoA). The first, intrinsically manual but nevertheless the most common, can be described as "follow the crowd". This means that, on one hand, small operators have no option, for many reasons such as large operators protecting their market, other than to follow large operators. On the other hand, large operators follow a commonly set agenda of offered services, usually highly conservative in terms of service complexity. This can be seen everywhere: operators have similar prices, similar technologies, similar services and avoid innovative, but risky, products, giving priority to marketing conservativeness and mainstream products. This approach is out of the scope of this thesis and is only used as a starting point for discussion. The following subsections discuss the remaining approaches.

2.3.1 End-to-End Resource Management

Quality-of-Service is a broad and complex topic that, considering how much it impacts end-user experience, shows remarkably little (complete) adoption. The inherent complexity of QoS is not only technical; nowadays it is mainly a problem that stems from business interactions. The net result, after decades of research and proposals, is that Quality-of-Experience, of which QoS is only a subset, has become the major bet, pursued through collateral and transversal solutions.

Resource Management or, as will frequently be used interchangeably, QoS, arises from the set of transport techniques and related control interactions that, altogether, assure to some degree that end-users get services as expected at the traffic level. In practical terms, QoS assures that a flow of packets collectively arrives at its destination within a window of delay. As such, it needs coordination at multiple levels: cross-layer (from L1 to Application), forwarding topology (from bridges to Application-Layer gateways), user and service planes (e.g., cross-stratum triggers), and administrative (domain policies, business models, pricing strategies, etc.). VoIP is a classical example to illustrate these interactions. Packets need to be "heard" no more than about 250 ms after they have been "spoken" and they should arrive at a regular pace, typically with a jitter below 5%, for common codecs to work well and given the expectations of average users. QoS results from the cooperation of electronics that handle sound (speakers, microphones), packetization and encapsulation of voice, ordered and controlled link-layer transmission, L2/L3 QoS


mapping and execution (e.g., L1/L2 technology-specific), L3 e2e routing, service management and triggering. Moreover, all these technical considerations are subject to multiple, administratively independent, concatenated transport policies which map business policies and models to technical requirements [18], such as forwarding policies at all layers and e2e. To complete the picture, other network subsystems such as mobility [19][20][21] or multicast strongly interact with QoS [22]. Given the focus of this thesis, only the interdomain aspects of QoS architectures shall be discussed.
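The VoIP numbers above can be turned into a simple mouth-to-ear budget check. Only the roughly 250 ms bound comes from the text; the per-component figures below are illustrative guesses:

```python
# Mouth-to-ear delay budget check for the VoIP example in the text.
# The 250 ms bound is from the text; component values are illustrative.
BUDGET_MS = 250.0

components_ms = {
    "codec_framing_and_packetization": 30.0,  # e.g. a G.729-like codec
    "jitter_buffer": 60.0,                    # absorbs delay variation
    "network_one_way_delay": 120.0,           # access + transit + access
    "endpoint_processing": 15.0,              # capture/playout, OS, drivers
}

total_ms = sum(components_ms.values())
slack_ms = BUDGET_MS - total_ms
print(total_ms, slack_ms)  # 225.0 25.0 -> within budget
```

Note how the network only receives part of the budget: the codec and the jitter buffer consume a fixed share before a single packet crosses a domain border, which is precisely why the e2e coordination discussed above is so tight.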

In the IP world, IETF's IntServ [23] and DiffServ [24] are the de-facto QoS reference models for the Internet, although other proposals exist (e.g., [25]). IntServ aims at assuring e2e guarantees, with flexible requirements set by applications at per-flow granularity. It therefore needs at least two elements: complying routers that support per-flow queue management and an e2e signalling protocol, for which RSVP [8] is well-known. Virtually any service is supported by IntServ since the signalling protocol is able to carry any relevant combination of QoS parameters. However, performing per-flow management hop-by-hop is (even today) highly unscalable, because state increases with the number of managed flows – thus creating problems in forwarding elements – and lacks robustness, since state needs to be coordinated e2e. DiffServ, on the other hand, alleviates these requirements by managing traffic at the aggregate level, allowing/predefining (very few) traffic classes that are signalled in packet headers and performing admission control at the domain level. In other words, given a few traffic classes, users or edge routers mark packets with a DiffServ Code Point (DSCP), edge routers in the domain perform admission control – possibly with the aid of Bandwidth Brokers [26] that keep records of and/or authorize the resources requested by applications – and core routers handle packets at class granularity (Per-Hop Behaviours) by reading DSCPs and mapping them to the previously defined QoS classes. Therefore, DiffServ is much more scalable, at the expense of a new network element (a Bandwidth Broker and/or heterogeneous routers) and a much coarser granularity in handling traffic. For example, IntServ defines highly deterministic services such as Controlled Load [27] (routers handle prioritization and a user always sees a lightly loaded network) and Guaranteed Service [28], which hard-reserves resources.
On the other hand, DiffServ defines a set of PHBs shared by many flows that are statistical in nature (no matter how accurate admission control is [29]), such as Expedited Forwarding [30], for Premium services, and Assured Forwarding [31], for better-than-best-effort delivery.
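The DSCP marking just described is directly visible to end hosts through the standard sockets API: the DSCP occupies the upper six bits of the (former) IPv4 TOS byte, so EF (DSCP 46) becomes TOS value 184. A minimal sketch follows – the helper name is the author's, and, as the admission-control discussion above implies, this is only a request: DiffServ edge routers are free to police or re-mark whatever a host asks for.

```python
import socket

DSCP_EF = 46    # Expedited Forwarding PHB
DSCP_AF41 = 34  # Assured Forwarding class 4, low drop precedence

def mark_socket(sock, dscp):
    """Request DSCP marking for packets sent on this socket.

    The DSCP sits in the upper six bits of the TOS byte, hence the
    shift.  Edge routers may re-mark or police the requested value.
    """
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mark_socket(s, DSCP_EF)
tos = s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(tos)  # 184, i.e. 46 << 2
s.close()
```

Any UDP datagram subsequently sent on `s` carries the EF codepoint, which a core router then maps to its configured PHB without any per-flow state.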

In other words, the main difference between IntServ and DiffServ is that the latter gains simplicity in the data plane at the cost of complexity in the control plane. In a way, IntServ has no control plane, since the signalling protocol basically activates resources along the data path. DiffServ sharply decouples data from control and brings administrative domain borders to the fore – which raises very different problems, ones that a standards-based, straightforward IntServ/RSVP minimizes. In fact, a large part of the original work presented in Chapter 4 of this thesis starts exactly here.

Current QoS implementations use a mix of DiffServ and IntServ, such as IntServ-over-DiffServ [32], as is the case of 3GPP, which basically assumes per-flow management at the user-network interface and DiffServ between the edges of each administrative domain. Research then turned to problems connected to inter-domain cooperation, localized optimizations and the integration of QoS with other subsystems. RSVP


is now mainly used for Traffic Engineering and MPLS [33], although an interdomain extension exists [34], and is planned to be progressively replaced by the NSIS (Next Steps In Signalling) framework [35] in scenarios where on-path, e2e signalling is required. NSIS is a generic signalling framework that, in principle, is compatible with many applications (e.g., measurements) and can be engineered to cope well with several QoS models, such as the e2e inter-domain mixed model proposed by Monteiro et al. [36] and in the EuQoS project [37], where an e2e signalling packet behaves differently according to the QoS architecture at each router or domain. State aggregation techniques, such as BGRP [38] or SICAP [39], have also been proposed, minimizing (although not solving) scale problems by aggregating similar state on interdomain links where a huge number of flows mix.

With DiffServ, inter-domain negotiation and coordination became an open end. On one hand, architectural solutions were proposed. Combined with an inter-domain QoS service, the Scavenger Service [40], SIBBS (Simple Interdomain Bandwidth Broker Signalling) [41] was an output of the QBone/Internet2 project and provided a protocol for Bandwidth Broker (BB) communication and resource requests, a topic later developed in [42][43]. Hence a structure of SLAs (Service Level Agreements), SLSs (Service Level Specifications) and TCSs (Traffic Conditioning Specifications) is derived. Such negotiation could use, for example [44], an extension of the COPS (Common Open Policy Service) protocol, initially used for the control of a Policy-Enforcement Point by a Policy-Decision Point [45]. Economic frameworks regulating interdomain resource usage through pricing are a promising technique as long as a wrapping business framework exists – such as iREX [46], which automates QoS service distribution and regulation using concepts such as price, reputation, offer and demand. Later chapters in this thesis will contribute solutions to this problem. Cheliotis et al. [47] and Hwang et al. [48] have also approached the problem from an economic perspective by abstracting resources as tradeable goods and designing stock markets, thus regulating capacity and usage at a global scale – very similarly to the (by then) trend of Bandwidth Markets. Other architectures further abstract resources and begin handling domains more as business entities that control resources than as resources that need to be controlled by business policies – such as the Enthrone project [49], which provides a clear separation between service and transport by creating a network of SLAs derived from the routing information and following a cascaded model in which the next physically connected domain is responsible for assuring guarantees for the whole path beyond that point.
A cascade model, a key topic of Chapter 3, is also the basis of GSMA's IPX/GRX, where SPs can interconnect through a privately managed common network, thus controlling QoS and adaptation problems [50]. The SLA cascading concept is a realistic approach to QoS, as Griffin et al. [51] and Howarth et al. [52] propose. If one assumes that all (or perhaps most) domains use the same traffic classes, termed Global Well-Known Services (GWKS) – an argument that will be challenged later – a set of QoS planes can be designed, onto which traffic is mapped according to its desired QoS. Such planes can be advertised using extensions of BGP. Once such QoS planes exist, resources can be more easily managed with small extensions (at the protocol level) [52][53] and Traffic Engineering techniques can be more easily applied [54].
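The cascaded model lends itself to a toy sketch: each domain contracts only with the next one, which answers for the whole remaining path, so an e2e bound is assembled recursively without any single domain seeing the full topology. All names and figures below are hypothetical:

```python
def cascaded_quote(chain):
    """e2e delay bound quoted under a cascaded SLA model (toy sketch).

    `chain` holds each domain's own delay bound (ms), ordered from the
    source towards the destination.  Domain i only adds its own bound
    to whatever the next domain quotes for the rest of the path -- it
    never inspects the topology beyond its direct neighbour.
    """
    own, *rest = chain
    if not rest:
        return own
    return own + cascaded_quote(rest)

# Three concatenated domains: the first can contract a 52 ms e2e bound
# while only ever talking to the second.
print(cascaded_quote([10.0, 30.0, 12.0]))  # 52.0
```

A hub model would reach the same bound by summing all segments, but would require every domain to expose its offer to a central broker – precisely the topology exposure that the cascaded model avoids.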

Inter-domain QoS has matured enough to depart from a pure resource problem and become a very specific, clearly formulated problem that now has roots in business, namely (QoS-)service discovery, e2e inter-domain composition, SLA auditing, network outsourcing, etc. Such a techno-economical business


framework can be merged with other transport services and will be discussed in later sections. In fact, e2e QoS is still lacking in today's networks not because of a lack of means to support intra-domain QoS but due to the complex problem of coordinating domains. The problem can be decomposed into a number of sub-problems – intra-domain QoS, SLA auditing, service set coordination – for which commercially proven solutions exist, with the exception of the service set selection sub-problem. These three sub-problems are now discussed and integrated, preparing the solution presented in Chapter 4.

Intra-domain QoS can be achieved using a combination of capacity planning (e.g., overprovisioning or Traffic Engineering techniques), overlay routes as in MPLS, or on-demand configuration of routing elements (from wireless access points to core routers) coupled with a user/network signalling protocol such as RSVP or NSIS. In general, intra-domain QoS has been an object of research almost since the Internet was envisioned. Although there are still challenges [55], the problem has robust solutions, as many research projects [56] and commercially deployed networks have proven. For example, INPs (Internet Network Providers) offering triple-play (VoIP, TV, Internet) typically use some form of QoS (and even multicast); advanced service architectures, such as 3GPP's IP Multimedia Subsystem, explicitly support QoS since services are designed on top of IP [57].

The SLA auditing problem is also open and involves two sub-problems. First, INPs need to verify e2e SLAs on a regular and controllable basis in order to take valid action upon failures and complaints from customers. Since an e2e path involves several INPs, besides the accuracy of remote measurements and the e2e coordination it involves, accurately locating the service impairment is a challenging task. The second sub-problem consists in billing correctness. There must exist means for non-repudiation: upon a dispute, a domain must be able to prove that some remote domain has used its resources. The reverse must also be true: a domain must be protected against resource stealing, such as another domain using resources that a third domain has paid for. Measurement and billing architectures, including both combined, have also been extensively discussed as independent research problems, and solutions exist [58]. For example, in terms of billing, domains already meter and bill traffic, both towards customers (e.g., metered billing) and among themselves, since interdomain peering always involves measuring traffic.

Specifying a QoS service is not, by itself, a complex problem [59], especially if one uses well-known and standardized QoS metrics such as ITU-T's [62]. Third-party specialized solutions have also been proposed and validated in academia [60]. However, choosing a set of services in each domain is an open problem since each service set needs to integrate with every other domain for true e2e QoS [58][61]. To further complicate the problem, services are driven by user needs, which change very fast and depend on time and space, as the previous use-cases illustrate. Finally, the number of services per domain cannot be large, for scalability reasons. Furthermore, as discussed before with the use-cases, QoS encompasses several provisioning models: the VoIP use-case is a long-term service and needs global support, but video streaming needs support only for a short period in time and has a fixed location (the streaming server). In addition, QoS nearly always needs bidirectionality since users across the world use similar applications.


A pragmatic way forward consists in all domains supporting the same set of services, perhaps with a few extra per region. Even if such a set of services is not ideal for all domains, every domain will be better off since, otherwise, inter-domain QoS may not evolve from the current situation. Therefore, a serious obstacle consists in the service alignment problem: how to choose, in a coordinated way, such a set of services. Virtually all approaches to inter-domain QoS use what can be called Globally Well-Known Services (GWKS): all domains support the same services. Most often, it is implicit that some standardization organization or industry forum defines the GWKS. However, until now, that has not been the case and it is here argued that this is an expected result, since a GWKS based on standardization raises several problems:

• Although there are several service sets (e.g., IntServ, DiffServ, ITU-T, 3GPP, etc.) that are easy to extend, they are very different and typically architecture-dependent. A globally deployed service set cannot depend on any architecture under penalty of very low adherence.

• It is also debatable which organization would define the service set, how, and controlled by whom. Given the diversity of INPs in terms of business goals, user base, etc., the prospects of reaching a consensus are debatable because choosing services depends on business models, a concept eminently local and private, and many times regulated by local laws. Letting a global forum recommend business models is probably a contradiction in itself.

• Finally, and probably most importantly, whereas applications and their requirements change by the day, standardization bodies work on a time frame of months at least. By the time a recommended set of services is issued, INPs have probably lost interest in it.

GWKS is, therefore, a solution that by itself raises several obstacles. Note that, without well-defined services, business models such as the ones this thesis proposes cannot be set in motion, for the simple reason that, informally speaking, it is unclear what domains are selling beyond the vague notion of resources.

2.3.2 Interdomain Topology

The Internet is currently composed of around 30 000 administrative domains, the Autonomous Systems (ASes), stitched together by the Border Gateway Protocol (BGP). This section discusses the inter-AS network, which is driven by business at least as much as by technology.

BGP is not, technically, a routing protocol but a reachability protocol; its algorithms closely follow those of gossip protocols (neighbour-by-neighbour diffusion of data), discussed later. BGP is an exterior routing protocol (i.e., it negotiates routes with external domains) and is currently in version 4 [63]. The protocol is complex, not only in its specification but also in terms of safety (i.e., global stability) [64], since it is a path-vector protocol governed by policies. Policies regulate the external connectivity of an AS and control what is exported to neighbours, such as chunks of addresses, and how. Inter-domain policies frequently collide at a global scale, generating instabilities that can disconnect whole countries for long periods of time. Besides providing the top level of the end-to-end routing hierarchy, with the AS
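The gossip-like, path-vector nature of BGP can be sketched as follows. This is a toy model, not the BGP-4 specification: policy-based preference is abstracted to shortest AS-path, and the only BGP mechanism kept is loop prevention by rejecting any advertisement whose AS-path already contains the receiving AS. All names are hypothetical:

```python
from collections import defaultdict

def path_vector_converge(links, origin):
    """Toy path-vector diffusion: each AS repeatedly advertises its best
    AS-path towards `origin` to all neighbours until no path improves.
    Advertisements containing the receiver's own ASN are discarded
    (BGP-style loop prevention)."""
    neighbours = defaultdict(set)
    for a, b in links:
        neighbours[a].add(b)
        neighbours[b].add(a)

    best = {origin: (origin,)}        # ASN -> chosen AS-path to origin
    changed = True
    while changed:                    # iterate neighbour diffusion to a fixed point
        changed = False
        for asn, path in list(best.items()):
            for nb in neighbours[asn]:
                if nb in path:        # loop detected: discard this advertisement
                    continue
                candidate = (nb,) + path
                if nb not in best or len(candidate) < len(best[nb]):
                    best[nb] = candidate
                    changed = True
    return best
```

For example, with links [(1, 2), (2, 3), (1, 3)] and origin 3, AS 1 converges on the direct path (1, 3) rather than (1, 2, 3). Real BGP replaces the shortest-path tie-break with a policy-driven decision process, which is exactly where the global-stability problems cited above originate.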
