
(1)

Parallel applications in the cloud

Diana Naranjo Pomalaya

Parallel and Distributed Computing

(2)

Agenda

● Introduction

● MapReduce

● Solutions

○ HaLoop

○ iMapReduce

○ Pig

(3)

Global Data Center Traffic

Source: Cisco Global Cloud Index, 2013–2018

(4)

Data-intensive applications

● Sciences

○ Massive-scale simulation data analysis

○ Sensor deployments

○ High-throughput lab equipment

● Industry

○ Web-data analysis

○ Click-stream analysis

○ Network-monitoring log analysis

(5)

MapReduce

Dataflow (diagram): each map task runs Map, a local sort, and an optional Combine; the shuffle moves the sorted partitions to the reducers; each reduce task combines/merges the incoming runs and then runs Reduce.

(6)

MapReduce

○ Easy-to-use programming model (two functions: map and reduce; see the sketch below)

○ Scalability

○ Fault tolerance

○ Load balancing

○ Data-locality-based optimization

○ Designed for batch-oriented computations (N-step dataflows)

○ Low-level abstraction (combined data sets, primitive operations)
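To make the two-function model concrete, here is the canonical word-count example written against the standard Hadoop Java mapper/reducer API. It is a minimal sketch: the class names and whitespace tokenization are choices made here, and the job driver and configuration are omitted.

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map: emit (word, 1) for every token in an input line.
class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE);
            }
        }
    }
}

// Reduce: sum the counts shuffled to each word.
class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        context.write(key, new IntWritable(sum));
    }
}

Everything else on the slide (scheduling, shuffling, fault tolerance, load balancing) is handled by the framework.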

(7)

Solutions

HaLoop: loop-aware scheduler, caching mechanisms

iMapReduce: persistent tasks, input data loaded once, asynchronous execution

Pig: high-level data manipulation, execution on Hadoop

(8)

HaLoop

● Hadoop-based framework

● Supports iterative programs

● Loop-aware scheduler and caching mechanisms

(9)

Hadoop

Architecture (diagram): the client submits jobs to the master node; the task scheduler on the master schedules tasks across the slave nodes; on each slave node a task tracker creates tasks and manages their execution.

(10)

HaLoop

Architecture (diagram): the same client / master node / task scheduler / slave node / task tracker structure as Hadoop, extended with two components: loop control, which initiates map-reduce steps until the termination condition is met, and data locality support by means of caching and indexing.

(11)

HaLoop - Loop control

● Goal: place map/reduce tasks that occur in different iterations but access the same data on the same physical machine

● How (see the sketch below):

○ Keep track of the data partitions processed by each task on each physical machine

○ Assign new tasks to slave nodes that have already processed the same data partition

○ If a node is full, re-assign the task to another node
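A minimal sketch of this loop-aware placement policy in plain Java. It only illustrates the idea above and is not HaLoop's actual scheduler code; the class name, the load map, and the slot check are assumptions.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative loop-aware assignment: remember which node processed each data
// partition in earlier iterations and send later tasks for the same partition
// back to that node, so locally cached data can be reused.
class LoopAwareScheduler {
    private final Map<String, String> partitionToNode = new HashMap<>();

    String assign(String partitionId, List<String> nodes,
                  Map<String, Integer> load, int slotsPerNode) {
        String previous = partitionToNode.get(partitionId);
        // Prefer the node that already holds (and may have cached) this partition.
        if (previous != null && load.getOrDefault(previous, 0) < slotsPerNode) {
            load.merge(previous, 1, Integer::sum);
            return previous;
        }
        // Otherwise fall back to the least-loaded node and record the new placement.
        String chosen = nodes.get(0);
        for (String n : nodes) {
            if (load.getOrDefault(n, 0) < load.getOrDefault(chosen, 0)) chosen = n;
        }
        load.merge(chosen, 1, Integer::sum);
        partitionToNode.put(partitionId, chosen);
        return chosen;
    }
}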

(12)

HaLoop - Caching and indexing

● Reducer input cache: useful for repeated joins against large invariant data (less time wasted in shuffling); see the sketch below

● Reducer output cache: reduces the cost of evaluating the fixpoint termination condition

● Mapper input cache: useful in k-means-like applications (the input data does not vary)

● Cache reloading: if a node is full, all required cached data is copied to the newly assigned node
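A minimal sketch of the reducer input cache idea: invariant data shuffled in the first iteration is kept on local disk and reused in later iterations instead of being shuffled again. This is a hypothetical helper, not HaLoop's implementation; the cache directory layout and line-oriented format are assumptions.

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Hypothetical reducer-input cache: invariant data shuffled in iteration 1 is
// written to local disk; later iterations read the local copy instead of
// repeating the shuffle.
class ReducerInputCache {
    private final Path cacheDir;

    ReducerInputCache(Path cacheDir) { this.cacheDir = cacheDir; }

    List<String> loadInvariantPartition(int partition, List<String> shuffledOnce) throws Exception {
        Path cached = cacheDir.resolve("partition-" + partition);
        if (Files.exists(cached)) {
            return Files.readAllLines(cached);   // later iterations: reuse local copy
        }
        Files.createDirectories(cacheDir);
        Files.write(cached, shuffledOnce);       // first iteration: populate the cache
        return shuffledOnce;
    }
}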

(13)

iMapReduce

● Based on Hadoop

● Framework for iterative algorithms

● Concept of persistent tasks: input data is loaded into the persistent tasks only once, which facilitates asynchronous execution of tasks within an iteration

(14)

iMapReduce - Restrictions

● Map and reduce operations use the same key (one-to-one mapping)

● Each iteration contains only one MapReduce job

● Target workload: graph-based iterative algorithms

(15)

iMapReduce - Persistent tasks

● Tasks are kept alive during the whole iterative process (dormant while no data is being parsed or processed); see the sketch below

● Depends on available task slots (load-balancing problem: straggler and leader nodes)

Diagram: a chain of conventional MapReduce jobs (Job 1, Job 2, ..., each reading its input from and writing its output to the DFS) contrasted with persistent map and reduce tasks that keep running across iterations.
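The persistent-task idea can be pictured as a long-lived worker loop: the static partition is loaded once, and the task then sleeps until state data for the next iteration arrives. The sketch below is illustrative rather than iMapReduce's API; the queue-based hand-off, the weight-style map function, and the fixed iteration count are assumptions.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative persistent map task: the static (invariant) partition is loaded
// once, then the task stays alive across iterations, dormant until the state
// data for the next iteration arrives on its input queue.
class PersistentMapTask implements Runnable {
    private final Map<String, Double> staticWeights = new HashMap<>();
    private final BlockingQueue<Map<String, Double>> stateIn = new LinkedBlockingQueue<>();
    private final BlockingQueue<Map<String, Double>> toReduce;
    private final int iterations;

    PersistentMapTask(Map<String, Double> staticPartition,
                      BlockingQueue<Map<String, Double>> toReduce,
                      int iterations) {
        this.staticWeights.putAll(staticPartition);   // loaded from the DFS only once
        this.toReduce = toReduce;
        this.iterations = iterations;
    }

    void offerState(Map<String, Double> state) { stateIn.add(state); }

    @Override
    public void run() {
        try {
            for (int i = 0; i < iterations; i++) {
                Map<String, Double> state = stateIn.take();   // dormant until state data arrives
                Map<String, Double> updated = new HashMap<>();
                for (Map.Entry<String, Double> e : state.entrySet()) {
                    double weight = staticWeights.getOrDefault(e.getKey(), 1.0);
                    updated.put(e.getKey(), e.getValue() * weight);   // map: combine static and state data
                }
                toReduce.add(updated);   // hand the result to the paired reduce task
            }
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
        }
    }
}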

(16)

iMapReduce - Data management

● Input data is split into static data (invariant) and state data (variant)

● State data is passed from reduce tasks to map tasks through socket connections

● Static data is partitioned with the same hash function used to shuffle the state data (see the sketch below)

● Map and reduce tasks (paired one-to-one due to the key restriction) are scheduled onto the same worker
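A minimal sketch of the co-partitioning requirement: because static records and state records for the same key are routed through the same partition function, the paired map and reduce tasks for a partition can be placed on the same worker. The class name and hash choice below are illustrative.

// Illustrative co-partitioning: both static data and state data for a key are
// routed with the same hash, so the paired map/reduce tasks for that partition
// can be scheduled onto the same worker.
final class Partitioner {
    private final int numWorkers;

    Partitioner(int numWorkers) { this.numWorkers = numWorkers; }

    int partitionFor(String key) {
        return (key.hashCode() & Integer.MAX_VALUE) % numWorkers;
    }
}

Both the static-data loader and the state-data shuffle would call partitionFor, so a given key always lands in the same partition.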

(17)

iMapReduce - Asynchronous execution

● Map tasks can start execution as soon as their state data arrives

● No need to wait for other map tasks

● Fault-tolerance concern: a buffer saves the results of the reduce tasks so that execution can roll back to the last completed iteration

(18)

Pig

● Provides constructs that allow high-level data manipulation

● Allows employment of user-provided executables

● Compiles data-flow programs (Pig Latin) into sets of MapReduce jobs and coordinates their execution on Hadoop

(19)

Pig - Compilation and execution stages

Pipeline (diagram): the Parser (type checking, schema inference) produces a logical plan (DAG); the Logical optimizer rewrites it, e.g. filter pushdown to minimize the amount of data scanned and processed; the MapReduce compiler maps the logical plan to a physical plan (DAG), assigning distributive/algebraic operations to map, combine and reduce steps; the MapReduce optimizer refines the physical plan; the Hadoop Job Manager submits the resulting jobs and monitors their execution.

(20)

Pig - Memory Management

● Pig is implemented in Java

● Memory overflow can occur when large bags of tuples are materialized between and inside operators

● Solution: keep a list of bags ordered by estimated size in descending order and spill bags to disk when a memory threshold is reached (see the sketch below)
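The spill policy can be sketched as follows. This only illustrates the mechanism described above and is not Pig's actual memory manager; the interface, class names, and threshold handling are assumptions.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative bag spill policy: registered bags are kept ordered by estimated
// size (largest first); when estimated memory use crosses the threshold, the
// largest bags are spilled to disk until usage drops back under it.
class BagSpillManager {
    interface SpillableBag {
        long estimatedSizeBytes();
        void spillToDisk();   // the bag writes its tuples out and frees heap space
    }

    private final List<SpillableBag> bags = new ArrayList<>();
    private final long thresholdBytes;

    BagSpillManager(long thresholdBytes) { this.thresholdBytes = thresholdBytes; }

    void register(SpillableBag bag) { bags.add(bag); }

    void maybeSpill() {
        bags.sort(Comparator.comparingLong(SpillableBag::estimatedSizeBytes).reversed());
        long inMemory = bags.stream().mapToLong(SpillableBag::estimatedSizeBytes).sum();
        for (SpillableBag bag : bags) {
            if (inMemory <= thresholdBytes) break;
            long size = bag.estimatedSizeBytes();
            bag.spillToDisk();   // spill the largest remaining bag first
            inMemory -= size;
        }
    }
}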

(21)

Pig - Streaming

● User-defined functions are supported in Java and run synchronously

● Streaming executables allow other languages to be used (scripts or compiled binaries)

● Streaming executables run asynchronously (input/output queues)

(22)

Pig - Performance

Performance figures from 17 December 2010 (source: India Hadoop Summit, February 2011).

6 June 2015: release 0.15.0 available.

(23)

PigPen

● MapReduce language that looks and behaves like Clojure.core

● Supports unit tests and iterative development

● Used at Netflix

(24)

References

Cisco and/or its affiliates. Cisco Global Cloud Index: Forecast and Methodology, 2013-2018. White paper, 2014.

Yingyi Bu, Bill Howe, Magdalena Balazinska, and Michael D. Ernst. HaLoop: Efficient iterative data processing on large clusters.

Yanfeng Zhang, Qixin Gao, Lixin Gao, and Cuirong Wang. iMapReduce: A distributed computing framework for iterative computation. In Proceedings of the 1st International Workshop on Data Intensive Computing in the Clouds (DataCloud), 2011.

Thilina Gunarathne, Bingjing Zhang, Tak-Lon Wu, and Judy Qiu. Portable parallel programming on cloud and HPC: Scientific applications of Twister4Azure. 2011.

Jaliya Ekanayake, Xiaohong Qiu, Thilina Gunarathne, Scott Beason, and Geoffrey Fox. High performance parallel computing with cloud and cloud technologies.

(25)

Questions?
