
Alleviating the Impact of Churn in Dynamic Grids

K. Abdelkader, and J. Broeckhove

Abstract—The utilization of desktop grid computing in large-scale computational applications is an important issue at present for solving compute-intensive problems. However, such large-scale distributed systems are subject to churn, i.e., the continuous arrival, departure, and failure of hosts. In this paper we address the problem of churn in dynamic grids, and evaluate the impact of reliability-aware resource allocation on the performance of the system.

Index Terms—auction markets, grids, grid economics, resource management, spot markets.

I. INTRODUCTION

The utilization of desktop grid computing in large-scale computational applications is an important issue at present. Platforms such as BOINC [1], [2] and SZTAKI [3], [4] that used to provide large-scale intensive computing capability have attracted recent research interest. Such platforms are referred to as volunteer grids or public resource grids [5], [2], where the hosts or providers are typically end-users' public PCs (e.g. in homes, offices, universities or institutions) located at the edge of the Internet. Studies aimed at evaluating host availability have shown that providers connect to and disconnect from the grid without any prior notification. This effect is called churn [6]. To optimize the performance of a system that is subject to churn, we must consider the churn characteristics of providers, i.e. the rate of connection/disconnection and the duration of the corresponding time periods. It is also significant, from the perspective of the grid user, to consider the number of jobs failing and succeeding without resubmission being required [7]. These effects have to be taken into account when devising an appropriate allocation policy for such systems.

Our main contribution in this paper is to focus on how to reduce the effect of churn by reducing job failures due to provider downtime prior to job completion. We look for allocation strategies that are economically based, but that also take into account the apparent reliability of the providers. We aim to do this in as simple a manner as possible, by taking into account historical information on the providers and using it to screen the bids they submit to the market.

II. MODELING CHURN

In this paper the model adopted for churn plays a key role. Basically, one models the distribution of the time durations during which a resource is available or unavailable. When the machine is the resource, available/unavailable translates to machine uptime/downtime. This is similar to the system-based churn model as described in

Manuscript received March 8, 2012; revised April 01, 2012.

K. Abdelkader is with the Department of Computer Sciences, Higher Institute of Comprehensive Professions, Ghadames, Libya.

J. Broeckhove and K. Abdelkader are with the Department of Mathematics and Computer Sciences, University of Antwerp, Middelheimlaan 1, 2020, Antwerp, Belgium. E-mail: Jan.Broechove@ua.ac.be, Abdelkader.Khalid@gmail.com.

[8]. One can also look at the availability of the CPU, which might differ, due to the provider policies or preferences, from the machine availability.

There exists quite a bit of literature on the subject. The authors in [9] have tackled the problem of modeling machine availability in enterprise-area and wide-area distributed computing settings. They have used three different data traces to measure resource availability, namely the UCSB SCIL1 data set, the Condor [10], [11] data set and the Long-Muir-Golding data set. The results indicate that either a hyperexponential or Weibull model most effectively represents machine availability periods.

The authors in [12] focus on CPU availability. This differs from the previous work that looks at machine availability. Here one includes the case where, without the machine going down, the user or some monitoring daemon makes the CPU unavailable for grid work. The authors have measured and characterized the time dynamics of availability in large-scale Internet distributed systems with over 110,000 hosts. Their characterization has focused on identifying patterns of correlated availability. They instrumented the BOINC [1] client to record the start and stop times of CPU availability. The goal was to detect patterns of availability using the K-means algorithm [13] for clustering resources by their availability traces.

In [14], the authors studied the availability of resources, including CPU availability. They relied on trace data gathered from over 48,000 Internet hosts participating in the SETI@home project under BOINC [2]. They attempted to show that a deployment of enterprise services in a pool of volatile resources is possible and incurs reasonable overhead.

The authors in [15] have analyzed the Failure Trace Archive (FTA), which comprises public availability traces of parallel and distributed systems. They inspected the basic statistics of the traces and fitted probability distributions for modeling failures in terms of availability and unavailability intervals. They implemented a toolbox to facilitate the comparative analysis of failure traces. This toolbox was implemented in Matlab, enhanced with several open source Matlab packages. Using the toolbox, they made a uniform and global statistical analysis of failure in nine distributed systems. One key finding was that the Weibull and Gamma distributions are often the best candidates for availability and unavailability distributions.

In this work we consider provider availability, a binary value that indicates whether a provider is reachable and responsive. This corresponds to the definition of availability in [16], [17]. By contrast, the authors in [18], [19] addressed the problem of CPU availability instead of provider or host availability. Of course provider unavailability implies CPU

1At the University of California, Santa Barbara (UCSB) they gathered


Fig. 1. Provider availability and unavailability time periods indicated in the uptime and downtime intervals.

unavailability, but the converse is not true, particularly when multi-processor and/or multi-core machines are involved. The state transition of provider availability to unavailability and back is depicted in Figure 1, which represents a timeline with the following time intervals:

• uptime stage: a provider is available, and when a job is allocated to the provider it uses all the CPU power of that provider.

• downtime stage: a provider has withdrawn from the grid system because of a policy decision, shutdown, etc.

Churn is modeled with two provider-level characterizations. Firstly, the uptime length distribution, which is one of the most basic properties of churn. It captures the period of time that the providers remain active in the grid system each time they appear. Secondly, the downtime, which can be defined as the interval between the moment a provider departs from the grid system and the moment of its next arrival in the system (see Figure 1). Most churn studies use these two distributions to model churn. A provider's lifetime is the time between the moment that a provider first participates in the grid system and the time that the provider finally and permanently exits from the grid system.
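As a concrete illustration, churn of the kind described above can be simulated by alternately drawing uptime and downtime lengths from the Weibull distributions whose shape (K) and scale (λ) parameters are listed in table I. The Python sketch below is our own illustration, not the GES implementation; the function name is hypothetical and the pl05 parameters are taken from table I.

```python
import random

# Weibull shape (K) and scale (lambda) parameters of the pl05 churn
# model, as listed in table I.
UPTIME_K, UPTIME_LAMBDA = 0.33, 19.35
DOWNTIME_K, DOWNTIME_LAMBDA = 0.36, 5.59

def provider_uptimes(horizon, rng=random):
    """Alternate uptime/downtime periods until `horizon` time steps.

    Returns the list of (start, end) uptime intervals, i.e. the periods
    during which the provider is present in the grid.
    """
    t, intervals = 0.0, []
    while t < horizon:
        up = rng.weibullvariate(UPTIME_LAMBDA, UPTIME_K)        # uptime length
        down = rng.weibullvariate(DOWNTIME_LAMBDA, DOWNTIME_K)  # downtime length
        intervals.append((t, min(t + up, horizon)))
        t += up + down
    return intervals
```

Sampling many such timelines with λ scaled up or down by 25 percent reproduces the kind of sensitivity study performed in Scenario I below.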

We will in the remainder of this paper refer to the provider's reliability over an interval of elapsed time [t_a, t_b]. Given uptime periods uptime(k) for k = 0, 1, 2, ..., starting respectively at time t^i_k and ending at t^f_k, the reliability is defined as the aggregate of the uptime periods within that time interval, divided by the elapsed time:

R = \frac{\sum_k \left[ \min(t_b, t_k^f) - \max(t_a, t_k^i) \right]}{t_b - t_a}    (1)

The reliability index R thus represents the fraction of the elapsed time that the provider was available in the time interval from ta to tb. Obviously, the higher R the more available the provider. This is a narrow definition that equates reliability with provider availability.
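In code, the reliability index of equation (1) amounts to clipping each uptime period to [t_a, t_b] and summing the clipped lengths. This is a minimal Python sketch under our own naming; it assumes uptime periods are given as (start, end) pairs.

```python
def reliability(uptimes, ta, tb):
    """Reliability index R: fraction of [ta, tb] covered by uptime periods.

    Each (start, end) uptime period is clipped to [ta, tb]; periods that
    fall entirely outside the interval contribute nothing.
    """
    covered = sum(max(0.0, min(tb, end) - max(ta, start))
                  for start, end in uptimes)
    return covered / (tb - ta)
```

For example, a provider that was up during [0, 5] and [10, 15] over the elapsed interval [0, 20] has R = 0.5.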

III. GES MODEL

In this section we present the model and scenarios used in the Grid Economic Simulator to analyse methods for alleviating the effects of churn in dynamic grid systems.

The simulation model consists of three key elements. Firstly, a set of N geographically distributed grid providers ("resource owners") denoted by P_1, P_2, ..., P_N, each of which is committed to deliver computational power. In addition, there is a group of potential providers, i.e. providers that are in a waiting state but ready to join and deliver computational resources to the grid.

Secondly, a market for resource allocation and job scheduling. All resource owners follow the same pricing strategy for determining the outcome of the bidding process in the market.

Fig. 2. An overview of the auction market architecture in GES.

Thirdly, M resource consumers or "users", denoted by U_1, U_2, ..., U_M, that are also geographically distributed, each with a queue of jobs to be executed. A job is specified by its job length and a budget that is used to acquire resources for the job execution. In our simulation, the job lengths are randomly selected from a uniform distribution. Each job is a CPU-bound computational task. The consumers are subdivided into four groups, each with different deadlines for its jobs to be finished. That is, the jobs of each group have to be completed before the deadline of that particular group, with the initial budget that has been allocated to the job.

The consumers interact with resource brokers that hide the complexities of grids by transforming user requirements into schedules for jobs on the appropriate computing resources. The brokers also manage the job submission and collect the results when the jobs are finished. For instance, consumer U_m, where m = 1, 2, ..., M, sends the job J_{j,m} with its bidding value b_{j,m} to the broker. In accordance with the consumer's request, one of the available resource providers P_n, where n = 1, 2, ..., N, will receive this request. The broker plays a complex role encompassing a wide range of tasks, i.e. it is an intermediary between resource providers and consumers. Besides this functionality, the broker provides information about the status of CPU usage in the grid system.

Finally, an entity that functions as a bank is used. When a job failure occurs, the broker sends a report to the bank. The bank refunds the money that had already been prepaid by U_m to the account of P_n. This enables the consumer to recover the money and use it to resubmit the failed jobs.


A. Decentralized marketplace

As mentioned previously, GES applies market-based principles for resource allocation and application scheduling. In this section we describe how resources are priced. In effect, we adopt an auction market for the pricing of computational resources. In contrast to the previous work [20], where the motivation was focused on price stability using a commodity market, the auction market has been engineered to be more realistic for geographically distributed systems.

The decentralized auction is a first-price sealed-bid (FPSB) auction with no reserve price (see Figure 2), where the auctioneer collects all received bids and subsequently computes the outcome. Each bidder can submit only one bid. The resource is allocated to the highest bidder at the end of the auction. Consumer U_m typically has its own valuation v(R_{i,n}) for resource R_{i,n} to bid. The consumer U_m communicates its willingness to pay, b_{j,m}, for resource R_{i,n} and the required processing time for job J_{j,m}. The resource information concerning resource R_{i,n} of provider P_n consists of information about the CPU, such as the CPU speed μ_i. The information on job J_j consists of the job length l_j, the job deadline d_j and the budget B_j available for execution of the job.
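Determining the outcome of such an auction is straightforward: the highest of the sealed bids wins and sets the price. The following Python sketch is our own illustration with hypothetical names; it shows the winner-determination step for one resource.

```python
def fpsb_outcome(bids):
    """First-price sealed-bid outcome for one resource.

    `bids` maps a consumer id to its single sealed bid. The resource is
    awarded to the highest bidder, who pays its own bid (no reserve
    price); with no bids the resource stays unallocated.
    """
    if not bids:
        return None, 0.0
    winner = max(bids, key=bids.get)
    return winner, bids[winner]
```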

When the auction ends by awarding the highest bid, the auctioneer charges the winner an amount c_n (i.e. c_n = b_{j,m}) per time step of the job for resource usage. That is, the charges are determined at allocation time and remain fixed throughout job execution. The time required for J_{j,m} to execute on R_{i,n} and the associated cost are computed using equations 2 and 3 respectively.

T(J_{j,m}, R_{i,n}) = \frac{l_{j,m}}{\mu_i}    (2)

B(J_{j,m}, R_{i,n}) = c_n \cdot T(J_{j,m}, R_{i,n})    (3)
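Equations (2) and (3) translate directly into code; the sketch below uses our own function names.

```python
def execution_time(job_length, cpu_speed):
    """Equation (2): T(J, R) = l / mu, in time steps."""
    return job_length / cpu_speed

def execution_cost(charge_per_step, job_length, cpu_speed):
    """Equation (3): B(J, R) = c * T(J, R), the total fixed charge."""
    return charge_per_step * execution_time(job_length, cpu_speed)
```

A job of length 100 on a provider with speed μ = 4 thus runs for 25 time steps and, at a charge of 2 per step, costs 50.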

B. The role of churn

When churn is incorporated in the above model, leading to a dynamic grid system, jobs may fail because the provider withdraws from the grid system. Users can also withdraw from the grid system. The churn itself is modeled through system uptimes and downtimes as explained in section II. As a matter of fact, we have only included provider churn in the model. In case a consumer withdraws from the market, its results are simply not recovered and it will have to resubmit its jobs. There is, however, no impact on the functioning of resource allocation in the grid system as a whole.

When providers withdraw from the market, job executions fail. Consequently, consumers have to be reimbursed and have to resubmit their jobs. In an effort to meet their deadlines, consumers may have to increase their bids. Thus it is necessary to try to submit jobs to providers that are not likely to fail. In the next section we propose such an approach.

C. Alleviating the role of churn

In order to alleviate the adverse effect of churn on the usefulness of the grid system we propose a simple, straightforward algorithm that exploits only the historical information on uptimes and downtimes. This information is maintained in the grid information system and is certainly not privileged information. The basic quantity used in the algorithm is the reliability index introduced previously (equation 1), for provider P_i in the time interval from t_a to t_b:

R(P_i) = \frac{\sum_k \left[ \min(t_b, t_k^f) - \max(t_a, t_k^i) \right]}{t_b - t_a}    (4)

where t^i_k and t^f_k are the start and end times, respectively, of the kth uptime period of P_i in that time interval. The reliability index thus represents the fraction of the elapsed time that the provider was available in the time interval from t_a to t_b, and characterizes each provider individually. We also consider a threshold number, which we denote γ and refer to as the reliability threshold. This threshold is then applied in a very straightforward manner: consumers will only bid with providers P_i such that

R(P_i) \ge \gamma    (5)

That is, providers whose reliability index exceeds γ. This has the effect of screening out the less reliable providers. It is of course a simple heuristic rule, based on the providers' track records. It involves neither any statistical evaluation nor any quality-of-service elements. It does, however, lead to quantitative improvements in the job failure rates due to churn, as we shall see from the simulation results in the next section.
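The screening step itself reduces to a filter over the provider pool. A minimal Python sketch under our own naming, reusing the clipped-uptime computation of equation (4):

```python
def eligible_providers(uptimes_by_provider, ta, tb, gamma):
    """Return the providers whose reliability index over [ta, tb] >= gamma.

    `uptimes_by_provider` maps a provider id to its list of (start, end)
    uptime periods; consumers bid only with the providers returned here.
    """
    def reliability(uptimes):
        covered = sum(max(0.0, min(tb, e) - max(ta, s)) for s, e in uptimes)
        return covered / (tb - ta)
    return [p for p, ups in uptimes_by_provider.items()
            if reliability(ups) >= gamma]
```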

IV. SIMULATION RESULTS

In this section, we evaluate the effect of churn in the above model through three simulation scenarios. The key parameters that apply in all scenarios discussed in this section are listed in table I. The relative workloads assigned to the grid system are roughly the same for those scenarios that we compare with and without churn. The relative workload φ can be defined as the ratio of the aggregate workload l to the aggregate computational capacity α:

\phi = \frac{\sum_m l_m}{\sum_n \alpha_n}    (6)

This relative workload φ strongly affects the system performance. Because of this, scenarios where we directly compare the effects of churn by executing the scenario with and without churn have been designed to run under the same relative workloads.
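Computing φ is a one-liner; the sketch below (our own naming) simply makes equation (6) explicit.

```python
def relative_workload(job_lengths, capacities):
    """Equation (6): aggregate workload over aggregate capacity."""
    return sum(job_lengths) / sum(capacities)
```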

In all experiments we have divided the users into four groups, each characterized by a different range of deadlines to be met for the jobs that the user needs to have executed (see table I). In some of the experiments we have also subdivided the providers into groups with different parameters for the uptime churn distribution. Simulation parameters that differ per scenario are reported in the appropriate subsection.

A. Performance metric


TABLE I
SIMULATION PARAMETERS.

Simulation steps                 2000
Number of consumers              6000
Number of providers              1000
Initial budget                1000000
d_i (Group1)            {1, ..., 100}
d_i (Group2)           {1, ..., 1300}
d_i (Group3)           {1, ..., 1700}
d_i (Group4)           {1, ..., 2000}

microsoft99 scenario     K       λ
  uptime                0.55   35.30
  downtime              0.60    9.34

pl05 scenario            K       λ
  uptime                0.33   19.35
  downtime              0.36    5.59

TABLE II
SCENARIO I SIMULATION PARAMETERS WITH PL05 DISTRIBUTION.

Job duration in time steps               {100, ..., 110}
Nr. of jobs per user at injection step   3

pl05 case      K       λ
  uptime 1    0.33   14.51
  uptime 2    0.33   19.35
  uptime 3    0.33   24.19

Jobs can fail for two reasons: because the provider executing them churns out of the grid system, or because they cannot acquire the necessary computational resources within deadline and within their budget constraint. We are interested in studying the former effect.

In order to discriminate between both effects we define the failure rate F_rate as follows:

F_{rate} = \frac{FJ}{TJ} - F_{rate}^{noChurn}    (7)

where FJ is the number of jobs that fail, TJ is the total number of jobs, and F_{rate}^{noChurn} is the failure rate with no churn.

Thus, the failure rate represents the fraction of jobs that fail, but we do not count those that would fail even if there were no churn in the system. That is to say, the failure rate measures the job failures directly attributable to provider churn in the grid system.
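The metric of equation (7) can be sketched as follows (our own naming); the no-churn baseline is measured in a separate run of the same scenario with churn disabled.

```python
def churn_failure_rate(failed_jobs, total_jobs, no_churn_rate):
    """Equation (7): fraction of jobs failing, minus the fraction that
    would fail anyway in a churn-free run of the same scenario."""
    return failed_jobs / total_jobs - no_churn_rate
```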

B. Scenario I: The effect of the mean uptime

In this scenario we want to explore how sensitive the evolution of the grid system (the providers, users, and executed jobs) is with respect to the parameters of the churn model. We have done a number of experiments in this respect and report here on one such experiment that is representative of this type of exploration.

We have run the simulation with the parameters as in tables I and II. We use the pl05 churn model with the parameter determining the uptime at three different values: λ = 14.51, 19.35, 24.19. The middle value is the actual pl05 value; the other two are an increase and a decrease by 25 percent, respectively. An increase of λ represents an increase of the average provider uptime.

As the mean uptime increases or decreases for different λ, the number of providers churned out of the grid system decreases and increases, respectively. This inverse relationship confirms what one would expect a priori.

Fig. 3. Scenario I: the number of jobs resubmitted at least once, at least twice, etc. with churn of the pl05 distribution.

Figure 3 indicates the number of jobs that need to be resubmitted because of failure at least once, at least twice, etc. Again we show the results for simulations based on the three different λ parameters. The total number of jobs initially in the system for this case is 18000.

The results of Figure 3 lead to a number of observations. Apparently this system has a heavy workload and churn: approximately one in two jobs needs to be resubmitted at least once, and one in five at least twice. The tail of multiple job submissions is quite long.

The differences between the different λ cases are as one would expect: smaller λ means shorter uptimes and leads to more resubmissions; larger λ means longer uptimes and leads to fewer resubmissions. There is no noticeable effect on the multiplicity of the resubmissions.

TABLE III
SCENARIO II SIMULATION PARAMETERS.

microsoft99 case
  Nr. of jobs per user at injection step   7
  Job duration in time steps               {40, ..., 50}

pl05 case
  Nr. of jobs per user at injection step   3
  Job duration in time steps               {100, ..., 110}

C. Scenario II: The threshold algorithm

In this scenario, the experiments are conducted using the parameters listed in tables I and III. In the experiments we explore the effect of applying the threshold algorithm. The differences in job duration between the microsoft99 and pl05 cases are related to the differences in the average uptime of the two churn distributions. They have been set to generate roughly identical relative workloads in the grid system in both cases.

For each provider, at every point in time, the reliability index is computed. Users then screen providers, i.e. they submit bids for computational resources only with those providers whose reliability index exceeds a preset threshold value. We explore the effect of changing the threshold on the job failure rate observed in the grid system.


Fig. 4. Scenario II: the job failure rate for different thresholds γ, for the microsoft99 and pl05 data sets (failure rate in % versus reliability threshold in %).

The first observation is somewhat disappointing: it basically indicates that the threshold algorithm does not really improve the failure rate compared to the situation with γ = 0%, i.e. when no reliability screening is applied. The second observation is rather easy to understand: when one increases γ, the pool of providers available for bidding shrinks. Thus, the higher γ, the smaller the set of available providers. Even though these might be the more reliable ones, the computational capacity shrinks in such a manner that jobs simply fail to find computational resources to run on and fail to meet their deadlines.

In the panels of Figure 5 the results of the same experiments are shown, but now differentiated per user group. Four subgroups of users have been introduced that each differ in the deadlines that are set for their jobs. The relevant parameters are listed in table I. The deadlines for Group1 are the most stringent, those of Group4 the least stringent.

The results in these figures indicate that the failure rate in the pl05 case is more responsive to the application of the threshold algorithm than that in the microsoft99 case. This observation is consistent with the aggregate (over the user groups) failure rate shown in Figure 4.

The results also indicate that the improvement in the failure rate, even if slight, is most pronounced when the deadline is the least stringent, and vice versa. We have no clear, unambiguous indications as to the cause of this relation because of the complexity of the system. However, we conjecture that the effect is most probably due to the fact that the more stringent the deadline, the less effective the mechanism of resubmission, and because of that also the less effective the screening of unreliable providers.

TABLE IV
SCENARIO III SIMULATION PARAMETERS.

microsoft99 case
  Nr. of jobs per user for load 1   7
  Nr. of jobs per user for load 2   4
  Nr. of jobs per user for load 3   2
  Job duration in time steps        {40, ..., 50}

pl05 case
  Nr. of jobs per user for load 1   3
  Nr. of jobs per user for load 2   2
  Nr. of jobs per user for load 3   1
  Job duration in time steps        {100, ..., 110}

Fig. 5. Scenario II: job failure rates per consumer group (Group1-Group4) for different γ, for the microsoft99 (top panel) and pl05 (bottom panel) distributions (failure rate in % versus reliability threshold in %).

D. Scenario III: Relative workload

In this scenario we study the effect of the system workload on the failure rate. To do so, we run the same experiments under different workload conditions by changing the number of jobs per consumer (see table IV for a listing of the simulation parameters).

Figure 6 shows the failure rates as a function of the reliability threshold γ for each of the workload conditions, both for the microsoft99 (top panel) and pl05 (bottom panel) churn distributions. It is obvious here that the lighter the effective workload on the grid system, the more pronounced the improvement of the failure rate due to the threshold algorithm. This can be understood in the following sense. The application of the threshold criterion has two inherent, counteracting effects: the higher the threshold is set, the more reliable the providers satisfying the criterion, but also the fewer the providers passing it. The first effect will lower the failure rate because providers are less likely to churn out and cause jobs to fail. The second effect will increase the failure rate because there are fewer providers available, which lowers the total computational capacity of the grid and causes jobs not to finish before their deadline or not to run at all. At a certain threshold value γ an optimum equilibrium between these opposing effects is achieved and the failure rate is at its lowest value, given the other parameters of the problem.


Fig. 6. Scenario III: the effect of workload on the failure rate for different γ, for the microsoft99 (top panel) and pl05 (bottom panel) churn distributions and 5 provider groups (failure rate in % versus reliability threshold in %; one curve per number of jobs per consumer).

V. FUTURE WORK

Future work in this area includes the introduction of more refined definitions of reliability, and more refined criteria for screening the providers that also take into account the elapsed time in the current uptime interval. In addition, the formulation of quality-of-service guarantees needs to be investigated.

VI. CONCLUSION

In desktop and peer-to-peer dynamic grids, job failures due to churn in the grid system are inevitable. There is a need to minimize the impact of these job failures on the quality of service provided by such grids. In order to find effective approaches that can alleviate the impact of churn we need to model churn. In our work we have used models that are available in the literature.

In the context of the Grid Economics Simulator framework we have implemented these churn models and developed a resource allocation scheme based on first-price sealed-bid auctions. We have then used this to investigate an algorithm, which we refer to as the threshold algorithm, to alleviate the impact of churn. The algorithm is based on a simple criterion which limits bids to reliable providers only, where reliability is defined as having a reliability index above a certain threshold. The reliability index is a simple measure of availability based on aggregate uptime.

We have analysed experiments in a number of scenarios and arrive at the conclusion that, firstly, the effect of the threshold algorithm is fairly transparent and, secondly, it has the potential of improving failure rates significantly if the effective workload in the grid system is not too high.

REFERENCES

[1] D. P. Anderson, “A system for public-resource computing and storage,” in Proc. of the 5th IEEE/ACM International Workshop on Grid Computing, 2004, pp. 4–10. [Online]. Available: http://boinc.berkeley.edu/grid-paper-04.pdf

[2] ——, "BOINC: A system for public-resource computing and storage," in Proceedings of the Fifth IEEE/ACM International Workshop on Grid Computing. IEEE Computer Society, 2004, pp. 4–10.

[3] P. Kacsuk and G. Terstyanszky, “Special section: Grid-enabling legacy applications and supporting end users,” Future Generation Comp. Syst., vol. 24, no. 7, pp. 709–710, 2008.

[4] P. Kacsuk, A. C. Marosi, J. Kovacs, Z. Balaton, G. Gombs, G. Vida, and A. Kornafeld, "Sztaki desktop grid: A hierarchical desktop grid system," in Proceedings of the Cracow Grid Workshop 2006, Cracow (Poland), 2007.

[5] D. P. Anderson, J. Cobb, E. Korpela, M. Lebofsky, and D. Werthimer, "SETI@home: An experiment in public-resource computing," Communications of the ACM, vol. 45, no. 11, pp. 56–61, 2002.

[6] D. Stutzbach and R. Rejaie, “Understanding churn in peer-to-peer networks,” in IMC ’06: Proceedings of the 6th ACM SIGCOMM conference on Internet measurement. New York, NY, USA: ACM, 2006, pp. 189–202.

[7] ——, "Characterizing churn in peer-to-peer networks," Univ. of Oregon, Tech. Rep. CIS-TR-2005-03, May 2005.

[8] S. Y. Ko, I. Hoque, and I. Gupta, “Using tractable and realistic churn models to analyze quiescence behavior of distributed protocols,” in

SRDS ’08: Proceedings of the 2008 Symposium on Reliable Distributed Systems. Washington, DC, USA: IEEE Computer Society, 2008, pp. 259–268.

[9] D. Nurmi, J. Brevik, and R. Wolski, "Modeling machine availability in enterprise and wide-area distributed computing environments," in Euro-Par05, 2003, pp. 432–441.

[10] D. Thain, T. Tannenbaum, and M. Livny, “Distributed computing in practice: the condor experience,” Concurrency and Computation: Practice and Experience, vol. 17, pp. 323 – 356, 2005.

[11] Hawkeye, “Monitoring and management tool for distributed systems,” 2006, associated with the Condor Project. [Online]. Available: http://www.cs.wisc.edu/condor/hawkeye/

[12] D. Kondo, A. Andrzejak, and D. P. Anderson, “On correlated availability in internet-distributed systems,” in Proceedings of the 2008 9th IEEE/ACM International Conference on Grid Computing, ser. GRID ’08. Washington, DC, USA: IEEE Computer Society, 2008, pp. 276–283. [Online]. Available: http://dx.doi.org/10.1109/GRID.2008.4662809

[13] C. Elkan, “Using the triangle inequality to accelerate k-means,” in

ICML, 2003, pp. 147–153.

[14] A. Andrzejak, D. Kondo, and D. P. Anderson, “Ensuring collective availability in volatile resource pools via forecasting,” in 19th IEEE/IFIP Distributed Systems: Operations and Management (DSOM 2008), Samos Island, Greece, September 2008, pp. 149–161. [Online]. Available: http://www.zib.de/andrzejak/

[15] D. Kondo, B. Javadi, A. Iosup, and D. Epema, “The failure trace archive: Enabling comparative analysis of failures in diverse distributed systems,” in Proceedings of the 2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing, ser. CCGRID ’10. Washington, DC, USA: IEEE Computer Society, 2010, pp. 398–407. [Online]. Available: http://dx.doi.org/10.1109/CCGRID.2010.71

[16] D. Kondo, M. Taufer, C. L. B. III, H. Casanova, and A. A. Chien, "Characterizing and evaluating desktop grids: An empirical study," in Proceedings of the 18th International Parallel and Distributed Processing Symposium (IPDPS'04). IEEE Computer Society, 2004, pp. 1–10.

[17] B. Ranjita, S. Stefan, and V. Geoffrey, "Understanding availability," 2003, pp. 256–267. [Online]. Available: http://www.springerlink.com/content/ehcfgw36n3j1ypr6

[18] R. H. Arpaci, A. C. Dusseau, A. M. Vahdat, L. T. Liu, T. E. Anderson, and D. A. Patterson, “The interaction of parallel and sequential workloads on a network of workstations,” SIGMETRICS Perform. Eval. Rev., vol. 23, no. 1, pp. 267–278, 1995.

[19] W. Rich, S. Neil, and H. Jim, “Predicting the cpu availability of time-shared unix systems on the computational grid,” in HPDC ’99: Proceedings of the 8th IEEE International Symposium on High Performance Distributed Computing. Washington, DC, USA: IEEE Computer Society, 1999, p. 12.
