
INTERFACE PARALLEL


Indexed interleaving

The interleaving parallel operator is associative, in the sense that P1 ||| (P2 ||| P3) has exactly the same execution possibilities as (P1 ||| P2) ||| P3; and commutative, in the sense that P1 ||| P2 and P2 ||| P1 have the same execution possibilities. It can therefore be generalized to finite combinations of processes. The generalization takes the form

    |||_{i ∈ I} P_i

where I is a finite set of indexes, and P_i is defined for each i ∈ I. An alternative way of writing an indexed interleaving in the special case where the indexing set is an interval of integers {i | m ≤ i ≤ n} is

    |||_{i = m..n} P_i

Example 2.19

A node similar to the NODE process of Example 2.17, but which can hold a maximum of n messages, could be described as an interleaved combination of n versions of COPY:

    NODE_n = |||_{0 ≤ i < n} C(i)

where each C(i) is defined to be COPY, for 0 ≤ i < n. To describe an interleaved combination of n copies of the same process P there is a convenient shorthand

    |||_{0 ≤ i < n} P

so an alternative description of NODE_n would be given by

    NODE_n = |||_{0 ≤ i < n} COPY

□
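The indexed interleaving above maps directly onto the replicated interleave of machine-readable CSP (CSP_M), as accepted by tools such as FDR. The following is a minimal sketch only: the message type Msg, the channel names inp and outp, and the one-place-buffer definition of COPY stand in for the book's earlier definitions and are assumptions here, not taken from this extract.

    -- Hypothetical message type and channels standing in for the book's in and out.
    Msg = {0, 1}
    channel inp, outp : Msg

    -- COPY assumed to be the usual one-place buffer.
    COPY = inp?x -> outp!x -> COPY

    -- NODE(n) as an interleaving of n copies of COPY, i.e. |||_{0 <= i < n} COPY.
    NODE(n) = ||| i : {0..n-1} @ COPY

    -- Each copy works independently, so NODE(n) can hold up to n messages at
    -- once, although it does not guarantee that messages are delivered in order.
    assert NODE(3) :[deadlock free [F]]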

The operational semantics is straightforward:

    P1 --a--> P1'      P2 --a--> P2'
    ----------------------------------      [ a ∈ A^✓ ]
      P1 ||_A P2 --a--> P1' ||_A P2'

              P1 --μ--> P1'
    -------------------------------------------------------------------      [ μ ∉ A^✓ ]
    P1 ||_A P2 --μ--> P1' ||_A P2        P2 ||_A P1 --μ--> P2 ||_A P1'

(Here A^✓ abbreviates A ∪ {✓}.) P1 and P2 co-operate on any event drawn from A, and interleave on events not in A.

Example 2.20

A runner in a race engages in two events, start and finish:

    RUNNER = start → finish → STOP

Two runners should synchronize on the start event, but they finish independently.

    RACE = RUNNER ||_{{start}} RUNNER

□
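The same combination can be written in CSP_M and explored with a tool such as FDR. This is a sketch only; the channel declarations and the SPEC process used in the assertion are illustrative additions rather than part of the text.

    channel start, finish

    RUNNER = start -> finish -> STOP

    -- Interface parallel over {start}: the runners must agree on start, but
    -- each performs its own finish independently.
    RACE = RUNNER [| {start} |] RUNNER

    -- Illustrative specification: one shared start followed by two finishes.
    -- RACE trace-refines SPEC, confirming that start happens only once.
    SPEC = start -> finish -> finish -> STOP
    assert SPEC [T= RACE

Note that RACE eventually behaves as STOP once both runners have finished, so a deadlock-freedom check on RACE would fail for that (expected) reason.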

Indexed interface parallel

This parallel operator is associative provided the same interface set is used throughout: P1 ||_A (P2 ||_A P3) has the same executions as (P1 ||_A P2) ||_A P3. It is also commutative. This allows the operator to be generalized as follows:

    ||_{A, i ∈ I} P_i

where I is a finite indexing set, and P_i is defined for each i ∈ I. It describes the process where any occurrence of an event from A must involve all of the P_i. An occurrence of any event not in A involves exactly one of those processes.

Example 2.21

A marathon involving 30,000 runners could be described as

    MARATHON = ||_{{start}, i = 1..30000} RUNNER

All runners start at the same time, but each of them finishes independently. □
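In CSP_M this structure is a replicated interface parallel. The sketch below uses a small assumed field of runners N so that the script stays tractable; the 30,000 of the example is not carried over.

    channel start, finish

    RUNNER = start -> finish -> STOP

    -- Assumed small number of runners for illustration.
    N = 5

    -- Replicated interface parallel over {start}: every runner takes part in
    -- the start event, while each finish involves exactly one runner.
    MARATHON = [| {start} |] i : {1..N} @ RUNNER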

Example 2.22

A function applied to a particular argument can be computed in two ways: using algorithm g and using algorithm h. These two functions should agree on the value they compute for any particular input x, so the intention is that g(x) = h(x) for any input x.

A module is written for each algorithm. The communication pattern of the modules is written as

    G = in?x : T → out!g(x) → SKIP

    H = in?x : T → out!h(x) → SKIP

These modules can be run concurrently, but there are a number of ways in which this may be accomplished.

1. A fault-tolerant approach would run G and H in parallel, synchronizing on input and output. The combination G || H accepts one input which is received by both G and H, and also synchronizes on output. This means that an output can occur only if both modules agree on its value. If the modules disagree, then a deadlock occurs and successful termination cannot occur.

2. To receive the result of the fastest calculation, an independent approach could be adopted, interleaving G and H. The combination G ||| H has to accept the input twice, since each module accepts its input independently of the other. If only one input is provided, then only one of the modules is executed, though the user has no control over which. Furthermore, the combination does not ensure that the same input is provided to each module.

3. The combination G ||_{in.T} H allows a single input to be received by both modules, but allows for independent output, so a result can be obtained after the first module has completed its calculation. It cannot terminate until both outputs have occurred.

Different flavours of concurrency are appropriate for different requirements. □
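The three combinations can be compared directly in CSP_M. The sketch below is illustrative only: the value type T, the channel names inp and outp, and the concrete definitions of g and h are assumptions chosen to make the script runnable, since the example leaves them abstract.

    -- Assumed finite value type and channels standing in for the book's in and out.
    T = {0..3}
    channel inp : T
    channel outp : {0..6}

    -- Assumed algorithms; they agree on every input, as the example intends.
    g(x) = x + x
    h(x) = 2 * x

    G = inp?x -> outp!g(x) -> SKIP
    H = inp?x -> outp!h(x) -> SKIP

    -- 1. Fault-tolerant: synchronize on both input and output, so an output can
    --    occur only if both modules produce the same value.
    FT = G [| {| inp, outp |} |] H

    -- 2. Independent: interleave, so each module reads its own input.
    IND = G ||| H

    -- 3. Shared input, independent outputs: synchronize on inp only.
    FAST = G [| {| inp |} |] H

    -- With g and h in agreement, the fault-tolerant combination never deadlocks.
    assert FT :[deadlock free [F]]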

Exercises

Exercise 2.1

Give the transition graph for MACHINE of Example 2.4.

Exercise 2.2

Give the transition graph for CUST || ATT of Example 2.3. What behaviour does this parallel combination exhibit that you would not expect to find in a real cloakroom system? Amend the descriptions of the interacting parties CUST and ATT appropriately to remove this possibility.

Exercise 2.3

Give the transition graph for PAINTING of Example 2.8, and use it to identify all the ways in which deadlocks can occur.

Exercise 2.4

The book shop of Example 2.18 does not contain sufficient detail to prevent fraud: it allows any book to be claimed with any receipt. Adapt the description to keep track of the identity of the book that has been lodged throughout the payment procedure, so that customers can only take the books that have been paid for.

Exercise 2.5

A dishonest shopper will select an item, and will then either leave without paying, or else will pay if the circumstances in the shop make the first course of action infeasible. This can be described by the following process:

    DCUSTOMER = enter → select → ( pay → leave → DCUSTOMER
                                   □ leave → DCUSTOMER )

What is the expected behaviour of this customer in parallel with the SHOP process of Example 2.7? What difference does it make to the expected behaviour if the external choice is replaced with an internal one?

Exercise 2.6

Draw the hypercube in the case where n = 4. How many ways are there for a message to get from (1, 0, 0, 1) to (0, 0, 1, 0) using the message routing algorithm?

Exercise 2.7

Consider the hypercube network of Example 2.13.

1. Not all of the channel names in Figure 2.6 have been given their subscripts. Give the subscripts for the remaining channels.

2. Is MAILER deadlock-free?

3. Is it deadlock-free if the next destination for a message is chosen internally by nodes, rather than by an external choice, as follows:

    N_l(⟨⟩) = □_{k ∈ adj(l)} c_{k,l}?(d.m) → N_l(⟨(d,m)⟩)
              □ in_l?(d.m) → N_l(⟨(d,m)⟩)

    N_l(⟨(d,m)⟩ ⌢ s) = out_l!m → N_l(s)                                    if d = l
                       ( ⊓_{k ∈ next(l,d)} c_{l,k}!(d.m) → N_l(s) )         otherwise
                       □ □_{k ∈ adj(l)} c_{k,l}?(d'.m') → N_l(⟨(d,m)⟩ ⌢ s ⌢ ⟨(d',m')⟩)

4. Is it deadlock-free if nodes can hold at most one message, i.e. they block input when they hold a message (rather than being able to hold arbitrarily many), as follows:

    N_l(⟨⟩) = □_{k ∈ adj(l)} c_{k,l}?(d.m) → N_l(⟨(d,m)⟩)
              □ in_l?(d.m) → N_l(⟨(d,m)⟩)

    N_l(⟨(d,m)⟩) = out_l!m → N_l(⟨⟩)                                       if d = l
                   ( □_{k ∈ next(l,d)} c_{l,k}!(d.m) → N_l(⟨⟩) )            otherwise

5. What is the maximum number of nodes a message will pass through in a network of 2^n nodes?

Exercise 2.8

Consider the array SORTER of Example 2.12:

1. Is it deadlock-free?

2. Is it deadlock-free if the order of each cell’s output is reversed (so output occurs on the h channel before the v channel) as follows:

    C_{i,j} = h_{i,j}?x → v_{i,j}?y → h_{i+1,j}!max{x,y} → v_{i,j+1}!min{x,y} → C_{i,j}

3. Is it deadlock-free if the order of both inputs and outputs for a cell is reversed as follows:

    C_{i,j} = v_{i,j}?y → h_{i,j}?x → h_{i+1,j}!max{x,y} → v_{i,j+1}!min{x,y} → C_{i,j}

Which other orders of inputs and outputs avoid deadlock?

Exercise 2.9

Show that interface parallel is not associative in general when the event sets are different, by finding processes P1, P2, and P3 and sets A and B such that P1 ||_A (P2 ||_B P3) is different from (P1 ||_A P2) ||_B P3.

3  Abstraction and control flow
