
Case study: Insurance Claim Service

This section presents the case study, which illustrates a BPEL process that handles an insurance claim. The case study is based on and extends the insurance claim process found in [15] and [8], where more detailed information is available. The service orchestration is depicted in Figure 2.6. The BPEL process file and the WSDL file are available in the Appendix.

Initially, a validation of the data input is performed by the ReportsAndQuotations service. Except for this step, the rest of the workflow is depicted in Figure 2.7. The clerk service checks whether police reports, witness reports, and quotations are available, and creates a claim form which contains information about the type and value of the claim. Then, the assessor service (5) uses the produced claim form and passes an expert opinion on the physical evidence of the claim (e.g. a wrecked motorcycle). In parallel, depending on the value of the claim (2), branch (6) or (3) follows. Based on the type of the claim (i.e. ’vehicle’ or ’household’) there is a XOR split, so (7)/(8) or (9)/(10) come next. Finally, the claim manager service approves the claim based on the results of the previous activities (14).


FIGURE 2.6: ’Insurance claim service’ — BPEL process.

FIGURE 2.7: A part of the ’Insurance claim service’ workflow — textual form. [15]


Data items and data flow. The main data items used by the process are eleven: (d0, ..., d10). These data items are shown in Table 2.1. A police report d0, witness reports d1 and quotations d2 are the data input items for both the ReportsAndQuotations and the clerk service. Data output items of the clerk service are the claim form d3, the claim type d5 and the claim value d6. The value of d6 is decisive for the execution of the clerk sequence. Similarly, the value of d5 is decisive for the execution of the ClerkVehicle or ClerkHousehold sequence. Furthermore, the assessor service needs d3 to produce d4. Similarly, in the clerk sequence the clerk service needs d3 to produce d7. Table 2.2 gives the data input and output items for each service:

d0    police report
d1    witness reports
d2    quotations
d3    claim form
d4    assessed claim form
d5    claim type
d6    claim value
d7    completed claim form
d8    validated vehicle/household quotations
d9    validated vehicle accident details / household incidence
d10   approved claim

TABLE 2.1: ’Insurance claim process’ — primary data items

Service                Din                    Dout
ReportsAndQuotations   d0, d1, d2             d0, d1, d2
Clerk                  d0, d1, d2             d3, d5, d6
Assessor               d3                     d4
Clerk                  d3                     d7
ClerkVehicle           d2                     d8
ClerkVehicle           d0, d1                 d9
ClerkHousehold         d2                     d8
ClerkHousehold         d0, d1                 d9
ClaimManager           d3, d4, d7, d8, d9     d10

TABLE 2.2: ’Insurance claim process’ — input and output data items for each service
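As a purely illustrative aid (not part of the BPEL process, the WSDL interfaces, or the BIP model; the identifiers simply mirror the table), the rows of Table 2.2 can be encoded as plain Python data so that simple data-flow questions, such as which services produce a given item, can be answered mechanically:

```python
# Illustrative only: the rows of Table 2.2 as plain Python data.
from collections import defaultdict

# (service, input items, output items), one tuple per row of Table 2.2
STEPS = [
    ("ReportsAndQuotations", {"d0", "d1", "d2"}, {"d0", "d1", "d2"}),
    ("Clerk",                {"d0", "d1", "d2"}, {"d3", "d5", "d6"}),
    ("Assessor",             {"d3"},             {"d4"}),
    ("Clerk",                {"d3"},             {"d7"}),
    ("ClerkVehicle",         {"d2"},             {"d8"}),
    ("ClerkVehicle",         {"d0", "d1"},       {"d9"}),
    ("ClerkHousehold",       {"d2"},             {"d8"}),
    ("ClerkHousehold",       {"d0", "d1"},       {"d9"}),
    ("ClaimManager",         {"d3", "d4", "d7", "d8", "d9"}, {"d10"}),
]

# For every data item, collect the services that produce it.
producers = defaultdict(set)
for service, _inputs, outputs in STEPS:
    for item in outputs:
        producers[item].add(service)

print(sorted(producers["d9"]))   # ['ClerkHousehold', 'ClerkVehicle']
print(sorted(producers["d10"]))  # ['ClaimManager']
```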


Chapter 3

Information Integrity policies

In this chapter, the decentralized label model (DLM) is presented. Firstly, the model is described for expressing confidentiality policies. Afterwards, the model is presented for supporting integrity policies. Finally, it is shown how the DLM can be enforced in secBIP.

3.1 The DLM policy model

The decentralized label model (DLM) was first introduced by Myers and Liskov [20]. The model’s key feature is that it supports computation in an environment of mutual distrust.

In a decentralized environment no central authority can decide the security policies of the whole system. Every individual participant in the system must be able to define and control its own security policies. Thus, during composition, the system may enforce a behavior that is in accordance with all of the security policies previously defined by the participants. The concept of "ownership" of the labels used to annotate the data makes this behavior possible. These labels are expressed using a set of security policies. Each principal (participant) can define the security labels of its own data, but not those of others. The DLM allows information security policies to be expressed in terms of principals representing authorities.

Principals are authority entities (e.g. groups or roles) that own, read and write information. Principals can also release information to and act for other principals.

For example, if a principal p can act for another principal q, then p inherits all the privileges of q. The statement "p acts for q" is written formally as p ≽ q. This relation is reflexive and transitive, defining a partial order on the principals [20]. Intuitively, the acts-for relation is used to model groups and roles. Figure 3.1 presents an example of a principal hierarchy.

FIGURE 3.1: Example of a principal hierarchy.

A group named group is modeled by introducing acts-for relationships between the group members (i.e. chris and mary) and the group principal. These relationships allow chris and mary to read data readable by the group and to manipulate data controlled by the group. The principal named manager is able to act for both chris and mary because it represents their manager. Nick has two separate roles in this hierarchy (i.e. manager and lawyer). This principal can be used to prevent accidental leakage of information between data stores associated with its different roles.

Generally, the acts-for relation permits delegation of either all of a principal’s privileges or none.

The principal hierarchy plays an important role in the complete security policy of the system. Moreover, it can be managed in a decentralized manner. A relationship p ≽ q can be added to the principal hierarchy if and only if the adding process has sufficient privileges to act for the principal q. This relationship only empowers principal p. Hence, there is no need for a centralized mechanism that controls changes to the principal hierarchy.
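As an informal illustration (not the secBIP implementation), the hierarchy of Figure 3.1 can be written down as a set of direct acts-for edges, with reflexivity and transitivity applied when the relation is queried; the principal names are taken from the figure.

```python
# Direct acts-for edges from the Figure 3.1 example: group members act for
# the group principal, the manager acts for both members, and nick acts for
# his two role principals.
DIRECT_ACTS_FOR = {
    ("chris", "group"), ("mary", "group"),
    ("manager", "chris"), ("manager", "mary"),
    ("nick", "manager"), ("nick", "lawyer"),
}

def acts_for(p, q, edges=DIRECT_ACTS_FOR):
    """True if principal p may act for principal q (p ≽ q)."""
    if p == q:                      # the relation is reflexive
        return True
    stack, seen = [p], {p}          # transitive closure by graph search
    while stack:
        current = stack.pop()
        for a, b in edges:
            if a == current and b not in seen:
                if b == q:
                    return True
                seen.add(b)
                stack.append(b)
    return False

assert acts_for("nick", "chris")        # nick ≽ manager ≽ chris
assert not acts_for("chris", "mary")    # group members cannot act for each other
```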

Labels are used by principals to annotate their data. A label contains a set of policies defined by various principals. A policy contains two parts: an owner and a set of readers. This is written as owner: readers. Owners and readers denote principals. The owner is the source of the data and the readers are possible destinations for the data. Thus, principals that are not defined as readers in the policy cannot read the data. A label can contain more than one policy. In this case, the policies are separated by semicolons. For example, the label L1 = {o1 : r1; o2 : r1, r3} states that the data is owned by both o1 and o2. Owner o1 permits only reader r1 to read the data and owner o2 permits readers r1 and r3 to read it. In general, a user may read the data if and only if his principal can act for a reader of each policy in the label. Thus, to satisfy both policies, only users whose principals can act for reader r1 are permitted to read the data labeled by L1.
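Continuing the illustrative sketch (it assumes the hypothetical acts_for helper defined above), a label can be represented as a list of (owner, readers) policies, and the reader check just stated becomes a two-level test: the user must act for some reader of every policy.

```python
# Written {o1 : r1; o2 : r1, r3} in the DLM notation above.
L1 = [("o1", {"r1"}), ("o2", {"r1", "r3"})]

def may_read(user, label):
    """A user may read iff, for each policy, it acts for one of its readers."""
    return all(any(acts_for(user, r) for r in readers)
               for _owner, readers in label)

print(may_read("r1", L1))   # True: r1 is an allowed reader of both policies
print(may_read("r3", L1))   # False: o1's policy does not allow r3
```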

As mentioned earlier, labels are used to control information flow by annotating the variables contained in the program with labels. As data flows through the system during computation, these labels can only become more restrictive. That is, labels gain more owners or specific owners allow fewer readers. For example, the label L1 = {o1 :} is stricter than the label L2 = {o1 : r1}, because o1 allows r1 to read the data in L2 but not in L1. This is expressed as L2 ⊑ L1 and read as "L1 is more restrictive than L2". When the program computes a value from two values labeled with L1 and L2 respectively, the result should have the least restrictive label L that enforces the policies defined in L1 and L2. Namely, L1 ⊑ L, L2 ⊑ L and, for every L′ such that L1 ⊑ L′ and L2 ⊑ L′, we have L ⊑ L′. This least restrictive label is the least upper bound or join of L1 and L2, written L1 ⊔ L2. Conversely, the greatest lower bound or meet of L1 and L2, written L1 ⊓ L2, is the greatest label that is at most as restrictive as both L1 and L2 [16]. The rule for the join of two labels can be written formally by considering L1 and L2 as sets of policies:

Definition 6 (labels for derived values (LC1 ⊔ LC2))¹

LC1 ⊔ LC2 = LC1 ∪ LC2
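Definition 6 can be illustrated on the list-of-policies representation from the sketches above; the join simply collects the policies of both operands (illustrative only):

```python
def join(label1, label2):
    """Join of two confidentiality labels: the union of their policy sets."""
    return list(label1) + [policy for policy in label2 if policy not in label1]

La = [("o1", {"r1"})]
Lb = [("o2", {"r1", "r3"})]
print(join(La, Lb))   # [('o1', {'r1'}), ('o2', {'r1', 'r3'})]
```

With both policies present, a reader must satisfy o1 and o2 simultaneously, which matches the intuition that the join is at least as restrictive as each of its operands.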

Relabelings. During the system’s computations, a value can be assigned to a variable only if the label change (relabeling) that occurs at that moment yields a label at least as restrictive as the original one. Thus, if the relabeling is a restriction, it is considered to be safe. Myers and Liskov [20] classified the ways in which a label can be safely changed into four categories:

¹ where "C" stands for confidentiality


Remove a reader: A reader can be safely removed from a policy in the label, because removing a reader never makes the policy less restrictive. For example, a relabeling from L1 = {chris : mary, nick} to L2 = {chris : mary} is considered to be safe because the set of readers allowed in L1 becomes smaller. Hence, it is a restriction.

Add a policy: Provided all the previously stated policies remain enforced, it is safe to add a new policy to the label because the label becomes more restrictive.

Add a reader: A reader r′ can be added to a policy if the policy already permits a reader r that r′ acts for.

Replace an owner: A policy owner o can be safely replaced with another owner o′ that acts for o. This means that only principals that act for o′ can weaken the policy through declassification; principals with the weaker authority of o can no longer declassify it.

The relationships between labels and policies can be defined formally with the use of some additional notation. The notation o(I) denotes the owner of a policy I, and r(I) denotes the set of readers of I. If a principal p1 acts for a principal p2 in the principal hierarchy, then p1 ≽ p2. Given a policy I, we define a function R that gives the set of principals implicitly allowed as readers by that policy:

R(I) = {p | ∃p′ ∈ r(I). p ≽ p′}

This function can be used to define when one label is at most as restrictive as another (L1 ⊑ L2) and when one policy is at most as restrictive as another (I ⊑ J):

Definition 7 (complete relabeling rule (⊑)) [20]


L1 ⊑ L2  ≡  ∀I ∈ L1. ∃J ∈ L2. I ⊑ J

I ⊑ J  ≡  o(J) ≽ o(I)  ∧  R(J) ⊆ R(I)
       ≡  o(J) ≽ o(I)  ∧  ∀p′ ∈ r(J). ∃p ∈ r(I). p′ ≽ p

The rule depicted above defines when a label L1 can be safely relabeled to a label L2. The rule has also been proven to be sound and complete [19].
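Definition 7 can be phrased as a small executable check. The sketch below continues the earlier illustration: it assumes the acts_for helper and enumerates the implicit reader set R over a hypothetical, finite universe of principals, which is sufficient for illustration but is not how an enforcement tool would implement it.

```python
PRINCIPALS = {"o1", "o2", "r1", "r3"}   # assumed finite universe, for illustration

def R(policy):
    """Effective readers of policy I: everyone who can act for a declared reader."""
    _owner, readers = policy
    return {p for p in PRINCIPALS if any(acts_for(p, r) for r in readers)}

def policy_leq(I, J):
    """I ⊑ J: J's owner acts for I's owner and J allows no extra readers."""
    return acts_for(J[0], I[0]) and R(J) <= R(I)

def label_leq(L1, L2):
    """L1 ⊑ L2: every policy of L1 has an at-least-as-restrictive match in L2."""
    return all(any(policy_leq(I, J) for J in L2) for I in L1)

# Removing a reader is a restriction, so this relabeling is safe:
print(label_leq([("o1", {"r1", "r3"})], [("o1", {"r1"})]))   # True
```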

Declassification. As previously mentioned, as the system’s computations occur the labels can only become more restrictive. This increasing restriction eventually makes the data unreadable. Hence, the principals that own the data may need to relax their policies so that other principals can read it. This form of relabeling is called declassification [31].

Declassification is allowed only when a process is authorized to act on behalf of some set of principals. The process cannot declassify policies of owners it does not act for. Because this action applies to each of the owners individually, no centralized declassification process is needed, which makes the DLM well suited to distributed systems. Declassification is formally defined as follows. A process may weaken or remove policies owned by principals that are part of its authority. A label L1 may be relabeled to L2 only if L1 ⊑ L2 ⊔ LA, where LA is a label containing policies of the form {p :} for every principal p in the current authority. Thus, the rule for declassification can be expressed as:

Definition 8 (relabeling by declassification) [20]

LA = ⊔(p in current authority) {p :}        L1 ⊑ L2 ⊔ LA
─────────────────────────────────────────────────────────
L1 may be declassified to L2

This rule builds on the rule for relabeling by restriction. The subset rule for relabeling L1 to L2 states that for every policy J in L1, there must be a policy K in L2 that is at least as restrictive. For policies J in L1 that are owned by a principal p in the current authority, a more restrictive policy K is found in LA. For the other policies J, the corresponding policy K must be found in L2, since the current authority does not have the power to weaken them. Intuitively, a label L1 may always be declassified to a label that it could be related to by restriction, because the relabeling condition L1 ⊑ L2 implies the declassification condition L1 ⊑ L2 ⊔ LA.
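Definition 8 then reduces to the relabeling check against L2 joined with an authority label LA of reader-less policies. The sketch below reuses the hypothetical join and label_leq helpers from the previous fragments; the authority is passed in explicitly and is, of course, illustrative.

```python
def may_declassify(L1, L2, authority):
    """Definition 8: L1 may be declassified to L2 iff L1 ⊑ L2 ⊔ LA."""
    LA = [(p, set()) for p in authority]   # one policy of the form {p :} per principal
    return label_leq(L1, join(L2, LA))

# o1 relaxes its own policy {o1 :} to {o1 : r1}; this is allowed only when
# o1 is part of the current authority.
print(may_declassify([("o1", set())], [("o1", {"r1"})], authority=["o1"]))  # True
print(may_declassify([("o1", set())], [("o1", {"r1"})], authority=[]))      # False
```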

3.2 The DLM for information integrity

The DLM was introduced above for labels containing confidentiality policies. It has been shown that integrity policies are the dual of confidentiality policies [5]. Intuitively, a confidentiality policy enforced on the system specifies who can read the data and, in this way, where the data may flow to. Conversely, an integrity policy specifies who can write the data and keeps track of the principals that may have modified it. The DLM is a more complex model than several integrity models previously introduced in the literature. Hence, these simpler integrity models are presented before introducing the DLM for information integrity.

Binary Model. This is the simplest model of all. In this model, a label is either {tainted} or {untainted}, and their relation is formally defined as {untainted} ⊑ {tainted}. In terms of confidentiality, the word corresponding to "tainted" is "private" and the word corresponding to "untainted" is "public".

Writer Model. In this model, labels are represented as sets of principals: {p1, p2, ..., pn}. Every principal pi in the label may have modified the data, and principals that are not included in the set have not written to it. In this case, the partial order is defined as L1 ⊑ L2 iff L1 ⊆ L2. For example, for L1 = {Chris} and L2 = {Chris, Mary}, L1 ⊑ L2 because data labeled with L2 could have been modified by Mary, who is not a writer in L1.


Trust Model. In the trust model, labels are also represented as sets of principals: {p1, p2, ..., pn}. The difference from the writer model is that, here, a principal trusts the data carrying the label iff it is in the label’s set. The partial order is defined as L1 ⊑ L2 iff L2 ⊆ L1.

Distrust Model. The sets of principals in the distrust model have the opposite meaning to those in the trust model: the principals contained in the label do not trust the data. Thus, the partial order is defined as L1 ⊑ L2 iff L1 ⊆ L2.
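The three set-based models above differ only in the direction of a subset check; a tiny illustrative sketch (plain Python sets, not tied to any particular tool):

```python
def leq_writer(L1, L2):
    """Writer model: L1 ⊑ L2 iff L1 ⊆ L2 (more possible writers = lower integrity)."""
    return L1 <= L2

def leq_trust(L1, L2):
    """Trust model: L1 ⊑ L2 iff L2 ⊆ L1 (fewer trusting principals = lower integrity)."""
    return L2 <= L1

def leq_distrust(L1, L2):
    """Distrust model: L1 ⊑ L2 iff L1 ⊆ L2."""
    return L1 <= L2

print(leq_writer({"Chris"}, {"Chris", "Mary"}))   # True, the example given above
```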

In the DLM, the structure of an integrity policy is the same as that of a confidentiality policy. It has an owner and a set of writers, written owner : writers. Also, if there is more than one owner, the policies in the label are separated by semicolons. For example, a label L1 = {o1 : w1, w2; o2 : w2} shows that the data is owned by both o1 and o2. Principal o1 believes that only writers w1 and w2 may have modified the data, and principal o2 believes that only writer w2 may have written to it. In this case, to satisfy both policies contained in label L1, only writer w2 is permitted to modify the data.
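The same illustrative representation carries over to integrity labels: each policy pairs an owner with the writers it believes may have modified the data, and a principal may modify the data only if every owner’s policy admits it (directly or via a writer it acts for). Again, the sketch assumes the acts_for helper from the earlier fragment.

```python
# Written {o1 : w1, w2; o2 : w2} in the DLM notation above.
L1 = [("o1", {"w1", "w2"}), ("o2", {"w2"})]

def may_write(user, label):
    """Satisfies every owner's integrity policy in the label."""
    return all(any(acts_for(user, w) for w in writers)
               for _owner, writers in label)

print(may_write("w2", L1))   # True: both owners accept w2 as a writer
print(may_write("w1", L1))   # False: o2's policy does not include w1
```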

As with confidentiality labels, it is necessary to define the least upper bound or join ⊔ and the greatest lower bound or meet ⊓ for integrity labels. These operations are used in label computation. Because of the dual relation between the two kinds of labels, we formally define that:

R(LI1 ⊔ LI2) = R(LC1 ⊓ LC2)   and   R(LI1 ⊓ LI2) = R(LC1 ⊔ LC2)

The greatest lower bound or meet can be computed by taking the union of the integrity policies. This rule can be formally defined as:

Definition 9 (labels for derived values (LI1 ⊓ LI2))²

LI1 ⊓ LI2 = LI1 ∪ LI2

² where "I" stands for integrity
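As a last fragment of the running sketch, Definition 9 mirrors the confidentiality join: the integrity label of a derived value is the meet of the operands’ labels, i.e. the union of their policy sets.

```python
def meet_integrity(L1, L2):
    """Definition 9: the meet of two integrity labels is the union of their policies."""
    return list(L1) + [policy for policy in L2 if policy not in L1]

La = [("o1", {"w1", "w2"})]
Lb = [("o2", {"w2"})]
print(meet_integrity(La, Lb))   # [('o1', {'w1', 'w2'}), ('o2', {'w2'})]
```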


Relabelings occur during the computation of the values of variables and, in order to avoid integrity violations, the new label cannot be less restrictive than the original label. These incremental relabelings must always be safe. For integrity labels there are, again, four categories of safe relabelings:

Add a writer. The addition of a writer is safe because it only restricts the subsequent use of the value.

Remove a policy. The removal of a policy contained in the label is considered to be safe because it weakens the integrity guarantee made about the data (more principals may have affected it) and thus restricts the subsequent use of the value.

Replace a writer. A writer w′ may be replaced by a writer w that it acts for. Because w′ has the ability to act for w, a policy permitting w as a writer implicitly permits both w′ and w as writers, whereas a policy permitting w′ does not, in general, permit w. Therefore, replacing w′ by w effectively adds writers, a change that is considered to be safe.

Add a policy. If the added policy offers a weaker integrity guarantee than the existing ones, the label does not become any less restrictive by the addition.

These kinds of relabelings turn out to be exactly the inverse of the relabelings described in Section 3.1. Indeed, the relabeling rule for integrity labels is dual to the relabeling rule for confidentiality labels. For integrity labels L1′ and L2′ and the corresponding confidentiality labels L1 and L2 we formally state that:

L1 ⊑ L2  ⟷  L2′ ⊑ L1′

The need for declassification of the data also exists for integrity labels. In situations where the data has higher integrity than its label suggests, the principals can add policies to the label or remove writers from the policies in order to subsequently allow the data to be used more freely. A new policy may be added only if the current user can act for the owner of the policy. Hence, a declassification of an integrity label L1 to a label L2 is allowed when L2 ⊓ LIA ⊑ L1, where LIA is an integrity label that contains a policy for every principal in the authority of the current process.

3.3 The DLM in secBIP

As described in the previous sections, the DLM uses labels to annotate sensitive data. These labels contain one or more policies, intuitively given by the system’s designer. The secBIP tool takes as inputs a secBIP model and an actsfor file (.txt) that contains the acts-for (≽) relations of the components (principals). The secBIP model is basically the original BIP model of the system with the addition of initial security annotations for some of the system’s data.

FIGURE 3.2: Example of data annotated by DLM security labels in a secBIP model.

FIGURE 3.3: Example of actsfor.txt

The tool processes the model along with the actsfor file and creates the dependency graphs of the components of the system. Then, it runs the security synthesis algorithm to produce the complete security configuration. If a complete security configuration exists and the tool successfully generates it, it is considered to be optimal and the least restrictive labels are applied; in this case the system is considered to be secure. If no configuration is found, the system is not secure and a diagnostic containing the locations of the security errors is generated. An overview of the security annotation synthesis performed within the secBIP tool is depicted in Figure 3.4.

FIGURE 3.4: Security annotation synthesis [25].


Chapter 4

Verification of Information Integrity for Web Service compositions

4.1 The WS-BPEL Web Service composition language and a BIP model for BPEL processes

BPEL process implementations are based on web services (partner links) whose interfaces expose service operations written in the WSDL 1.1 language. Synchronous operations accept an input and block the invoker until the output, or a fault, is returned. By contrast, in asynchronous operations the invoker dispatches the input and forgets it. Thus, through the use of two asynchronous operations it is possible to realize a request-response interaction pattern that does not block the invoker. In this approach, a service is invoked with the first operation and the response is returned with a second operation, referred to as a callback, exposed by the invoker. The use of asynchronous operations generally allows for complex service interaction patterns, such as parallel operation invocations, but it raises the need to effectively manage communication sessions, i.e. the stateful chains of dual service interactions. The assignment of messages to the correct session takes place by message correlation.

Atomic behavior in processes is realized with basic activities, such as invoke, receive, and reply, which are used respectively to (i) invoke, (ii) receive input, and (iii) send output (or a fault), with respect to specific service operations. Figures 4.1 (a) and 4.1 (b) show the client-side and server-side activities used for a synchronous
