
4.2 Joint disparity and displacement model

4.2.2 Disparity conditional displacement model

From the above discussion we can now write the interaction function $V_d$, using equations (4.8), (4.9) and (4.10). As we can see, $V_d$ involves the displacement field $A$ through the active neighbourhood field $H(A)$. For all $a \in \mathcal{M} \times \mathcal{E}$ and $(d_x, d_y) \in \mathcal{L}^2$,

$$
V_d(d_x, d_y, a) =
\begin{cases}
\min(\lambda_2 |d_x - d_y|,\, T_2) & \text{if } x \in H_y(a) \text{ and } y \in H_x(a)\\
0 & \text{if } x \notin H_y(a) \text{ and } y \notin H_x(a)\\
T_2\,\bigl(1 - \mathbf{1}_{\{D_x = D_y\}}(d_x, d_y)\bigr) & \text{otherwise}
\end{cases}
\tag{4.11}
$$
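The case analysis in (4.11) can be made concrete with a small sketch. The Python fragment below (Python is assumed here, as the document contains no code) evaluates the potential for a single pair of sites; `lambda2` and `T2` are example values, and the two booleans stand for the memberships $x \in H_y(a)$ and $y \in H_x(a)$.

```python
def V_d(dx, dy, x_in_Hy, y_in_Hx, lambda2=1.0, T2=10.0):
    """Minimal sketch of the pairwise potential (4.11) for one pair of sites.

    dx, dy   : disparity labels at sites x and y
    x_in_Hy  : True if x belongs to the active neighbourhood H_y(a)
    y_in_Hx  : True if y belongs to the active neighbourhood H_x(a)
    lambda2, T2 are example values, not taken from the document.
    """
    if x_in_Hy and y_in_Hx:
        # mutually active neighbours: truncated linear smoothing of the disparities
        return min(lambda2 * abs(dx - dy), T2)
    if not x_in_Hy and not y_in_Hx:
        # neither site is an active neighbour of the other: no interaction
        return 0.0
    # one-sided case (reciprocity (4.4) broken): pay T2 unless the labels agree
    return 0.0 if dx == dy else T2
```

For instance, `V_d(3, 5, True, True)` returns the truncated smoothing cost, whereas `V_d(3, 5, True, False)` returns `T2` because only one site sees the other.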

The corresponding MRF is then defined over the standard eight-neighbourhood system $N$, although it behaves as one built on the active neighbourhood system given by $H(a)$. The definition in (4.11) has the advantage that it allows all active neighbourhood systems, even those not satisfying the reciprocity condition (4.4). This idea of building an additional random field to deal with sets of active neighbours is similar to that in Le Hégarat-Mascle et al. [2007]: while they consider non-stationary neighbourhoods, they define a neighbourhood system that does not necessarily satisfy the reciprocity condition.

Figure 4.4: Cartoon example showing the formation of the discontinuity chain set $C(D)$. (a) A disparity map with two objects, one at disparity $d_1$ and the other at $d_2$ (shaded regions). (b) The extracted disparity discontinuity chains $C_t(D)$ of different lengths. In this example $C(D) = \{C_1(D), C_2(D), C_3(D), C_4(D)\}$.

$$S(D) = \bigcup_{t=1}^{T(D)} S_t(D) \tag{4.14}$$

and

$$W(D) = \bigcup_{t=1}^{T(D)} W_t(D). \tag{4.15}$$

For $x \in S(D)$, we denote by $w_x \in W(D)$ the normal at location $x$. Thus $C_t(D)$ provides the positions and normals of all the points in the $t$-th chain. Figure 4.5 illustrates the construction of the discontinuity chains.
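As a rough illustration of how the discontinuity locations $S(D)$ and normals $W(D)$ might be obtained from a disparity map, the sketch below marks every pixel where the disparity varies locally and approximates the normal by the direction of the local disparity gradient. This is only a hypothetical construction for intuition; the grouping of these points into the individual chains $C_t(D)$ is not reproduced here.

```python
import numpy as np

def discontinuity_locations_and_normals(D):
    """Hypothetical sketch: locations S(D) and unit normals W(D) from a disparity map D."""
    D = np.asarray(D, dtype=float)
    gy, gx = np.gradient(D)           # finite-difference disparity gradients
    mag = np.hypot(gx, gy)
    S = mag > 0                       # discontinuity locations (non-constant disparity)
    W = np.zeros(D.shape + (2,))
    W[S, 0] = gx[S] / mag[S]          # unit normal, x component
    W[S, 1] = gy[S] / mag[S]          # unit normal, y component
    return S, W
```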

Figure 4.5: Illustration of the discontinuity chain construction, showing a chain $C_t(D)$ with consecutive points $x^t_{k-1}, x^t_k, x^t_{k+1}$ and their normals $w^t_{k-1}, w^t_k, w^t_{k+1}$.

The conditional distribution of the field $A$ given $D$ and $I$ is defined using the discontinuity chains $C(D)$: $\forall a = (m, e) \in \mathcal{M} \times \mathcal{E}$, $d \in \mathcal{D}$,

$$p(a \mid d, I) \;\propto\; \underbrace{\mathbf{1}_{\{E_x = w_x,\ \forall x \in S(d)\}}(e)}_{\text{normals}}\;\underbrace{p(m \mid d, I)}_{\text{displacement}} \tag{4.16}$$

In (4.16), the first term (called "normals" in the equation) indicates that, at the discontinuity locations, the displacement field normals are equal to the discontinuity chain normals with probability one, conditionally on $D = d$. Once the normals are fixed, we are interested in finding the direction along each normal in which the chains should be moved in order to align with the object boundary. The probability distribution $p(m \mid d, I)$ in (4.16) encodes exactly this information. As the corrections need to be applied only at the disparity discontinuities and not on the entire grid $S$, the distribution $p(m \mid d, I)$ is defined as follows:

$$p(m \mid d, I) \;\propto\; \mathbf{1}_{\{M_x = 0,\ \forall x \in \overline{S}(d)\}}(m)\ \prod_{t=1}^{T(d)} p\bigl(m^t \mid C_t(d), I\bigr) \tag{4.17}$$

where $\overline{S}(d)$ is the complement of the set $S(d)$ and $m^t = \{m_x \mid x \in S_t(d)\}$. However, for the sake of clarity, the displacement at location $x^t_k$ is denoted by $m^t_k$, so that $m^t = \{m^t_1, \ldots, m^t_K\}$.

The first term in (4.17) ensures that non-zero displacements can occur only at discontinuity locations, with probability one. The second term on the right-hand side of (4.17) is the product of probabilities defined on the discontinuity chains. These discontinuity chains are assumed to be independent, and therefore the probability distribution of each chain can be expressed individually as a second-order Markov chain as follows:

$\forall m^t \in \{-1, 0, 1\}^K$, $d \in \mathcal{D}$,

$$p(m^t \mid C_t(d), I) \;=\; \prod_{k=3}^{K} p\bigl(m^t_k \mid m^t_{k-1}, m^t_{k-2}, C_t(d), I\bigr)\; P\bigl(m^t_1, m^t_2 \mid C_t(d), I\bigr) \tag{4.18}$$

Until now we have shown how the displacement field is reduced to a set of Markov chains. We will now describe the distribution of each of these chains. As each chain distribution is defined in the same manner, from now on we drop the superscript $t$. The first terms on the right-hand side of (4.18) are defined using a data term and an interaction term as specified below,

$$p(m_k \mid m_{k-1}, m_{k-2}, C(d), I) \;\propto\; \exp\Bigl(-U_c\bigl(m_k, C(d), |\nabla I_L|\bigr) - \beta_c\, V_c\bigl(m_k, m_{k-1}, m_{k-2}, C(d)\bigr)\Bigr) \tag{4.19}$$

where $\beta_c$ is an interaction parameter acting as a weight between the two terms $U_c$ and $V_c$. The data term $U_c$ tries to move the chains towards the highest gradients in the image, whereas the interaction term $V_c$ enforces the chains to be smooth. These two terms $U_c$ and $V_c$ are described in detail in what follows.
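To show how (4.19) is used in practice, here is a small, hypothetical sketch that normalises the two energy terms into a conditional distribution over the three candidate labels; `U_c` and `V_c` are placeholder callables standing in for the data term (4.20) and interaction term (4.24) defined below, and `beta_c` is an example value.

```python
import math

def transition_probs(U_c, V_c, m_prev, m_prev2, beta_c=1.0):
    """Sketch of (4.19): p(m_k | m_{k-1}, m_{k-2}, C(d), I) over the labels {-1, 0, 1}.

    U_c(m) and V_c(m, m_prev, m_prev2) are placeholder callables for the data and
    interaction terms; beta_c is an example value for the interaction weight.
    """
    labels = (-1, 0, 1)
    scores = [math.exp(-U_c(m) - beta_c * V_c(m, m_prev, m_prev2)) for m in labels]
    Z = sum(scores)                              # normalising constant
    return {m: s / Z for m, s in zip(labels, scores)}
```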

Figure 4.6: (a) shows how the data term favours a location in the direction of the gradient maximum along the normal. (b) shows the vectors $\vec{u}_{k-1}$ and $\vec{u}_k$; the interaction term assigns a score based on the angle between them.

Data term.

The data term $U_c$ associates a cost with moving the point $x_k$ to either side along the normal $w_k$. In order to determine this cost, we first choose points on both sides of the current position using a range of offsets from $-\epsilon$ to $\epsilon$ along the normal, where $\epsilon$ is an integer value to be fixed (in figure 4.6(a), $\epsilon = 2$). Then, we compute the difference in gradient magnitude between the current position and each of the points chosen along the normal. If this difference is strictly negative, moving the current chain point $x_k$ to that position along the normal leads to a higher-gradient position; hence the direction of motion towards this position is favoured. For example, in figure 4.6(a) the positions of higher gradient are $x_k + \ell w_k$ with $\ell = -1$ or $-2$, and therefore the preferred direction of motion is negative, i.e., $m_k = -1$. This can be written as,

$$U_c\bigl(m_k, C(d), |\nabla I_L|\bigr) \;=\; 1 - 2\,\mathbf{1}_{\{M_k = s_k(|\nabla I_L|,\, w_k)\}}(m_k) \tag{4.20}$$

where

$$s_k\bigl(|\nabla I_L|, w_k\bigr) \;=\; \mathrm{sgn}\Bigl(\operatorname*{arg\,min}_{\ell \in [-\epsilon,\, \epsilon]} \bigl(|\nabla I_L(x_k)| - |\nabla I_L(y_\ell)|\bigr)\Bigr) \tag{4.21}$$

where $y_\ell = x_k + \ell\, w_k$ and $|\nabla I_L|$ is the gradient magnitude in the reference image $I_L$. Also, $\mathrm{sgn}$ denotes the function that is $1$ when its argument is strictly positive, $-1$ when it is strictly negative and $0$ otherwise. The last term in (4.18) is evaluated based only on the data term.
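A minimal sketch of the data term follows, assuming the reconstruction of (4.20)–(4.21) above: the chain normal is taken as an integer pixel offset, boundary checks are omitted, and `eps` is an assumed name for the search range.

```python
import numpy as np

def data_term(grad_mag, x_k, w_k, m_k, eps=2):
    """Sketch of (4.20)-(4.21): cost of displacement m_k in {-1, 0, 1} at chain point x_k.

    grad_mag : 2-D array of gradient magnitudes |grad I_L| of the reference image
    x_k      : (row, col) position of the chain point
    w_k      : chain normal, given here as an integer (row, col) offset
    eps      : search range along the normal (assumed name; eps = 2 as in figure 4.6(a))
    """
    x_k, w_k = np.asarray(x_k), np.asarray(w_k)
    offsets = list(range(-eps, eps + 1))
    # gradient-magnitude difference between the current point and each probe y_l = x_k + l*w_k
    diffs = [grad_mag[tuple(x_k)] - grad_mag[tuple(x_k + l * w_k)] for l in offsets]
    l_star = offsets[int(np.argmin(diffs))]     # offset with the largest gradient increase
    s_k = int(np.sign(l_star))                  # preferred direction of motion, as in (4.21)
    return 1 - 2 * int(m_k == s_k)              # (4.20): -1 if m_k matches s_k, +1 otherwise
```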

Interaction term

The interaction term enforces smoothness by favouring those $m$-values which make the angle between the vectors defined by the positions $(x_{k-2}, x_{k-1})$ and $(x_{k-1}, x_k)$ as close to zero as possible. The vectors defined by $x_{k-2}, x_{k-1}, x_k \in C(d)$ are denoted by $\vec{u}_{k-1}$ and $\vec{u}_k$, as shown in figure 4.6(b), and expressed as follows:

$$\vec{u}_{k-1} = (x_{k-1} + m_{k-1} w_{k-1}) - (x_{k-2} + m_{k-2} w_{k-2}) \tag{4.22}$$

$$\vec{u}_{k} = (x_{k} + m_{k} w_{k}) - (x_{k-1} + m_{k-1} w_{k-1}). \tag{4.23}$$

The term $V_c$ therefore assigns a score based on the angle between the vectors $\vec{u}_{k-1}$ and $\vec{u}_k$ and is defined as

$$V_c\bigl(m_k, m_{k-1}, m_{k-2}, C(d)\bigr) \;=\; -\,\frac{\langle \vec{u}_{k-1}, \vec{u}_{k} \rangle}{|\vec{u}_{k-1}|\,|\vec{u}_{k}|} \tag{4.24}$$

where $\langle \cdot\,, \cdot \rangle$ denotes the scalar product (see figure 4.6(b)). Note that (4.19) defines a second-order Markov chain, which can easily be turned into a first-order Markov chain, so that optimal displacement values can be found using the standard Viterbi algorithm.
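The conversion to a first-order chain can be sketched by taking the pair $(m_{k-1}, m_k)$ as the state at step $k$ and running a standard Viterbi recursion on the energies $U_c + \beta_c V_c$, ignoring the per-step normalising constants of (4.19). The following is a minimal sketch under those assumptions; `unary` and `pairwise` are placeholders for the data and interaction terms.

```python
import numpy as np
from itertools import product

LABELS = (-1, 0, 1)

def viterbi_chain(unary, pairwise, beta_c=1.0):
    """Sketch: decode one discontinuity chain after the second- to first-order conversion.

    unary[k][m]            : data cost U_c at chain position k for label m
    pairwise(k, m, m1, m2) : interaction cost V_c at position k for (m_k, m_{k-1}, m_{k-2})
    The state at step k is the pair (m_{k-1}, m_k), which makes the chain first order.
    """
    K = len(unary)
    states = list(product(LABELS, LABELS))                 # all (m_{k-1}, m_k) pairs
    # initialise with the data costs of the first two positions, playing the role of
    # the last factor P(m_1, m_2 | C_t(d), I) in (4.18)
    score = {(a, b): unary[0][a] + unary[1][b] for a, b in states}
    back = []
    for k in range(2, K):
        new_score, new_back = {}, {}
        for m1, mk in states:                              # candidate state (m_{k-1}, m_k)
            best, best_prev = np.inf, None
            for m2 in LABELS:                              # previous state (m_{k-2}, m_{k-1})
                cost = score[(m2, m1)] + unary[k][mk] + beta_c * pairwise(k, mk, m1, m2)
                if cost < best:
                    best, best_prev = cost, (m2, m1)
            new_score[(m1, mk)], new_back[(m1, mk)] = best, best_prev
        score, back = new_score, back + [new_back]
    state = min(score, key=score.get)                      # best final pair state
    m = [state[0], state[1]]
    for bk in reversed(back):                              # backtrack to the full labelling
        state = bk[state]
        m.insert(0, state[0])
    return m
```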

Furthermore, we consider an additional heuristic that prevents any discontinuity chain point already at a gradient peak from moving from its position. This ensures that the contour chains do not move inside the object boundaries, which could otherwise happen since there may be positions with higher gradient within the object, depending on how textured the image is. As termination of the overall algorithm (Alternation Maximization, section 4.4) is determined by the number of moving points, i.e., non-zero $m$-values, this heuristic aids in attaining faster convergence.

The displacement labels obtained at all chain positions are then embedded back into the image grid: every pixel location is given a displacement label of zero except at the locations where the discontinuity chains were formed. Each pixel location now carries the following information: the direction of movement, which is the normal to the discontinuity chain at that point, and a magnitude set to 1 or 0 in order to curb fast movement towards wrong gradient maxima (a small sketch of this embedding step is given below).

This completes the description of our coupled-MRF model for the disparity and displacement fields. We have shown how each field ($D$ or $A$) uses the information from the other within its model. In the next section, we propose an optimization scheme to find estimates of the unknown fields $D$ and $A$ which are consistent with the observed image set $I$.
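As a recap of the embedding step described above, here is a hypothetical sketch: all pixels receive magnitude zero, and only the chain positions keep their decoded label together with the chain normal as the movement direction.

```python
import numpy as np

def embed_displacements(shape, chain_positions, chain_normals, chain_labels):
    """Hypothetical sketch: write the decoded per-chain labels back onto the full grid.

    shape           : (rows, cols) tuple of the image grid
    chain_positions : list of (row, col) chain points
    chain_normals   : list of (row, col) normals, one per chain point
    chain_labels    : decoded labels in {-1, 0, 1}, one per chain point
    """
    magnitude = np.zeros(shape, dtype=int)           # zero displacement everywhere by default
    direction = np.zeros(shape + (2,), dtype=float)  # movement direction (chain normal)
    for (r, c), n, m in zip(chain_positions, chain_normals, chain_labels):
        magnitude[r, c] = m                          # signed step along the normal
        direction[r, c] = n
    return magnitude, direction
```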
