
Journal of Mathematics and Statistics 4 (4): 217-221, 2008 ISSN 1549-3644

© 2008 Science Publications

Corresponding Author: Youness, E.A., Department of Mathematics Faculty of Science, Tanta University, Tanta, Egypt

Quadratic Interpolation Algorithm for Minimizing Tabulated Function

1Youness, E.A., 1S.Z. Hassan and 2Y.A. El-Rewaily
1Department of Mathematics, Faculty of Science, Tanta University, Tanta, Egypt
2Department of Mathematics, Faculty of Education for Girls, Faisal University, KSA

Abstract: Problem statement: The problem of finding the minimum value of an objective function, when only some of its values are known, arises in many practical fields. Quadratic interpolation algorithms are the standard tools for this kind of problem; they are concerned with the polynomial space in which the objective function is approximated. Approach: In this study we approximated the objective function by a one-dimensional quadratic polynomial. This approach saves the time and effort needed to reach the best point at which the objective is minimized. Results: The quadratic polynomial built at each step of the proposed algorithm accelerates convergence to the best value of the objective function without taking into account all points of the interpolation set. Conclusion: Any n-dimensional problem of finding the minimal value of a function given by some of its values can be converted into a one-dimensional problem that is easier to handle.

Key words: Quadratic interpolation, tabulated function, trust region, derivative free optimization

INTRODUCTION

Many optimization problems occur in practice. For example, in most laboratories one obtains data for a certain phenomenon and wants to know the point at which this phenomenon is minimized. Similar problems arise in economics, the social sciences, engineering and many other fields.

Many researchers have constructed useful algorithms for dealing with this kind of problem. These algorithms are classified as follows: algorithms that use finite difference approximations of the objective function's derivatives[2-4]; pattern search methods, which are based on exploring the variables' space using a well specified geometric pattern, typically a simplex[8]; and, finally, algorithms based on progressively building and updating a model of the objective function[9,10].

There is also a related class of "global modeling methods" that use designs of interpolation models[1,6,7].

In the following discussion we present the analysis on which the proposed algorithm for finding the minimal point of a tabulated function is based.

RESULTS AND DISCUSSION

Let I denote the set of points x ∈ X ⊂ R^n at which the function f : X ⊂ R^n → R is given in Table 1.

Table 1: Initial interpolation set

x      x_0      x_1      ……   x_j
f(x)   f(x_0)   f(x_1)   ……   f(x_j)

Choose x_k ∈ I such that f(x_k) ≤ f(x) for each x ∈ I, and consider a real quadratic convex function φ(α) such that:

φ(α) = f(x_k + αp)

Where:
p ∈ B ⊂ R^n and B is the trust region with radius Δ, i.e.:

B = {x ∈ R^n : ‖x − x_k‖ ≤ Δ}

and p is chosen such that f(p) ≥ f(x_k) and f(p) < f(x) for each x ∈ I, x ≠ x_k.
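The selection rules above can be sketched in a few lines of Python. This is a minimal sketch, assuming the tabulated data is stored as a dict mapping points (tuples) to known values of f; the names `choose_xk_and_p`, `table` and `delta` are illustrative, not from the paper. The data is the table from the paper's example.

```python
# Sketch of the selection rules: x_k is the tabulated point with the least
# known value of f, and p is a tabulated point inside the trust region
# B = {x : ||x - x_k|| <= delta} with f(p) >= f(x_k), taking the candidate
# whose value is smallest among the remaining points.

def choose_xk_and_p(table, delta):
    xk = min(table, key=table.get)

    def dist(x, y):
        return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

    candidates = [x for x in table
                  if x != xk and table[x] >= table[xk] and dist(x, xk) <= delta]
    p = min(candidates, key=table.get) if candidates else None
    return xk, p

# Tabulated values from the paper's example (Table 2)
table = {(-2.0, 0.0): 4, (1.0, 1.0): 2, (0.0, 1.0): 1,
         (1.0, 0.0): 1, (0.0, 2.0): 4, (1.0, 2.0): 5}
xk, p = choose_xk_and_p(table, delta=2.0)
```

On the example data this recovers the paper's own choices x_1 = (0, 1) and p_1 = (1, 0).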

The quadratic convex function φ(α) is constructed by interpolating points α_1, α_2, α_3 ∈ R such that the function f is given at, at least, two of the α's. If f is known at each of α_1, α_2, α_3, then φ(α) is easy to construct. If f is known at only two of them, the remaining value is calculated from the differentiability of φ. If f is known at only one of the α's, then we can consider one of the other two values (say φ(α_2)) to be less than φ(α_1) and φ(α_3).
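The first case above, where all three values are known, is plain three-point quadratic interpolation. Below is a hand-rolled sketch of that step (the paper itself performs it with the Maple package); the function names are illustrative, and the sample data takes the unknown value z at α = −1 to be 2, which is consistent with the paper's Maple result φ(α) = α² + 1.

```python
# Fit phi(alpha) = a*alpha**2 + b*alpha + c through three points
# (alpha_i, phi_i) using Newton divided differences, then expand to
# monomial coefficients; the minimizer -b/(2a) is valid only when a > 0.

def fit_quadratic(alphas, phis):
    (a1, a2, a3), (f1, f2, f3) = alphas, phis
    d1 = (f2 - f1) / (a2 - a1)
    d2 = ((f3 - f2) / (a3 - a2) - d1) / (a3 - a1)
    a = d2
    b = d1 - d2 * (a1 + a2)
    c = f1 - d1 * a1 + d2 * a1 * a2
    return a, b, c

def quad_min(a, b):
    return -b / (2 * a)  # vertex of the convex parabola

# Data from the paper's first interpolation: phi(0)=1, phi(1)=2, phi(-1)=2
a, b, c = fit_quadratic([0.0, 1.0, -1.0], [1.0, 2.0, 2.0])
# Recovers phi(alpha) = alpha**2 + 1 with minimizer alpha = 0
```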

Main results: The basic idea of this study is based on constructing a one-dimensional real valued quadratic function φ(α) by interpolation. The function φ(α) is constructed such that:

φ(α) = f(x_k + αp)

where x_k is the point in the set I at which f attains its least value and the point p is chosen as indicated above.

The following results show that the direction p is a descent direction and how we can build our algorithm.

Proposition 1: If ᾱ is a minimal solution of the problem min_{α ∈ R} φ(α) and p ∈ I is chosen as indicated before, then:

f(x_{k+1}) ≤ f(x_k)

where x_{k+1} = x_k + ᾱp, i.e., p is a descent direction.

Proof: Since ᾱ is a minimal solution of the problem min_{α ∈ R} φ(α), we have:

φ(ᾱ) ≤ φ(α), ∀α ∈ R.

Thus:

f(x_k + ᾱp) ≤ f(x_k + αp), ∀α ∈ R.

Hence, taking α = 0:

f(x_k + ᾱp) ≤ f(x_k).

But f(x_k) ≤ f(x) for each x ∈ I; therefore the direction p is a descent direction of f.

Proposition 2: If x_{k+1} = x_k + ᾱp = x_k, then there is no other point x̂ = x_k + α̂p ∈ I such that:

f(x_k + α̂p) ≤ f(x_k).

Proof: Since x_{k+1} = x_k, we have:

ᾱ = arg min_{α ∈ R} φ(α) = 0

where φ(α) = f(x_k + αp_k). Also, on the direction p_k there is no x̂ such that f(x̂) < f(x_k + ᾱp_k), because if there were such a point, then there would be an α̂ such that x̂ = x_k + α̂p_k and:

f(x_k + α̂p_k) < f(x_k + ᾱp_k), i.e., φ(α̂) < φ(ᾱ)

which is a contradiction.

Now, let p̂ be another descent direction, p̂ in the set I, such that:

f(p̂) > f(p_k) > f(x_k).

Assume x̂ = x_k + α̂p̂ is such that:

f(x̂) = f(x_k + α̂p̂) < f(x_k + ᾱp_k).

By constructing a function Ψ(α) such that Ψ(α) = f(x_k + αp̂), we get:

Ψ(α̂) = Ψ(0) + α̂ p̂ᵀ∇f(x_k) + (α̂²/2) p̂ᵀ∇²f(x_k)p̂ < f(x_k + ᾱp_k).

Since p̂ is a descent direction and Ψ(α) is convex:

f(x_k) = Ψ(0) < f(x_k + ᾱp_k)

which is a contradiction.

Theorem: The sequence generated by:

x_{k+1} = x_k + ᾱp_k

is convergent, where ᾱ = arg min_{α ∈ R} φ(α) and p_k is in the trust region B_k.

Proof: The trust region B_k is defined as:

B_k = {x ∈ R^n : ‖x − x_k‖ ≤ Δ_k}.

This set is closed and bounded. Furthermore:

x_{k+1} = x_k + ᾱp_k ∈ B_k.


Also,

k 2 k 1 k 1

k 1

ˆ

x+ =x + + αp+ ∈B + ,

Where Bk 1+ is also closed and

ϕ

≠ ∩

+1 k

k B

B .

By repeating this process, we find the sequence

{ }

k

x is contained

m

k k

k 1

B B

=

. Since Bk

is closed, it

contains the limit point of

{ }

k

x , thus

{ }

k

x is convergent.

The Algorithm: From the previous discussion we can state the following algorithm:

Initial step:
0.1: Set k = 1.
0.2: Choose x_1 ∈ I such that f(x_1) ≤ f(x) for each x ∈ I.
0.3: Let Δ be the radius of the trust region B_1, B_1 = {x ∈ R^n : ‖x − x_1‖ ≤ Δ}.
0.4: Choose p_1 ∈ B_1 such that p_1 ∈ I and f(p_1) ≥ f(x_1), f(p_1) < f(x), ∀x ∈ I, x ≠ x_1.
0.5: Set φ(α) = f(x_1 + αp_1), α ∈ R.

The first step:
1.1: Choose α_1, α_2, α_3 ∈ R such that:

φ(α_1) = f(x_1 + α_1 p_1)
φ(α_2) = f(x_1 + α_2 p_1)
φ(α_3) = f(x_1 + α_3 p_1).

1.2: If x_1 + α_1 p_1, x_1 + α_2 p_1, x_1 + α_3 p_1 ∈ I, then interpolate the points α_1, α_2, α_3 to obtain a quadratic function φ(α).
1.3: Determine ᾱ = arg min_{α ∈ R} φ(α) and go to step 2.
1.4: If one of the points x_1 + α_i p_1 ∉ I, i = 1, 2, 3, then set its value equal to z, i.e., φ(α_j) = f(x_1 + α_j p_1) = z.
1.5: Determine z so that φ′(0) = 0, substitute z into φ to obtain φ(α), and go to step 1.2.
1.6: If two of the points are not in I (say x_1 + α_l p_1, x_1 + α_m p_1 ∉ I), then choose α_l, α_m such that φ(α_l) < φ(α_i) and φ(α_l) < φ(α_m) ≤ φ(α_i), or φ(α_m) < φ(α_l) < φ(α_i), and go to step 1.2.

The second step:
2.1: If there is no improvement in the value of f, then reverse the direction of p_1 to obtain φ(α) = f(x_1 − αp_1) and go to step 1. Otherwise go to step 3.
2.2: If x_1 − αp_1 does not improve the value of f, then let φ(α) = f(x_1 + α(p_1 − x_1)), with (p_1 − x_1) ∈ B_1, and go to step 1. Otherwise go to step 3.
2.3: If (p_1 − x_1) ∉ B_1, then extend the radius of B_1 to contain p_1 − x_1 and go to step 1.

The third step:
3.1: Enter x_2 = x_1 + ᾱp_1 (or x_1 − ᾱp_1, or x_1 + ᾱ(p_1 − x_1)) in the set I and reduce the radius of the trust region to obtain:

B_2 = {x ∈ R^n : ‖x − x_2‖ ≤ Δ_2}.

Then go to step 0.
3.2: If x_2 = x_1, then stop; the minimal point is in the ball B_2(x_1) with radius Δ_2. Otherwise go to step 0.
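One pass of steps 0.1-0.5, 1.1-1.3 and 3.1 can be sketched in Python for the fully-known case in which all three interpolation points lie in the set I. This is a sketch under assumptions: the function is one-dimensional, stored as a dict {x: f(x)}, and the test data (f(x) = (x − 1.5)², tabulated at the integers 0..3) is illustrative, not from the paper; steps 1.4-1.6 (the unknown value z) and steps 2-3.2 are not implemented.

```python
# One pass of the algorithm on a 1-D tabulated function. As in the paper,
# the tabulated point p_1 is used directly as the step direction, so the
# trial points are x_1 + alpha * p_1.

def one_pass(table, delta, alphas=(0.0, 0.5, 1.0)):
    # Step 0.2: x_1 attains the least tabulated value.
    x1 = min(table, key=table.get)
    # Step 0.4: p_1 in the trust region, f(p_1) >= f(x_1), least among the rest.
    cand = [x for x in table
            if x != x1 and table[x] >= table[x1] and abs(x - x1) <= delta]
    p1 = min(cand, key=table.get)
    # Step 1.1: read phi(alpha) = f(x_1 + alpha * p_1) off the table.
    pts = [x1 + a * p1 for a in alphas]
    if not all(pt in table for pt in pts):
        return None  # steps 1.4-1.6 (unknown value z) would be needed here
    f0, f1, f2 = (table[pt] for pt in pts)
    # Step 1.3: minimizer of the interpolating quadratic (Newton form).
    a0, a1, a2 = alphas
    d1 = (f1 - f0) / (a1 - a0)
    d2 = ((f2 - f1) / (a2 - a1) - d1) / (a2 - a0)
    abar = -(d1 - d2 * (a0 + a1)) / (2 * d2)
    # Step 3.1: enter the new point x_2 = x_1 + abar * p_1.
    return x1 + abar * p1

# f(x) = (x - 1.5)**2 tabulated at 0, 1, 2, 3
table = {0.0: 2.25, 1.0: 0.25, 2.0: 0.25, 3.0: 2.25}
x2 = one_pass(table, delta=2.0)
```

On this data a single pass already lands on 1.5, the true minimizer of the underlying function.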

Example: Determine the minimal point of the function f(x) given in Table 2.

Table 2: Initial set values of f

x      (−2, 0)  (1, 1)  (0, 1)  (1, 0)  (0, 2)  (1, 2)
f(x)   4        2       1       1       4       5

Choose x_1 = (0, 1) and Δ = 2; then:

B_1 = {(x, y) : ‖(x, y) − (0, 1)‖ ≤ 2}.

Choose p_1 = (1, 0) and let:

φ(α) = f[(0, 1) + α(1, 0)] = f(α, 1).

Take α_1 = 0, α_2 = 1, α_3 = −1; then φ(α_1) = 1 and φ(α_2) = 2. Set φ(α_3) = z.

By using the Maple package we obtain:

φ(α) = α² + 1

and ᾱ = 0 = arg min_{α ∈ R} φ(α).

Thus f(ᾱ, 1) = f(0, 1) = 1, i.e., there is no improvement in the value of f, so we consider the opposite direction −p_1 = (−1, 0).

Therefore:

φ(α) = f(−α, 1)

which, by interpolating the points

α:     α_1  α_2  α_3
φ(α):  1    2    z

and using the Maple package, gives:

φ(α) = α² + 1

which has ᾱ = 0 as its minimal point, again with no improvement in the value of f. Therefore we consider the direction p_1 − x_1, such that p_1 − x_1 ∈ B_1.

Since p_1 − x_1 = (1, −1) ∉ B_1, we extend the radius of B_1 to Δ = 3. In this case we obtain:

φ(α) = f(α, 1 − α).

Since α_1 = 0 gives the point (0, 1) ∈ I, we choose α_2 = 1/2 and α_3 = 1, letting the corresponding values of φ(α) be 0 and 1 respectively.

By interpolating φ(α) we obtain:

φ(α) = 4α² − 4α + 1

which has ᾱ = 1/2 as its minimal point. We then enter the point x_2 = (1/2, 1/2) into the set I with the corresponding value φ(1/2) = 0, i.e., the set I becomes:

x      (−2, 0)  (1, 1)  (0, 1)  (1/2, 1/2)  (1, 0)  (0, 2)  (1, 2)
f(x)   4        2       1       0           1       4       5

We reduce the trust region to:

B_2 = {(x, y) : ‖(x, y) − (1/2, 1/2)‖ ≤ Δ_2 = 3/2}

and choose p_2 ∈ B_2, p_2 ∈ I, p_2 = (1, 0). By using Maple we get:

φ(α) = f(1/2 + α, 1/2) and φ(α) = α²

which has the minimal point ᾱ = 0.

Now there is no improvement, so we consider the opposite direction −p_2 = (−1, 0), for which:

φ(α) = f(1/2 − α, 1/2)

and for the points

α:     −1  0  1
φ(α):  1   0  1

the interpolation again gives φ(α) = α², with no improvement. Hence we consider the direction p_2 − x_2 = (1/2, −1/2). By using Maple we get φ(α) = α² with z = 1, and since there is no improvement in any of the three considered directions, we stop: the minimal point is in the ball B(1/2, 1/2) with radius 3/2.
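The decisive interpolation step of the example can be checked numerically. The sketch below fits a quadratic through the data φ(0) = 1, φ(1/2) = 0, φ(1) = 1 used for the direction p_1 − x_1; it should reproduce φ(α) = 4α² − 4α + 1 with minimizer ᾱ = 1/2, matching the polynomial the paper obtains via Maple (NumPy stands in here for Maple).

```python
# Fit a degree-2 polynomial through the example's three interpolation points
# and recover the minimizer of the resulting convex parabola.
import numpy as np

coeffs = np.polyfit([0.0, 0.5, 1.0], [1.0, 0.0, 1.0], deg=2)
a, b, c = coeffs             # approximately 4, -4, 1
alpha_bar = -b / (2 * a)     # approximately 0.5
```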

CONCLUSION


REFERENCES

1. Conn, A.R. and P.L. Toint, 1996. An algorithm using quadratic interpolation for unconstrained derivative free optimization. In: Nonlinear Optimization and Applications, Di Pillo, G. and F. Giannessi (Eds.). Plenum Publishing, New York, pp: 27-47.

2. Dennis, J.E. and R.B. Schnabel, 1983. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, Englewood Cliffs, USA.

3. Gill, P.E., W. Murray and M.H. Wright, 1981. Practical Optimization. Academic Press, London and New York, pp: 402.

4. Gill, P.E., W. Murray, M.A. Saunders and M.H. Wright, 1983. Computing forward-difference intervals for numerical optimization. SIAM J. Sci. Stat. Comput., 4: 310-321. http://dx.doi.org/10.1137/0904025.

5. Kolda, T.G., R.M. Lewis and V. Torczon, 2003. Optimization by direct search: New perspectives on some classical and modern methods. SIAM Rev., 45: 385-482. DOI: 10.1137/S003614450242889.

6. Sacks, J., W.J. Welch, T.J. Mitchell and H.P. Wynn, 1989. Design and analysis of computer experiments. Stat. Sci., 4: 409-435. http://www.cant.ua.ac.be/modelbenchmarks/sackswelchmitchelwynn.pdf.

7. Morris, M., C. Currin, T.J. Mitchell and D. Ylvisaker, 1991. Bayesian prediction of deterministic functions with application to the design and analysis of computer experiments. J. Am. Stat. Associat., 86: 953-963. citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.8.1552.

8. Nelder, J.A. and R. Mead, 1965. A simplex method for function minimization. Comput. J., 7: 308-313. http://doi.acm.org/10.1145/203082.203090.

9. Powell, M.J.D., 1994. A direct search optimization method that models the objective and constraint functions by linear interpolation. In: Advances in Optimization and Numerical Analysis, Gomez, S. and J.P. Hennart (Eds.). Kluwer Academic Publishers, pp: 275.

10. Powell, M.J.D., 2001. On the Lagrange functions of quadratic models that are defined by interpolation. Optim. Methods Softw., 16: 289-309.
