Smoothed Lower Order Penalty Function for Constrained Optimization Problems

Nguyen Thanh Binh

Abstract—This paper introduces a smoothing method for the lower order penalty function for constrained optimization problems. It is shown that, under some mild conditions, an optimal solution of the smoothed penalty problem is an approximate optimal solution of the original problem. Based on the smoothed penalty function, an algorithm is presented and its convergence is proved under mild assumptions. Numerical examples show that the presented algorithm is effective.

Index Terms—constrained optimization problem, penalty function method, exact penalty function, smoothing method.

I. INTRODUCTION

CONSIDER the constrained optimization problem:

$$(P)\qquad \min\ f(x) \quad \text{s.t.}\ g_i(x) \le 0,\ i = 1, 2, \ldots, m,\quad x \in \mathbb{R}^n,$$

where $f, g_i : \mathbb{R}^n \to \mathbb{R}$, $i \in I = \{1, 2, \ldots, m\}$, are real-valued functions and $X_0 = \{x \in \mathbb{R}^n \mid g_i(x) \le 0,\ i \in I\}$ is the feasible set of (P). Constrained optimization problems have been applied in many fields, such as optimal control, engineering, national defence, economics, and finance. Up to now, methods for solving constrained optimization problems have been well studied in the literature. The use of penalty functions to solve constrained optimization problems is generally attributed to Courant. Significant progress in solving many practical nonlinear constrained optimization problems by exact penalty function methods has been reported in the literature [1], [2], [3], [4], [5], [6], [7]. Zangwill [7] proposed the classic $l_1$

exact penalty function as follows:

$$F_\sigma(x) = f(x) + \sigma \sum_{i=1}^m g_i^+(x), \qquad (1)$$

where $g_i^+(x) = \max\{0, g_i(x)\}$, $i \in I$, and $\sigma > 0$ is a

penalty parameter. It is known from the theory of ordinary constrained optimization that the $l_1$ exact penalty function is a good candidate for penalization. However, it is not a smooth function and causes numerical instability in its implementation when the value of the penalty parameter $\sigma$ becomes large. Thus, smoothing methods for the exact penalty function have attracted much attention; see, e.g., [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18]. In recent years, the lower order penalty function

$$F^k_\sigma(x) = f(x) + \sigma \sum_{i=1}^m [g_i^+(x)]^k, \qquad (2)$$

Manuscript received July 17, 2015; revised October 20, 2015. This work is supported by National Natural Science Foundation of China (Grant No. 11371242).

N. T. Binh is with the Department of Mathematics, Shanghai University, Shanghai 200444, P. R. China, e-mail: bingbongyb@gmail.com

where $k \in (0,1)$ and $\sigma > 0$, has been introduced in [12], [13], [15]. Obviously, when $k = 1$, the lower order penalty function (2) reduces to the exact penalty function (1). Meng et al. [12] discussed two smoothing approximations to the lower order penalty function for inequality constrained optimization. Meng et al. [13] discussed a smoothing method for a lower order penalty function and gave a robust SQP method for (P) by integrating the smoothed penalty function with the SQP method. Wu et al. [15] proposed the $\epsilon$-smoothing of (2) and obtained a modified exact penalty function under some mild conditions; error estimates between the optimal value of the smoothed penalty problem and that of the original penalty problem were derived.

Based on the lower order penalty function, this paper introduces a smoothed lower order penalty function method, which differs from the smoothing penalty function methods in [12], [13], [15], and proposes a corresponding algorithm for solving (P). Numerical results show that this algorithm has good convergence and is efficient in solving some constrained optimization problems.

The rest of this paper is organized as follows. In Section II, we propose a new smoothing function for the lower order penalty function (2) and discuss some of its fundamental properties. In Section III, an algorithm based on the smoothed lower order penalty function is proposed, its global convergence is proved, and some numerical examples are given.

II. SMOOTHED LOWER ORDER METHOD

Consider $p_k(v) : \mathbb{R} \to \mathbb{R}$ defined by

$$p_k(v) = \begin{cases} 0 & \text{if } v < 0, \\ v^k & \text{if } v \ge 0, \end{cases}$$

where $k \in (0,1)$. Then

$$F^k_\sigma(x) = f(x) + \sigma \sum_{i=1}^m p_k(g_i(x)). \qquad (3)$$

The corresponding penalty problem to (3) is defined as follows:

$$(P_\sigma)\qquad \min\ F^k_\sigma(x) \quad \text{s.t.}\ x \in \mathbb{R}^n.$$

For $k \in (0,1)$ and any $\epsilon > 0$, we define the function $p^k_{\epsilon,\sigma}(v)$ as

$$p^k_{\epsilon,\sigma}(v) = \begin{cases} 0 & \text{if } v < 0, \\[4pt] \dfrac{m^2\sigma^2 v^{3k}}{6\epsilon^2} & \text{if } 0 \le v < \left(\dfrac{\epsilon}{m\sigma}\right)^{1/k}, \\[8pt] v^k + \dfrac{\epsilon^2 v^{-k}}{2m^2\sigma^2} - \dfrac{4\epsilon}{3m\sigma} & \text{if } v \ge \left(\dfrac{\epsilon}{m\sigma}\right)^{1/k}. \end{cases}$$

Fig. 1. The behavior of $p_k(v)$ and $p^k_{\epsilon,\sigma}(v)$ for $k = 2/3$, $m = 4$, $\sigma = 5$ and $\epsilon = 0.01, 0.5, 1.0$.

Remark 2.1. Obviously, $p^k_{\epsilon,\sigma}(v)$ has the following attractive properties:

(i) $p^k_{\epsilon,\sigma}(v)$ is twice continuously differentiable on $\mathbb{R}$ for $\frac{2}{3} < k < 1$, where

$$[p^k_{\epsilon,\sigma}(v)]' = \begin{cases} 0 & \text{if } v < 0, \\[4pt] \dfrac{k m^2\sigma^2 v^{3k-1}}{2\epsilon^2} & \text{if } 0 \le v < \left(\dfrac{\epsilon}{m\sigma}\right)^{1/k}, \\[8pt] k v^{k-1} - \dfrac{k\epsilon^2 v^{-k-1}}{2m^2\sigma^2} & \text{if } v \ge \left(\dfrac{\epsilon}{m\sigma}\right)^{1/k}, \end{cases}$$

and

$$[p^k_{\epsilon,\sigma}(v)]'' = \begin{cases} 0 & \text{if } v < 0, \\[4pt] \dfrac{k(3k-1) m^2\sigma^2 v^{3k-2}}{2\epsilon^2} & \text{if } 0 \le v < \left(\dfrac{\epsilon}{m\sigma}\right)^{1/k}, \\[8pt] k(k-1) v^{k-2} + \dfrac{k(k+1)\epsilon^2 v^{-k-2}}{2m^2\sigma^2} & \text{if } v \ge \left(\dfrac{\epsilon}{m\sigma}\right)^{1/k}. \end{cases}$$

(ii) $\lim_{\epsilon \to 0} p^k_{\epsilon,\sigma}(v) = p_k(v)$.

(iii) $p_k(v) \ge p^k_{\epsilon,\sigma}(v)$ for all $v \in \mathbb{R}$.

Figure 1 shows the behavior of $p_k(v)$ and $p^k_{\epsilon,\sigma}(v)$. Assume that $f, g_i\ (i \in I)$ are continuously differentiable functions. Let

$$F^k_{\epsilon,\sigma}(x) = f(x) + \sigma \sum_{i=1}^m p^k_{\epsilon,\sigma}(g_i(x)). \qquad (4)$$

Clearly, $F^k_{\epsilon,\sigma}(x)$ is continuously differentiable at any $x \in \mathbb{R}^n$. Consider the following smoothed penalty problem:

$$(SP_{\epsilon,\sigma})\qquad \min\ F^k_{\epsilon,\sigma}(x) \quad \text{s.t.}\ x \in \mathbb{R}^n.$$
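To make the construction concrete, the following Python sketch implements $p_k$, the smoothing function $p^k_{\epsilon,\sigma}$, and the first derivative from Remark 2.1. The paper's experiments were run in MATLAB; this translation, including the names `p_k`, `p_eps` and `dp_eps`, is an illustrative assumption of this edit, not the author's code.

```python
import numpy as np

def p_k(v, k):
    # Lower order penalty term: 0 for v < 0, v**k for v >= 0.
    v = np.asarray(v, dtype=float)
    return np.where(v > 0.0, np.maximum(v, 0.0) ** k, 0.0)

def p_eps(v, k, eps, sigma, m):
    # Smoothed term p^k_{eps,sigma}(v); branch point t = (eps/(m*sigma))**(1/k).
    v = np.asarray(v, dtype=float)
    t = (eps / (m * sigma)) ** (1.0 / k)
    vm = np.clip(v, 0.0, t)   # safe argument for the middle branch
    vp = np.maximum(v, t)     # safe argument for the upper branch
    mid = m**2 * sigma**2 * vm ** (3.0 * k) / (6.0 * eps**2)
    top = (vp**k + eps**2 * vp**(-k) / (2.0 * m**2 * sigma**2)
           - 4.0 * eps / (3.0 * m * sigma))
    return np.where(v < 0.0, 0.0, np.where(v < t, mid, top))

def dp_eps(v, k, eps, sigma, m):
    # First derivative [p^k_{eps,sigma}]'(v) from Remark 2.1 (continuous for k > 1/3).
    v = np.asarray(v, dtype=float)
    t = (eps / (m * sigma)) ** (1.0 / k)
    vm = np.clip(v, 0.0, t)
    vp = np.maximum(v, t)
    mid = k * m**2 * sigma**2 * vm ** (3.0 * k - 1.0) / (2.0 * eps**2)
    top = k * vp ** (k - 1.0) - k * eps**2 * vp ** (-k - 1.0) / (2.0 * m**2 * sigma**2)
    return np.where(v < 0.0, 0.0, np.where(v < t, mid, top))

# Spot-check property (iii), p_k >= p_eps, for the parameters plotted in Fig. 1.
v = np.linspace(-0.1, 0.2, 301)
for eps in (1.0, 0.5, 0.01):
    assert np.all(p_k(v, 2/3) >= p_eps(v, 2/3, eps, 5.0, 4) - 1e-12)
```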

Lemma 2.1. We have

$$0 \le F^k_\sigma(x) - F^k_{\epsilon,\sigma}(x) \le \frac{4\epsilon}{3} \qquad (5)$$

for any given $x \in \mathbb{R}^n$, $\epsilon > 0$ and $\sigma > 0$, where $F^k_\sigma(x)$ and $F^k_{\epsilon,\sigma}(x)$ are given in (3) and (4), respectively.

Proof. For any $x \in \mathbb{R}^n$, $\epsilon > 0$, $\sigma > 0$, we have

$$F^k_\sigma(x) - F^k_{\epsilon,\sigma}(x) = \sigma \sum_{i=1}^m \left( p_k(g_i(x)) - p^k_{\epsilon,\sigma}(g_i(x)) \right).$$

Note that

$$p_k(g_i(x)) - p^k_{\epsilon,\sigma}(g_i(x)) = \begin{cases} 0 & \text{if } g_i(x) < 0, \\[4pt] (g_i(x))^k - \dfrac{m^2\sigma^2 (g_i(x))^{3k}}{6\epsilon^2} \in \left[0, \dfrac{\epsilon}{m\sigma}\right) & \text{if } 0 \le g_i(x) < \left(\dfrac{\epsilon}{m\sigma}\right)^{1/k}, \\[8pt] \dfrac{4\epsilon}{3m\sigma} - \dfrac{\epsilon^2 (g_i(x))^{-k}}{2m^2\sigma^2} \in \left(0, \dfrac{4\epsilon}{3m\sigma}\right] & \text{if } g_i(x) \ge \left(\dfrac{\epsilon}{m\sigma}\right)^{1/k}, \end{cases}$$

for any $i = 1, 2, \ldots, m$. That is,

$$0 \le p_k(g_i(x)) - p^k_{\epsilon,\sigma}(g_i(x)) \le \frac{4\epsilon}{3m\sigma}.$$

Thus,

$$0 \le \sum_{i=1}^m \left( p_k(g_i(x)) - p^k_{\epsilon,\sigma}(g_i(x)) \right) \le \frac{4\epsilon}{3\sigma},$$

which implies

$$0 \le \sigma \sum_{i=1}^m \left( p_k(g_i(x)) - p^k_{\epsilon,\sigma}(g_i(x)) \right) \le \frac{4\epsilon}{3}.$$

Therefore,

$$0 \le F^k_\sigma(x) - F^k_{\epsilon,\sigma}(x) \le \frac{4\epsilon}{3}.$$

The proof is completed.
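As a quick numerical sanity check on the bound (5), one can evaluate $F^k_\sigma - F^k_{\epsilon,\sigma}$ at random points; the sketch below reuses the `p_k` and `p_eps` helpers defined above, and the toy constraints are arbitrary choices of this edit for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
k, eps, sigma, m = 0.75, 0.5, 10.0, 3
# Toy constraints g : R^2 -> R^3; the objective f cancels in the difference
# F_sigma^k(x) - F_{eps,sigma}^k(x) = sigma * sum_i (p_k(g_i(x)) - p_eps(g_i(x))).
g = lambda x: np.array([x[0] + x[1] - 1.0, -x[0], x[1] - 2.0])

for _ in range(1000):
    x = rng.uniform(-3.0, 3.0, size=2)
    gap = sigma * float(np.sum(p_k(g(x), k) - p_eps(g(x), k, eps, sigma, m)))
    assert -1e-12 <= gap <= 4.0 * eps / 3.0 + 1e-12   # Lemma 2.1 bound
```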

Based on Lemma 2.1, we have the following two theorems.

Theorem 2.1. Let $\{\epsilon_j\} \to 0$ with each $\epsilon_j > 0$, and assume $x_j$ is a solution to $(SP_{\epsilon_j,\sigma})$ for some $\sigma > 0$. Let $x'$ be an accumulation point of the sequence $\{x_j\}$. Then $x'$ is an optimal solution to $(P_\sigma)$.

Proof. Since $x_j$ is a solution to problem $(SP_{\epsilon_j,\sigma})$, we have

$$F^k_{\epsilon_j,\sigma}(x_j) \le F^k_{\epsilon_j,\sigma}(x), \quad \forall x \in \mathbb{R}^n.$$

By Lemma 2.1, we have

$$F^k_\sigma(x_j) \le F^k_{\epsilon_j,\sigma}(x_j) + \frac{4\epsilon_j}{3}, \qquad F^k_{\epsilon_j,\sigma}(x) \le F^k_\sigma(x).$$

It follows that

$$F^k_\sigma(x_j) \le F^k_{\epsilon_j,\sigma}(x) + \frac{4\epsilon_j}{3} \le F^k_\sigma(x) + \frac{4\epsilon_j}{3}.$$

Since $\{\epsilon_j\} \to 0$ and $x'$ is an accumulation point of the sequence $\{x_j\}$, we have

$$F^k_\sigma(x') \le F^k_\sigma(x), \quad \forall x \in \mathbb{R}^n.$$

The proof is completed.

Theorem 2.2. For some $\sigma > 0$ and $\epsilon > 0$, let $x^*_\sigma$ be an optimal solution to $(P_\sigma)$ and $x^*_{\epsilon,\sigma}$ be an optimal solution to $(SP_{\epsilon,\sigma})$. Then,

$$0 \le F^k_\sigma(x^*_\sigma) - F^k_{\epsilon,\sigma}(x^*_{\epsilon,\sigma}) \le \frac{4\epsilon}{3}. \qquad (6)$$

If both $x^*_\sigma$ and $x^*_{\epsilon,\sigma}$ are feasible to (P), then

$$f(x^*_{\epsilon,\sigma}) \le f(x^*_\sigma) \le f(x^*_{\epsilon,\sigma}) + \frac{4\epsilon}{3}. \qquad (7)$$

Proof. By Lemma 2.1, for $\sigma > 0$ and $\epsilon > 0$, we have

$$0 \le F^k_\sigma(x^*_\sigma) - F^k_{\epsilon,\sigma}(x^*_\sigma) \le F^k_\sigma(x^*_\sigma) - F^k_{\epsilon,\sigma}(x^*_{\epsilon,\sigma}) \le F^k_\sigma(x^*_{\epsilon,\sigma}) - F^k_{\epsilon,\sigma}(x^*_{\epsilon,\sigma}) \le \frac{4\epsilon}{3}.$$

That is,

$$0 \le F^k_\sigma(x^*_\sigma) - F^k_{\epsilon,\sigma}(x^*_{\epsilon,\sigma}) \le \frac{4\epsilon}{3}$$

and

$$0 \le \left\{ f(x^*_\sigma) + \sigma \sum_{i=1}^m p_k(g_i(x^*_\sigma)) \right\} - \left\{ f(x^*_{\epsilon,\sigma}) + \sigma \sum_{i=1}^m p^k_{\epsilon,\sigma}(g_i(x^*_{\epsilon,\sigma})) \right\} \le \frac{4\epsilon}{3}.$$

Furthermore, if $x^*_\sigma$ and $x^*_{\epsilon,\sigma}$ are feasible to (P), then

$$\sum_{i=1}^m p_k(g_i(x^*_\sigma)) = \sum_{i=1}^m p^k_{\epsilon,\sigma}(g_i(x^*_{\epsilon,\sigma})) = 0.$$

Therefore,

$$0 \le f(x^*_\sigma) - f(x^*_{\epsilon,\sigma}) \le \frac{4\epsilon}{3}.$$

That is,

$$f(x^*_{\epsilon,\sigma}) \le f(x^*_\sigma) \le f(x^*_{\epsilon,\sigma}) + \frac{4\epsilon}{3}.$$

The proof is completed.

Definition 2.1. A point $x^*_\epsilon \in \mathbb{R}^n$ is called an $\epsilon$-feasible solution to (P) if

$$g_i(x^*_\epsilon) \le \epsilon, \quad i = 1, 2, \ldots, m.$$

Definition 2.2. For $x \in \mathbb{R}^n$, a point $y \in \mathbb{R}^m$ is called a Lagrange multiplier vector corresponding to $x$ if $x$ and $y$ satisfy

$$\nabla f(x) = -\sum_{i=1}^m y_i \nabla g_i(x), \qquad (8)$$

$$y_i g_i(x) = 0, \quad g_i(x) \le 0, \quad y_i \ge 0, \quad i \in I. \qquad (9)$$

Theorem 2.3. Suppose that $f, g_i\ (i \in I)$ in (P) are convex. Let $\bar{x}$ be an optimal solution to (P) and $x^*_{\epsilon,\sigma}$ be an optimal solution to $(SP_{\epsilon,\sigma})$. If $x^*_{\epsilon,\sigma}$ is feasible to (P) and $y \in \mathbb{R}^m$ is a Lagrange multiplier vector corresponding to $x^*_{\epsilon,\sigma}$, then

$$f(\bar{x}) \le f(x^*_{\epsilon,\sigma}) \le f(\bar{x}) + \frac{4\epsilon}{3} \qquad (10)$$

for any $\epsilon > 0$.

Proof. By the convexity of $f, g_i\ (i \in I)$, we have

$$f(\bar{x}) \ge f(x^*_{\epsilon,\sigma}) + \nabla f(x^*_{\epsilon,\sigma})^T(\bar{x} - x^*_{\epsilon,\sigma}), \qquad (11)$$

$$g_i(\bar{x}) \ge g_i(x^*_{\epsilon,\sigma}) + \nabla g_i(x^*_{\epsilon,\sigma})^T(\bar{x} - x^*_{\epsilon,\sigma}). \qquad (12)$$

By (3), (8), (9), (11) and (12), we have

$$\begin{aligned} F^k_\sigma(\bar{x}) &= f(\bar{x}) + \sigma \sum_{i=1}^m p_k(g_i(\bar{x})) \\ &\ge f(x^*_{\epsilon,\sigma}) + \nabla f(x^*_{\epsilon,\sigma})^T(\bar{x} - x^*_{\epsilon,\sigma}) \\ &= f(x^*_{\epsilon,\sigma}) - \sum_{i=1}^m y_i \nabla g_i(x^*_{\epsilon,\sigma})^T(\bar{x} - x^*_{\epsilon,\sigma}) \\ &\ge f(x^*_{\epsilon,\sigma}) - \sum_{i=1}^m y_i \left[ g_i(\bar{x}) - g_i(x^*_{\epsilon,\sigma}) \right] \\ &= f(x^*_{\epsilon,\sigma}) - \sum_{i=1}^m y_i g_i(\bar{x}) \\ &\ge f(x^*_{\epsilon,\sigma}), \end{aligned}$$

where the last equality uses $y_i g_i(x^*_{\epsilon,\sigma}) = 0$ from (9). By Lemma 2.1, we have

$$F^k_\sigma(\bar{x}) \le F^k_{\epsilon,\sigma}(\bar{x}) + \frac{4\epsilon}{3}.$$

It follows that

$$f(x^*_{\epsilon,\sigma}) \le F^k_{\epsilon,\sigma}(\bar{x}) + \frac{4\epsilon}{3} = f(\bar{x}) + \sigma \sum_{i=1}^m p^k_{\epsilon,\sigma}(g_i(\bar{x})) + \frac{4\epsilon}{3} = f(\bar{x}) + \frac{4\epsilon}{3},$$

since the penalty terms vanish at the feasible point $\bar{x}$. Since $x^*_{\epsilon,\sigma}$ is feasible to problem (P), we have

$$f(\bar{x}) \le f(x^*_{\epsilon,\sigma}).$$

Thus,

$$f(\bar{x}) \le f(x^*_{\epsilon,\sigma}) \le f(\bar{x}) + \frac{4\epsilon}{3}.$$

The proof is completed.

Theorem 2.2 shows that an approximate solution to $(SP_{\epsilon,\sigma})$ is also an approximate solution to $(P_\sigma)$ when the error $\epsilon$ is small enough. By Theorem 2.3, under some mild conditions, an optimal solution to $(SP_{\epsilon,\sigma})$ becomes an approximately optimal solution to (P). Therefore, we may obtain an approximately optimal solution to (P) by solving problem $(SP_{\epsilon,\sigma})$.

III. ALGORITHM AND NUMERICAL EXAMPLES

In this section, we propose an algorithm, Algorithm I, to solve problem (P) via the problem $(SP_{\epsilon,\sigma})$.

Algorithm I

Step 1: Given a point $x^0_1 \in \mathbb{R}^n$, choose $\epsilon_1 > 0$, $\sigma_1 > 0$, $0 < \gamma < 1$, $\beta > 1$ and $\epsilon > 0$. Let $j = 1$ and go to Step 2.

Step 2: Solve the following problem:

$$(SP_{\epsilon_j,\sigma_j})\qquad \min_{x \in \mathbb{R}^n} F^k_{\epsilon_j,\sigma_j}(x) = f(x) + \sigma_j \sum_{i=1}^m p^k_{\epsilon_j,\sigma_j}(g_i(x))$$

starting from the point $x^0_j$. Let $x^*_{\epsilon_j,\sigma_j}$ be the optimal solution obtained ($x^*_{\epsilon_j,\sigma_j}$ is computed by the BFGS method given in [19]).

Step 3: If $x^*_{\epsilon_j,\sigma_j}$ is $\epsilon$-feasible to (P), then stop. Otherwise, let $\sigma_{j+1} = \beta\sigma_j$, $\epsilon_{j+1} = \gamma\epsilon_j$, $x^0_{j+1} = x^*_{\epsilon_j,\sigma_j}$, $j = j + 1$, and go to Step 2.
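For illustration, a minimal Python sketch of Algorithm I is given below, with Step 2 delegated to SciPy's BFGS implementation rather than the exact routine of [19]. It reuses the `p_eps` and `dp_eps` helpers from Section II; the function name `algorithm_I` and its defaults are assumptions of this edit, not the author's code (the paper's experiments were run in MATLAB).

```python
import numpy as np
from scipy.optimize import minimize

def algorithm_I(f, grad_f, g, jac_g, x0, k=0.75,
                sigma1=10.0, eps1=1.0, beta=2.0, gamma=0.05,
                eps_tol=1e-6, max_iter=50):
    # f, grad_f: objective R^n -> R and its gradient.
    # g, jac_g : constraint values g(x) in R^m and Jacobian rows grad g_i(x).
    x = np.asarray(x0, dtype=float)
    sigma, eps = sigma1, eps1
    m = g(x).size
    for _ in range(max_iter):
        # Step 2: minimize the smoothed penalty function (4) from the current point.
        F  = lambda x: f(x) + sigma * float(np.sum(p_eps(g(x), k, eps, sigma, m)))
        dF = lambda x: grad_f(x) + sigma * (dp_eps(g(x), k, eps, sigma, m) @ jac_g(x))
        x = minimize(F, x, jac=dF, method='BFGS').x
        # Step 3: stop once x is eps_tol-feasible (Definition 2.1).
        if np.max(g(x)) <= eps_tol:
            break
        sigma, eps = beta * sigma, gamma * eps
    return x
```

Because $F^k_{\epsilon,\sigma}$ is continuously differentiable, a gradient-based quasi-Newton method such as BFGS applies directly to Step 2; this is precisely what motivates the smoothing.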

Remark 3.1. In Algorithm I, since $0 < \gamma < 1$ and $\beta > 1$, the sequence $\{\epsilon_j\} \to 0$ and the sequence $\{\sigma_j\} \to +\infty$ as $j \to +\infty$.

Under mild conditions, we now prove the convergence of Algorithm I.

Theorem 3.1. For $\frac{1}{3} < k < 1$, suppose that the set

$$\arg\min_{x \in \mathbb{R}^n} F^k_{\epsilon,\sigma}(x) \ne \emptyset \qquad (13)$$

for any $\epsilon \in (0, \epsilon_1]$ and $\sigma \in [\sigma_1, +\infty)$. Let $\{x'_j\}$, with $x'_j = x^*_{\epsilon_j,\sigma_j}$, be the sequence generated by Algorithm I. If $\{x'_j\}$ has a limit point, then any limit point of $\{x'_j\}$ is an optimal solution to (P).

Proof. Let $x'$ be any limit point of $\{x'_j\}$; then there exists a natural number set $J \subset \mathbb{N}$ such that $x'_j \to x'$, $j \in J$. It is clear that if we prove (i) $x' \in X_0$ and (ii) $f(x') \le \inf_{x \in X_0} f(x)$, then $x'$ is an optimal solution to (P).

(i) Suppose to the contrary that $x' \notin X_0$. Then there exist $\theta_0 > 0$, a subset $J' \subset J$ and some $i' \in I$ such that

$$g_{i'}(x'_j) \ge \theta_0$$

for any $j \in J'$.

When $\left(\frac{\epsilon_j}{m\sigma_j}\right)^{1/k} > g_{i'}(x'_j) \ge \theta_0$, from the definition of $p^k_{\epsilon,\sigma}(v)$ and Step 2 in Algorithm I, we have

$$f(x'_j) + \frac{m^2\sigma_j^3 \theta_0^{3k}}{6\epsilon_j^2} \le F^k_{\epsilon_j,\sigma_j}(x'_j) \le F^k_{\epsilon_j,\sigma_j}(x) = f(x)$$

for any $x \in X_0$, which contradicts $\sigma_j \to +\infty$ and $\epsilon_j \to 0$.

When $g_{i'}(x'_j) \ge \theta_0 \ge \left(\frac{\epsilon_j}{m\sigma_j}\right)^{1/k}$ or $g_{i'}(x'_j) \ge \left(\frac{\epsilon_j}{m\sigma_j}\right)^{1/k} \ge \theta_0$, from the definition of $p^k_{\epsilon,\sigma}(v)$ and Step 2 in Algorithm I, we have

$$f(x'_j) + \sigma_j\theta_0^k + \frac{\epsilon_j^2\theta_0^{-k}}{2m^2\sigma_j} - \frac{4\epsilon_j}{3m} \le F^k_{\epsilon_j,\sigma_j}(x'_j) \le F^k_{\epsilon_j,\sigma_j}(x) = f(x)$$

for any $x \in X_0$, which again contradicts $\sigma_j \to +\infty$ and $\epsilon_j \to 0$.

Thus, $x' \in X_0$.

(ii) For any $x \in X_0$, from the definition of $p^k_{\epsilon,\sigma}(v)$ and Step 2 in Algorithm I, we have

$$f(x'_j) \le F^k_{\epsilon_j,\sigma_j}(x'_j) \le F^k_{\epsilon_j,\sigma_j}(x) = f(x);$$

thus, $f(x') \le \inf_{x \in X_0} f(x)$ holds.

The proof is completed.

Theorem 3.2. For $\frac{1}{3} < k < 1$, let $\{x'_j\}$ be a sequence generated by Algorithm I. Suppose that $\lim_{\|x\| \to +\infty} f(x) = +\infty$ and the sequence $\{F^k_{\epsilon_j,\sigma_j}(x'_j)\}$ is bounded. Then,

(i) the sequence $\{x'_j\}$ is bounded;

(ii) any limit point $x'$ of $\{x'_j\}$ is feasible to (P);

(iii) for any limit point $x'$ of $\{x'_j\}$, there exist $\kappa_i \ge 0\ (i \in I)$ such that

$$\nabla f(x') + \sum_{i \in I} \kappa_i \nabla g_i(x') = 0. \qquad (14)$$

Proof. (i) First, we prove that $\{x'_j\}$ is bounded. Note that

$$F^k_{\epsilon_j,\sigma_j}(x'_j) = f(x'_j) + \sigma_j \sum_{i=1}^m p^k_{\epsilon_j,\sigma_j}(g_i(x'_j)), \quad j = 0, 1, 2, \ldots, \qquad (15)$$

and, by the definition of $p^k_{\epsilon,\sigma}(v)$,

$$\sum_{i=1}^m p^k_{\epsilon_j,\sigma_j}(g_i(x'_j)) \ge 0. \qquad (16)$$

Suppose to the contrary that $\{x'_j\}$ is unbounded; without loss of generality, $\|x'_j\| \to +\infty$ as $j \to +\infty$. Then $\lim_{j \to +\infty} f(x'_j) = +\infty$, and from (15) and (16), we have

$$F^k_{\epsilon_j,\sigma_j}(x'_j) \ge f(x'_j) \to +\infty, \quad \sigma_j > 0,$$

which contradicts the boundedness of $\{F^k_{\epsilon_j,\sigma_j}(x'_j)\}$. Thus, $\{x'_j\}$ is bounded.

(ii) Next, we prove that any limit point $x'$ of $\{x'_j\}$ is feasible to (P). For $x \in \mathbb{R}^n$, let

$$I^-_\epsilon = \left\{ i \,\middle|\, 0 \le g_i(x) < \left(\frac{\epsilon}{m\sigma}\right)^{1/k},\ i = 1, 2, \ldots, m \right\},$$

$$I^+_\epsilon = \left\{ i \,\middle|\, g_i(x) \ge \left(\frac{\epsilon}{m\sigma}\right)^{1/k},\ i = 1, 2, \ldots, m \right\}.$$

Without loss of generality, assume that $\lim_{j \to +\infty} x'_j = x'$. Suppose to the contrary that $x' \notin X_0$; then there exists some $i \in I$ such that $g_i(x') \ge \alpha > 0$. Note that

$$F^k_{\epsilon_j,\sigma_j}(x'_j) = f(x'_j) + \sigma_j \sum_{i \in I^-_{\epsilon_j}} \frac{m^2\sigma_j^2 (g_i(x'_j))^{3k}}{6\epsilon_j^2} + \sigma_j \sum_{i \in I^+_{\epsilon_j}} \left( (g_i(x'_j))^k + \frac{\epsilon_j^2 (g_i(x'_j))^{-k}}{2m^2\sigma_j^2} - \frac{4\epsilon_j}{3m\sigma_j} \right). \qquad (17)$$

Since $g_i(x') \ge \alpha > 0$, for any sufficiently large $j$ the set $\{i \mid g_i(x'_j) \ge \alpha\}$ is not empty; hence there exists an $i' \in I$ satisfying $g_{i'}(x'_j) \ge \alpha$. As $j \to +\infty$, $\sigma_j \to +\infty$ and $\epsilon_j \to 0$, it follows from (17) that $F^k_{\epsilon_j,\sigma_j}(x'_j) \to +\infty$, which contradicts the boundedness of $\{F^k_{\epsilon_j,\sigma_j}(x'_j)\}$. Therefore, $x'$ is feasible to (P).

(iii) Finally, we prove that (14) holds. By Step 2 in Algorithm I, we have $\nabla F^k_{\epsilon_j,\sigma_j}(x'_j) = 0$, that is,

$$\nabla f(x'_j) + \sigma_j \sum_{i \in I^-_{\epsilon_j}} \frac{k m^2\sigma_j^2 (g_i(x'_j))^{3k-1}}{2\epsilon_j^2} \nabla g_i(x'_j) + \sigma_j \sum_{i \in I^+_{\epsilon_j}} \left( k (g_i(x'_j))^{k-1} - \frac{k\epsilon_j^2 (g_i(x'_j))^{-k-1}}{2m^2\sigma_j^2} \right) \nabla g_i(x'_j) = 0,$$

which implies

$$\nabla f(x'_j) + \sigma_j \sum_{i \in I^-_{\epsilon_j}} \frac{k m^2\sigma_j^2 [p_k(g_i(x'_j))]^{\frac{3k-1}{k}}}{2\epsilon_j^2} \nabla g_i(x'_j) + \sigma_j \sum_{i \in I^+_{\epsilon_j}} \left\{ k [p_k(g_i(x'_j))]^{\frac{k-1}{k}} - \frac{k\epsilon_j^2 [p_k(g_i(x'_j))]^{\frac{-k-1}{k}}}{2m^2\sigma_j^2} \right\} \nabla g_i(x'_j) = 0. \qquad (18)$$

Let

$$\alpha_j = 1 + \sigma_j \sum_{i \in I^-_{\epsilon_j}} \frac{k m^2\sigma_j^2 [p_k(g_i(x'_j))]^{\frac{3k-1}{k}}}{2\epsilon_j^2} + \sigma_j \sum_{i \in I^+_{\epsilon_j}} \left( k [p_k(g_i(x'_j))]^{\frac{k-1}{k}} - \frac{k\epsilon_j^2 [p_k(g_i(x'_j))]^{\frac{-k-1}{k}}}{2m^2\sigma_j^2} \right),$$

$j = 0, 1, 2, \ldots$; then $\alpha_j > 1$. From (18), we have

$$\frac{1}{\alpha_j} \nabla f(x'_j) + \sum_{i \in I^-_{\epsilon_j}} \frac{k m^2\sigma_j^3 [p_k(g_i(x'_j))]^{\frac{3k-1}{k}}}{2\epsilon_j^2 \alpha_j} \nabla g_i(x'_j) + \sum_{i \in I^+_{\epsilon_j}} \left\{ \frac{\sigma_j k [p_k(g_i(x'_j))]^{\frac{k-1}{k}}}{\alpha_j} - \frac{k\epsilon_j^2 [p_k(g_i(x'_j))]^{\frac{-k-1}{k}}}{2m^2\sigma_j \alpha_j} \right\} \nabla g_i(x'_j) = 0. \qquad (19)$$

Let

$$\kappa_j = \frac{1}{\alpha_j},$$

$$\nu_i^j = \frac{k m^2\sigma_j^3 [p_k(g_i(x'_j))]^{\frac{3k-1}{k}}}{2\epsilon_j^2 \alpha_j}, \quad i \in I^-_{\epsilon_j},$$

$$\nu_i^j = \frac{\sigma_j k [p_k(g_i(x'_j))]^{\frac{k-1}{k}}}{\alpha_j} - \frac{k\epsilon_j^2 [p_k(g_i(x'_j))]^{\frac{-k-1}{k}}}{2m^2\sigma_j \alpha_j}, \quad i \in I^+_{\epsilon_j},$$

$$\nu_i^j = 0, \quad i \in I \setminus \left( I^+_{\epsilon_j} \cup I^-_{\epsilon_j} \right).$$

Then,

$$\kappa_j + \sum_{i \in I} \nu_i^j = 1, \quad j = 0, 1, 2, \ldots, \qquad (20)$$

$$\nu_i^j \ge 0, \quad i \in I,\ j = 0, 1, 2, \ldots.$$

Clearly, as $j \to \infty$, $\kappa_j \to \kappa > 0$ and $\nu_i^j \to \nu_i \ge 0$ for all $i \in I$. By (19) and (20), as $j \to +\infty$, we have

$$\kappa \nabla f(x') + \sum_{i \in I} \nu_i \nabla g_i(x') = 0,$$

which implies

$$\nabla f(x') + \sum_{i \in I} \frac{\nu_i}{\kappa} \nabla g_i(x') = 0.$$

Letting $\kappa_i = \frac{\nu_i}{\kappa}$, it follows that

$$\nabla f(x') + \sum_{i \in I} \kappa_i \nabla g_i(x') = 0, \quad \kappa_i \ge 0.$$

The proof is completed.
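In practice, conclusion (iii) can be checked numerically: estimate multipliers $\kappa_i \ge 0$ over the near-active constraints by nonnegative least squares and measure the residual of (14). A minimal sketch follows, assuming callables `df`, `g` and `jg` for $\nabla f$, $g$ and the Jacobian of $g$ (illustrative names introduced here, not from the paper):

```python
import numpy as np
from scipy.optimize import nnls

def kkt_residual(x, df, g, jg, act_tol=1e-6):
    # Solve min_{kappa >= 0} || df(x) + sum_i kappa_i * grad g_i(x) || over the
    # near-active constraints (g_i(x) close to 0) and return the residual norm.
    active = g(x) > -act_tol
    if not np.any(active):
        return float(np.linalg.norm(df(x)))
    A = jg(x)[active].T               # columns are grad g_i(x), i active
    kappa, resid = nnls(A, -df(x))    # stationarity (14): grad f = -sum kappa_i grad g_i
    return float(resid)
```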

In the rest of this paper, we solve three numerical examples to illustrate the efficiency of Algorithm I. In each example we let $k = \frac{3}{4}$ and $\epsilon = 10^{-6}$; the numerical results, obtained with a MATLAB implementation of Algorithm I, are given as follows.

Example 3.1. Consider ([12], Example 4.2):

$$(P_1)\qquad \min\ f(x) = x_1^2 + x_2^2 + 2x_3^2 + x_4^2 - 5x_1 - 5x_2 - 21x_3 + 7x_4$$
$$\text{s.t.}\quad g_1(x) = 2x_1^2 + x_2^2 + x_3^2 + 2x_1 + x_2 + x_4 - 5 \le 0,$$
$$g_2(x) = x_1^2 + x_2^2 + x_3^2 + x_4^2 + x_1 - x_2 + x_3 - x_4 - 8 \le 0,$$
$$g_3(x) = x_1^2 + 2x_2^2 + x_3^2 + 2x_4^2 - x_1 - x_4 - 10 \le 0.$$

Let $x^0_1 = (0,0,0,0)$, $\sigma_1 = 10$, $\beta = 2$, $\epsilon_1 = 1.0$, $\gamma = 0.05$. The results of Algorithm I for solving $(P_1)$ are given in Table I.

As shown in Table I, Algorithm I yields an approximate optimal solution

$$x^*_2 = (0.170446, 0.834248, 2.008753, -0.964559)$$

to $(P_1)$ at the 2nd iteration, with objective function value $f(x^*_2) = -44.233627$. It is easy to check that $x^*_2$ is a feasible solution to $(P_1)$. From [12], we know that $x^* = (0.169234, 0.835656, 2.008690, -0.964901)$ is an approximate optimal solution of $(P_1)$ with objective function value $f(x^*) = -44.233582$. The solution we obtained is slightly better than the solution obtained at the 4th iteration by the method in [12].
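For reference, $(P_1)$ and the parameter settings above would be encoded for the `algorithm_I` sketch of Section III as follows (again an illustrative assumption of this edit, not the paper's MATLAB code):

```python
import numpy as np

f  = lambda x: (x[0]**2 + x[1]**2 + 2*x[2]**2 + x[3]**2
                - 5*x[0] - 5*x[1] - 21*x[2] + 7*x[3])
df = lambda x: np.array([2*x[0] - 5, 2*x[1] - 5, 4*x[2] - 21, 2*x[3] + 7])
g  = lambda x: np.array([
    2*x[0]**2 + x[1]**2 + x[2]**2 + 2*x[0] + x[1] + x[3] - 5,
    x[0]**2 + x[1]**2 + x[2]**2 + x[3]**2 + x[0] - x[1] + x[2] - x[3] - 8,
    x[0]**2 + 2*x[1]**2 + x[2]**2 + 2*x[3]**2 - x[0] - x[3] - 10,
])
jg = lambda x: np.array([                      # rows: grad g_1, grad g_2, grad g_3
    [4*x[0] + 2, 2*x[1] + 1, 2*x[2],     1.0],
    [2*x[0] + 1, 2*x[1] - 1, 2*x[2] + 1, 2*x[3] - 1],
    [2*x[0] - 1, 4*x[1],     2*x[2],     4*x[3] - 1],
])
x_star = algorithm_I(f, df, g, jg, x0=np.zeros(4), k=0.75,
                     sigma1=10.0, eps1=1.0, beta=2.0, gamma=0.05)
print(x_star, f(x_star))
```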

Example 3.2. Consider ([20], Example 4.1):

$$(P_2)\qquad \min\ f(x) = x_1^2 + x_2^2 - \cos(17x_1) - \cos(17x_2) + 3$$
$$\text{s.t.}\quad g_1(x) = (x_1 - 2)^2 + x_2^2 - 1.6^2 \le 0,$$
$$g_2(x) = x_1^2 + (x_2 - 3)^2 - 2.7^2 \le 0,$$
$$0 \le x_1 \le 2, \quad 0 \le x_2 \le 2.$$

Let $x^0_1 = (0,0)$, $\sigma_1 = 10$, $\beta = 2$, $\epsilon_1 = 0.01$, $\gamma = 0.01$. The results of Algorithm I for solving $(P_2)$ are given in Table II.

As shown in Table II, Algorithm I yields an approximate optimal solution $x^* = (0.725379, 0.399267)$ to $(P_2)$ at the 2nd iteration, with objective function value $f(x^*) = 1.837574$. From [20], we know that $\bar{x} = (0.7255, 0.3993)$ is a global solution of $(P_2)$ with global optimal value $f(\bar{x}) = 1.8376$. The solution we obtained is slightly better than the solution obtained by the method in [20].

Example 3.3. Consider ([16], Example 3.3):

$$(P_3)\qquad \min\ f(x) = -2x_1 - 6x_2 + x_1^2 - 2x_1x_2 + 2x_2^2$$
$$\text{s.t.}\quad g_1(x) = x_1 + x_2 - 2 \le 0,$$
$$g_2(x) = -x_1 + 2x_2 - 2 \le 0,$$
$$x_1, x_2 \ge 0.$$

Let $x^0_1 = (0,0)$, $\sigma_1 = 10$, $\beta = 8$, $\epsilon_1 = 0.01$, $\gamma = 0.02$. The results of Algorithm I for solving $(P_3)$ are given in Table III.

As shown in Table III, Algorithm I yields an approximate optimal solution $x^* = (0.800003, 1.199997)$ to $(P_3)$ at the 3rd iteration, with objective function value $f(x^*) = -7.200000$. From [16], we know that $\bar{x} = (0.8000, 1.2000)$ is a global solution of $(P_3)$ with global optimal value $f(\bar{x}) = -7.2000$. The solution we obtained is similar to the solution obtained at the 4th iteration by the method in [16].

ACKNOWLEDGMENT


TABLE I
NUMERICAL RESULTS FOR EXAMPLE 3.1

j | σ_j | ϵ_j  | f(x*_j)    | g_1(x*_j) | g_2(x*_j) | g_3(x*_j) | x*_j
1 | 10  | 1.0  | -44.239897 | 0.001192  | 0.002604  | -1.880268 | (0.169634, 0.835563, 2.008941, -0.965199)
2 | 20  | 0.05 | -44.233627 | -0.000254 | -0.000005 | -1.889059 | (0.170446, 0.834248, 2.008753, -0.964559)

TABLE II
NUMERICAL RESULTS FOR EXAMPLE 3.2

j | σ_j | ϵ_j    | f(x*_j)  | g_1(x*_j) | g_2(x*_j) | x*_j
1 | 10  | 0.01   | 1.810439 | -0.780258 | 0.016341  | (0.726166, 0.396344)
2 | 20  | 0.0001 | 1.837574 | -0.775927 | -0.000015 | (0.725379, 0.399267)

TABLE III
NUMERICAL RESULTS FOR EXAMPLE 3.3

j | σ_j | ϵ_j      | f(x*_j)   | g_1(x*_j) | g_2(x*_j) | x*_j
1 | 10  | 0.01     | -8.314990 | 0.410570  | -0.277811 | (1.032984, 1.377586)
2 | 80  | 0.0002   | -7.766379 | 0.320111  | 0.410338  | (0.743295, 1.576816)
3 | 640 | 0.000004 | -7.200000 | -0.000000 | -0.400009 | (0.800003, 1.199997)

REFERENCES

[1] M. S. Bazaraa and J. J. Goode, "Sufficient conditions for a globally exact penalty function without convexity," Mathematical Programming Study, vol. 19, no. 1, pp. 1-15, 1982.

[2] G. Di Pillo and L. Grippo, "An exact penalty function method with global convergence properties for nonlinear programming problems," Mathematical Programming, vol. 36, no. 1, pp. 1-18, 1986.

[3] G. Di Pillo and L. Grippo, "Exact penalty functions in constrained optimization," SIAM Journal on Control and Optimization, vol. 27, no. 6, pp. 1333-1360, 1989.

[4] S. P. Han and O. L. Mangasarian, "Exact penalty functions in nonlinear programming," Mathematical Programming, vol. 17, no. 1, pp. 251-269, 1979.

[5] J. B. Lasserre, "A globally convergent algorithm for exact penalty functions," European Journal of Operational Research, vol. 7, no. 4, pp. 389-395, 1981.

[6] E. Rosenberg, "Exact penalty functions and stability in locally Lipschitz programming," Mathematical Programming, vol. 30, no. 3, pp. 340-356, 1984.

[7] W. I. Zangwill, "Nonlinear programming via penalty functions," Management Science, vol. 13, no. 5, pp. 334-358, 1967.

[8] N. T. Binh, "Smoothing approximation to l1 exact penalty function for constrained optimization problems," Journal of Applied Mathematics and Informatics, vol. 33, no. 3-4, pp. 387-399, 2015.

[9] N. T. Binh, "Second-order smoothing approximation to l1 exact penalty function for nonlinear constrained optimization problems," Theoretical Mathematics and Applications, vol. 5, no. 3, pp. 1-17, 2015.

[10] N. T. Binh and W. L. Yan, "Smoothing approximation to the k-th power nonlinear penalty function for constrained optimization problems," Journal of Applied Mathematics and Bioinformatics, vol. 5, no. 2, pp. 1-19, 2015.

[11] C. H. Chen and O. L. Mangasarian, "Smoothing methods for convex inequalities and linear complementarity problems," Mathematical Programming, vol. 71, no. 1, pp. 51-69, 1995.

[12] Z. Q. Meng, C. Y. Dang and X. Q. Yang, "On the smoothing of the square-root exact penalty function for inequality constrained optimization," Computational Optimization and Applications, vol. 35, no. 3, pp. 375-398, 2006.

[13] K. W. Meng, S. J. Li and X. Q. Yang, "A robust SQP method based on a smoothing lower order penalty function," Optimization, vol. 58, no. 1, pp. 23-38, 2009.

[14] M. C. Pinar and S. A. Zenios, "On smoothing exact penalty functions for convex constrained optimization," SIAM Journal on Optimization, vol. 4, no. 3, pp. 486-511, 1994.

[15] Z. Y. Wu, F. S. Bai, X. Q. Yang and L. S. Zhang, "An exact lower order penalty function and its smoothing in nonlinear programming," Optimization, vol. 53, no. 1, pp. 51-68, 2004.

[16] Z. Y. Wu, H. W. J. Lee, F. S. Bai, L. S. Zhang and X. M. Yang, "Quadratic smoothing approximation to l1 exact penalty function in global optimization," Journal of Industrial and Management Optimization, vol. 1, no. 4, pp. 533-547, 2005.

[17] X. S. Xu, Z. Q. Meng, J. W. Sun, L. Q. Huang and R. Shen, "A second-order smooth penalty function algorithm for constrained optimization problems," Computational Optimization and Applications, vol. 55, no. 1, pp. 155-172, 2013.

[18] S. A. Zenios, M. C. Pinar and R. S. Dembo, "A smooth penalty function algorithm for network-structured problems," European Journal of Operational Research, vol. 83, no. 1, pp. 220-236, 1995.

[19] J. Nocedal and S. J. Wright, Numerical Optimization, Springer-Verlag, New York, 1999.

[20] X. L. Sun and D. Li, "Value-estimation function method for constrained global optimization," Journal of Optimization Theory and Applications, vol. 102, no. 2, pp. 385-409, 1999.

[21] P. Moengin, "Polynomial penalty method for solving linear programming problems," IAENG International Journal of Applied Mathematics.
