THE USE OF QUADRATIC REGULARIZATION WITH A CUBIC DESCENT CONDITION FOR UNCONSTRAINED OPTIMIZATION

E. G. BIRGIN† AND J. M. MARTÍNEZ‡

Abstract. Cubic-regularization and trust-region methods with worst-case first-order complexity $O(\varepsilon^{-3/2})$ and worst-case second-order complexity $O(\varepsilon^{-3})$ have been developed in the last few years. In this paper it is proved that the same complexities are achieved by means of a quadratic-regularization method with a cubic sufficient-descent condition instead of the more usual predicted-reduction based descent. Asymptotic convergence and order of convergence results are also presented. Finally, some numerical experiments comparing the new algorithm with a well-established quadratic regularization method are shown.

Key words. unconstrained minimization, quadratic regularization, cubic descent, complexity

AMS subject classifications. 90C30, 65K05, 49M37, 90C60, 68Q25

DOI. 10.1137/16M110280X

1. Introduction. Assume that $f : \mathbb{R}^n \to \mathbb{R}$ is possibly nonconvex and smooth for all $x \in \mathbb{R}^n$. We will consider the unconstrained minimization problem given by

(1) $\text{Minimize } f(x)$.

In the last decade, many works have been devoted to analyzing iterative algorithms for solving (1) from the point of view of their time complexity. See, for example, [2, 4, 5, 6, 8, 11, 14, 19, 21]. A review of complexity results for the convex case, in addition to novel techniques, can be found in [12].

Given arbitrary tolerances $\varepsilon_g > 0$ and $\varepsilon_h > 0$, the question is about the number of iterations and functional and derivative evaluations that are necessary to achieve an approximate solution defined by $\|\nabla f(x)\| \le \varepsilon_g$ or by $\|\nabla f(x)\| \le \varepsilon_g$ plus $\lambda_1(\nabla^2 f(x)) \ge -\varepsilon_h$, where $\lambda_1(\nabla^2 f(x))$ represents the leftmost eigenvalue of $\nabla^2 f(x)$.

In general, gradient-based methods exhibit complexity $O(\varepsilon_g^{-2})$ [4], which means that there exists a constant $c$ that only depends on the characteristics of the problem, algorithmic parameters, and, of course, the initial approximation, such that the effort required to achieve $\|\nabla f(x)\| \le \varepsilon_g$ for a bounded-below objective function $f$ is at most $c/\varepsilon_g^2$. This bound is sharp for all gradient-based methods [4]. Complexity results for modified Newton's methods are available in [14]. Surprisingly, Newton's method with the classical trust-region strategy does not exhibit better complexity than $O(\varepsilon_g^{-2})$ either [4]. The same example used in [4] to prove this fact can be applied to Newton's method with standard quadratic regularization. On the other hand, Newton's method employing cubic regularization [15] for obtaining sufficient descent at each iteration exhibits the better complexity $O(\varepsilon_g^{-3/2})$ (see [5, 6, 19, 21]).

Received by the editors November 8, 2016; accepted for publication (in revised form) March 15, 2017; published electronically June 6, 2017.

http://www.siam.org/journals/siopt/27-2/M110280.html

Funding: Supported by FAPESP (grants 2013/03447-6, 2013/05475-7, 2013/07375-0, and 2014/18711-3) and CNPq (grants 309517/2014-1 and 303750/2014-6).

†Department of Computer Science, Institute of Mathematics and Statistics, University of São Paulo, Rua do Matão, 1010, Cidade Universitária, 05508-090, São Paulo, SP, Brazil (egbirgin@ime.usp.br).

‡Department of Applied Mathematics, Institute of Mathematics, Statistics, and Scientific Computing, State University of Campinas, 13083-859, Campinas, SP, Brazil (martinez@ime.unicamp.br).


The best known practical algorithm for unconstrained optimization with worst-case evaluation complexity $O(\varepsilon_g^{-3/2})$ to achieve first-order stationarity and complexity $O(\varepsilon_g^{-3/2} + \varepsilon_h^{-3})$ to achieve second-order stationarity, defined by Cartis, Gould, and Toint in [5] and [6], uses cubic regularization and a descent criterion based on the comparison of the actual reduction of the objective function with the reduction predicted by a quadratic model. A nonstandard trust-region method with the same complexity properties, due to Curtis, Robinson, and Samadi [8], employs a cubic descent criterion for accepting trial increments. In [2], the essential ideas of ARC [5, 6] were extended in order to introduce high-order methods in which a $p$th Taylor approximation ($p \ge 2$) plus a $(p+1)$th regularization term is minimized at each iteration. In these methods, $O(\varepsilon_g^{-(p+1)/p})$ evaluation complexity for first-order stationarity is obtained, also using the actual-versus-predicted-reduction descent criterion. However, it is rather straightforward to show that this criterion can be replaced by a $(p+1)$th descent criterion (i.e., $f(x^{k+1}) \le f(x^k) - \alpha \|x^{k+1} - x^k\|^{p+1}$) in order to obtain the same complexity results.

Moreover, the $(p+1)$th descent criterion (cubic descent in the case $p = 2$) seems to be more naturally connected with the Taylor approximation properties that are used to prove complexity. Cubic descent was also used in [19] in a variable metric method that seeks to achieve good practical global convergence behavior. In the trust-region example exhibited in [4], the unitary Newtonian step is accepted at every iteration since it satisfies the adopted sufficient descent criterion. This criterion requires that the function descent (actual reduction) be better than a fraction of the predicted descent provided by the quadratic model (predicted reduction). However, if, instead of this condition, one requires functional descent proportional to $\|s\|^3$, where $s$ is the increment given by the model minimization, the given example no longer stands.

This state of affairs led us to the following theoretical question: Would it be possible to obtain worst-case evaluation complexities $O(\varepsilon_g^{-3/2})$ and $O(\varepsilon_g^{-3/2} + \varepsilon_h^{-3})$ using cubic descent to accept trial increments but only quadratic regularization in the subproblems?

In this paper, we provide an affirmative answer to this question by incorporating cubic descent into a quadratic regularization framework. Iterative regularization is a classical idea in unconstrained optimization that originated in the seminal works of Levenberg [17] and Marquardt [18] for nonlinear least squares. It relies upon the Levenberg–Marquardt path, which is the set of solutions of regularized subproblems as the regularization parameter varies, both in the case of quadratic and of cubic regularized subproblems. It is worth mentioning that this path is also the set of solutions of Euclidean trust-region subproblems for different trust-region radii. The explicit consideration of the so-called hard case (where the Hessian is not positive definite and the gradient is orthogonal to the eigenspace related to the leftmost Hessian eigenvalue) and the employment of spectral computations to handle it are at the core of every careful trust-region implementation [8, 20, 22, 23]. Our new method explicitly deals with the hard case and uses a regularization parameter with adequate safeguards in order to guarantee the classical complexity results of cubic regularization and related methods [8]. The new method has been implemented and compared against a well-established quadratic regularization method for unconstrained optimization introduced in [16].

The rest of this paper is organized as follows. A model algorithm with cubic descent is described in section 2. An implementable version of the algorithm is introduced in section 3. Well-definiteness and complexity results are presented in sections 4 and 5, respectively. Local convergence results are given in section 6. Numerical experiments are presented in section 7, while final remarks are given in section 8.


Notation. The symbol $\|\cdot\|$ denotes the Euclidean norm of vectors and the subordinate matrix norm. We denote $g(x) = \nabla f(x)$, $H(x) = \nabla^2 f(x)$, and, sometimes, $g_k = g(x^k)$ and $H_k = H(x^k)$. If $a \in \mathbb{R}$, $[a]_+ = \max\{a, 0\}$. If $a_1, \ldots, a_n \in \mathbb{R}$, $\operatorname{diag}(a_1, \ldots, a_n)$ denotes the $n \times n$ diagonal matrix whose diagonal entries are $a_1, \ldots, a_n$. If $A \in \mathbb{R}^{n \times n}$, $A^\dagger$ denotes the Moore–Penrose pseudoinverse of $A$. The notation $[x]_j$ denotes the $j$th component of a vector $x$ whenever the simpler notation $x_j$ might lead to confusion.

2. Model algorithm. The following algorithm establishes a general framework for minimization schemes that use cubic descent. At each iteration $k$, we compute an increment $s^k$ such that $f(x^k + s^k) \le f(x^k) - \alpha \|s^k\|^3$. In principle, this is not very useful because even $s^k = 0$ satisfies this descent condition. However, in Theorem 2.1, we show that under the additional condition (3) the algorithm satisfies suitable stopping criteria. As a consequence, practical algorithms should aim to achieve (2) and (3) simultaneously.

Algorithm 2.1. Let $x^0 \in \mathbb{R}^n$ and $\alpha > 0$ be given. Initialize $k \leftarrow 0$.

Step 1. Compute $s^k$ such that

(2) $f(x^k + s^k) \le f(x^k) - \alpha \|s^k\|^3$.

Step 2. Define $x^{k+1} = x^k + s^k$, set $k \leftarrow k + 1$, and go to Step 1.
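As an illustration only (not part of the original paper), the following minimal Python sketch shows the structure of Algorithm 2.1, assuming a user-supplied oracle compute_step that returns an increment satisfying the cubic descent condition (2); the function names, the iteration budget, and the rounding slack in the assertion are hypothetical choices.

```python
import numpy as np

def model_algorithm(f, compute_step, x0, alpha=1e-4, max_iter=1000):
    """Sketch of Algorithm 2.1: accept any step with cubic descent (2)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        s = compute_step(x)  # Step 1: any s with f(x+s) <= f(x) - alpha*||s||^3
        # Sanity check of condition (2); a practical code would enforce this
        # inside compute_step (the 1e-12 slack allows for rounding errors).
        assert f(x + s) <= f(x) - alpha * np.linalg.norm(s) ** 3 + 1e-12
        x = x + s            # Step 2: accept the increment and iterate
    return x
```

Note that the model algorithm has no stopping test; the results below bound how many iterations can fail the approximate optimality criteria, which is what makes the framework implementable once condition (3) also holds.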

The theorems below establish that, under suitable assumptions, every limit point of the sequence generated by Algorithm 2.1 is second-order stationary and provide an upper bound on the number of iterations that Algorithm 2.1 requires to achieve a target objective functional value or to find an approximate first- or second-order stationary point.

Lemma 2.1. Assume that the objective function $f$ is twice continuously differentiable and that there exist $\gamma_g > 0$ and $\gamma_h > 0$ such that, for all $k \in \mathbb{N}$, the increment $s^k$ computed at Step 1 of Algorithm 2.1 satisfies

(3) $\sqrt{\|g^{k+1}\| / \gamma_g} \le \|s^k\|$ and $[-\lambda_{1,k}]_+ / \gamma_h \le \|s^k\|$,

where $\lambda_{1,k}$ stands for the leftmost eigenvalue of $H_k$. Then, it follows that

$f(x^{k+1}) \le f(x^k) - \max\left\{ \frac{\alpha}{\gamma_g^{3/2}} \|g^{k+1}\|^{3/2},\ \frac{\alpha}{\gamma_h^3} [-\lambda_{1,k}]_+^3 \right\}.$

Proof. The result follows trivially from (2), (3), and the fact that, at Step 2 of Algorithm 2.1, $x^{k+1}$ is defined as $x^{k+1} = x^k + s^k$.

Theorem 2.1. Let $f_{\min} \in \mathbb{R}$, $\varepsilon_g > 0$, and $\varepsilon_h > 0$ be given constants, assume that the hypotheses of Lemma 2.1 hold, and let $\{x^k\}_{k=0}^\infty$ be the sequence generated by Algorithm 2.1. Then, the cardinality of the set of indices

(4) $K_g = \left\{ k \in \mathbb{N} \mid f(x^k) > f_{\min} \text{ and } \|g^{k+1}\| > \varepsilon_g \right\}$

is, at most,

(5) $\left\lfloor \frac{1}{\alpha} \frac{f(x^0) - f_{\min}}{(\varepsilon_g/\gamma_g)^{3/2}} \right\rfloor,$

while the cardinality of the set of indices

(6) $K_h = \left\{ k \in \mathbb{N} \mid f(x^k) > f_{\min} \text{ and } \lambda_{1,k} < -\varepsilon_h \right\}$

is, at most,

(7) $\left\lfloor \frac{1}{\alpha} \frac{f(x^0) - f_{\min}}{(\varepsilon_h/\gamma_h)^3} \right\rfloor.$

Proof. From Lemma 2.1, it follows that every time an iterate $x^k$ is such that $\|g^{k+1}\| > \varepsilon_g$ the value of $f$ decreases by at least $\alpha(\varepsilon_g/\gamma_g)^{3/2}$, while every time an iterate $x^k$ is such that $\lambda_{1,k} < -\varepsilon_h$ the value of $f$ decreases by at least $\alpha(\varepsilon_h/\gamma_h)^3$. The thesis follows from the fact that, by (2), $\{f(x^k)\}_{k=0}^\infty$ is a nonincreasing sequence.
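For concreteness, consider a hypothetical instance of the bound (5) (illustrative values only): with $\alpha = 0.1$, $f(x^0) - f_{\min} = 10$, $\gamma_g = 1$, and $\varepsilon_g = 10^{-3}$,

$\left\lfloor \frac{1}{\alpha} \frac{f(x^0) - f_{\min}}{(\varepsilon_g/\gamma_g)^{3/2}} \right\rfloor = \left\lfloor \frac{10}{0.1 \cdot (10^{-3})^{3/2}} \right\rfloor = \lfloor 10^{6.5} \rfloor \approx 3.16 \times 10^6,$

whereas an $O(\varepsilon_g^{-2})$ bound with the same constant, $(1/\alpha)(f(x^0) - f_{\min})/\varepsilon_g^2$, would give $10^8$.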

Corollary 2.1. Let $f_{\min} \in \mathbb{R}$, $\varepsilon_g > 0$, and $\varepsilon_h > 0$ be given constants and assume that the hypotheses of Lemma 2.1 hold. Algorithm 2.1 requires $O(\varepsilon_g^{-3/2})$ iterations to compute $x^k$ such that

$f(x^k) \le f_{\min}$ or $\|g^{k+1}\| \le \varepsilon_g$;

it requires $O(\varepsilon_h^{-3})$ iterations to compute $x^k$ such that

$f(x^k) \le f_{\min}$ or $\lambda_{1,k} \ge -\varepsilon_h$;

and it requires $O(\varepsilon_g^{-3/2} + \varepsilon_h^{-3})$ iterations to compute $x^k$ such that

$f(x^k) \le f_{\min}$ or $\left( \|g^{k+1}\| \le \varepsilon_g \text{ and } \lambda_{1,k} \ge -\varepsilon_h \right)$.

Corollary 2.2. Assume that the hypotheses of Lemma 2.1 hold and let $\{x^k\}_{k=0}^\infty$ be the sequence generated by Algorithm 2.1. Then, if the objective function $f$ is bounded below, we have that

$\lim_{k \to \infty} \|g(x^k)\| = 0 \quad \text{and} \quad \lim_{k \to \infty} [-\lambda_{1,k}]_+ = 0.$

Proof. Assume that $\lim_{k \to \infty} \|g(x^k)\| \ne 0$. This means that there exist $\varepsilon > 0$ and $K$, an infinite subsequence of $\mathbb{N}$, such that $\|g^k\| > \varepsilon$ for all $k \in K$. Since $f$ is bounded below, this contradicts Theorem 2.1. The second part is analogous.

Corollary 2.3. Assume that the hypotheses of Lemma 2.1 hold. Then, if the objective function $f$ is bounded below, every limit point $x^*$ of the sequence $\{x^k\}_{k=0}^\infty$ generated by Algorithm 2.1 is such that $\nabla f(x^*) = 0$ and $\nabla^2 f(x^*)$ is positive semidefinite.

Proof. This corollary follows from Corollary 2.2 by continuity of $\nabla f$ and $\nabla^2 f$.

3. Implementable algorithm. Algorithm 2.1 presented in the previous section is a “model algorithm” in the sense that it does not prescribe a way to compute the step $s^k$ satisfying (2) and (3). This will be the subject of the present section. Algorithm 3.1 is almost identical to Algorithm 2.1, with the sole difference that it uses Algorithm 3.2 to compute $s^k$. Lemma 4.1 shows that Algorithm 3.2 is well defined, and Lemma 4.4 shows that the step $s^k$ computed by Algorithm 3.2 satisfies hypothesis (3) of Lemma 2.1. In the following section, it will be shown that Algorithm 3.2 computes $s^k$ using $O(1)$ evaluations of $f$ (and a single evaluation of $g$ and $H$ at the current iterate $x^k$). This implies that the complexity results on the number of iterations of the model Algorithm 2.1 also apply to the number of iterations and evaluations of $f$ and its first- and second-order derivatives performed by Algorithms 3.1 and 3.2.

Algorithm 3.1. Let $x^0 \in \mathbb{R}^n$, $\alpha > 0$, and $M > 0$ be given. Initialize $k \leftarrow 0$.

Step 1. Use Algorithm 3.2 to compute $s \in \mathbb{R}^n$ satisfying

(8) $f(x^k + s) \le f(x^k) - \alpha \|s\|^3$

and define $s^k = s$.

Step 2. Define $x^{k+1} = x^k + s^k$, set $k \leftarrow k + 1$, and go to Step 1.

Algorithm 3.2 below describes the way in which the increment $s^k$ is computed. For that purpose, different trial increments are tried along the set of solutions

(9) $s(\mu) := \arg\min_s \left\{ \langle g_k, s \rangle + \tfrac{1}{2} s^T \left( H_k + [-\lambda_{1,k}]_+ I \right) s + \tfrac{\mu}{2} \|s\|^2 \right\}$

for different values of the regularizing parameter $\mu \ge 0$, where $\lambda_{1,k}$ is the leftmost eigenvalue of $H_k$. Algorithm 3.2 proceeds by increasing the value of the regularization parameter $\mu \ge 0$ until the sufficient descent condition (8) is satisfied with $s = s(\mu)$.

For each value of $\mu$, we define $\rho(\mu) = ([-\lambda_{1,k}]_+ + \mu)/(3\|s(\mu)\|)$. By [5, Lemma 3.1] (see also [15, 21]), $s(\mu)$ is a global minimizer of $\langle g_k, s \rangle + \tfrac{1}{2} s^T H_k s + \rho(\mu) \|s\|^3$. The way in which $\mu$ is increased is determined by two necessities related to $\rho(\mu)$: the initial $\rho(\mu)$ at each iteration should not be excessively small, and the final $\rho(\mu)$ should not be excessively big. Essentially, the technical manipulation of the quadratic regularization parameter $\mu$ in the algorithm is motivated by these two apparently conflicting objectives, which are necessary to obtain the complexity results.

Algorithm 3.2. Given $x^k$, this algorithm computes a step $s \in \mathbb{R}^n$ satisfying (8).

Step 1. Let $\lambda_{1,k}$ be the leftmost eigenvalue of $H_k$. Consider the linear system

(10) $\left( H_k + ([-\lambda_{1,k}]_+ + \mu) I \right) s = -g_k.$

If (10) with $\mu = 0$ is not compatible, then set $\rho_{k,0} = 0$ and go to Step 5; else proceed to Step 2 below.

Step 2. Compute the minimum norm solution $\hat{s}_{k,0}$ to the linear system (10) with $\mu = 0$ and set

$\rho_{k,0} = \begin{cases} \infty & \text{if } \hat{s}_{k,0} = 0 \text{ and } [-\lambda_{1,k}]_+ > 0, \\ 0 & \text{if } \hat{s}_{k,0} = 0 \text{ and } [-\lambda_{1,k}]_+ = 0, \\ [-\lambda_{1,k}]_+ / (3\|\hat{s}_{k,0}\|) & \text{if } \hat{s}_{k,0} \ne 0. \end{cases}$

If $\rho_{k,0} \le M$, then go to Step 4; else proceed to Step 3 below.

Step 3. Let $q_{1,k}$ with $\|q_{1,k}\| = 1$ be an eigenvector of $H_k$ associated with its leftmost eigenvalue $\lambda_{1,k}$. Set $\ell_3 \leftarrow 1$ and compute $t_{\ell_3} \ge 0$ and $\hat{s}_{k,\ell_3} = \hat{s}_{k,0} + t_{\ell_3} q_{1,k}$ such that

(11) $[-\lambda_{1,k}]_+ / (3\|\hat{s}_{k,\ell_3}\|) = M.$

If (8) holds with $s = \hat{s}_{k,\ell_3}$, return $s = \hat{s}_{k,\ell_3}$; else proceed to Step 3.1 below.

Step 3.1. While $\|\hat{s}_{k,\ell_3}\| \ge 2\|\hat{s}_{k,0}\|$, execute Steps 3.1.1–3.1.2 below.

Step 3.1.1. Set $\ell_3 \leftarrow \ell_3 + 1$ and compute $t_{\ell_3} \ge 0$ and $\hat{s}_{k,\ell_3} = \hat{s}_{k,0} + t_{\ell_3} q_{1,k}$ such that

(12) $\|\hat{s}_{k,\ell_3}\| = \tfrac{1}{2} \|\hat{s}_{k,\ell_3 - 1}\|.$

Step 3.1.2. If (8) holds with $s = \hat{s}_{k,\ell_3}$, then return $s = \hat{s}_{k,\ell_3}$.

Step 4. If (8) holds with $s = \hat{s}_{k,0}$, then return $s = \hat{s}_{k,0}$; else proceed to Step 5 below.

Step 5. Set $\ell_5 \leftarrow 1$ and $\rho_{k,\ell_5} = \max\{0.1, \rho_{k,0}\}$ and compute $\tilde{\mu}_{k,\ell_5} > 0$ and $\tilde{s}_{k,\ell_5}$, a solution to (10) with $\mu = \tilde{\mu}_{k,\ell_5}$, such that

(13) $\rho_{k,\ell_5} \le \frac{[-\lambda_{1,k}]_+ + \tilde{\mu}_{k,\ell_5}}{3\|\tilde{s}_{k,\ell_5}\|} \le 100\rho_{k,\ell_5}.$

If (8) holds with $s = \tilde{s}_{k,\ell_5}$, return $s = \tilde{s}_{k,\ell_5}$; else proceed to Step 5.1 below.

Step 5.1. While $\tilde{\mu}_{k,\ell_5} < 0.1$, execute Steps 5.1.1–5.1.3 below.

Step 5.1.1. Set $\ell_5 \leftarrow \ell_5 + 1$ and

(14) $\rho_{k,\ell_5} = 10\, \frac{[-\lambda_{1,k}]_+ + \tilde{\mu}_{k,\ell_5 - 1}}{3\|\tilde{s}_{k,\ell_5 - 1}\|}.$

Step 5.1.2. Compute $\tilde{\mu}_{k,\ell_5} > 0$ and $\tilde{s}_{k,\ell_5}$, a solution to (10) with $\mu = \tilde{\mu}_{k,\ell_5}$, such that (13) holds.

Step 5.1.3. If (8) holds with $s = \tilde{s}_{k,\ell_5}$, return $s = \tilde{s}_{k,\ell_5}$.

Step 6. Set $\ell_6 \leftarrow 1$, $\bar{\mu}_{k,\ell_6} = 2\tilde{\mu}_{k,\ell_5}$, and compute $\bar{s}_{k,\ell_6}$, a solution to (10) with $\mu = \bar{\mu}_{k,\ell_6}$.

Step 6.1. While (8) does not hold with $s = \bar{s}_{k,\ell_6}$, execute Steps 6.1.1–6.1.2 below.

Step 6.1.1. Set $\ell_6 \leftarrow \ell_6 + 1$ and $\bar{\mu}_{k,\ell_6} = 2\bar{\mu}_{k,\ell_6 - 1}$.

Step 6.1.2. Compute $\bar{s}_{k,\ell_6}$, a solution to (10) with $\mu = \bar{\mu}_{k,\ell_6}$.

Step 6.2. Return $s = \bar{s}_{k,\ell_6}$.

The reader may have noticed that Algorithm 3.2 includes several constants in its definition. Those constants are arbitrary, and all of them can be replaced by any number (sometimes larger or smaller than unity, depending on the case). The algorithm was presented in this way with the simple purpose of avoiding a large number of hard-to-recall letters or parameters.
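To fix ideas, here is a minimal sketch (Python/NumPy; not the authors' implementation) of the two elementary operations on which Algorithm 3.2 is built: solving the regularized system (10) through the spectral decomposition of $H_k$ and evaluating $\rho(\mu)$. The function name and the zero tolerance are hypothetical.

```python
import numpy as np

def step_and_rho(H, g, mu, tol=1e-12):
    # Solve (H_k + ([-lambda_{1,k}]_+ + mu) I) s = -g_k, cf. (10), via the
    # spectral decomposition H_k = Q diag(lam) Q^T, and return (s, rho(mu)).
    # For mu = 0 and a compatible system this is the minimum-norm solution
    # of Step 2 (components with zero diagonal entry are set to zero).
    lam, Q = np.linalg.eigh(H)             # eigenvalues in ascending order
    shift = max(-lam[0], 0.0)              # [-lambda_{1,k}]_+
    d = lam + shift + mu                   # diagonal of the shifted system
    rhs = -(Q.T @ g)
    y = np.zeros_like(rhs)
    nz = d > tol                           # indices with nonzero diagonal
    y[nz] = rhs[nz] / d[nz]
    s = Q @ y
    ns = np.linalg.norm(s)
    if ns == 0.0:                          # mirrors the rho_{k,0} cases of Step 2
        rho = np.inf if shift > 0.0 else 0.0
    else:
        rho = (shift + mu) / (3.0 * ns)
    return s, rho
```

With these two primitives, Steps 5 and 6 of the algorithm reduce to adjusting $\mu$ (bracketing $\rho(\mu)$ as in (13)–(14), then doubling as in Step 6.1.1) until the cubic descent test (8) accepts $s(\mu)$.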

The way in which Algorithm 3.2 proceeds is directly related to the geometry of the set of solutions of (9), often called the Levenberg–Marquardt path. On the one hand, when $\mu \to \infty$, $s(\mu)$ tends to 0, describing a curve tangent to $-g_k$. On the other hand, the geometry of the Levenberg–Marquardt path when $\mu \to 0$ depends on the positive definiteness of $H_k$ and on whether or not the linear system (10) with $\mu = 0$ is compatible, as we now describe.

If $H_k$ is positive definite, then the Levenberg–Marquardt path is a bounded curve that joins $s = 0$ with the Newtonian step $s = -H_k^{-1} g_k$. In this case, we have that $\lambda_{1,k} > 0$, so $[-\lambda_{1,k}]_+ = 0$. Then, the system (10) with $\mu = 0$ is compatible and, by Step 2, $\rho_{k,0} = 0$. Since $\rho_{k,0} \le M$, the algorithm continues at Step 4 and the increment $\hat{s}_{k,0}$ is accepted if the sufficient descent condition (8) holds with $s = \hat{s}_{k,0}$. (This is always the case if $\hat{s}_{k,0} = 0$, which occurs if and only if $g_k = 0$.) However, if (8) does not hold, after a few initializations at Step 5, the algorithm computes at Step 5.1.2 a regularization parameter $\mu$ such that the corresponding $\rho(\mu)$ increases with respect to the previous one, but not by much. This corresponds to our purpose of maintaining the auxiliary quantity $\rho(\mu)$ within controlled bounds. If $s(\mu)$ does not satisfy (8) (checked at Step 5.1.3) and the regularization parameter $\mu$ is still small (checked at the loop condition of Step 5.1), we update (increase) the bounds on $\rho(\mu)$ at Step 5.1.1, and we repeat this process until the fulfillment of (8) or until $\mu$ is not small anymore. In the latter case, the process continues at Step 6 with regular increases of the regularization parameter $\mu$, which should lead to the final fulfillment of (8) at the loop condition of Step 6.1. It is easy to see that, when $H_k$ is positive semidefinite and the linear system $H_k s = -g_k$ is compatible, the algorithm proceeds as in the positive definite case described above.

The case in which $H_k$ is not positive definite but the linear system (10) with $\mu = 0$ is compatible is called the “hard case” in the trust-region literature [7]. In the hard case, the Levenberg–Marquardt path is constituted by two branches. The first branch, which corresponds to $\mu > 0$, is a bounded curve that joins $s = 0$ with the minimum-norm solution of (10) with $\mu = 0$. The second branch, which corresponds to $\mu = 0$, is given by the infinitely many solutions to the system (10) with $\mu = 0$. This set of infinitely many solutions forms an affine subspace that contains $-[H_k + [-\lambda_{1,k}]_+ I]^\dagger g_k$ and is spanned by the eigenvectors of $H_k$ associated with $\lambda_{1,k}$. Usually, one restricts this affine subspace to the line $-[H_k + [-\lambda_{1,k}]_+ I]^\dagger g_k + tv$ with $t \in \mathbb{R}$, where $v$ is one of the eigenvectors associated with $\lambda_{1,k}$. The algorithm starts by computing the minimum norm solution of (10) with $\mu = 0$, which corresponds to the intersection of the two branches of the Levenberg–Marquardt path. If, taking the regularizing parameter $\mu = 0$, we have that the associated $\rho(\mu)$ is not very big ($\rho_{k,0} \le M$ at Step 2), then we proceed exactly as in the positive definite and compatible positive semidefinite cases, increasing $\mu$ and seeking an acceptable increment along the first branch of the Levenberg–Marquardt path. However, if $\rho_{k,0} > M$, we are in the case in which $\rho(\mu)$ could be very big. Then, the search starts at Step 3 by seeking an increment along the second branch of the Levenberg–Marquardt path. This happens when $\lambda_{1,k} < 0$ and $\hat{s}_{k,0} = 0$ (because $g_k = 0$), since in that case we set $\rho_{k,0} = \infty$ at Step 2. Note that, along this branch, the value $\mu = 0$ does not change and the reduction of $\rho(\mu)$ is achieved trivially by increasing the norm of $s(\mu)$. Starting with a sufficiently large $\|s(\mu)\|$, and by means of successive reductions of $\|s(\mu)\|$ at Step 3.1.1, we seek the fulfillment of (8). However, after a finite number of reductions of $\|s(\mu)\|$, this norm becomes smaller than a multiple of the norm of the minimum-norm solution (except in the case in which we have $\hat{s}_{k,0} = 0$). If this happens, we enter Step 4 and then initiate a search in the other branch in an analogous way as we do in the positive definite case. In this situation, we have the guarantee that $\rho(\mu)$ is suitably bounded at the intersection point because, otherwise, the sufficient descent condition (8) would have been satisfied.

If $H_k$ is not positive definite and the system (10) with $\mu = 0$ is not compatible, then the Levenberg–Marquardt path is an unbounded curve that, as $\mu$ tends to 0, becomes tangent to an affine subspace generated by an eigenvector of $H_k$ associated with $\lambda_{1,k}$. In this case, the control goes to Step 5 and the algorithm proceeds as in the already described situation in which $H_k$ is positive definite but the Newtonian step does not satisfy the sufficient descent condition (8).

4. Well-definiteness results. In this section, we will show that Algorithm 3.2 is well defined and that the computed increment $s^k$ that satisfies (8) also satisfies (3).

We start by describing how Algorithm 3.2 could be implemented considering the spectral decomposition of $H_k$. Of course, this is an arbitrary choice, and other options are possible, like, for example, computing only the leftmost eigenvalue of $H_k$, and possibly its associated eigenvector, and then solving the linear systems by any factorization suitable for symmetric matrices. In any case, the description based on the spectral decomposition of $H_k$ introduces some useful notation for the rest of the section.

Consider the spectral decomposition $H_k = Q_k \Lambda_k Q_k^T$, where $Q_k = [q_{1,k} \cdots q_{n,k}]$ is orthogonal and $\Lambda_k = \operatorname{diag}(\lambda_{1,k}, \ldots, \lambda_{n,k})$ with $\lambda_{1,k} \le \cdots \le \lambda_{n,k}$. Substituting $H_k$ by its spectral decomposition in (10), we obtain $[\Lambda_k + ([-\lambda_{1,k}]_+ + \mu) I] Q_k^T s = -Q_k^T g_k$. Therefore, for $\mu = 0$, the linear system (10) is compatible if and only if $[Q_k^T g_k]_j = 0$ whenever $\lambda_{j,k} + [-\lambda_{1,k}]_+ = 0$. Assuming that the linear system (10) with $\mu = 0$ is compatible, its minimum norm solution is given by $\hat{s}_{k,0} = Q_k y_k$, where

$[y_k]_j = \begin{cases} -[Q_k^T g_k]_j / (\lambda_{j,k} + [-\lambda_{1,k}]_+), & j \in J, \\ 0, & j \in \bar{J}, \end{cases}$

$J = \{ j \in \{1, \ldots, n\} \mid \lambda_{j,k} + [-\lambda_{1,k}]_+ \ne 0 \}$, and $\bar{J} = \{1, \ldots, n\} \setminus J$. Moreover, note that

$\|\hat{s}_{k,0}\| = \sqrt{ \sum_{j \in J} \left( [Q_k^T g_k]_j / (\lambda_{j,k} + [-\lambda_{1,k}]_+) \right)^2 }.$

The norm of $\hat{s}_{k,\ell_3} = \hat{s}_{k,0} + t_{\ell_3} q_{1,k}$ (for any $\ell_3 \ge 1$) computed at Step 3 is given by

$\|\hat{s}_{k,\ell_3}\| = \sqrt{ \|\hat{s}_{k,0}\|^2 + 2 t_{\ell_3} \langle \hat{s}_{k,0}, q_{1,k} \rangle + t_{\ell_3}^2 } = \sqrt{ \|\hat{s}_{k,0}\|^2 + t_{\ell_3}^2 },$

where the last equality holds because $\hat{s}_{k,0}$ is orthogonal to $q_{1,k}$ by definition. Thus, given a desired norm $c_{\ell_3}$ for $\hat{s}_{k,\ell_3}$ ($c_{\ell_3} = [-\lambda_{1,k}]_+/(3M)$ when $\ell_3 = 1$ and $c_{\ell_3} = \tfrac{1}{2} \|\hat{s}_{k,\ell_3 - 1}\|$ when $\ell_3 > 1$), we have that $t_{\ell_3} = \sqrt{ c_{\ell_3}^2 - \|\hat{s}_{k,0}\|^2 }$.
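In code, the computation of the Step 3 trial increments is a direct application of the identity above; a minimal sketch under the same illustrative conventions as before (s_hat0 is the minimum-norm solution, q1 a unit eigenvector associated with $\lambda_{1,k}$, and c the target norm $c_{\ell_3}$):

```python
import numpy as np

def hard_case_step(s_hat0, q1, c):
    # ||s_hat0 + t*q1||^2 = ||s_hat0||^2 + t^2 because <s_hat0, q1> = 0,
    # so the target norm c is met with t = sqrt(c^2 - ||s_hat0||^2).
    t = np.sqrt(max(c ** 2 - float(s_hat0 @ s_hat0), 0.0))  # guard for rounding
    return s_hat0 + t * q1
```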

The following technical lemma establishes that Step 5 of Algorithm 3.2 can always be completed, finding a regularization parameter $\mu$ and an increment $s(\mu)$ that satisfies (13). The assumption $g_k \ne 0$ in the lemma is perfectly reasonable because, as will be shown later, it always holds at Step 5.

Lemma 4.1. Suppose that $g_k \ne 0$. At Step 5 of Algorithm 3.2, for any $\ell_5 \ge 1$, there exist $\tilde{\mu}_{k,\ell_5} > 0$ and $\tilde{s}_{k,\ell_5}$, a solution to (10) with $\mu = \tilde{\mu}_{k,\ell_5}$, satisfying (13).

Proof. For any $\mu > 0$, the matrix of the system (10) is positive definite and the solution $s(\mu)$ to (10) is such that

(15) $\|s(\mu)\| = \sqrt{ \sum_{\{ j \mid [Q_k^T g_k]_j \ne 0 \}} \left( \frac{[Q_k^T g_k]_j}{\lambda_{j,k} + [-\lambda_{1,k}]_+ + \mu} \right)^2 }.$

Moreover, clearly,

(16) $\lim_{\mu \to \infty} \|s(\mu)\| = 0.$

In order to analyze the case $\mu \to 0$, the proof will be divided into two cases: (a) the linear system (10) with $\mu = 0$ is compatible, and (b) the linear system (10) with $\mu = 0$ is not compatible.

Consider first case (a). In this case, since $[Q_k^T g_k]_j = 0$ whenever $\lambda_{j,k} + [-\lambda_{1,k}]_+ = 0$, (15) is equivalent to

$\|s(\mu)\| = \sqrt{ \sum_{j \in J} \left( \frac{[Q_k^T g_k]_j}{\lambda_{j,k} + [-\lambda_{1,k}]_+ + \mu} \right)^2 }.$


Therefore,

(17) $\lim_{\mu \to 0} \|s(\mu)\| = \|\hat{s}_{k,0}\| > 0$

because $g_k \ne 0$ implies $\hat{s}_{k,0} \ne 0$. Thus, by (16) and (17), we have that

(18) $\lim_{\mu \to \infty} \frac{[-\lambda_{1,k}]_+ + \mu}{3\|s(\mu)\|} = \infty \quad \text{and} \quad \lim_{\mu \to 0} \frac{[-\lambda_{1,k}]_+ + \mu}{3\|s(\mu)\|} = \frac{[-\lambda_{1,k}]_+}{3\|\hat{s}_{k,0}\|}.$

Since, by definition, for any $\ell_5 \ge 1$,

$\rho_{k,\ell_5} \ge \rho_{k,0} = \frac{[-\lambda_{1,k}]_+}{3\|\hat{s}_{k,0}\|},$

the desired result follows by continuity from (18).

Consider now case (b). In this case, there exists $j$ such that $\lambda_{j,k} + [-\lambda_{1,k}]_+ = 0$ and $[Q_k^T g_k]_j \ne 0$. Therefore, from (15), we have that

(19) $\lim_{\mu \to 0} \|s(\mu)\| = \infty.$

Thus, by (16) and (19), we have that

(20) $\lim_{\mu \to \infty} \frac{[-\lambda_{1,k}]_+ + \mu}{3\|s(\mu)\|} = \infty \quad \text{and} \quad \lim_{\mu \to 0} \frac{[-\lambda_{1,k}]_+ + \mu}{3\|s(\mu)\|} = 0.$

Since, by definition, for any $\ell_5 \ge 1$, in this case we have $\rho_{k,\ell_5} \ge \rho_{k,1} = 0.1$, the desired result follows by continuity from (20).

Below we state the main assumption that supports the complexity results. Essentially, we will assume that the objective function is twice continuously differentiable and that $\nabla^2 f$ satisfies a Lipschitz condition on a suitable region that contains the iterates $x^k$ and the trial points $x^k + s_{\text{trial}}$. Of course, a sufficient condition for the fulfillment of this assumption is the Lipschitz continuity of $\nabla^2 f$ on $\mathbb{R}^n$, but in some cases this global assumption may be unnecessarily strong.

Assumption A1. The function $f$ is twice continuously differentiable for all $x \in \mathbb{R}^n$, and there exists a constant $L > 0$ such that, for all $x^k$ computed by Algorithm 3.2 and every trial increment $s_{\text{trial}}$ computed at Steps 2, 3, 3.1.1, 5, 5.1.2, 6, or 6.1.2 of Algorithm 3.2, we have that

$f(x^k + s_{\text{trial}}) \le f(x^k) + s_{\text{trial}}^T g_k + \tfrac{1}{2} s_{\text{trial}}^T H_k s_{\text{trial}} + L \|s_{\text{trial}}\|^3$

and

$\|g(x^k + s_{\text{trial}}) - g_k - H_k s_{\text{trial}}\| \le L \|s_{\text{trial}}\|^2.$
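As an illustration (not from the paper), the following sketch checks both inequalities of Assumption A1 for a simple separable quartic, with a crude local bound playing the role of $L$; the test function and all constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):   # illustrative smooth test function: sum of x_i^4/4 + x_i^2/2
    return 0.25 * np.sum(x ** 4) + 0.5 * np.sum(x ** 2)

def g(x):   # its gradient
    return x ** 3 + x

def H(x):   # its (diagonal) Hessian
    return np.diag(3.0 * x ** 2 + 1.0)

x = rng.normal(size=5)
s = 0.1 * rng.normal(size=5)
L = 3.0 * (np.abs(x) + np.abs(s)).max() + 1.0  # crude local Hessian-Lipschitz bound
lhs_f = f(x + s) - (f(x) + s @ g(x) + 0.5 * s @ H(x) @ s)
lhs_g = np.linalg.norm(g(x + s) - g(x) - H(x) @ s)
assert lhs_f <= L * np.linalg.norm(s) ** 3   # first inequality of A1
assert lhs_g <= L * np.linalg.norm(s) ** 2   # second inequality of A1
```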

In the following lemma we prove that any trial increment necessarily satisfies the sufficient descent condition (8) if the regularization parameter is large enough.

Lemma 4.2. Suppose that Assumption A1 holds and $\mu \ge 0$. If $0 \ne s_{\text{trial}} \in \mathbb{R}^n$, computed at Step 2, 3, 3.1.1, 5, 5.1.2, 6, or 6.1.2 of Algorithm 3.2, which by definition satisfies

(21) $\left( H_k + ([-\lambda_{1,k}]_+ + \mu) I \right) s_{\text{trial}} = -g_k,$

is such that

(22) $\frac{[-\lambda_{1,k}]_+ + \mu}{3 \|s_{\text{trial}}\|} \ge L + \alpha,$

then (8) is satisfied with $s = s_{\text{trial}}$.

Proof. Let us define, for all $s \in \mathbb{R}^n$,

$q(s) = s^T g_k + \tfrac{1}{2} s^T H_k s.$

Since $H_k + ([-\lambda_{1,k}]_+ + \mu) I$ is positive semidefinite for any $\mu \ge 0$, by (21),

(23) $s_{\text{trial}}$ minimizes $q(s) + \tfrac{1}{2} ([-\lambda_{1,k}]_+ + \mu) \|s\|^2$.

Define

(24) $\rho = \frac{[-\lambda_{1,k}]_+ + \mu}{3 \|s_{\text{trial}}\|}.$

By [5, Lemma 3.1], $s_{\text{trial}}$ is a minimizer of $q(s) + \rho \|s\|^3$. In particular,

(25) $q(s_{\text{trial}}) + \rho \|s_{\text{trial}}\|^3 \le q(0) = 0.$

Now, by Assumption A1, we have that

$f(x^k + s_{\text{trial}}) \le f(x^k) + s_{\text{trial}}^T g_k + \tfrac{1}{2} s_{\text{trial}}^T H_k s_{\text{trial}} + L \|s_{\text{trial}}\|^3 = f(x^k) + q(s_{\text{trial}}) + \rho \|s_{\text{trial}}\|^3 + (L - \rho) \|s_{\text{trial}}\|^3.$

Thus, by (22), (24), and (25), $f(x^k + s_{\text{trial}}) \le f(x^k) - \alpha \|s_{\text{trial}}\|^3$. This completes the proof.

The lemma below shows that Algorithm 3.2 may return a null increment only at Step 4.

Lemma 4.3. Suppose that Assumption A1 holds. Algorithm 3.2 returns a null increment $s = 0$ if and only if $g_k = 0$ and $\lambda_{1,k} \ge 0$. Moreover, an increment $s = 0$ may only be returned by Algorithm 3.2 at Step 4 (i.e., Steps 3, 3.1.2, 5, 5.1.3, and 6.2 always return nonnull increments).

Proof. Assume that $g_k = 0$ and $\lambda_{1,k} \ge 0$. Then, we have that the minimum norm solution $\hat{s}_{k,0}$ to the linear system (10) with $\mu = 0$ computed at Step 2 is null and that $\rho_{k,0} = 0 \le M$. Therefore, the algorithm goes to Step 4 and returns $s = \hat{s}_{k,0} = 0$ since it satisfies (8).

Assume now that Algorithm 3.2 returned an increment $s = 0$. Since every trial increment computed by the algorithm is a solution to the linear system (10) for some $\mu \ge 0$, we must have $g_k = 0$. If $\lambda_{1,k} \ge 0$, the first part of the thesis holds, and it remains to show that the null increment is returned at Step 4. Note that, since $g_k = 0$ implies $\hat{s}_{k,0} = 0$ and $\lambda_{1,k} \ge 0$ means $[-\lambda_{1,k}]_+ = 0$, at Step 2 we have $\rho_{k,0} = 0 \le M$. Thus, the algorithm goes to Step 4, where the null increment is returned since it satisfies (8). We now show that assuming $\lambda_{1,k} < 0$ leads to a contradiction. Since $\lambda_{1,k} < 0$ means $[-\lambda_{1,k}]_+ > 0$ and $g_k = 0$ implies $\hat{s}_{k,0} = 0$, by the way $\rho_{k,0}$ is defined at Step 2, we have that $\rho_{k,0} = \infty > M$. In this case the algorithm goes to Step 3. On the one hand, note that $\hat{s}_{k,0} = 0$ implies that the algorithm never leaves the loop in Step 3.1 because its condition reduces to $\|\hat{s}_{k,\ell_3}\| \ge 0$. On the other hand, note that, by halving the norm of the trial increments $\hat{s}_{k,\ell_3}$, since $\mu = 0$ is fixed, in a finite number of trials (22) holds and, by Lemma 4.2, the algorithm returns $s = \hat{s}_{k,\ell_3} \ne 0$ for some $\ell_3 \ge 1$, contradicting the fact that the algorithm returned a null increment.

We finish this section by proving that the increment $s^k$ computed by Algorithm 3.2, which satisfies (8) and defines $x^{k+1}$ at Step 2 of Algorithm 3.1, also satisfies (3). Note that this result assumes the existence of $s^k$ by hypothesis. Up to the present moment we have proved that Algorithm 3.2 is well defined. The existence of $s^k$ for all $k$ will be proved in the following section, when proving that Algorithm 3.2 always computes $s^k$ performing a finite number of operations.

Lemma 4.4. Suppose that Assumption A1 holds. Then, there exist $\gamma_g > 0$ and $\gamma_h > 0$ such that, for all $k \in \mathbb{N}$, the increment $s^k$ computed by Algorithm 3.2 and the new iterate $x^{k+1} = x^k + s^k$ computed at Step 2 of Algorithm 3.1 satisfy

$\sqrt{\|g^{k+1}\| / \gamma_g} \le \|s^k\|$ and $\frac{[-\lambda_{1,k}]_+}{\gamma_h} \le \|s^k\|.$

Moreover,

(26) $\gamma_g \le \max\{3M + L,\ 3000(L+\alpha) + L,\ 30 + L\}$

and

(27) $\gamma_h \le \max\{3M,\ 3000(L+\alpha),\ 30\}.$

Proof. If $s^k = 0$, then, by Lemma 4.3, we have that $g_k = 0$ and $\lambda_{1,k} \ge 0$ and, therefore, the thesis follows trivially. We now assume $s^k \ne 0$. Since $s^k$ is a solution to (10) for some $\mu \ge 0$, we have that $H_k s^k + g_k + ([-\lambda_{1,k}]_+ + \mu) s^k = 0$. Therefore,

$\|H_k s^k + g_k\| = \frac{[-\lambda_{1,k}]_+ + \mu}{\|s^k\|} \|s^k\|^2.$

But, by Assumption A1 and the triangle inequality,

$\|g^{k+1}\| - \|g_k + H_k s^k\| \le \|g^{k+1} - g_k - H_k s^k\| \le L \|s^k\|^2.$

Therefore,

(28) $\|g^{k+1}\| \le \left( \frac{[-\lambda_{1,k}]_+ + \mu}{\|s^k\|} + L \right) \|s^k\|^2.$

We now analyze separately the cases in which $s^k \ne 0$ is returned by Algorithm 3.2 at Steps 3, 3.1.2, 4, 5, 5.1.3, and 6.2.

Case $s^k = \hat{s}_{k,\ell_3}$ with $\ell_3 = 1$ was returned at Step 3: In this case, $\hat{s}_{k,\ell_3}$ is a solution to (10) with $\mu = 0$ and, by (11), it satisfies

(29) $[-\lambda_{1,k}]_+ / \|\hat{s}_{k,\ell_3}\| = 3M.$


Case $s^k = \hat{s}_{k,\ell_3}$ with $\ell_3 > 1$ was returned at Step 3.1.2: This means that there exists $\hat{s}_{k,\ell_3 - 1} \ne 0$ that is a solution to (10) with $\mu = 0$ and for which (8) with $s = \hat{s}_{k,\ell_3 - 1}$ did not hold. Therefore, by Lemma 4.2, we have that $[-\lambda_{1,k}]_+ / (3\|\hat{s}_{k,\ell_3 - 1}\|) < L + \alpha$. Thus, by (12), we have that

(30) $[-\lambda_{1,k}]_+ / \|\hat{s}_{k,\ell_3}\| < 6(L + \alpha).$

Case $s^k = \hat{s}_{k,0}$ was returned at Step 4: In this case, we have that

(31) $[-\lambda_{1,k}]_+ / (3\|\hat{s}_{k,0}\|) \le M$

or that there exists $\hat{s}_{k,\ell_3} \ne 0$ with $\ell_3 \ge 1$ such that

(32) $\|\hat{s}_{k,\ell_3}\| < 2 \|\hat{s}_{k,0}\|,$

$\hat{s}_{k,\ell_3}$ is a solution to (10) with $\mu = 0$, and (8) did not hold with $s = \hat{s}_{k,\ell_3}$. Therefore, by Lemma 4.2, we have that

(33) $[-\lambda_{1,k}]_+ / (3\|\hat{s}_{k,\ell_3}\|) < L + \alpha$

and, by (32) and (33),

(34) $[-\lambda_{1,k}]_+ / \|\hat{s}_{k,0}\| < 6(L + \alpha).$

Thus, by (31) and (34),

(35) $[-\lambda_{1,k}]_+ / \|\hat{s}_{k,0}\| \le \max\{3M,\ 6(L + \alpha)\}.$

Case $s^k = \tilde{s}_{k,\ell_5}$ with $\ell_5 = 1$ was returned at Step 5: In this case there are two possibilities: the linear system (10) with $\mu = 0$ is compatible or not. In the first case, $\hat{s}_{k,0}$ was computed,

$\rho_{k,0} = [-\lambda_{1,k}]_+ / (3\|\hat{s}_{k,0}\|),$

and, since (8) with $s = \hat{s}_{k,0}$ did not hold, by Lemma 4.2, $\rho_{k,0} < L + \alpha$. In the second case, we simply have that $\rho_{k,0} = 0$. Thus, by (13) and by the fact that, by definition, $\rho_{k,1} = \max\{0.1, \rho_{k,0}\}$, in the first case, we have

(36) $\frac{[-\lambda_{1,k}]_+ + \tilde{\mu}_{k,\ell_5}}{3\|\tilde{s}_{k,\ell_5}\|} \le 100\rho_{k,\ell_5} = 100 \max\{0.1, \rho_{k,0}\} \le \max\{10,\ 100(L + \alpha)\},$

and, in the second case, we have

(37) $\frac{[-\lambda_{1,k}]_+ + \tilde{\mu}_{k,\ell_5}}{3\|\tilde{s}_{k,\ell_5}\|} \le 100\rho_{k,\ell_5} = 100 \max\{0.1, 0\} = 10.$

Therefore, $\tilde{\mu}_{k,\ell_5} \ge 0$, (36), and (37) imply that

(38) $\frac{[-\lambda_{1,k}]_+}{\|\tilde{s}_{k,\ell_5}\|} \le \frac{[-\lambda_{1,k}]_+ + \tilde{\mu}_{k,\ell_5}}{\|\tilde{s}_{k,\ell_5}\|} \le \max\{30,\ 300(L + \alpha)\}.$

Case $s^k = \tilde{s}_{k,\ell_5}$ with $\ell_5 > 1$ was returned at Step 5.1.3: This means that there exist $\tilde{\mu}_{k,\ell_5 - 1} > 0$ and $\tilde{s}_{k,\ell_5 - 1}$, a solution to (10) with $\mu = \tilde{\mu}_{k,\ell_5 - 1}$, for which (8) did not hold. Thus, by Lemma 4.2,

$\frac{[-\lambda_{1,k}]_+ + \tilde{\mu}_{k,\ell_5 - 1}}{3\|\tilde{s}_{k,\ell_5 - 1}\|} < L + \alpha.$

Moreover, by (13) and (14),

$\frac{[-\lambda_{1,k}]_+ + \tilde{\mu}_{k,\ell_5}}{3\|\tilde{s}_{k,\ell_5}\|} \le 100\rho_{k,\ell_5} = 1000\, \frac{[-\lambda_{1,k}]_+ + \tilde{\mu}_{k,\ell_5 - 1}}{3\|\tilde{s}_{k,\ell_5 - 1}\|}.$

Thus,

(39) $\frac{[-\lambda_{1,k}]_+}{\|\tilde{s}_{k,\ell_5}\|} \le \frac{[-\lambda_{1,k}]_+ + \tilde{\mu}_{k,\ell_5}}{\|\tilde{s}_{k,\ell_5}\|} \le 3000(L + \alpha).$

Case $s^k = \bar{s}_{k,\ell_6}$ was returned at Step 6.2: If $\ell_6 = 1$, then $\bar{\mu}_{k,\ell_6} = 2\tilde{\mu}_{k,\ell_5}$ for some $\ell_5 \ge 1$ and the solution $\tilde{s}_{k,\ell_5}$ to (10) with $\mu = \tilde{\mu}_{k,\ell_5}$ is such that (8) with $s = \tilde{s}_{k,\ell_5}$ does not hold. Thus, by Lemma 4.2,

$\frac{[-\lambda_{1,k}]_+ + \tilde{\mu}_{k,\ell_5}}{3\|\tilde{s}_{k,\ell_5}\|} < L + \alpha.$

On the other hand, and since $\bar{\mu}_{k,\ell_6} = 2\tilde{\mu}_{k,\ell_5}$, we have that

(40) $\|\bar{s}_{k,\ell_6}\| = \sqrt{ \sum_{j \in J} \left( \frac{[Q_k^T g_k]_j}{\lambda_{j,k} + [-\lambda_{1,k}]_+ + \bar{\mu}_{k,\ell_6}} \right)^2 } = \sqrt{ \sum_{j \in J} \left( \frac{[Q_k^T g_k]_j}{\lambda_{j,k} + [-\lambda_{1,k}]_+ + 2\tilde{\mu}_{k,\ell_5}} \right)^2 } = \sqrt{ \sum_{j \in J} \left( \frac{[Q_k^T g_k]_j}{2\left( \tfrac12(\lambda_{j,k} + [-\lambda_{1,k}]_+) + \tilde{\mu}_{k,\ell_5} \right)} \right)^2 } \ge \sqrt{ \sum_{j \in J} \left( \frac{[Q_k^T g_k]_j}{2(\lambda_{j,k} + [-\lambda_{1,k}]_+ + \tilde{\mu}_{k,\ell_5})} \right)^2 } = \frac12 \|\tilde{s}_{k,\ell_5}\| > 0.$

Therefore,

(41) $\frac{[-\lambda_{1,k}]_+}{\|\bar{s}_{k,\ell_6}\|} \le \frac{[-\lambda_{1,k}]_+ + \bar{\mu}_{k,\ell_6}}{\|\bar{s}_{k,\ell_6}\|} = \frac{[-\lambda_{1,k}]_+ + 2\tilde{\mu}_{k,\ell_5}}{\|\bar{s}_{k,\ell_6}\|} = \frac{2\left( \tfrac12 [-\lambda_{1,k}]_+ + \tilde{\mu}_{k,\ell_5} \right)}{\|\bar{s}_{k,\ell_6}\|} \le \frac{2\left( [-\lambda_{1,k}]_+ + \tilde{\mu}_{k,\ell_5} \right)}{\|\bar{s}_{k,\ell_6}\|} \le \frac{2\left( [-\lambda_{1,k}]_+ + \tilde{\mu}_{k,\ell_5} \right)}{\tfrac12 \|\tilde{s}_{k,\ell_5}\|} = 4\, \frac{[-\lambda_{1,k}]_+ + \tilde{\mu}_{k,\ell_5}}{\|\tilde{s}_{k,\ell_5}\|} < 12(L + \alpha).$

If $\ell_6 > 1$, then $\bar{\mu}_{k,\ell_6} = 2\bar{\mu}_{k,\ell_6 - 1}$ and the solution $\bar{s}_{k,\ell_6 - 1}$ to (10) with $\mu = \bar{\mu}_{k,\ell_6 - 1}$ is such that (8) with $s = \bar{s}_{k,\ell_6 - 1}$ does not hold. Thus, by Lemma 4.2,

(42) $\frac{[-\lambda_{1,k}]_+ + \bar{\mu}_{k,\ell_6 - 1}}{3\|\bar{s}_{k,\ell_6 - 1}\|} < L + \alpha.$

Moreover, $\bar{\mu}_{k,\ell_6} = 2\bar{\mu}_{k,\ell_6 - 1}$ implies, as shown above, that

(43) $\|\bar{s}_{k,\ell_6}\| \ge \tfrac12 \|\bar{s}_{k,\ell_6 - 1}\|.$

Therefore, by (42) and (43), and since $\bar{\mu}_{k,\ell_6} \ge 0$, we have that

(44) $\frac{[-\lambda_{1,k}]_+}{\|\bar{s}_{k,\ell_6}\|} \le \frac{[-\lambda_{1,k}]_+ + \bar{\mu}_{k,\ell_6}}{\|\bar{s}_{k,\ell_6}\|} < 12(L + \alpha).$

The desired result (27) follows from (29), (30), (35), (38), (39), (41), and (44), while (26) follows from the same set of inequalities plus (28).


5. Complexity results. In this section, complexity results on Algorithm 3.2 are presented. In particular, we show that the number of functional evaluations required to compute the increment $s^k$ using Algorithm 3.2 is $O(1)$, i.e., it depends neither on $\varepsilon_g$ nor on $\varepsilon_h$. The section finishes by establishing the complexity of Algorithms 3.1 and 3.2 in terms of the number of functional (and derivative) evaluations. The sufficient descent condition (8) is tested at Steps 3, 3.1.2, 4, 5, 5.1.3, and 6.1. These are the only steps of Algorithm 3.2 at which the objective function is evaluated. Condition (8) is tested only once per iteration at Steps 3, 4, and 5. Therefore, in order to assess the worst-case evaluation complexity of Algorithm 3.2, we must obtain a bound for the number of executions of the remaining mentioned steps, namely, Steps 3.1.2, 5.1.3, and 6.1.

Step 3.1 of Algorithm 3.2 describes the loop that corresponds to the hard case, in which we seek an increment along an appropriate eigenvector of $H_k$. For each trial increment, $f$ is evaluated and condition (8) is tested (at Step 3.1.2). Therefore, it is necessary to establish a bound on the number of executions of Step 3.1.2. This is done in Lemma 5.1.

Lemma 5.1. Suppose that Assumption A1 holds. If Step 3.1.2 of Algorithm 3.2 is executed, it is executed at most $\lfloor \log_2((L+\alpha)/M) \rfloor + 1$ times.

Proof. By (11) when $\ell_3 = 1$ and by (12) when $\ell_3 > 1$, $\hat{s}_{k,\ell_3} \ne 0$ for all $\ell_3 \ge 1$ and

$\|\hat{s}_{k,\ell_3}\| = \begin{cases} [-\lambda_{1,k}]_+ / (3M), & \ell_3 = 1, \\ \|\hat{s}_{k,\ell_3 - 1}\| / 2, & \ell_3 > 1, \end{cases}$

or, equivalently,

(45) $2^{\ell_3 - 1} M = [-\lambda_{1,k}]_+ / (3\|\hat{s}_{k,\ell_3}\|).$

Thus, by Lemma 4.2, if (8) does not hold with $s = \hat{s}_{k,\ell_3}$, we must have $2^{\ell_3 - 1} M < L + \alpha$, i.e., $\ell_3 \le \lfloor \log_2((L+\alpha)/M) \rfloor + 1$, as we wanted to prove.
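As a hypothetical numerical illustration of this bound: with $L + \alpha = 100$ and $M = 1$, Step 3.1.2 is executed at most $\lfloor \log_2(100) \rfloor + 1 = 6 + 1 = 7$ times, independently of $\varepsilon_g$ and $\varepsilon_h$.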

Step 5.1 of Algorithm 3.2 describes a loop in which one tries to find an “initial” sufficiently big regularization parameter. Each time the regularization parameter is increased, condition (8) is tested (at Step 5.1.3). Therefore, it is necessary to establish a bound on the number of evaluations that may be performed at Step 5.1.3. This is done in Lemma 5.2.

Lemma 5.2. Suppose that Assumption A1 holds. If Step 5.1.3 of Algorithm 3.2 is executed, it is executed at most $\lfloor \log_{10}(L+\alpha) \rfloor + 2$ times.

Proof. For all $\ell_5 \ge 1$, when (8) is tested at Step 5.1.3 with $s = \tilde{s}_{k,\ell_5}$, $\tilde{s}_{k,\ell_5}$ is a solution to (10) with $\mu = \tilde{\mu}_{k,\ell_5} > 0$ and satisfies (13). Therefore, by Lemma 4.3, $\tilde{s}_{k,\ell_5} \ne 0$ and, thus, by Lemma 4.2, if (8) does not hold with $s = \tilde{s}_{k,\ell_5}$, we must have

(46) $\rho_{k,\ell_5} < L + \alpha.$

On the other hand, since, by definition, $\rho_{k,1} \ge 0.1$ and, by (13) and (14), $\rho_{k,\ell_5} \ge 10\rho_{k,\ell_5 - 1}$ for all $\ell_5 \ge 2$, we have that

(47) $\rho_{k,\ell_5} \ge 10^{\ell_5 - 2}$

for all $\ell_5 \ge 1$. By (46) and (47), if (8) does not hold with $s = \tilde{s}_{k,\ell_5}$, we must have $10^{\ell_5 - 2} < L + \alpha$, i.e., $\ell_5 \le \lfloor \log_{10}(L+\alpha) \rfloor + 2$, as we wanted to prove.

Finally, at Step 6.1 we increase the regularization parameter by means of a doubling process ($\bar{\mu}_{k,\ell_6} = 2\bar{\mu}_{k,\ell_6 - 1}$). This process guarantees, by Lemmas 4.2 and 4.3, that the sufficient descent condition will eventually hold. In Lemma 5.3, we prove that the number of doubling steps is also bounded by a quantity that depends only on characteristics of the problem and algorithmic parameters. To prove this lemma, we need to assume boundedness of $\|H_k\|$ at the iterates generated by the algorithm. Note that, since $f(x^{k+1}) \le f(x^k)$ for all $k$, a sufficient condition for Assumption A2 is the boundedness of $\|H(x)\|$ on the level set defined by $f(x^0)$.

Assumption A2. There exists a constant $h_{\max} \ge 0$ such that, for all iterates $x^k$ computed by Algorithm 3.1, we have that $\|H_k\| \le h_{\max}$.

Lemma 5.3. Suppose that Assumptions A1 and A2 hold. If Step 6.1.2 of Algorithm 3.2 is executed, it is executed at most

$\left\lfloor \left[ \log\left( 1 + \frac{0.2}{h_{\max} + 0.2} \right) \right]^{-1} \log\left( \frac{L + \alpha}{0.1} \right) \right\rfloor + 1$

times.

Proof. For all $\ell_6 \ge 1$, Lemma 4.3 implies that $\bar{s}_{k,\ell_6} \ne 0$, and straightforward calculations show that

$\|\bar{s}_{k,\ell_6}\| = \sqrt{ \sum_{j \in J} \left( [Q_k^T g_k]_j / (\lambda_{j,k} + [-\lambda_{1,k}]_+ + \bar{\mu}_{k,\ell_6}) \right)^2 }.$

Moreover, it is easy to see that $\|\bar{s}_{k,\ell_6}\|$ decreases when $\bar{\mu}_{k,\ell_6}$ increases. Therefore, since, by definition, $\bar{\mu}_{k,\ell_6 + 1} = 2\bar{\mu}_{k,\ell_6}$ for all $\ell_6 \ge 1$, we have that

(48) $\frac{\|\bar{s}_{k,\ell_6}\|}{\|\bar{s}_{k,\ell_6 + 1}\|} \ge 1.$

Thus, for all $\ell_6 \ge 1$,

(49) $\left. \frac{[-\lambda_{1,k}]_+ + \bar{\mu}_{k,\ell_6 + 1}}{3\|\bar{s}_{k,\ell_6 + 1}\|} \middle/ \frac{[-\lambda_{1,k}]_+ + \bar{\mu}_{k,\ell_6}}{3\|\bar{s}_{k,\ell_6}\|} \right. = \frac{[-\lambda_{1,k}]_+ + \bar{\mu}_{k,\ell_6 + 1}}{[-\lambda_{1,k}]_+ + \bar{\mu}_{k,\ell_6}} \cdot \frac{\|\bar{s}_{k,\ell_6}\|}{\|\bar{s}_{k,\ell_6 + 1}\|} \ge \frac{[-\lambda_{1,k}]_+ + \bar{\mu}_{k,\ell_6 + 1}}{[-\lambda_{1,k}]_+ + \bar{\mu}_{k,\ell_6}} = \frac{[-\lambda_{1,k}]_+ + 2\bar{\mu}_{k,\ell_6}}{[-\lambda_{1,k}]_+ + \bar{\mu}_{k,\ell_6}} = 1 + \frac{\bar{\mu}_{k,\ell_6}}{[-\lambda_{1,k}]_+ + \bar{\mu}_{k,\ell_6}} \ge 1 + \frac{0.2}{h_{\max} + 0.2} > 1,$

where the first inequality follows from (48) and the second inequality follows from the fact that, by the definition of the algorithm, $\bar{\mu}_{k,\ell_6} \ge 0.2$, and from Assumption A2.

From (49) and the fact that, by the definition of the algorithm, $\ell_6 = 1$ implies

$\frac{[-\lambda_{1,k}]_+ + \bar{\mu}_{k,\ell_6}}{3\|\bar{s}_{k,\ell_6}\|} \ge 0.1,$

it follows that

(50) $\frac{[-\lambda_{1,k}]_+ + \bar{\mu}_{k,\ell_6}}{3\|\bar{s}_{k,\ell_6}\|} \ge 0.1 \left( 1 + \frac{0.2}{h_{\max} + 0.2} \right)^{\ell_6 - 1}$

for all $\ell_6 \ge 1$. For all $\ell_6 \ge 1$, when (8) is tested at Step 6.1 with $s = \bar{s}_{k,\ell_6}$, $\bar{s}_{k,\ell_6}$ satisfies (10) with $\mu = \bar{\mu}_{k,\ell_6} > 0$. Therefore, by Lemma 4.2, if (8) does not hold with $s = \bar{s}_{k,\ell_6}$, we must have, by (50),

$0.1 \left( 1 + \frac{0.2}{h_{\max} + 0.2} \right)^{\ell_6 - 1} < L + \alpha.$

This implies the desired result.
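As a hypothetical numerical illustration: with $h_{\max} = 10$ and $L + \alpha = 100$, the bound of Lemma 5.3 gives at most $\lfloor \log(1000)/\log(1 + 0.2/10.2) \rfloor + 1 = 355 + 1 = 356$ executions of Step 6.1.2, again independently of the tolerances.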

We finish this section summarizing the complexity and asymptotic results on Algorithms 3.1 and 3.2.

Theorem 5.1. Let $f_{\min} \in \mathbb{R}$, $\varepsilon_g > 0$, and $\varepsilon_h > 0$ be given constants, suppose that Assumptions A1 and A2 hold, and let $\{x^k\}_{k=0}^\infty$ be the sequence generated by Algorithms 3.1 and 3.2. Then, the cardinality of the set of indices

(51) $K_g = \left\{ k \in \mathbb{N} \mid f(x^k) > f_{\min} \text{ and } \|g^{k+1}\| > \varepsilon_g \right\}$

is, at most,

(52) $\left\lfloor \frac{1}{\alpha} \frac{f(x^0) - f_{\min}}{(\varepsilon_g/\gamma_g)^{3/2}} \right\rfloor,$

while the cardinality of the set of indices

(53) $K_h = \left\{ k \in \mathbb{N} \mid f(x^k) > f_{\min} \text{ and } \lambda_{1,k} < -\varepsilon_h \right\}$

is, at most,

(54) $\left\lfloor \frac{1}{\alpha} \frac{f(x^0) - f_{\min}}{(\varepsilon_h/\gamma_h)^3} \right\rfloor,$

where the constants $\gamma_g$ and $\gamma_h$ are as in the thesis of Lemma 4.4 (i.e., they satisfy (26) and (27), respectively).

Proof. Assumptions A1 and A2 imply, by Lemma 4.4, that the hypotheses of Lemma 2.1 hold. Therefore, since Algorithm 3.1 is a particular case of Algorithm 2.1, the thesis follows from Theorem 2.1.

Corollaries 2.1 to 2.3 also hold for Algorithms 3.1 and 3.2 under the hypotheses of Theorem 5.1, the most significant result being that the complexity rates exhibit the same dependencies on $\varepsilon_g$ and $\varepsilon_h$ whether we consider iteration or evaluation complexity. Note that the bound on the number of iterations is a direct consequence of Theorem 5.1. On the other hand, Lemmas 5.1 to 5.3 show that, every time Algorithm 3.2 is used by Algorithm 3.1 to compute an increment $s^k$, it performs $O(1)$ evaluations of the objective function $f$, while, by definition, it performs a single evaluation of $g$ and $H$. Thus, the evaluation complexity of Algorithms 3.1 and 3.2 coincides with their iteration complexity.

6. Local convergence. Note that if $H_k$ is positive definite, then the minimum norm solution $\hat{s}_{k,0}$ to the linear system (10) with $\mu = 0$ computed at Step 2 of Algorithm 3.2 is given by $\hat{s}_{k,0} = -H_k^{-1} g_k$, i.e., $\hat{s}_{k,0}$ is the Newton direction. Moreover, since, independently of having $\hat{s}_{k,0} = 0$ or $\hat{s}_{k,0} \ne 0$, $\lambda_{1,k} > 0$ implies that $\rho_{k,0} = 0 \le M$, in this case ($H_k$ positive definite) the algorithm goes directly to Step 4 and checks whether the Newton direction satisfies the sufficient cubic decrease condition (8). The lemma below shows that, if (55) holds, then the Newton direction satisfies (8). (If $\lambda_{1,k} > 0$ and $g_k = 0$ and, in consequence, $\hat{s}_{k,0} = 0$, it is trivial to see that the (null) Newton direction satisfies (8) and there is nothing to be proved. Anyway, the lemma below covers this case as well.)
