
International Journal of Electronics Communication and Computer Engineering, Volume 4, Issue 4 (2013), ISSN (Online): 2249-071X, ISSN (Print): 2278-4209

An Algorithm to Solve Separable Nonlinear Least Square Problem

Wajeb Gharibi
Department of Computer Engineering & Networks, Jazan University, Jazan, KSA
Email: Gharibi@jazanu.edu.sa

Omar Saeed Al-Mushayt
Department of Information Systems, Jazan University, Jazan, KSA
Email: oalmushayt@yahoo.com

Abstract: Separable Nonlinear Least Squares (SNLS) is a special class of Nonlinear Least Squares (NLS) problems, whose objective function is a mixture of linear and nonlinear functions. SNLS has many applications in several areas, especially in Operations Research and Computer Science. Problems in this class are hard to solve under the infinity-norm metric. This paper gives a brief explanation of the SNLS problem and offers a Lagrangian-based algorithm for solving the mixed linear-nonlinear minimization problem.

Keywords: Nonlinear Least Squares Problem, Infinity-Norm Minimization Problem, Lagrangian Dual, Subgradient Method, Least Squares.

I. INTRODUCTION

The Separable Nonlinear Least Squares (SNLS) problem is a special class of Nonlinear Least Squares (NLS) problems, whose objective function is a mixture of linear and nonlinear functions. It has many applications in various areas, such as Numerical Analysis, Mechanical Systems, Neural Networks, Telecommunications, Robotics, and Environmental Sciences [1-10].

The existing special algorithms for these problems are derived from the variable projection scheme proposed by Golub and Pereyra [1]. However, when the linear variables are subject to bound constraints, methods based on the variable projection strategy become invalid. Here, we describe an unseparated scheme for NLS and propose an algorithm that reduces the problem to a sequence of least squares subproblems.

Given a set of observations $\{y_i\}$, a separable nonlinear least squares problem can be defined as follows:

$$ r_i(a, \alpha) = y_i - \sum_{j=1}^{n} a_j\, \varphi_j(\alpha, t_i), \qquad (1) $$

where the $t_i$ are independent variables associated with the observations $\{y_i\}$, while the $a_j$ and the $k$-dimensional vector $\alpha$ are the parameters to be determined by minimizing the LS functional $r(a, \alpha)$. We can write the above equation in the following matrix form:

$$ r(a, \alpha) = \tfrac{1}{2}\,\bigl\lVert\, y - \Phi(\alpha)\, a \,\bigr\rVert_2^2, \qquad (2) $$

where the columns of the matrix $\Phi(\alpha)$ correspond to the nonlinear functions $\varphi_j(\alpha, t_i)$ of the $k$ parameters $\alpha$ evaluated at all the $t_i$ values, and the vectors $a$ and $y$ represent the linear parameters and the observations, respectively.

It is easy to see that if we knew the nonlinear parameters $\alpha$, then the linear parameters $a$ could be obtained by solving the linear least squares problem $a = \Phi(\alpha)^{+}\, y$, which stands for the minimum-norm solution of the linear least squares problem for fixed $\alpha$; here $\Phi(\alpha)^{+}$ is the Moore-Penrose generalized inverse of $\Phi(\alpha)$. By replacing this $a$ in the original functional, we obtain

$$ \min_{\alpha}\, r_2(\alpha) = \min_{\alpha}\, \tfrac{1}{2}\,\bigl\lVert \bigl(I - \Phi(\alpha)\Phi(\alpha)^{+}\bigr)\, y \bigr\rVert_2^2, \qquad (3) $$

which is called the Variable Projection functional [1].
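As an illustration of (3), the following MATLAB sketch (ours, not code from the paper) evaluates the variable projection functional for a fixed $\alpha$; the function name and the two-exponential basis are our own assumptions, chosen to match Example 1 below.

```matlab
% Sketch: evaluate the variable projection functional (3) for fixed alpha,
% assuming the model eta(t) = a1 + a2*exp(alpha(1)*t) + a3*exp(alpha(2)*t).
% t, y are column vectors of the data. Illustrative only.
function [r2, a] = varpro_residual(alpha, t, y)
    Phi = [ones(size(t)), exp(alpha(1)*t), exp(alpha(2)*t)]; % columns phi_j(alpha, t_i)
    a   = Phi \ y;          % linear parameters for fixed alpha (pinv(Phi)*y gives the
                            % minimum-norm solution when Phi is rank deficient)
    res = y - Phi*a;        % the projected residual (I - Phi(alpha)Phi(alpha)^+) y
    r2  = 0.5*(res'*res);   % value of the functional in (3)
end
```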

The following section covers the unseparated scheme for NLS problems. Section III presents our proposed method. Section IV gives numerical results for two examples, and conclusions follow.

II. UNSEPARATED SCHEME FOR THE NLS PROBLEMS

Consider the following NLS problem:

$$ \min_{x\in\mathbb{R}^n} F(x) = \tfrac{1}{2}\,\lVert f(x) \rVert_2^2, \qquad (4) $$

where $f(x)\in\mathbb{R}^m$ with $(f(x))_i = f_i(x)$. Many types of iterative methods have already been designed to solve the NLS problem. Most methods for NLS are based on a linear approximation of $f$ at each iteration, derived from Newton's method. The main idea of the Gauss-Newton method is described as follows.

Suppose our current iterate is $x_k$; then we obtain the next point $x_{k+1} = x_k + d_k$ by solving the following linear least squares (LLS) problem:

$$ \min_{d_k\in\mathbb{R}^n} \ \tfrac{1}{2}\,\bigl\lVert f(x_k) + J(x_k)\, d_k \bigr\rVert_2^2. \qquad (5) $$

Here $J(x)$ is the Jacobian of $f(x)$. We can get

$$ d_k = -\bigl(J(x_k)\bigr)^{+} f(x_k) = -\bigl(J(x_k)^{T} J(x_k)\bigr)^{-1} J(x_k)^{T} f(x_k). \qquad (6) $$

If we compare (6) with the Newton step for (4), we find that the Gauss-Newton method uses $J(x_k)^{T} J(x_k)$, which contains only first-order information about $f$, as a substitute for the true Hessian of $F(x)$:

$$ \nabla^2 F(x) = J(x)^{T} J(x) + \sum_{i=1}^{m} f_i(x)\, \nabla^2 f_i(x). \qquad (7) $$


Efficient NLS methods, such as the Levenberg-Marquardt methods and structured quasi-Newton methods, are based on the Gauss-Newton method [3]. To achieve global convergence without losing good local properties, the former control the step length at each iteration by applying the trust-region strategy to the subproblem (5). The latter retain the first-order information $J(x_k)^{T} J(x_k)$ of $\nabla^2 F(x)$ and apply a quasi-Newton method to approximate the second term in (7) [4, 5].
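To make this concrete, here is a minimal MATLAB sketch of the basic Gauss-Newton iteration (5)-(6); it is our illustration, not code from the paper, and it omits the trust-region and quasi-Newton safeguards just described. The handle `fJ` (our naming) is assumed to return the residual vector and its Jacobian, e.g. a function file with two outputs.

```matlab
% Basic Gauss-Newton iteration for min 0.5*||f(x)||_2^2, following (5)-(6).
% fJ: handle returning [f, J] at x; x0: starting point. Illustrative sketch.
function x = gauss_newton(fJ, x0, tol, maxit)
    x = x0;
    for k = 1:maxit
        [f, J] = fJ(x);
        if norm(J'*f, inf) <= tol   % gradient of F(x) is J(x)'*f(x)
            break
        end
        d = -(J \ f);   % solves the LLS subproblem (5); equals (6) for full-rank J
        x = x + d;
    end
end
```

A Levenberg-Marquardt variant would instead damp the step, solving $(J^{T}J + \mu I)\, d = -J^{T} f$ with $\mu > 0$ adjusted by a trust-region rule.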

III. A LAGRANGIAN-BASED ALGORITHM

Consider the following problem, a model of nonlinear functions that can depend on multiple parameters:

$$ \min_{x\in\mathbb{R}^{n_1},\ y\in\mathbb{R}^{n_2}} \ \lVert A(y)\, x - b(y) \rVert_\infty, \qquad (8) $$

where $b(y)\in\mathbb{R}^m$ (generally $m > n_1 + n_2$), $A(y)$ and $b(y)$ are nonlinear in $y$, and

$$ \lVert x \rVert_\infty = \max_{1\le i\le n} |x_i|, \qquad x\in\mathbb{R}^n. $$

This type of problem is very common and has a wide range of applications in different areas [1-8].

Problem (8) is difficult to solve because of the nonlinearity of $A(y)x - b(y)$ and the nondifferentiability of the infinity norm [6]. It can be written as:

$$ \min_{(x,y)} \max_{1\le i\le m} \ \bigl|(A(y)x - b(y))_i\bigr|. \qquad (9) $$

This can be considered equivalent to the following problem, in the sense that their optimal solutions are equal:

$$ \min_{(x,y)} \max_{1\le i\le m} \ \bigl[(A(y)x - b(y))_i\bigr]^2, \qquad (10) $$

which is equivalent to

$$ \min_{(x,y)} \ \max_{\lambda\in\{0,1\}^m,\ \sum_{i=1}^{m}\lambda_i = 1} \ \sum_{i=1}^{m} \lambda_i\, \bigl[(A(y)x - b(y))_i\bigr]^2. \qquad (11) $$

This can be relaxed to:

$$ \min_{(x,y)} \ \max_{\lambda\ge 0,\ \sum_{i=1}^{m}\lambda_i = 1} \ \sum_{i=1}^{m} \lambda_i\, \bigl[(A(y)x - b(y))_i\bigr]^2. \qquad (12) $$

The optimal objective values of (11) and (12) are the same, due to the fact that the binary vectors in $\{0,1\}^m$ satisfying $\sum_i \lambda_i = 1$ are exactly the extreme points of the set

$$ \Lambda = \Bigl\{ \lambda\in\mathbb{R}^m :\ \sum_{i=1}^{m} \lambda_i = 1;\ \lambda_i \ge 0,\ i = 1, 2, \ldots, m \Bigr\}. $$

Furthermore, any solvable linear programming problem always has a vertex solution.

Problem (12) has the following dual:

$$ \max_{\lambda\ge 0,\ \sum_{i=1}^{m}\lambda_i = 1} \ \min_{(x,y)} \ \sum_{i=1}^{m} \lambda_i\, \bigl[(A(y)x - b(y))_i\bigr]^2. \qquad (13) $$

This problem can be solved using the subgradient method, by iteratively solving its nonlinear least squares subproblems [6, 7].
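To see why (10), (11) and (12) share their optimal value, fix $(x, y)$ and abbreviate $c_i = [(A(y)x - b(y))_i]^2$ (our shorthand). The inner maximization in (12) is then a linear program over the simplex $\Lambda$, and a linear function attains its maximum over $\Lambda$ at one of the vertices $e_1, \ldots, e_m$:

$$ \max_{\lambda\in\Lambda} \ \sum_{i=1}^{m} \lambda_i\, c_i \;=\; \max_{1\le i\le m} c_i, $$

which is exactly the objective of (10).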

Algorithm

Step 1: Choose the initial values $\lambda^0$ and $(x^0, y^0)$.

Step 2: Solve the following least squares problem for fixed $\lambda^0$, using the initial solution $(x^0, y^0)$:

$$ \min_{(x,y)} \ \sum_{i=1}^{m} \lambda_i^0\, \bigl[(A(y)x - b(y))_i\bigr]^2, \qquad (14) $$

and obtain a local optimal solution, denoted by $(x^1, y^1)$.

Step 3: If the stopping condition is satisfied, for example when the difference between the current and the next obtained objective values is small enough, then stop. Otherwise, update

$$ x^0 := x^1, \qquad y^0 := y^1, \qquad \lambda_i^0 := \lambda_i^0 + \beta\, \bigl[(A(y^1)x^1 - b(y^1))_i\bigr]^2, $$

with $\beta = \theta / k$, where $k$ is the number of iterations and $\theta > 0$ is a constant; then go to Step 2.
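The loop below is a minimal MATLAB sketch of Steps 1-3, not the authors' code. It assumes a handle `resid(x, y)` returning the vector $A(y)x - b(y)$, and it folds the multipliers into LSQNONLIN by scaling each residual by $\sqrt{\lambda_i}$, which is valid because $\sum_i \lambda_i r_i^2 = \lVert \mathrm{diag}(\sqrt{\lambda})\, r \rVert_2^2$. The final rescaling of $\lambda$ onto the simplex is our own choice to keep the multipliers dual-feasible; the paper's update rule does not specify it.

```matlab
% Sketch of the Lagrangian-based algorithm (Steps 1-3). resid(x,y) is
% assumed to return A(y)*x - b(y) as an m-by-1 vector; z packs [x; y].
function [x, y] = lagrangian_snls(resid, x0, y0, theta, maxit)
    nx = numel(x0);
    m  = numel(resid(x0, y0));
    lambda = ones(m, 1)/m;        % Step 1: lambda^0 = (1/m, ..., 1/m)
    z = [x0(:); y0(:)];
    fprev = inf;
    for k = 1:maxit
        w   = sqrt(lambda);
        fun = @(zz) w .* resid(zz(1:nx), zz(nx+1:end)); % weighted residuals of (14)
        z   = lsqnonlin(fun, z);                        % Step 2: solve subproblem (14)
        r2  = resid(z(1:nx), z(nx+1:end)).^2;           % squared residuals at the new point
        fcur = max(r2);                                 % squared infinity-norm objective of (8)
        if abs(fprev - fcur) < 1e-8                     % Step 3: stopping test
            break
        end
        fprev  = fcur;
        beta   = theta/k;                               % diminishing subgradient step size
        lambda = lambda + beta*r2;                      % subgradient update of the multipliers
        lambda = lambda/sum(lambda);                    % our choice: rescale onto the simplex
    end
    x = z(1:nx); y = z(nx+1:end);
end
```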

IV. NUMERICAL RESULTS

We implemented the above algorithm in MATLAB 7 on a 2.4 GHz Pentium IV CPU. We call the MATLAB function LSQNONLIN to solve the least squares subproblems (14). The algorithm stops when the difference between the current and the next obtained objective values is less than $10^{-8}$.

The data of the examples are produced at random with zero optimal objective values. The dimension is set to $m = 100$, with $\lambda^0 = (\tfrac{1}{m}, \tfrac{1}{m}, \ldots, \tfrac{1}{m})$ and $\theta = 1$. We ran each algorithm 10 times independently and list the obtained average objective values together with the average CPU time in seconds.

A. Example 1

In this example, we give fitting data for the model (Golub and Pereyra 1973; Kaufman 1975):

$$ \eta(t) = a_1 + a_2\, e^{\alpha_1 t} + a_3\, e^{\alpha_2 t}. $$

The results for this problem are given in Table 1.

Table 1
Average Optimal Objective Obtained    Average Time in Seconds
0.0006                                0.7
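Under our reading of the model above, the residual passed to the solver for this example can be set up as follows (an illustrative sketch; `t` and `y` denote the randomly generated data, and the packing of the parameters into `p` is our own convention):

```matlab
% Example 1 model: eta(t) = a1 + a2*exp(alpha1*t) + a3*exp(alpha2*t),
% with p = [a1; a2; a3; alpha1; alpha2] and column vectors t, y.
model1 = @(p, t) p(1) + p(2)*exp(p(4)*t) + p(3)*exp(p(5)*t);
res1   = @(p, t, y) model1(p, t) - y;

% Hypothetical direct fit with LSQNONLIN from a random starting point:
% p_fit = lsqnonlin(@(p) res1(p, t, y), randn(5, 1));
```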

B. Example 2

The second example is given for the model (Golub and Pereyra 1973; Kaufman 1975):

$$ \eta(t) = a_1\, e^{-\alpha_1 t} + a_2\, e^{-\alpha_2 (t-\alpha_5)^2} + a_3\, e^{-\alpha_3 (t-\alpha_6)^2} + a_4\, e^{-\alpha_4 (t-\alpha_7)^2}. $$

The results for this problem are given in Table 2.

Table 2
Average Optimal Objective Obtained    Average Time in Seconds
0.0003                                0.3


V. CONCLUSIONS

This paper gave a brief explanation of the SNLS problem, supported by the two given examples.

Our proposed Lagrangian-based algorithm is more efficient than general unseparated ones, and methods based on this scheme have the same convergence properties as the variable projection scheme.

ACKNOWLEDGMENT

The first author would like to thank Professor Yong Xia for his valuable notes and comments.

REFERENCES

[1] G. H. Golub and V. Pereyra, "Separable nonlinear least squares: the variable projection method and its applications", Inverse Problems, vol. 19, pp. 1-26, 2002.
[2] G. H. Golub and V. Pereyra, "The differentiation of pseudo-inverses and nonlinear least squares problems whose variables separate", SIAM Journal on Numerical Analysis, vol. 10, pp. 413-432, 1973.
[3] J. J. Moré, "The Levenberg-Marquardt algorithm: implementation and theory", in Numerical Analysis, Lecture Notes in Mathematics, vol. 630, Springer-Verlag, pp. 105-116, 1978.
[4] L. Kaufman, "A variable projection method for solving separable nonlinear least squares problems", BIT, vol. 15, pp. 49-57, 1975.
[5] X. Liu, "An efficient unseparated scheme for separable nonlinear least squares problems", Proceedings of the Eighth National Conference of the Operations Research Society of China, June 30-July 2, 2006, pp. 132-137.
[6] W. Gharibi and Y. Xia, "A dual approach for solving nonlinear infinity-norm minimization problems with applications in separable cases", Numer. Math. J. Chinese Univ. (English Ser.), vol. 16, no. 3, pp. 265-270, 2007.
[7] L. Kaufman and V. Pereyra, "A method for nonlinear least squares problems with separable nonlinear equality constraints", SIAM Journal on Numerical Analysis, vol. 15, pp. 12-20, 1979.
[8] A. Ruhe and P. Å. Wedin, "Algorithms for nonlinear least squares problems", SIAM Review, vol. 22, pp. 318-337, 1980.
[9] E. W. Cheney, Introduction to Approximation Theory, McGraw-Hill, 1966.
