• Nenhum resultado encontrado

A STUDY ON CERTAIN NEW NUMERICAL ALGORITHMS FOR MINIMIZATION OF NON LINEAR FUNCTIONS

N/A
N/A
Protected

Academic year: 2017

Share "A STUDY ON CERTAIN NEW NUMERICAL ALGORITHMS FOR MINIMIZATION OF NON LINEAR FUNCTIONS"

Copied!
12
0
0

Texto

(1)

A STUDY ON CERTAIN NEW

NUMERICAL ALGORITHMS FOR

MINIMIZATION OF NON LINEAR

FUNCTIONS

K.KARTHIKEYAN

School of Advanced Sciences, VIT University, Vellore- 632014. Abstract :

In this paper, we introduce certain new numerical algorithms for minimization of nonlinear functions. Then the comparative study of the new algorithms with the Newton’s Algorithm is established by means of examples. Key words: Newton’s Algorithm; Sixth order iterative methods; King’s Fourth order Algorithms; Two- step iterative methods;

1. Introduction

Optimization problems with or without constraints arise in various fields such as science, engineering, economics, management sciences, etc., where numerical information is processed. In recent times, many problems in business situations and engineering designs have been modeled as an optimization problem for taking optimal decisions. In fact, numerical optimization techniques have made deep in to almost all branches of engineering and mathematics.

An unconstrained minimization problem is the one where a value of the vector x is sought that minimizes the objective function f(x). This problem can be considered as particular case of the general constrained non-linear programming problem. The study of unconstrained minimization techniques provide the basic understanding necessary for the study of constrained minimization methods and this method can be used to solve certain complex engineering analysis problem.

To solve unconstrained nonlinear minimization problems arising in the diversified field of engineering and technology, we have several methods to get solutions. For instance, multi-step nonlinear conjugate gradient methods [8], a scaled nonlinear conjugate gradient algorithm[2], a method called, ABS-MPVT algorithm [10] are used for solving unconstrained optimization problems. Newton’s method [11] is used for various classes of optimization problems, such as unconstrained minimization problems, equality constrained minimization problems. Chun[6] and Basto[3] have proposed and studied several methods for nonlinear equations with higher order convergence by using the decomposition technique of Adomian[1,13]. Vinay Kanwar et al. [18] introduced new algorithm called, external

touch technique for solving the nonlinear equations. Further, they did the comparative study of the new algorithms and Newton’s algorithm. Jishe Feng [9] introduced two step iterative method for solving nonlinear equation with the comparative study of the new iterative method with Newton’s Algorithm, Abbasbandy method and Basto method. Changbum Chun [5] introduced sequence of iterative techniques improving Newton’s Method by the Decomposition Method for solving nonlinear equations. Recently, Rostam K.Saeed et. al. [12] presented a family of new iterative methods for solving nonlinear equations based on Newton’s method. Numerical examples are discussed to illustrate the efficiency of the methods and Behzad Ghanbari [4] introduced new general fourth–order family of methods for finding simple roots of nonlinear equations which is free from second derivative. J.F.Traub [15] introduced several iterative techniques for the solution of equations. C.Chun and Y. Ham[7] proposed some sixth order variants of Ostrowski root finding methods.

(2)

Young Ik Kim[ 19] introduced a new two – step biparametric family of sixth –order iterative methods free from second derivatives, to find a simple root of a nonlinear algebraic equation.

In this paper, we suggest certain new iterative algorithms for minimization of nonlinear functions. Then, we present the comparative study among the new algorithms and Newton’s algorithm by means of examples.

2. New Algorithms

Consider the nonlinear optimization problem :

Minimize

{

f

(

x

),

x

R

,

f

:

R

R

}

where

f

is a non-linear twice differentiable function.

Consider the function

G

(

x

)

x

(

g

(

x

)

g

(

x

)

)

where

g

(

x

)

f

(

x

)

. Here f(x) is the function to be minimized.

G

(x

)

is defined around the critical point x* of f(x) if

g

(

x

)

f



(

x

)

0

and is given by

G

(

x

)

g

(

x

)

g



(

x

)

g

(

x

)

.

If we assume that

g



(

x

)

0

, we have

G

(

x

)

= 0 iff

g

(

x

)

0

.

Consider the non linear equation

g

(

x

)

0

whose one or more roots are to be found.

y

g

(x

)

represents the graph of the function

g

(x

)

and assume that an initial estimate

x

0 is known for the desired root of the equation

g

(

x

)

0

. We know that the solution of the nonlinear equation

g

(

x

)

0

by classical Newton’s method.

2.1. New method

Let g : R → R have a simple root α and be analytic in a small neighborhood of α. A parametric family of two step iterative methods is follows. For n = 0, 1, 2, 3,…





n n

n n

n

n n

n

n n

n f

n n n

n n n

n n n f n

n

n n n

n

v

v

v

v

v

B

u

u

u

A

v

B

u

A

x

K

x

g

y

g

v

x

g

y

g

u

denoting

where

x

g

y

g

x

K

y

x

x

g

x

g

x

y

2 2

2 1

)

1

(

2

6

1

)

4

(

2

)

2

1

(

.

2

1

)

(

)

1

2

8

(

2

1

)

(

)

(

)

(

)

(

)

(

)

(

,

)

(

)

(

,

)

(

)

(

)

(

)

(

)

(

(3)

2.2. Various choices of (

,

) for

K

f

(

x

n

)

A

(

u

n

)

B

(

v

n

)

Case (I): When

= 0,

= 0, then

1

2

)

(

2

n n n

u

u

u

A

and

(

1

)

2

1

)

(

2

n

n

v

v

B

)

(

)

(

]}

))

(

)

(

(

1

1

)[

2

/

1

(

)

1

))

(

)

(

((

2

))

(

)

(

(

{

2

2

1

n n

n n

n n

n n n

n

x

g

y

g

x

g

y

g

x

g

y

g

x

g

y

g

y

x

where

)

(

)

(

n n n

n

x

g

x

g

x

y

By substituting,

g

(

x

)

f

(

x

)

, we have 2.2.1. New Algorithm – I

)

(

)

(

]}

))

(

)

(

(

1

1

)[

2

/

1

(

)

1

))

(

)

(

((

2

))

(

)

(

(

{

2

2

1

n n

n n

n n

n n

n n

x

f

y

f

x

f

y

f

x

f

y

f

x

f

y

f

y

x







where

)

(

)

(

n n n

n

x

f

x

f

x

y



Case (II): When

= −1/2,

= 2, then

1

2

)

(

2

n n n

u

u

u

A

and

3

)

2

(

2

)

(

2

n n

v

v

B

)

(

)

(

]}

3

)

2

))

(

)

(

((

2

[

)

1

))

(

)

(

((

2

))

(

)

(

(

{

2

2

1

n n

n n

n n

n n n

n

x

g

y

g

x

g

y

g

x

g

y

g

x

g

y

g

y

x

where

)

(

)

(

n n n

n

x

g

x

g

x

y

By substituting,

g

(

x

)

f

(

x

)

, we have 2.2.2. New Algorithm – II

)

(

)

(

]}

3

)

2

))

(

)

(

((

2

[

)

1

))

(

)

(

((

2

))

(

)

(

(

{

2

2

1

n n

n n

n n

n n

n n

x

f

y

f

x

f

y

f

x

f

y

f

x

f

y

f

y

x







 where

)

(

)

(

n n n

n

x

f

x

f

x

y



(4)

Case (III): When

= 1/2,

= −2, then

1

2

)

(

2

n n n

u

u

u

A

and

)

5

/

1

))

5

/

2

(

(

5

1

4

1

(

5

2

)

(

2

n n n

v

v

v

B

)

(

)

(

]}

)

5

/

1

))

5

/

2

(

))

(

)

(

((

5

)

1

)

(

)

(

4

(

1

)[

5

/

2

(

)

1

))

(

)

(

((

2

))

(

)

(

(

{

2 2 1 n n n n n n n n n n n n

x

g

y

g

x

g

y

g

x

g

y

g

x

g

y

g

x

g

y

g

y

x

 where

)

(

)

(

n n n n

x

g

x

g

x

y

By substituting,

g

(

x

)

f

(

x

)

, we have

2.2.3. New Algorithm – III

) ( ) ( ]} ) 5 / 1 )) 5 / 2 ( )) ( ) ( (( 5 ) 1 ) ( ) ( 4 ( 1 )[ 5 / 2 ( ) 1 )) ( ) ( (( 2 )) ( ) ( ( { 2 2 1 n n n n n n n n n n n n x f y f x f y f x f y f x f y f x f y f y x                    where

)

(

)

(

n n n n

x

f

x

f

x

y



Case (IV): When

= 1,

= −4, then

1

2

)

(

2

n n n

u

u

u

A

and 2

2

)

1

2

(

1

3

2

1

)

(

n n n

v

v

v

B

)

(

)

(

]}

)

1

))

(

)

(

(

2

(

)

1

)

(

)

(

3

(

[

2

1

)

1

))

(

)

(

((

2

))

(

)

(

(

{

2 2 1 n n n n n n n n n n n n

x

g

y

g

x

g

y

g

x

g

y

g

x

g

y

g

x

g

y

g

y

x

 where

)

(

)

(

n n n n

x

g

x

g

x

y

By substituting,

g

(

x

)

f

(

x

)

, we have

2.2.4. New Algorithm – IV

(5)

Case (V): When

= −1/3,

= 4/3, then

1

2

)

(

2

n n n

u

u

u

A

and

)

1

4

81

4

1

(

32

1

)

(

n n n

v

v

v

B

)

(

)

(

]}

1

))

(

)

(

(

4

81

)

(

)

(

4

1

[

32

1

)

1

))

(

)

(

((

2

))

(

)

(

(

{

2 1 n n n n n n n n n n n n

x

g

y

g

x

g

y

g

x

g

y

g

x

g

y

g

x

g

y

g

y

x

 where

)

(

)

(

n n n n

x

g

x

g

x

y

By substituting,

g

(

x

)

f

(

x

)

, we have

2.2.5. New Algorithm – V

)

(

)

(

]}

1

))

(

)

(

(

4

81

)

(

)

(

4

1

[

32

1

)

1

))

(

)

(

((

2

))

(

)

(

(

{

2 1 n n n n n n n n n n n n

x

f

y

f

x

f

y

f

x

f

y

f

x

g

y

g

x

g

y

g

y

x











 where

)

(

)

(

n n n n

x

f

x

f

x

y



Case (VI): When

= 1,

= 0, then

1

14

)

(

2

n n n

u

u

u

A

and

6

5

))

3

/

4

(

(

2

3

)

(

v

n

v

n

2

B

)

(

)

(

}

6

5

)]

3

/

4

(

)

(

)

(

[

2

3

1

)

(

)

(

(

14

))

(

)

(

(

{

2 1 n n n n n n n n n n

x

g

y

g

x

g

y

g

x

g

y

g

x

g

y

g

y

x

 where

)

(

)

(

n n n n

x

g

x

g

x

y

By substituting,

g

(

x

)

f

(

x

)

, we have

2.2.6. New Algorithm – VI

)

(

)

(

}

6

5

)]

3

/

4

(

)

(

)

(

[

2

3

1

)

(

)

(

(

14

))

(

)

(

(

{

2 1 n n n n n n n n n n

x

f

y

f

x

f

y

f

x

f

y

f

x

f

y

f

y

x







 where

)

(

)

(

n n n n

x

f

x

f

x

y



Case (VII): When

= −1/2,

= 3/2,

= −1/4,

= ½,

= 0,

= −1/2 then

1

4

)

(

2

n n n

u

u

u

A

and

)

(6)

)

(

)

(

]}

1

))

(

/

)

(

(

3

4

1

[

3

1

1

))

(

)

(

((

4

))

(

)

(

(

{

2 1 n n n n n n n n n n

x

g

y

g

x

g

y

g

x

g

y

g

x

g

y

g

y

x

 where

)

(

)

(

n n n n

x

g

x

g

x

y

By substituting,

g

(

x

)

f

(

x

)

, we have

2.2.7. New Algorithm – VII

)

(

)

(

]}

1

))

(

/

)

(

(

3

4

1

[

3

1

1

))

(

)

(

((

4

))

(

)

(

(

{

2 1 n n n n n n n n n n

x

f

y

f

x

f

y

f

x

f

y

f

x

f

y

f

y

x







 where

)

(

)

(

n n n n

x

f

x

f

x

y



Case (VIII): When

= 0,

= 1/2, then

A

(

u

n

)

u

n2 and

1

1

4

2

)

(

n n n

v

v

v

B

)

(

)

(

]}

1

))

(

)

(

(

1

4

))

(

)

(

(

2

[

)

(

)

(

{

2 1 n n n n n n n n n n

x

g

y

g

x

g

y

g

x

g

y

g

x

g

y

g

y

x

 where

)

(

)

(

n n n n

x

g

x

g

x

y

By substituting,

g

(

x

)

f

(

x

)

, we have

2.2.8. New Algorithm – VIII

)

(

)

(

]}

1

))

(

)

(

(

1

4

))

(

)

(

(

2

[

)

(

)

(

{

2 1 n n n n n n n n n n

x

f

y

f

x

f

y

f

x

f

y

f

x

f

y

f

y

x











 where

)

(

)

(

n n n n

x

f

x

f

x

y



Case (IX): When

= −1/2,

= 5/2, then

A

(

u

n

)

u

n2 and

)

8

/

17

(

)

4

/

5

(

2

3

)

(

2

n n n

v

v

v

B

)

(

)

(

}

8

/

17

)

4

/

5

)

(

/

)

(

(

2

)

3

)

(

/

)

(

(

)

(

)

(

{

2 2 1 n n n n n n n n n n

x

g

y

g

x

g

y

g

x

g

y

g

x

g

y

g

y

x

(7)

)

(

)

(

n n n n

x

g

x

g

x

y

By substituting,

g

(

x

)

f

(

x

)

, we have

2.2.9. New Algorithm – IX

)

(

)

(

}

8

/

17

)

4

/

5

)

(

/

)

(

(

2

)

3

)

(

/

)

(

(

)

(

)

(

{

2 2 1 n n n n n n n n n n

x

f

y

f

x

f

y

f

x

f

y

f

x

f

y

f

y

x











 where

)

(

)

(

n n n n

x

f

x

f

x

y



Case (X): When

= −1/6,

= 7/6, then

A

(

u

n

)

u

n2 and

49

19

)

1

7

(

49

324

7

2

)

(

n n n

v

v

v

B

)

(

)

(

}

49

19

)

1

))

(

/

)

(

(

7

(

49

324

)

(

)

(

{

2 1 n n n n n n n n

x

g

y

g

x

g

y

g

x

g

y

g

y

x

 where

)

(

)

(

n n n n

x

g

x

g

x

y

By substituting,

g

(

x

)

f

(

x

)

, we have

2.2.10. New Algorithm – X

)

(

)

(

}

49

19

)

1

)

)

(

/

)

(

(

7

(

49

324

)

(

)

(

{

2 1 n n n n n n n n

x

f

y

f

x

f

y

f

x

f

y

f

y

x







 where

)

(

)

(

n n n n

x

f

x

f

x

y



Case (XI): When

= 1/8,

= 0, then

A

(

u

n

)

u

n2 and





1

7

)

11

7

(

4

5

7

1

)

(

2 n n n

v

v

v

B

)

(

)

(

}

1

))

(

/

)

(

(

7

)

11

))

(

/

)

(

(

7

(

4

5

7

1

)

(

)

(

{

2 2 1 n n n n n n n n n n

x

g

y

g

x

g

y

g

x

g

y

g

x

g

y

g

y

x





 where

)

(

)

(

n n n n

x

g

x

g

x

y

(8)

2.2.11. New Algorithm – XI

)

(

)

(

}

1

))

(

/

)

(

(

7

)

11

))

(

/

)

(

(

7

(

4

5

7

1

)

(

)

(

{

2 2 1 n n n n n n n n n n

x

f

y

f

x

f

y

f

x

f

y

f

x

f

y

f

y

x















 where

)

(

)

(

n n n n

x

f

x

f

x

y



Case (XII): When

= −1/4,

= 5/2, then

1

4

)

(

2

n n n

u

u

u

A

and

4

)

1

(

5

)

3

(

)

(

2 2

n n n

v

v

v

B

)

(

)

(

}

4

)

1

)

)

(

/

)

(

((

5

)

3

))

(

/

)

(

((

1

))

(

/

)

(

(

4

))

(

/

)

(

(

{

2 2 2 1 n n n n n n n n n n n n

x

g

y

g

x

g

y

g

x

g

y

g

x

g

y

g

x

g

y

g

y

x

 where

)

(

)

(

n n n n

x

g

x

g

x

y

By substituting,

g

(

x

)

f

(

x

)

, we have 2.2.12. New Algorithm – XII

)

(

)

(

}

4

)

1

)

)

(

/

)

(

((

5

)

3

))

(

/

)

(

((

1

))

(

/

)

(

(

4

))

(

/

)

(

(

{

2 2 2 1 n n n n n n n n n n n n

x

f

y

f

x

f

y

f

x

f

y

f

x

f

y

f

x

f

y

f

y

x











 where

)

(

)

(

n n n n

x

f

x

f

x

y



Case (XIII): When

= 0,

= 1, then

1

2

)

(

2

n n n

u

u

u

A

and

(

3

2

)

2

1

)

(

n n n

v

v

v

B

)

(

)

(

}

2

))

(

)

(

(

3

)

(

)

(

2

1

1

))

(

/

)

(

(

2

))

(

/

)

(

(

{

2 1 n n n n n n n n n n n n

x

g

y

g

x

g

y

g

x

g

y

g

x

g

y

g

x

g

y

g

y

x





 where

)

(

)

(

n n n n

x

g

x

g

x

y

(9)

2.2.13. New Algorithm – XIII

)

(

)

(

}

2

)

(

)

(

3

)

(

)

(

2

1

1

))

(

/

)

(

(

2

))

(

/

)

(

(

{

2

1

n n

n n

n n

n n

n n

n n

x

f

y

f

x

f

y

f

x

f

y

f

x

f

y

f

x

f

y

f

y

x















where

)

(

)

(

n n n

n

x

f

x

f

x

y



Case (XIV): When

= −1/2,

= 1, then

1

6

)

(

2

n n n

u

u

u

A

and

2

)

/

1

(

2

)

(

n n

n

v

v

v

B

)

(

)

(

}

2

))

(

)

(

(

))

(

)

(

(

2

1

))

(

/

)

(

(

6

))

(

/

)

(

(

{

2

1

n n

n n

n n

n n

n n

n n

x

g

y

g

y

g

x

g

x

g

y

g

x

g

y

g

x

g

y

g

y

x





where

)

(

)

(

n n n

n

x

g

x

g

x

y

By substituting,

g

(

x

)

f

(

x

)

, we have

2.2.14. New Algorithm – XIV

)

(

)

(

}

2

))

(

)

(

(

))

(

)

(

(

2

1

))

(

/

)

(

(

6

))

(

/

)

(

(

{

2

1

n n

n n

n n

n n

n n

n n

x

f

y

f

y

f

x

f

x

f

y

f

x

f

y

f

x

f

y

f

y

x















where

)

(

)

(

n n n

n

x

f

x

f

x

y



3. Numerical Results

Example 3.1: Consider the function

f

(

x

)

x

3

2

x

5

. The minimized value of the function is 0.816497. The following table depicts the number of iterations needed to converge to the minimized value for all the new algorithms with three initial values x0 = 1, x0 = 2, and x0 = 3.

Table – I: shows a comparison between the New iterative methods and Newton’s method

Sl. No Methods For initial value x0

=1.000000

For initial value x0

=2.000000

For initial value x0

=3.000000

1 Newton’s Algorithm 3 5 5

2 New Algorithm-I 3 5 5

3 New Algorithm-II 2 2 2

4 New Algorithm-III 3 5 5

5 New Algorithm-IV 3 5 5

(10)

7 New Algorithm-VI 3 4 5

8 New Algorithm-VII 3 5 5

9 New Algorithm-VIII 2 2 3

10 New Algorithm-IX 2 3 3

11 New Algorithm-X 3 3 3

12 New Algorithm-XI 3 5 5

13 New Algorithm-XII 2 2 3

14 New Algorithm-XIII 3 5 5

15 New Algorithm-XIV 2 2 3

Example 3.2: Consider the function

f

(

x

)

x

4

x

10

. The minimized value of the function is 0.629961. The

following table depicts the number of iterations needed to converge to the minimized value for all the new algorithms with three initial values x0 = 1, x0 = 2, and x0 = 3.

Table – II: shows a comparison between the New-iterative methods and Newton’s method

Sl. No Methods For initial value x0

=1.000000

For initial value x0

=2.000000

For initial value x0

=3.000000

1 Newton’s Algorithm 4 6 7

2 New Algorithm-I 4 6 8

3 New Algorithm-II 2 3 3

4 New Algorithm-III 4 6 8

5 New Algorithm-IV 4 6 8

6 New Algorithm-V 4 6 8

7 New Algorithm-VI 4 6 7

8 New Algorithm-VII 5 5 6

9 New Algorithm-VIII 2 3 3

10 New Algorithm-IX 3 4 5

11 New Algorithm-X 3 4 4

12 New Algorithm-XI 4 6 7

13 New Algorithm-XII 2 3 4

14 New Algorithm-XIII 4 6 7

(11)

Example 3.3: Consider the function

f

(

x

)

x

e

x

1

. The minimized value of the function is -1. The following table depicts the number of iterations needed to converge to the minimized value for all the new algorithms with three initial values x0 = 1, x0 = 2, and x0 = 3.

Table – III: shows a comparison between the New-iterative methods and Newton’s method

Sl. No Methods For initial value x0

=1.000000

For initial value x0

=2.000000

For initial value x0

=3.000000

1 Newton’s Algorithm 7 8 10

2 New Algorithm-I 7 9 10

3 New Algorithm-II - 3 -

4 New Algorithm-III 7 9 10

5 New Algorithm-IV 7 9 10

6 New Algorithm-V 7 9 10

7 New Algorithm-VI 7 8 9

8 New Algorithm-VII 6 8 9

9 New Algorithm-VIII 3 - 4

10 New Algorithm-IX 4 5 6

11 New Algorithm-X 4 4 5

12 New Algorithm-XI 7 9 10

13 New Algorithm-XII 3 4 5

14 New Algorithm-XIII 7 8 10

15 New Algorithm-XIV - - -

4. Conclusion:

In this paper, we have introduced several new algorithms for minimization of non linear functions. It is clear from the above numerical results that the rate of convergence of some of the new algorithms is faster than the rate convergence of Newton’s algorithms. In near future, we have a plan to extend the proposed new algorithms to constrained optimization problems.

References

[1] Adomian.G (1991): A review of the decomposition method and some recent results for non linear equations, Comput. Math. Appln. 21(5), pp. 101 – 127

[2] Andrei.N(2008): A scaled nonlinear conjugate gradient algorithm for unconstrained Minimization, Optimization 57(4) pp. 549 – 570. [3] Basto.M, Semiano.V, Calheiros.F.L.(2006): A new iterative method to compute nonlinear equations. Appl. Math.Comput. 173, pp.

468-483.

[4] Behzad Ghanbari(2010): A new general fourth-order family of methods for finding simple roots of non linear equations, Journal of King Saud University – Science.

[5] Chengbum Chun(2005): Iterative Methods Improving Newton’s Method by the Decomposition, Computers and Mathematics with Applications, 50, pp. 1559 – 1568.

[6] Chun.C(2005): Iterative methods improving Newton’s method by the decomposition method, Comput. Math. Appl. 50, pp.1559 – 1568. [7] Chun.C, Ham.Y(2007): Some sixth order variants of Ostrowski root finding methods, Appl. Math. Comput. 193, pp. 389 – 394. [8] Ford.J.A, Narushima.Y and Yabe.H(2008): Multi-step nonlinear conjugate gradient methods for constrained minimization, Computational

(12)

[9] Jishe Feng (2009): A New Two-step Method for Solving Nonlinear Equations, International Journal of Nonlinear Science, 8(1), pp. 40 – 44.

[10] Pang.L.P, et al. (2007): A method for solving the system of linear equations and linear inequalities, Mathematical and Computer

Modelling, 46(5-6), pp. 823 – 836.

[11] Polyak. B.T.(2007): Newton’s method and its use in optimization, European Journal of Operational research 181(3), pp. 1086 – 1096. [12] Rostam.K.Saeed and Fuad W. Khthr(2010): Three New Iterative Methods for Solving Non linear Equations, Australian Journal of Basic

and Applied Sciences, 4(6), pp. 1022 – 1030.

[13] Santanu Saha Ray(2007): Solution of the coupled Klein-Gordon Schrodinger equation using the modified decomposition method, International Journal of Nonlinear Science. 4(3) pp. 227-234.

[14] Stoer.J and Bulirsch.R(1993): Introduction to Numerical Analysis, second edition, Springer Verlag. [15] Traub.J.F(1982): Iterative Methods for the solution of equations, Chelsea Publishing Company.

[16] Vinay Kanwar, et al (2003): New numerical techniques for solving non-linear equations, Indian J. pure appl. Math. 34(9) pp. 1339 - 1349. [17] Weerakon. S and Fernando.T.G.I.(2000): A variant of Newton’s method with accelerated third order convergence, Appl. Math. Lett., 13,

pp. 87 – 93.

[18] Yun.J.H(2008): A note on three-step iterative method for nonlinear equations, Appl. Math. Comput. doi:10.1016/j.amc.2008.02.002. [19] Young Ik Kim(2010): A new two step biparametric family of sixth order iterative methods free from second derivatives for solving non

Referências

Documentos relacionados

No Ambulatório de Intercorrências Oncológicas – AIO, o processo de acolhimento com classificação de risco inicia quando o enfermeiro recebe o paciente após ter

Com os dados das contagens elaborou-se uma treliça e foram calculadas as densidades e taxas de ganho e perda da populaçio para os dias e período de observaçio. A densidade

Os objetivos específicos foram: proporcionar ao público alvo interação com a réplica da casa comercial, resgatando a história do processo de comercialização,

Logistic regression models were fitted to estimate the association of birth weight, gestational age at birth, or fetal status, as dependent variables, with timing of the first

Por ocasião da 3.ª edição do Prémio Europa foi criado um prémio que pudesse destacar novos valores para lá da consagração dos artistas maiores, pelo que neste ano de 2007

In this paper, we propose a new sequential linear programming algorithm for solving con- strained nonlinear programming problems, and apply this method to the solution of

In this section, we do some numerical experiments to test Algorithm 2.1 with the modified strong Wolfe line search, and compare the performance with some existing conjugate

Considerando o estudo preliminar realizado por Rocha 2009 o qual avaliou o acesso efetivo aos serviços odontológicos em áreas cobertas pela Estratégia Saúde da Família na cidade