CHAPTER II

Introduction to Finite Difference Methods for Initial Value Problems

In this chapter we will lay down the basic concepts for the theory of finite difference methods. First, we must decide on a terminology, something, unfortunately, not yet standardized.

1. Finite difference quotients
Consider the derivative du/dx, where u = u(x), x being the independent variable (it could be space or time). In finite difference methods, we represent the continuous function u(x) by a set of values defined at a number of discrete points in a specified region. Thus, we usually introduce a “grid” with discrete points at which the variable u is carried (fig. 2.1).

Figure 2.1

Sometimes the word “mesh” or “lattice” is used in place of the word “grid”. The interval ∆x is called the grid interval, grid size, mesh size, etc. We assume for the time being that the grid interval ∆x is constant, so that x_j = j∆x, where j is the “index” used to identify the grid points. Using the notation u_j = u(x_j) = u(j∆x),
we define the forward difference at the point j by

∆u_j = u_{j+1} − u_j    (2.1)

the backward difference at the point j by

∇u_j = u_j − u_{j−1}    (2.2)

and the central difference at the point (j + ½) by

δu_{j+½} = u_{j+1} − u_j    (2.3)

From these we define the following “finite difference quotients”: the forward difference quotient at the point j,

(du/dx)_j ≈ (u_{j+1} − u_j)/∆x    (2.4)

the backward difference quotient at the point j,

(du/dx)_j ≈ (u_j − u_{j−1})/∆x    (2.5)

and the central difference quotient at the point j + ½,

(du/dx)_{j+½} ≈ (u_{j+1} − u_j)/∆x    (2.6)

A central difference quotient at the point j may be defined by

(du/dx)_j ≈ (u_{j+1} − u_{j−1})/(2∆x)    (2.6)′

As (2.4) and (2.5) employ the values of u at two points, they are sometimes referred to as two-point approximations, whereas (2.6)′ really employs three points and is a three-point approximation. When x is time, the time point is frequently referred to as a “level”, so (2.4) and (2.5) can then be referred to as two-level approximations and (2.6)′ as a three-level approximation.

How accurate are these finite difference approximations? Let us now define the concepts of accuracy and truncation error. As an example, consider the forward difference quotient:
(du/dx)_j ≈ (u_{j+1} − u_j)/∆x = [u((j+1)∆x) − u(j∆x)]/∆x    (2.7)

and expand u in a Taylor series about the point x_j. Assuming this is possible, we can write:

u_{j+1} = u_j + ∆x (du/dx)_j + (∆x²/2!)(d²u/dx²)_j + (∆x³/3!)(d³u/dx³)_j + ...    (2.8)

According to (2.4), the terms in (2.8) following the derivative (du/dx)_j are
truncated for our finite difference approximation to the derivative, so they are called the “truncation error”. The lowest power of ∆x which appears in the truncation error (after the division by ∆x) is called the order of accuracy of the corresponding difference quotient. In the above example it is of order ∆x, or O(∆x), and so we say that this is a first-order approximation, or has first-order accuracy. Obviously (2.5) is the same, and a similar expansion of (2.6)′ will show that it is of second-order accuracy (the even-power terms ∆x², ∆x⁴, ... will all cancel).

2. An example of finite difference approximations to a differential equation

Now, with these definitions and concepts, let us proceed directly to a simple example of a partial differential equation. We consider the simple equation:

∂u/∂t + c ∂u/∂x = 0    (2.9)

where c is a constant. This is a linear differential equation of first order with a constant coefficient. It is called the advection equation. Here u = u(x,t). So if at t = 0, u(x,0) = F(x) (−∞ < x < ∞), what is u(x,t)? This is a simple example of an initial value problem.

Let us briefly consider the analytic solution first, so that we shall have a criterion for comparison purposes. Make the following change of variables:

ξ = x − ct    (2.10)

and consider the function u(ξ,t). Its partial derivatives are

(∂u/∂t)_x = (∂u/∂ξ)_t (∂ξ/∂t)_x + (∂u/∂t)_ξ    (2.11)

(∂u/∂x)_t = (∂u/∂ξ)_t (∂ξ/∂x)_t    (2.12)
But

(∂ξ/∂t)_x = −c    (2.13)

and

(∂ξ/∂x)_t = 1    (2.13)′

So, using (2.13) and (2.13)′ in (2.11) and (2.12), we have:

(∂u/∂t)_x + c (∂u/∂x)_t = (∂u/∂t)_ξ    (2.13)″

But u(x,t) satisfies (2.9), and we obtain:

(∂u/∂t)_ξ = 0    (2.14)

This means that

u = f(ξ)
(2.15)

is the general solution to (2.9), regardless of the form of f. Therefore, u is constant along the line ξ ≡ x − ct = const. At t = 0, ξ = x and u(x,0) = f(x). In order to satisfy the initial condition u(x,0) = F(x) at t = 0, we choose f ≡ F.
Thus, u(ξ) = F(ξ) = F(x − ct) is the solution to the differential equation (2.9) which satisfies the initial condition.
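The fact that the solution is simply the initial profile translated with speed c can be illustrated directly. In the minimal sketch below, the Gaussian profile F and the value c = 1 are arbitrary choices for illustration:

```python
import math

def F(x):
    # Arbitrary initial profile u(x, 0) = F(x); a Gaussian for illustration.
    return math.exp(-x * x)

def u(x, t, c=1.0):
    # Analytic solution of du/dt + c du/dx = 0: the initial profile
    # translated a distance c*t, i.e. u(x, t) = F(x - c*t).
    return F(x - c * t)

# u is constant along the characteristic lines xi = x - c*t: the value
# found at (x, t) equals the initial value carried from x - c*t.
for t in (0.0, 0.5, 2.0):
    x = 0.3 + 1.0 * t          # a point moving with speed c = 1
    assert abs(u(x, t) - F(0.3)) < 1e-12
```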
Referring to Figure 2.2, we see that an initial value merely “moves along” the lines of constant ξ. Keeping this in mind, let us investigate the numerical solution of equation (2.9).
Figure 2.2
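Before constructing a numerical solution, the difference quotients of section 1 and their orders of accuracy can be verified numerically. A minimal sketch (the test function sin x and the step sizes are arbitrary choices):

```python
import math

def forward_quotient(u, x, dx):
    # Forward difference quotient (2.4): first-order, O(dx).
    return (u(x + dx) - u(x)) / dx

def central_quotient(u, x, dx):
    # Central difference quotient (2.6)': second-order, O(dx**2).
    return (u(x + dx) - u(x - dx)) / (2.0 * dx)

u, x0 = math.sin, 1.0
exact = math.cos(x0)

err_fwd = [abs(forward_quotient(u, x0, dx) - exact) for dx in (0.1, 0.05)]
err_cen = [abs(central_quotient(u, x0, dx) - exact) for dx in (0.1, 0.05)]

# Halving dx roughly halves the forward error (order 1) but roughly
# quarters the central error (order 2).
print(err_fwd[0] / err_fwd[1])   # close to 2
print(err_cen[0] / err_cen[1])   # close to 4
```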
We construct a grid, as in Figure 2.3.
An example of finite difference approximations (finite difference schemes) to equation (2.9) is:

(u_j^{n+1} − u_j^n)/∆t + c (u_j^n − u_{j−1}^n)/∆x = 0    (2.16)

where we have used the forward difference quotient in time and the backward difference quotient in space. Notice that:

(u_j^{n+1} − u_j^n)/∆t → ∂u/∂t as ∆t → 0    (2.17)

(u_j^n − u_{j−1}^n)/∆x → ∂u/∂x as ∆x → 0    (2.18)

Therefore this is a finite difference approximation to (2.9), since equation (2.16) does approach equation (2.9) as ∆t and ∆x approach 0. Now if we know u_j^n at a time level n for all j, we can compute u_j^{n+1} at the next time level n + 1. If c > 0, (2.16) is called the “upstream” difference scheme.

3. Accuracy and truncation error of a finite difference scheme
We have defined accuracy and truncation error for finite difference quotients. Now we shall define truncation error and accuracy for a finite difference scheme. Denoting by u(x,t) the solution of the differential equation, u(j∆x, n∆t) is its value at the discrete point (j∆x, n∆t) of our grid in figure 2.3, while u_j^n is the ‘exact’ solution of the finite difference equation. A measure of the accuracy of the scheme can be obtained by substituting the solution of the differential equation into the finite difference equation. For the scheme given by (2.16), we have
[u(j∆x, (n+1)∆t) − u(j∆x, n∆t)]/∆t + c [u(j∆x, n∆t) − u((j−1)∆x, n∆t)]/∆x = ε    (2.19)
where ε is called the “truncation error” or “formal error” of the scheme. It is an (inverse) measure of how accurately the solution u(x,t) of the original differential equation (2.9) satisfies the difference equation (2.16). Since u_j^n is defined only at discrete points, there is no practical way to measure how accurately u_j^n satisfies the original differential equation.
If we obtain the terms in (2.19) from a Taylor series expansion of u about the point (j∆x, n∆t) and use the fact that u(x,t) satisfies (2.9), we obtain for this scheme

ε = (1/2!)(∂²u/∂t²)∆t − c (1/2!)(∂²u/∂x²)∆x + ...    (2.20)

We say this is a first-order scheme because the lowest power of ∆t and ∆x in (2.20) is 1. The notations O(∆x), or O(∆t) + O(∆x), are used to represent this. We say that a scheme is consistent with the differential equation if the truncation error of the scheme approaches zero as ∆t and ∆x approach zero.

There are two sources of error in a numerical solution. One is the round-off error, which is the difference of a numerical solution from the ‘exact’ solution u_j^n of the finite difference equation. The other is the discretization error, defined by u_j^n − u(j∆x, n∆t). The truncation error, discussed above, can be made as small as one wants by taking ∆x and ∆t smaller and smaller, as long as the scheme is consistent and u(x,t) is a smooth function. But an increase in accuracy will not necessarily guarantee that the discretization error will be small. We ask what the behavior of |u_j^n − u(j∆x, n∆t)| is as the grid is refined (∆t and ∆x → 0). If the discretization error approaches zero, then we say that the solution is convergent.

Now let us see an example of a situation in which accuracy is increased but the solution still is not convergent. See figure 2.4. If at first we have chosen ∆x and ∆t such that the grid points are the dot
Figure 2.4
points in figure 2.4, we could undoubtedly increase the accuracy by taking ∆x and ∆t equal to just ½ of the first ∆x and ∆t, that is, by adding the points denoted by small x's, forming a denser grid. The domain consisting of the grid points that carry the values of u on which u_j^n depends is called the “domain of dependence”. The shaded area in the figure shows this domain when the upstream scheme (2.16) is used. Notice that it does not change, no matter how refined or dense the grid is, as long as ∆x/∆t remains the same.

Suppose that the line through the point (j∆x, n∆t), x − ct = x₀, where x₀ is a constant, does not lie in the domain of dependence. In general, there will then be no hope of obtaining a smaller discretization error, no matter how small ∆t and ∆x are, because the true solution u(j∆x, n∆t) depends only on the initial value of u at the single point (x₀, 0). One could change u(x₀, 0) (and hence u(j∆x, n∆t)), but the computed solution u_j^n would remain the same as long as the initial values were not changed in the domain of dependence. In such a case, the error of the solution will usually not be decreased by approaching a continuum. If the value of c is such that x₀ lies outside of the domain of dependence, it will not be possible for a solution of the finite difference equation to approach the true solution.
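The role of the ratio ∆t/∆x can also be seen experimentally. The sketch below advances the upstream scheme (2.16), rewritten as u_j^{n+1} = (1 − μ)u_j^n + μu_{j−1}^n with μ = c∆t/∆x, on a cyclic grid; the grid size, step count, and sine initial condition are arbitrary choices for illustration:

```python
import math

def upstream_step(u, mu):
    # One step of (2.16) rewritten as u_j^{n+1} = (1-mu)*u_j^n + mu*u_{j-1}^n,
    # mu = c*dt/dx, on a cyclic (periodic) grid: u[-1] wraps around.
    return [(1.0 - mu) * u[j] + mu * u[j - 1] for j in range(len(u))]

def max_abs_after(mu, nsteps=200, J=32):
    # Advect a single sine wave on a periodic grid and report max |u|.
    u = [math.sin(2.0 * math.pi * j / J) for j in range(J)]
    for _ in range(nsteps):
        u = upstream_step(u, mu)
    return max(abs(v) for v in u)

print(max_abs_after(0.5))   # 0 <= mu <= 1: the solution stays bounded
print(max_abs_after(1.5))   # mu > 1: the solution grows step after step
```

For μ ≤ 1 the computed amplitude never exceeds its initial value, while for μ > 1 it grows, in agreement with the domain-of-dependence argument above.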
If a finite difference scheme gives a convergent solution for any initial condition, the scheme is called a convergent finite difference scheme. Therefore,

0 ≤ c∆t/∆x ≤ 1    (2.21)

is a necessary condition for convergence when the upstream scheme is used. If c is negative (a downstream difference scheme), there is no hope that (2.21) is satisfied.

4. Stability

Another important concept is that of stability. Here we ask what the behavior of the discretization error |u_j^n − u(j∆x, n∆t)| is as n increases for fixed ∆x and ∆t. Does it stay bounded? This question is related to the stability of the scheme. In many physical problems the true solution is bounded, at least for finite t, so that the solution of the scheme is bounded if the scheme is stable. In a manner similar to that in which we defined convergence, we say that a finite difference scheme is stable if it has a stable solution for any initial condition. There are four major ways in which the stability of a scheme may be tested. These are: 1) the direct method, 2) the energy method, 3) von Neumann's method, and 4) the matrix method.
As an illustration of the direct method, consider the upstream scheme from equation (2.16). We can write directly from (2.16)
u_j^{n+1} = (1 − μ) u_j^n + μ u_{j−1}^n,  μ ≡ c∆t/∆x    (2.22)

Note that u_j^{n+1} is a weighted mean of u_j^n and u_{j−1}^n. If 0 ≤ μ ≤ 1 (the necessary condition for convergence), we may write:

|u_j^{n+1}| ≤ (1 − μ)|u_j^n| + μ|u_{j−1}^n|    (2.23)

Therefore,

max_j |u_j^{n+1}| ≤ (1 − μ) max_j |u_j^n| + μ max_j |u_{j−1}^n|    (2.24)

or, since max_j |u_{j−1}^n| = max_j |u_j^n|,

max_j |u_j^{n+1}| ≤ max_j |u_j^n|    (2.25)

So our solution u_j^n will always stay bounded. Therefore, 0 ≤ μ ≤ 1 is a sufficient condition for stability. This is obvious from (2.22), because when 0 ≤ μ ≤ 1, u_j^{n+1} is simply u^n linearly interpolated at the point x = j∆x − c∆t.

This direct method, however, is not too widely applicable. The energy method can be used in a wider variety of cases, even for some nonlinear equations. We shall illustrate it here by means of an application to the scheme (2.16). With this method we seek to answer the question: “Is Σ_j (u_j^n)² bounded?” Here the summation is over a finite number of grid points in a bounded domain. If it is indeed so, then each u_j^n will be bounded. Returning then to equation (2.22) and squaring both sides, we have:
(u_j^{n+1})² = (1 − μ)²(u_j^n)² + 2μ(1 − μ) u_j^n u_{j−1}^n + μ²(u_{j−1}^n)²    (2.26)

For simplicity, assume that u is periodic in x and consider the summation covering only a complete cycle of u in j. Then,

Σ_j (u_{j−1}^n)² = Σ_j (u_j^n)²    (2.27)

We note that if Σ_j u_j^n u_{j−1}^n ≥ 0,

Σ_j u_j^n u_{j−1}^n ≤ Σ_j (u_j^n)²    (2.28)

and if Σ_j u_j^n u_{j−1}^n < 0,

−Σ_j u_j^n u_{j−1}^n ≤ Σ_j (u_j^n)²    (2.29)

(2.28) and (2.29) are derived from Schwarz's inequality and (2.27). Namely,

(Σ_j u_j^n u_{j−1}^n)² ≤ Σ_j (u_j^n)² Σ_j (u_{j−1}^n)² = [Σ_j (u_j^n)²]²    (2.30)

Thus, use of (2.27), (2.28) and (2.29) in (2.26) gives:

Σ_j (u_j^{n+1})² ≤ [(1 − μ)² + 2μ(1 − μ) + μ²] Σ_j (u_j^n)²    (2.31)

provided μ(1 − μ) ≥ 0. Therefore,
Σ_j (u_j^{n+1})² ≤ Σ_j (u_j^n)²    (2.32)

This shows that 0 ≤ μ ≤ 1 is a sufficient condition for this scheme to be stable.

A very powerful tool for testing the stability of linear partial differential equations with constant coefficients is von Neumann's method. Solutions of such equations can be expressed as superpositions of waves (Fourier series). Von Neumann's method simply tests the stability of each component wave. To illustrate the procedure, we return first to the differential equation (2.9):

∂u/∂t + c ∂u/∂x = 0

First, we assume a solution of the wave form

u(x,t) = Re[û(t) e^{ikx}]    (2.33)

where û(t) is the amplitude of the wave. Using equation (2.33), equation (2.9) becomes:

dû/dt + ikcû = 0    (2.34)

Note that equation (2.34) is now an ordinary differential equation, with the solution

û(t) = û(0) e^{−ikct}    (2.35)

where û(0) is the initial value of û. The solution to equation (2.9) is then, from equation (2.33),

u(x,t) = Re[û(0) e^{ik(x−ct)}]    (2.36)

For a finite difference equation, we use in place of equation (2.33)

u_j^n = Re[û^{(n)} e^{ikj∆x}]    (2.37)

û^{(n)} will give the amplitude of the wave. Let

û^{(n+1)} = λ û^{(n)}    (2.38)

where λ is the amplification factor. Assume that the solution is bounded for a given t = n∆t. Then,

|û^{(n)}| = |λ|ⁿ |û^{(0)}| ≤ B    (2.39)

where B is a positive constant. Since û(0) is a nonzero constant,

|λ|ⁿ ≤ B/|û^{(0)}| ≡ B₁    (2.40)

Without loss of generality, we can assume that B₁ ≥ 1. Then

|λ| ≤ B₁^{1/n}    (2.41)

Recall n = t/∆t. So,

|λ| ≤ B₁^{∆t/t}    (2.42)

See fig. 2.5. For ∆t in an interval 0 < ∆t < τ,

B₁^{∆t/t} < 1 + B₂ ∆t/t    (2.43)

where B₂ is a positive constant. For a given finite time t, therefore,

|λ| ≤ 1 + O(∆t)    (2.44)

(2.44) is von Neumann's stability condition applied to our example, which has only one amplification factor.

Figure 2.5

If we require that the solution be bounded for all t, including t → ∞, (2.42) must be replaced by:

|λ| ≤ 1    (2.45)

This is a more restrictive stability condition than (2.44), and it is appropriate when the true solution is bounded for all t, as in our example of the advection equation.
Let us now illustrate λ for the scheme given by equation (2.16).
Substituting equation (2.37) into equation (2.16) gives:

[û^{(n+1)} − û^{(n)}]/∆t + c [(1 − e^{−ik∆x})/∆x] û^{(n)} = 0    (2.46)

or

û^{(n+1)} = λ û^{(n)}    (2.47)

where

λ = 1 − μ(1 − cos k∆x + i sin k∆x)    (2.48)

Taking the modulus of equation (2.48), we obtain

|λ|² = 1 + 2μ(μ − 1)(1 − cos k∆x)    (2.49)

At μ = ½, for example, equation (2.49) is

|λ|² = ½ (1 + cos k∆x)    (2.50)
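Expressions (2.48)–(2.50) are easy to check numerically; a minimal sketch, with the values of μ and k∆x chosen arbitrarily for illustration:

```python
import math

def amp_factor(mu, k_dx):
    # Amplification factor (2.48): lambda = 1 - mu*(1 - cos(k dx) + i sin(k dx)).
    return 1.0 - mu * (1.0 - math.cos(k_dx) + 1j * math.sin(k_dx))

# Verify (2.49): |lambda|^2 = 1 + 2*mu*(mu - 1)*(1 - cos(k dx)).
for mu in (0.25, 0.5, 1.0, 1.5):
    for k_dx in (0.1, math.pi / 2, math.pi):
        lhs = abs(amp_factor(mu, k_dx)) ** 2
        rhs = 1.0 + 2.0 * mu * (mu - 1.0) * (1.0 - math.cos(k_dx))
        assert abs(lhs - rhs) < 1e-12

# Verify (2.50): at mu = 1/2, |lambda|^2 = (1 + cos(k dx))/2.
k_dx = math.pi / 3
assert abs(abs(amp_factor(0.5, k_dx)) ** 2 - (1 + math.cos(k_dx)) / 2) < 1e-12
```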
Since k ≡ 2π/L, k = π/∆x at L = 2∆x, k = π/(2∆x) at L = 4∆x, etc.; from these values the various curves shown in fig. 2.6 are constructed. We see clearly that this scheme has a damped solution when 0 < μ < 1 and a growing solution for μ < 0 and μ > 1.

Figure 2.6

In general, the solution u_j^n can be expressed as a Fourier series. For simplicity, let us assume that the solution is periodic in x with period L₀. Then u_j^n can be written as:

u_j^n = Re Σ_m û_m^{(n)} e^{imk₀j∆x}    (2.51)

where

k₀ = 2π/L₀    (2.52)

and m is an integer. In (2.51), the summation is formally taken over all integers. With λ_m denoting the amplification factor of the component m, we have:

u_j^n = Re Σ_m û_m^{(0)} λ_mⁿ e^{imk₀j∆x}

and therefore

|u_j^n| ≤ Σ_m |λ_m|ⁿ |û_m^{(0)}|    (2.55)

If |λ_m| ≤ 1 is satisfied for all m,

|u_j^n| ≤ Σ_m |û_m^{(0)}|    (2.56)

Therefore, as long as Σ_m û_m^{(0)} e^{imk₀j∆x}, which gives the initial condition, is an absolutely convergent Fourier series, u_j^n is bounded. Therefore, |λ_m| ≤ 1 for all m is sufficient for stability. It is also necessary, because if |λ_m| > 1 for some m, say m = m₁, the solution for the initial condition û_{m₁} = 1 and û_m = 0 for all m ≠ m₁
is unbounded.

From (2.48), λ_m for the upstream scheme is given by:

λ_m = 1 − μ(1 − cos mk₀∆x + i sin mk₀∆x)    (2.53)

Then, the amplification factor satisfies:

|λ_m|² = 1 + 2μ(μ − 1)(1 − cos mk₀∆x)    (2.54)

|λ_m| ≤ 1 holds for all m if and only if μ(1 − μ) ≥ 0, or 0 ≤ μ ≤ 1. This is the necessary and sufficient condition for the stability of the scheme.

Suppose that we refine the grid by decreasing ∆x and ∆t, from (∆x)₁ and (∆t)₁ to (∆x)₂ and (∆t)₂, and so on, through divisions by the same integer K. Then,

(∆x)_l = K^{−l} (∆x)₀,  (∆t)_l = K^{−l} (∆t)₀    (2.55)

where l is an integer. Obviously, n = t/(∆t)_l → ∞ as l → ∞ for a given t. Note that μ remains the same. Then the wave solution m = K^l m₀ in the grid (∆x)_l, (∆t)_l amplifies or decays in n in exactly the same manner as the wave solution m = m₀ in the original grid does, insofar as only the product of m and ∆x matters for λ_m, as in our example (2.54), because m_l (∆x)_l = m₀ (∆x)₀. For the instability problem, n increases because t increases while ∆t and ∆x are fixed.
For the convergence problem, n increases because ∆t and ∆x decrease while t is fixed. In this way we can see that, when the initial condition is expressed by an absolutely convergent Fourier series, the discretization error is bounded as l → ∞ if and only if the scheme is stable. For a scheme to be convergent, then, stability is necessary. It appears that consistency and stability imply convergence, and a rigorous proof of this has been given for a limited class of differential equations (Lax's equivalence theorem).

Finally, we have the matrix method. The upstream scheme given by (2.16) (or by (2.22)) can be written in matrix form as:

u^{n+1} = A u^n    (2.56)

where u^n is the column vector (u_1^n, ..., u_j^n, ..., u_J^n)ᵀ and A is the J × J matrix

        | 1−μ   0    0   ...   0    μ  |
        |  μ   1−μ   0   ...   0    0  |
    A = |  0    μ   1−μ  ...   0    0  |
        | ...                  ...     |
        |  0    0    0   ...   μ   1−μ |

Here the cyclic boundary condition u_0^n = u_J^n has been assumed. The matrix method examines the eigenvalues of the matrix A, obtained from |A − λI| = 0. In general, this may be quite difficult. But for our example it is easy to find that

λ = 1 − μ + μ e^{−i2πm/J},  m = 0, 1, 2, ..., J − 1    (2.57)

This has a form similar to (2.48) and, therefore, we obtain 0 ≤ μ ≤ 1 as the stability condition. An advantage of the matrix method is that different boundary conditions can be directly included in the stability analysis.
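The eigenvalue computation can be checked numerically; a minimal sketch using numpy, with the grid size J and the value of μ chosen arbitrarily for illustration:

```python
import numpy as np

def upstream_matrix(J, mu):
    # Matrix A of (2.56): 1 - mu on the diagonal, mu on the subdiagonal,
    # with mu in the upper-right corner for the cyclic boundary condition.
    A = (1.0 - mu) * np.eye(J) + mu * np.eye(J, k=-1)
    A[0, -1] = mu
    return A

J, mu = 8, 0.5
eigs = np.linalg.eigvals(upstream_matrix(J, mu))

# (2.57): the eigenvalues are 1 - mu + mu*exp(-i*2*pi*m/J), m = 0, ..., J-1,
# the same form as the von Neumann amplification factor (2.48).
expected = 1.0 - mu + mu * np.exp(-2j * np.pi * np.arange(J) / J)
for lam in expected:
    assert np.min(np.abs(eigs - lam)) < 1e-10

# All moduli are <= 1 here, consistent with stability for 0 <= mu <= 1.
assert np.all(np.abs(eigs) <= 1.0 + 1e-12)
```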
Suppose that we are given a nonlinear partial differential equation and that we wish to use a finite difference approximation to it. The ordinary procedure would be as follows:
(1) Check consistency. The finite difference analog of the original equation must approach the original equation as the increments of the independent variables approach zero. This may not always be obvious when the original differential equation has a complicated form.
(2) Check the truncation error. Normally this is done by means of a Taylor series expansion. We are concerned with the lowest power of the grid interval of each independent variable appearing in the expansion. Since consistency itself means that the truncation error → 0 as ∆x, ∆t → 0, (1) is included in (2).
(3) Check stability for a simplified (linearized, constant coefficients) version of the equation.
(4) Finally, check the stability, if possible, when the nonlinear terms are retained. This may be done by the energy method. Otherwise, empirical tests are needed.