Differential Equations: Linear, Nonlinear, Ordinary, Partial

A.C. King, J. Billingham and S.R. Otto

Cambridge University Press
The Edinburgh Building, Cambridge, United Kingdom
Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo
Published in the United States of America by Cambridge University Press, New York

www.cambridge.org
Information on this title: www.cambridge.org/9780521816588

© Cambridge University Press 2003
First published in print format 2003
Available in hardback, paperback and eBook (NetLibrary) formats.

This book is in copyright. Subject to statutory exception and to the provision of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this book, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.

CONTENTS

10.5 Towards the Systematic Determination of Groups Under Which a First Order Equation is Invariant 265
10.6 Invariants for Second Order Differential Equations 266
10.7 Partial Differential Equations 270
11 Asymptotic Methods: Basic Ideas 274
11.1 Asymptotic Expansions 275
11.2 The Asymptotic Evaluation of Integrals 280
12 Asymptotic Methods: Differential Equations 303
12.1 An Instructive Analogy: Algebraic Equations 303
12.2 Ordinary Differential Equations 306
12.3 Partial Differential Equations 351
13 Stability, Instability and Bifurcations 372
13.1 Zero Eigenvalues and the Centre Manifold Theorem 372
13.2 Lyapunov's Theorems 381
13.3 Bifurcation Theory 388
14 Time-Optimal Control in the Phase Plane 417
14.1 Definitions 418
14.2 First Order Equations 418
14.3 Second Order Equations 422
14.4 Examples of Second Order Control Problems 426
14.5 Properties of the Controllable Set 429
14.6 The Controllability Matrix 433
14.7 The Time-Optimal Maximum Principle (TOMP) 436
15 An Introduction to Chaotic Systems 447
15.1 Three Simple Chaotic Systems 447
15.2 Mappings 452
15.3 The Poincaré Return Map 467
15.4 Homoclinic Tangles 472
15.5 Quantifying Chaos: Lyapunov Exponents and the Lyapunov Spectrum 484
Appendix 1 Linear Algebra 495
Appendix 2 Continuity and Differentiability 502
Appendix 3 Power Series 505
Appendix 4 Sequences of Functions 509
Appendix 5 Ordinary Differential Equations 511
Appendix 6 Complex Variables 517
Appendix 7 A Short Introduction to MATLAB 526
Bibliography 534
Index 536

Preface

When mathematical modelling is used to describe physical, biological or chemical phenomena, one of the most common results is either a differential equation or a system of differential equations, together with appropriate boundary and initial conditions. These differential equations may be ordinary or partial, and finding and interpreting their solution is at the heart of applied mathematics. A thorough introduction to differential equations is therefore a necessary part of the education of any applied mathematician, and this book is aimed at building up skills in this area.
For similar reasons, the book should also be of use to mathematically-inclined physicists and engineers. Although the importance of studying differential equations is not generally in question, exactly how the theory of differential equations should be taught, and what aspects should be emphasized, is more controversial. In our experience, textbooks on differential equations usually fall into one of two categories. Firstly, there is the type of textbook that emphasizes the importance of abstract mathematical results, proving each of its theorems with full mathematical rigour. Such textbooks are usually aimed at graduate students, and are inappropriate for the average undergraduate. Secondly, there is the type of textbook that shows the student how to construct solutions of differential equations, with particular emphasis on algorithmic methods. These textbooks often tackle only linear equations, and have no pretension to mathematical rigour. However, they are usually well-stocked with interesting examples, and often include sections on numerical solution methods.

In this textbook, we steer a course between these two extremes, starting at the level of preparedness of a typical, but well-motivated, second year undergraduate at a British university. As such, the book begins in an unsophisticated style with the clear objective of obtaining quantitative results for a particular linear ordinary differential equation. The text is, however, written in a progressive manner, with the aim of developing a deeper understanding of ordinary and partial differential equations, including conditions for the existence and uniqueness of solutions, solutions by group theoretical and asymptotic methods, the basic ideas of control theory, and nonlinear systems, including bifurcation theory and chaos. The emphasis of the book is on analytical and asymptotic solution methods. However, where appropriate, we have supplemented the text by including numerical solutions and graphs produced using MATLAB (a registered trademark of The MathWorks, Inc.), version 6. We assume some knowledge of MATLAB (summarized in Appendix 7), but explain any nontrivial aspects as they arise. Where mathematical rigour is required, we have presented the appropriate analysis, on the basis that the student has taken first courses in analysis and linear algebra. We have, however, avoided any functional analysis. Most of the material in the book has been taught by us in courses for undergraduates at the University of Birmingham. This has given us some insight into what students find difficult, and, as a consequence, what needs to be emphasized and re-iterated.

The book is divided into two parts. In the first of these, we tackle linear differential equations. The first three chapters are concerned with variable coefficient, linear, second order ordinary differential equations, emphasizing the methods of reduction of order and variation of parameters, and series solution by the method of Frobenius. In particular, we discuss Legendre functions (Chapter 2) and Bessel functions (Chapter 3) in detail, and motivate this by giving examples of how they arise in real modelling problems. These examples lead to partial differential equations, and we use separation of variables to obtain Legendre's and Bessel's equations. In Chapter 4, the emphasis is on boundary value problems, and we show how these differ from initial value problems.
We introduce Sturm–Liouville theory in this chapter, and prove various results on eigenvalue problems. The next two chapters of the first part of the book are concerned with Fourier series, and Fourier and Laplace transforms. We discuss in detail the convergence of Fourier series, since the analysis involved is far more straightforward than that associated with other basis functions. Our approach to Fourier transforms involves a short introduction to the theory of generalized functions. The advantage of this approach is that a discussion of what types of function possess a Fourier transform is straightforward, since all generalized functions possess a Fourier transform. We show how Fourier transforms can be used to construct the free space Green's function for both ordinary and partial differential equations. We also use Fourier transforms to derive the solutions of the Dirichlet and Neumann problems for Laplace's equation. Our discussion of the Laplace transform includes an outline proof of the inversion theorem, and several examples of physical problems, for example involving diffusion, that can be solved by this method. In Chapter 7 we discuss the classification of linear, second order partial differential equations, emphasizing the reasons why the canonical examples of elliptic, parabolic and hyperbolic equations, namely Laplace's equation, the diffusion equation and the wave equation, have the properties that they do. We also consider complex variable methods for solving Laplace's equation, emphasizing their application to problems in fluid mechanics.

The second part of the book is concerned with nonlinear problems and more advanced techniques. Although we have used a lot of the material in Chapters 9 and 14 (phase plane techniques and control theory) in a course for second year undergraduates, the bulk of the material here is aimed at third year students. We begin in Chapter 8 with a brief introduction to the rigorous analysis of ordinary differential equations. Here the emphasis is on existence, uniqueness and comparison theorems. In Chapter 9 we introduce the phase plane and its associated techniques. This is the first of three chapters (the others being Chapters 13 and 15) that form an introduction to the theory of nonlinear ordinary differential equations. [...]

CHAPTER ONE
Variable Coefficient, Second Order, Linear, Ordinary Differential Equations

Many physical, chemical and biological systems can be described using mathematical models. Once the model is formulated, we usually need to solve a differential equation in order to predict and quantify the features of the system being modelled. As a precursor to this, we consider linear, second order ordinary differential equations of the form

$$P(x)\frac{d^2y}{dx^2} + Q(x)\frac{dy}{dx} + R(x)y = F(x),$$

with $P(x)$, $Q(x)$ and $R(x)$ finite polynomials that contain no common factor. This equation is inhomogeneous and has variable coefficients. The form of these polynomials varies according to the underlying physical problem that we are studying. However, we will postpone any discussion of the physical origin of such equations until we have considered some classical mathematical models in Chapters 2 and 3. After dividing through by $P(x)$, we obtain the more convenient, equivalent form,

$$\frac{d^2y}{dx^2} + a_1(x)\frac{dy}{dx} + a_0(x)y = f(x). \quad (1.1)$$

This process is mathematically legitimate, provided that $P(x) \neq 0$. If $P(x_0) = 0$ at some point $x = x_0$, it is not legitimate, and we call $x_0$ a singular point of the equation.
If $P(x_0) \neq 0$, $x_0$ is a regular or ordinary point of the equation. If $P(x) \neq 0$ for all points $x$ in the interval where we want to solve the equation, we say that the equation is nonsingular, or regular, on the interval.

We usually need to solve (1.1) subject to either initial conditions of the form $y(a) = \alpha$, $y'(a) = \beta$, or boundary conditions, of which $y(a) = \alpha$ and $y(b) = \beta$ are typical examples. It is worth reminding ourselves that, given the ordinary differential equation and initial conditions (an initial value problem), the objective is to determine the solution for other values of $x$, typically $x > a$, as illustrated in Figure 1.1. As an example, consider a projectile. The initial conditions are the position of the projectile and the speed and angle to the horizontal at which it is fired. We then want to know the path of the projectile, given these initial conditions. For initial value problems of this form, it is possible to show that:

(i) If $a_1(x)$, $a_0(x)$ and $f(x)$ are continuous on some open interval $I$ that contains the initial point $a$, a unique solution of the initial value problem exists on the interval $I$, as we shall demonstrate in Chapter 8.

[Fig. 1.1. An initial value problem: calculate the solution for $x > a$, given $y(a) = \alpha$ and $y'(a) = \beta$.]

(ii) The structure of the solution of the initial value problem is of the form

$$y = \underbrace{A\,u_1(x) + B\,u_2(x)}_{\text{complementary function}} + \underbrace{G(x)}_{\text{particular integral}},$$

where $A$, $B$ are constants that are fixed by the initial conditions, and $u_1(x)$ and $u_2(x)$ are linearly independent solutions of the corresponding homogeneous problem $y'' + a_1(x)y' + a_0(x)y = 0$.

These results can be proved rigorously, but nonconstructively, by studying the operator

$$Ly \equiv \frac{d^2y}{dx^2} + a_1(x)\frac{dy}{dx} + a_0(x)y,$$

and regarding $L : C^2(I) \to C^0(I)$ as a linear transformation from the space of twice-differentiable functions defined on the interval $I$ to the space of continuous functions defined on $I$. The solutions of the homogeneous equation are elements of the null space of $L$. This subspace is completely determined once its basis is known. The solution of the inhomogeneous problem, $Ly = f$, is then given formally as $y = L^{-1}f$. Unfortunately, if we actually want to construct the solution of a particular equation, there is a lot more work to do. Before we try to construct the general solution of the inhomogeneous initial value problem, we will outline a series of subproblems that are more tractable.

1.1 The Method of Reduction of Order

As a first simplification we discuss the solution of the homogeneous differential equation

$$\frac{d^2y}{dx^2} + a_1(x)\frac{dy}{dx} + a_0(x)y = 0, \quad (1.2)$$

on the assumption that we know one solution, say $y(x) = u_1(x)$, and only need to find the second solution. We will look for a solution of the form $y(x) = U(x)u_1(x)$. Differentiating $y(x)$ using the product rule gives

$$\frac{dy}{dx} = \frac{dU}{dx}u_1 + U\frac{du_1}{dx}, \qquad \frac{d^2y}{dx^2} = \frac{d^2U}{dx^2}u_1 + 2\frac{dU}{dx}\frac{du_1}{dx} + U\frac{d^2u_1}{dx^2}.$$

If we substitute these expressions into (1.2) we obtain

$$\frac{d^2U}{dx^2}u_1 + 2\frac{dU}{dx}\frac{du_1}{dx} + U\frac{d^2u_1}{dx^2} + a_1(x)\left(\frac{dU}{dx}u_1 + U\frac{du_1}{dx}\right) + a_0(x)Uu_1 = 0.$$

We can now collect terms to get

$$U\left(\frac{d^2u_1}{dx^2} + a_1(x)\frac{du_1}{dx} + a_0(x)u_1\right) + u_1\frac{d^2U}{dx^2} + \frac{dU}{dx}\left(2\frac{du_1}{dx} + a_1u_1\right) = 0.$$

Now, since $u_1(x)$ is a solution of (1.2), the term multiplying $U$ is zero. We have therefore obtained a differential equation for $dU/dx$, and, by defining $Z = dU/dx$, have

$$u_1\frac{dZ}{dx} + Z\left(2\frac{du_1}{dx} + a_1u_1\right) = 0.$$
Dividing through by $Zu_1$ we have

$$\frac{1}{Z}\frac{dZ}{dx} + \frac{2}{u_1}\frac{du_1}{dx} + a_1 = 0,$$

which can be integrated directly to yield

$$\log|Z| + 2\log|u_1| + \int^x a_1(s)\,ds = C,$$

where $s$ is a dummy variable, for some constant $C$. Thus

$$Z = \frac{c}{u_1^2}\exp\left\{-\int^x a_1(s)\,ds\right\} = \frac{dU}{dx},$$

where $c = e^C$. This can then be integrated to give

$$U(x) = \int^x \frac{c}{u_1^2(t)}\exp\left\{-\int^t a_1(s)\,ds\right\}dt + \tilde{c},$$

for some constant $\tilde{c}$. The solution is therefore

$$y(x) = u_1(x)\left[\tilde{c} + c\int^x \frac{1}{u_1^2(t)}\exp\left\{-\int^t a_1(s)\,ds\right\}dt\right]. \quad (1.3)$$

1.2 The Method of Variation of Parameters

To solve the inhomogeneous equation (1.1), we look for a solution of the form $y = c_1(x)u_1(x) + c_2(x)u_2(x)$, where $u_1$ and $u_2$ are linearly independent solutions of the homogeneous equation. This leads to the pair of equations $c_1'u_1 + c_2'u_2 = 0$ and $c_1'u_1' + c_2'u_2' = f$, in which the determinant

$$W(x) = u_1u_2' - u_1'u_2$$

is called the Wronskian. These expressions can be integrated to give

$$c_1 = \int^x \frac{-f(s)u_2(s)}{W(s)}\,ds + A, \qquad c_2 = \int^x \frac{f(s)u_1(s)}{W(s)}\,ds + B.$$

We can now write down the solution of the entire problem as

$$y(x) = u_1(x)\int^x \frac{-f(s)u_2(s)}{W(s)}\,ds + u_2(x)\int^x \frac{f(s)u_1(s)}{W(s)}\,ds + Au_1(x) + Bu_2(x).$$

The particular integral is therefore

$$y(x) = \int^x f(s)\left\{\frac{u_1(s)u_2(x) - u_1(x)u_2(s)}{W(s)}\right\}ds. \quad (1.6)$$

This is called the variation of parameters formula.

Example. Consider the equation

$$\frac{d^2y}{dx^2} + y = x\sin x.$$

The homogeneous form of this equation has constant coefficients, with solutions $u_1(x) = \cos x$, $u_2(x) = \sin x$. The variation of parameters formula then gives the particular integral as

$$y = \int^x s\sin s\left\{\frac{\cos s\sin x - \cos x\sin s}{1}\right\}ds,$$

since

$$W = \begin{vmatrix}\cos x & \sin x\\ -\sin x & \cos x\end{vmatrix} = \cos^2 x + \sin^2 x = 1.$$

We can split the particular integral into two integrals as

$$y(x) = \sin x\int^x s\sin s\cos s\,ds - \cos x\int^x s\sin^2 s\,ds = \frac{1}{2}\sin x\int^x s\sin 2s\,ds - \frac{1}{2}\cos x\int^x s(1 - \cos 2s)\,ds.$$

Using integration by parts, we can evaluate this, and find that

$$y(x) = -\frac{1}{4}x^2\cos x + \frac{1}{4}x\sin x + \frac{1}{8}\cos x$$

is the required particular integral. The general solution is therefore

$$y = c_1\cos x + c_2\sin x - \frac{1}{4}x^2\cos x + \frac{1}{4}x\sin x.$$

Although we have given a rational derivation of the reduction of order and variation of parameters formulae, we have made no comment so far about why the procedures we used in the derivation should work at all! It turns out that this has a close connection with the theory of continuous groups, which we will investigate in Chapter 10.

1.2.1 The Wronskian

Before we carry on, let's pause to discuss some further properties of the Wronskian. Recall that if $V$ is a vector space over $\mathbb{R}$, then two elements $\mathbf{v}_1, \mathbf{v}_2 \in V$ are linearly dependent if $\exists\,\alpha_1, \alpha_2 \in \mathbb{R}$, with $\alpha_1$ and $\alpha_2$ not both zero, such that $\alpha_1\mathbf{v}_1 + \alpha_2\mathbf{v}_2 = 0$. Now let $V = C^1(a, b)$ be the set of once-differentiable functions over the interval $a < x < b$. If $u_1, u_2 \in C^1(a, b)$ are linearly dependent, $\exists\,\alpha_1, \alpha_2 \in \mathbb{R}$ such that $\alpha_1u_1(x) + \alpha_2u_2(x) = 0$ $\forall x \in (a, b)$. Notice that, by direct differentiation, this also gives $\alpha_1u_1'(x) + \alpha_2u_2'(x) = 0$, or, in matrix form,

$$\begin{pmatrix}u_1(x) & u_2(x)\\ u_1'(x) & u_2'(x)\end{pmatrix}\begin{pmatrix}\alpha_1\\ \alpha_2\end{pmatrix} = \begin{pmatrix}0\\ 0\end{pmatrix}.$$

These are homogeneous equations of the form $A\mathbf{x} = \mathbf{0}$, which only have nontrivial solutions if $\det(A) = 0$, that is,

$$W = \begin{vmatrix}u_1(x) & u_2(x)\\ u_1'(x) & u_2'(x)\end{vmatrix} = u_1u_2' - u_1'u_2 = 0.$$

In other words, the Wronskian of two linearly dependent functions is identically zero on $(a, b)$. The contrapositive of this result is that if $W \not\equiv 0$ on $(a, b)$, then $u_1$ and $u_2$ are linearly independent on $(a, b)$.

Example. The functions $u_1(x) = x^2$ and $u_2(x) = x^3$ are linearly independent on the interval $(-1, 1)$. To see this, note that, since $u_1(x) = x^2$, $u_2(x) = x^3$, $u_1'(x) = 2x$ and $u_2'(x) = 3x^2$, the Wronskian of these two functions is

$$W = \begin{vmatrix}x^2 & x^3\\ 2x & 3x^2\end{vmatrix} = 3x^4 - 2x^4 = x^4.$$

This quantity is not identically zero, and hence $x^2$ and $x^3$ are linearly independent on $(-1, 1)$.
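These calculations are easy to sanity-check by machine. The following lines are a minimal sketch, assuming MATLAB's Symbolic Math Toolbox is available; they verify the Wronskian of $x^2$ and $x^3$ and the particular integral found above for $y'' + y = x\sin x$.

% A minimal sketch, assuming the Symbolic Math Toolbox is available;
% it verifies two results from this section.
syms x
u1 = x^2; u2 = x^3;
W = simplify(u1*diff(u2, x) - diff(u1, x)*u2)
% W = x^4, in agreement with the example above
yp = -x^2*cos(x)/4 + x*sin(x)/4 + cos(x)/8;
residual = simplify(diff(yp, x, 2) + yp - x*sin(x))
% residual = 0, confirming the particular integral of y'' + y = x*sin(x)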
Example. The functions $u_1(x) = f(x)$ and $u_2(x) = kf(x)$, with $k$ a constant, are linearly dependent on any interval, since their Wronskian is

$$W = \begin{vmatrix}f & kf\\ f' & kf'\end{vmatrix} = 0.$$

If the functions $u_1$ and $u_2$ are solutions of (1.2), we can show by differentiating $W = u_1u_2' - u_1'u_2$ directly that

$$\frac{dW}{dx} + a_1(x)W = 0.$$

This first order differential equation has solution

$$W(x) = W(x_0)\exp\left\{-\int_{x_0}^x a_1(t)\,dt\right\}, \quad (1.7)$$

which is known as Abel's formula. This gives us an easy way of finding the Wronskian of the solutions of any second order differential equation without having to construct the solutions themselves.

Example. Consider the equation

$$y'' + \frac{1}{x}y' + \left(1 - \frac{1}{x^2}\right)y = 0.$$

Using Abel's formula, this has Wronskian

$$W(x) = W(x_0)\exp\left\{-\int_{x_0}^x \frac{dt}{t}\right\} = \frac{x_0W(x_0)}{x} = \frac{A}{x}$$

for some constant $A$. To find this constant, it is usually necessary to know more about the solutions $u_1(x)$ and $u_2(x)$. We will describe a technique for doing this in Section 1.3. We end this section with a couple of useful theorems.

Theorem 1.1 If $u_1$ and $u_2$ are linearly independent solutions of the homogeneous, nonsingular ordinary differential equation (1.2), then the Wronskian is either strictly positive or strictly negative.

Proof From Abel's formula, and since the exponential function does not change sign, the Wronskian is identically positive, identically negative or identically zero. We just need to exclude the possibility that $W$ is ever zero. Suppose that $W(x_1) = 0$. The vectors

$$\begin{pmatrix}u_1(x_1)\\ u_1'(x_1)\end{pmatrix} \quad\text{and}\quad \begin{pmatrix}u_2(x_1)\\ u_2'(x_1)\end{pmatrix}$$

are then linearly dependent, and hence $u_1(x_1) = ku_2(x_1)$ and $u_1'(x_1) = ku_2'(x_1)$ for some constant $k$. The function $u(x) = u_1(x) - ku_2(x)$ is also a solution of (1.2) by linearity, and satisfies the initial conditions $u(x_1) = 0$, $u'(x_1) = 0$. Since (1.2) has a unique solution, the obvious solution, $u \equiv 0$, is the only solution. This means that $u_1 \equiv ku_2$. Hence $u_1$ and $u_2$ are linearly dependent, a contradiction.

The nonsingularity of the differential equation is crucial here. If we consider the equation $x^2y'' - 2xy' + 2y = 0$, which has $u_1(x) = x^2$ and $u_2(x) = x$ as its linearly independent solutions, the Wronskian is $-x^2$, which vanishes at $x = 0$. This is because the coefficient of $y''$ also vanishes at $x = 0$.

Theorem 1.2 (The Sturm separation theorem) If $u_1(x)$ and $u_2(x)$ are the linearly independent solutions of a nonsingular, homogeneous equation, (1.2), then [...]

Since the last two summations involve identical powers of $x$, we can combine them to obtain

$$a_0\left(c^2 - \frac{1}{4}\right)x^c + a_1\left\{(c+1)^2 - \frac{1}{4}\right\}x^{c+1} + \sum_{n=2}^{\infty}\left[a_n\left\{(n+c)^2 - \frac{1}{4}\right\} + a_{n-2}\right]x^{n+c} = 0. \quad (1.13)$$

Although the operations above are straightforward, we need to take some care to avoid simple slips. Since (1.13) must hold for all values of $x$, the coefficient of each power of $x$ must be zero. The coefficient of $x^c$ is therefore

$$a_0\left(c^2 - \frac{1}{4}\right) = 0.$$

Up to this point, most Frobenius analysis is very similar. It is here that the different structures come into play. If we were to use the solution $a_0 = 0$, the series (1.8) would have $a_1x^{c+1}$ as its first term. This is just equivalent to increasing $c$ by 1. We therefore assume that $a_0 \neq 0$, which means that $c$ must satisfy $c^2 - \frac{1}{4} = 0$. This is called the indicial equation, and implies that $c = \pm\frac{1}{2}$. Now, progressing to the next term, proportional to $x^{c+1}$, we find that

$$a_1\left\{(c+1)^2 - \frac{1}{4}\right\} = 0.$$
Choosing c = 12 gives a1 = 0, and, if we were to do this, we would find that we had constructed a solution with one arbitrary constant. However, if we choose c = − 12 the indicial equation is satisfied for arbitrary values of a1, and a1 will act as the second arbitrary constant for the solution. In order to generate this more general solution, we therefore let c = − 12 . We now progress to the individual terms in the summation. The general term yields an {( n− 1 2 )2 − 1 4 } + an−2 = 0 for n = 2, 3, . . . . This is called a recurrence relation. We solve it by observation as follows. We start by rearranging to give an = − an−2 n(n− 1) . (1.14) By putting n = 2 in (1.14) we obtain a2 = − a0 2 · 1 . For n = 3, a3 = − a1 3 · 2 . 14 VARIABLE COEFFICIENT, SECOND ORDER DIFFERENTIAL EQUATIONS For n = 4, a4 = − a2 4 · 3 , and substituting for a2 in terms of a0 gives a4 = − 1 4 · 3 ( − a0 2 · 1 ) = a0 4 · 3 · 2 · 1 = a0 4! . Similarly for n = 5, using the expression for a3 in terms of a1, a5 = − a3 5 · 4 = − 1 5 · 4 ( − a1 3 · 2 ) = a1 5! . A pattern is emerging here and we propose that a2n = (−1)n a0 (2n)! , a2n+1 = (−1)n a1 (2n + 1)! . (1.15) This can be proved in a straightforward manner by induction, although we will not dwell upon the details here.† We can now deduce the full solution. Starting from (1.8), we substitute c = − 12 , and write out the first few terms in the summation y = x−1/2(a0 + a1x + a2x2 + · · · ). Now, using the forms of the even and odd coefficients given in (1.15), y = x−1/2 ( a0 + a1x− a0x 2 2! − a1x 3 3! + a0x 4 4! + a1x 5 5! + · · · ) . This series splits naturally into two proportional to a0 and a1, namely y = x−1/2a0 ( 1 − x 2 2! + x4 4! − · · · ) + x−1/2a1 ( x− x 3 3! + x5 5! − · · · ) . The solution is therefore y(x) = a0 cosx x1/2 + a1 sinx x1/2 , since we can recognize the Taylor series expansions for sine and cosine. This particular differential equation is an example of the use of the method of Frobenius, formalized by Frobenius General Rule I If the indicial equation has two distinct roots, c = α, β (α < β), whose difference is an in- teger, and one of the coefficients of xk becomes indeterminate on putting c = α, both solutions can be generated by putting c = α in the recur- rence relation. † In the usual way, we must show that (1.15) is true for n = 0 and that, when the value of a2n+1 is substituted into the recurrence relation, we obtain a2(n+1)+1, as given by substituting n+ 1 for n in (1.15). 1.3 SOLUTION BY POWER SERIES: THE METHOD OF FROBENIUS 15 In the above example the indicial equation was c2 − 14 = 0, which has solutions c = ± 12 , whose difference is an integer. The coefficient of xc+1 was a1 { (c + 1)2 − 14 } = 0. When we choose the lower of the two values (c = − 12 ) this expression does not give us any information about the constant a1, in other words a1 is indeterminate. 1.3.2 The Roots of the Indicial Equation Differ by a Noninteger Quantity We now consider the differential equation 2x(1 − x)d 2y dx2 + (1 − x)dy dx + 3y = 0. (1.16) As before, let’s assume that the solution can be written as the power series (1.8). As in the previous example, this can be differentiated and substituted into the equation to yield 2x(1 − x) ∞∑ n=0 an(n + c)(n + c− 1)xn+c−2 + (1 − x) ∞∑ n=0 an(n + c)xn+c−1 +3 ∞∑ n=0 anx n+c = 0. The various terms can be multiplied out, which gives us ∞∑ n=0 an(n + c)(n + c− 1)2xn+c−1 − ∞∑ n=0 an(n + c)(n + c− 1)2xn+c + ∞∑ n=0 an(n + c)xn+c−1 − ∞∑ n=0 an(n + c)xn+c + 3 ∞∑ n=0 anx n+c = 0. 
Collecting similar terms gives ∞∑ n=0 an{2(n + c)(n + c− 1) + (n + c)}xn+c−1 + ∞∑ n=0 an{3 − 2(n + c)(n + c− 1) − (n + c)}xn+c = 0, and hence ∞∑ n=0 an(n + c)(2n + 2c− 1)xn+c−1 + ∞∑ n=0 an{3 − (n + c)(2n + 2c− 1)}xn+c = 0. We now extract the first term from the left hand summation so that both summa- tions start with a term proportional to xc. This gives a0c(2c− 1)xc−1 + ∞∑ n=1 an(n + c)(2n + 2c− 1)xn+c−1 18 VARIABLE COEFFICIENT, SECOND ORDER DIFFERENTIAL EQUATIONS Fig. 1.2. The solution of (1.16) given by (1.18). Case II: c = 12 In this case, we simplify the recurrence relation (1.17) to give an = an−1 {( n− 12 ) (2n− 2) − 3( n + 12 ) 2n } = an−1 ( 2n2 − 3n− 2 2n2 + n ) = an−1 { (2n + 1)(n− 2) n(2n + 1) } = an−1 ( n− 2 n ) . We again recall that this relation holds for n  1 and start with n = 1, which gives a1 = a0(−1). Substituting n = 2 gives a2 = 0 and, since all successive ai will be written in terms of a2, ai = 0 for i = 2, 3, . . . . The second solution of the equation is therefore y = Bx1/2(1−x). We can now use this simple solution in the reduction of order formula, (1.3), to determine an analytical formula for the first solution, (1.18). For example, for 0  x  1, we find that (1.18) can be written as y = −1 6 A [ 3x− 2 + 3x1/2 (1 − x) log { 1 + x1/2 (1 − x)1/2 }] . This expression has a logarithmic singularity in its derivative at x = 1, which explains why the radius of convergence of the power series solution (1.18) is |x|  1. This differential equation is an example of the second major case of the method of Frobenius, formalized by 1.3 SOLUTION BY POWER SERIES: THE METHOD OF FROBENIUS 19 Frobenius General Rule II If the indicial equation has two distinct roots, c = α, β (α < β), whose difference is not an integer, the general solution of the equation is found by successively substituting c = α then c = β into the general recurrence relation. 1.3.3 The Roots of the Indicial Equation are Equal Let’s try to determine the two solutions of the differential equation x d 2y dx2 + (1 + x) dy dx + 2y = 0. We substitute in the standard power series, (1.8), which gives x ∞∑ n=0 an(n + c)(n + c− 1)xn+c−2 + (1 + x) ∞∑ n=0 an(n + c)xn+c−1 +2 ∞∑ n=0 anx n+c = 0. This can be simplified to give ∞∑ n=0 an(n + c)2xn+c−1 + ∞∑ n=0 an(n + c + 2)xn+c = 0. We can extract the first term from the left hand summation to give a0c 2xc−1 + ∞∑ n=1 an(n + c)2xn+c−1 + ∞∑ n=0 an(n + c + 2)xn+c = 0. Now shifting the series using m = n + 1 (and subsequently changing dummy vari- ables from m to n) we have a0c 2xc−1 + ∞∑ n=1 {an(n + c)2 + an−1(n + c + 1)}xn+c = 0, (1.19) where we have combined the two summations. The indicial equation is c2 = 0 which has a double root at c = 0. We know that there must be two solutions, but it appears that there is only one available to us. For the moment let’s see how far we can get by setting c = 0. The recurrence relation is then an = −an−1 n + 1 n2 for n = 1, 2, . . . . When n = 1 we find that a1 = −a0 2 12 , 20 VARIABLE COEFFICIENT, SECOND ORDER DIFFERENTIAL EQUATIONS and with n = 2, a2 = −a1 3 22 = a0 3 · 2 12 · 22 . Using n = 3 gives a3 = −a2 4 32 = −a0 4 · 3 · 2 12 · 22 · 32 , and we conclude that an = (−1)n (n + 1)! (n!)2 a0 = (−1)n n + 1 n! a0. One solution is therefore y = a0 ∞∑ n=0 (−1)n (n + 1) n! xn, which can also be written as y = a0 { x ∞∑ n=1 (−1)nxn−1 (n− 1)! + ∞∑ n=0 (−1)nxn n! } = a0 { −x ∞∑ m=0 (−1)mxm m! + e−x } = a0(1 − x)e−x. This solution is one that we could not have readily determined simply by inspection. 
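As a quick check of this closed form, the following sketch (again assuming the Symbolic Math Toolbox is available) substitutes it back into the differential equation.

% A sketch, assuming the Symbolic Math Toolbox: confirm that
% y = (1 - x)*exp(-x) satisfies x*y'' + (1 + x)*y' + 2*y = 0.
syms x
y = (1 - x)*exp(-x);
residual = simplify(x*diff(y, x, 2) + (1 + x)*diff(y, x) + 2*y)
% residual = 0, so the Frobenius series does sum to (1 - x)*exp(-x)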
We could now use the method of reduction of order to find the second solution, but we will proceed with the method of Frobenius so that we can see how it works in this case. Consider (1.19), which we write out more fully as x d 2y dx2 + (1 + x) dy dx + 2y = a0c 2xc−1 + ∞∑ n=1 {an(n + c)2 + an−1(n + c + 1)}xn+c = 0. The best we can do at this stage is to set an(n+c)2 +an−1(n+c+1) = 0 for n  1, as this gets rid of most of the terms. This gives us an as a function of c for n  1, and leaves us with x d 2y dx2 + (1 + x) dy dx + 2y = a0c2xc−1. (1.20) Let’s now take a partial derivative with respect to c, where we regard y as a function of both x and c, making use of d dx = ∂ ∂x , ∂ ∂c ( ∂y ∂x ) = ∂ ∂x ( ∂y ∂c ) . This gives x ∂2 ∂x2 ( ∂y ∂c ) + (1 + x) ∂ ∂x ( ∂y ∂c ) + 2 ( ∂y ∂c ) = a0 ∂ ∂c (c2xc−1). 1.3 SOLUTION BY POWER SERIES: THE METHOD OF FROBENIUS 23 Simplifying, we obtain dan dc ∣∣∣∣ c=0 = (−1)na0 (n + 1) n! (φ(n + 1) − 2φ(n) − 1) , where we have introduced the notation φ(n) ≡ n∑ m=1 1 m . (1.21) The second solution is therefore y = a0 [ ∞∑ n=0 (−1)n (n + 1) n! {φ(n + 1) − 2φ(n) − 1}xn + ∞∑ n=0 (−1)n (n + 1) n! xn log x ] . This methodology is formalized in Frobenius General Rule III If the indicial equation has a double root, c = α, one solution is obtained by putting c = α into the recurrence relation. The second independent solution is (∂y/∂c)c=α where an = an(c) for the calculation. There are several other cases that can occur in the method of Frobenius, which, due to their complexity, we will not go into here. One method of dealing with these is to notice that the method outlined in this chapter always produces a solution of the form y = u1(x) = ∑∞ n=0 anx n+c. This can be used in the reduction of order formula, (1.3), to find the second linearly independent solution. Of course, it is rather difficult to get all of u2(x) this way, but the first few terms are usually easy enough to compute by expanding for small x. Having got these, we can assume a general series form for u2(x), and find the coefficients in the usual way. Example Let’s try to solve x2y′′ + xy′ + (x2 − 1)y = 0, (1.22) using the method of Frobenius. If we look for a solution in the usual form, y =∑∞ n=0 anx n+c, we find that a0(c2 − 1)xc + a1 { (1 + c)2 − 1 } xc+1 + ∞∑ k=2 [ ak { (k + c)2 − 1 } + ak−2 ] xk+c = 0. The indicial equation has roots c = ±1, and, by choosing either of these, we find that a1 = 0. If we now look for the general solution of ak = − ak−2 (k + c)2 − 1 , we find that a2 = − a0 (2 + c)2 − 1 = − a0 (1 + c)(3 + c) , 24 VARIABLE COEFFICIENT, SECOND ORDER DIFFERENTIAL EQUATIONS a4 = − a2 (4 + c)2 − 1 = a0 (1 + c)(3 + c)2(5 + c) , and so on. This gives us a solution of the form y(x, c) = a0xc { 1 − x 2 (1 + c)(3 + c) + x4 (1 + c)(3 + c)2(5 + c) − · · · } . Now, by choosing c = 1, we obtain one of the linearly independent solutions of (1.22), u1(x) = y(x, 1) = x ( 1 − x 2 2 · 4 + x4 2 · 42 · 6 − · · · ) . However, if c = −1, the coefficients a2n for n = 1, 2, . . . are singular. In order to find the structure of the second linearly independent solution, we use the reduction of order formula, (1.3). Substituting for u1(x) gives u2(x) = x ( 1 − x 2 8 + · · · )∫ x 1 t2 ( 1 − t 2 8 + · · · )2 exp ( − ∫ t 1 s ds ) dt = x ( 1 − x 2 8 + · · · )∫ x 1 t2 ( 1 + t2 4 + · · · ) 1 t dt = x ( 1 − x 2 8 + · · · )( − 1 2x2 + 1 4 log x + · · · ) . The second linearly independent solution of (1.22) therefore has the structure u2(x) = 1 4 u1(x) log x− 1 2x v(x), where v(x) = 1 + b2x2 + b4x4 + · · · . 
If we assume a solution structure of this form and substitute it into (1.22), it is straightforward to pick off the coefficients b2n. Finally, note that we showed in Section 1.2.1 that the Wronskian of (1.22) is W = A/x for some constant A. Now, since we know that u1 = x + · · · and u2 = −1/2x+ · · · , we must have W = x(1/2x2)+1/2x+ · · · = 1/x+ · · · , and hence A = 1. 1.3.4 Singular Points of Differential Equations In this section, we give some definitions and a statement of a theorem that tells us when the method of Frobenius can be used, and for what values of x the infinite series will converge. We consider a second order, variable coefficient differential equation of the form P (x) d 2y dx2 + Q(x) dy dx + R(x)y = 0. (1.23) Before we proceed, we need to further refine our ideas about singular points. If x0 is a singular point and (x−x0)Q(x)/P (x) and (x−x0)2R(x)/P (x) have convergent 1.3 SOLUTION BY POWER SERIES: THE METHOD OF FROBENIUS 25 Taylor series expansions about x0, then x = x0 is called a regular singular point; otherwise, x0 is called an irregular singular point. Example Consider the equation x(x− 1)d 2y dx2 + (1 + x) dy dx + y = 0. (1.24) There are singular points where x2 − x = 0, at x = 0 and x = 1. Let’s start by looking at the singular point x = 0. Consider the expression xQ P = x(1 + x) x2 − x = (1 + x) (x− 1) = −(1 + x)(1 − x) −1. Upon expanding (1 − x)−1 using the binomial expansion we have xQ P = −(1 + x)(1 + x + x2 + · · · + xn + · · · ), which can be multiplied out to give xQ P = −1 − 2x + · · · . This power series is convergent provided |x| < 1 (by considering the binomial expansion used above). Now x2R P = x2 x2 − x = x x− 1 = −x(1 − x) −1. Again, using the binomial expansion, which is convergent provided |x| < 1, x2R P = −x(1 + x + x2 + · · · ) = −x− x2 − x3 − · · · . Since xQ/P and x2R/P have convergent Taylor series about x = 0, this is a regular singular point. Now consider the other singular point, x = 1. We note that (x− 1)Q P = (x− 1)(1 + x) (x2 − x) = (x− 1)(1 + x) x(x− 1) = 1 + x x . At this point we need to recall that we want information near x = 1, so we rewrite x as 1 − (1 − x), and hence (x− 1)Q P = 2 − (1 − x) {1 − (1 − x)} . Expanding in powers of (1 − x) using the binomial expansion gives (x− 1)Q P = 2 ( 1 − 1 − x 2 ) {1 + (1 − x) + (1 − x)2 + · · · }, 28 VARIABLE COEFFICIENT, SECOND ORDER DIFFERENTIAL EQUATIONS For example, Bessel’s equation, which we will study in Chapter 3, has P (x) = x2, Q(x) = x and R(x) = x2 − ν2, where ν is a constant, and hence has a regular singular point at x = 0. In this case, P̂ (s) = s2, Q̂(s) = s and R̂(s) = 1/s2 − ν2. Since P̂ (0) = 0, Bessel’s equation also has a singular point at infinity. In addition, s2R̂(s)/P̂ (s) = ( 1 − s2ν ) /s, which is singular at s = 0, and we conclude that the point at infinity is an irregular singular point. We will study the behaviour of solutions of Bessel’s equation in the neighbourhood of the point at infinity in Section 12.2.7. Exercises 1.1 Show that the functions u1 are solutions of the differential equations given below. Use the reduction of order method to find the second independent solution, u2. (a) u1 = ex, (x− 1) d 2y dx2 − xdy dx + y = 0, (b) u1 = x−1 sinx, x d 2y dx2 + 2 dy dx + xy = 0. 1.2 Find the Wronskian of (a) x, x2, (b) ex, e−x, (c) x cos(log |x|), x sin(log |x|). Which of these pairs of functions are linearly independent on the interval [−1, 1]? 
1.3 Find the general solution of (a) d 2y dx2 − 2dy dx + y = x3/2ex, (b) d 2y dx2 + 4y = 2 sec 2x, (c) d 2y dx2 + 1 x dy dx + ( 1 − 1 4x2 ) y = x, (d) d 2y dx2 + y = f(x), subject to y(0) = y′(0) = 0. 1.4 If u1 and u2 are linearly independent solutions of y′′ + p(x)y′ + q(x)y = 0 and y is any other solution, show that the Wronskian of {y, u1, u2}, W (x) = ∣∣∣∣∣∣ y u1 u2 y′ u′1 u ′ 2 y′′ u′′1 u ′′ 2 ∣∣∣∣∣∣ , is zero everywhere. Hence find a second order differential equation which has solutions y = x and y = log x. EXERCISES 29 1.5 Find the Wronskian of the solutions of the differential equation (1−x2)y′′−2xy′+2y = 0 to within a constant. Use the method of Frobenius to determine this constant. 1.6 Find the two linearly independent solutions of each of the differential equa- tions (a) x2 d 2y dx2 + x ( x− 1 2 ) dy dx + 1 2 y = 0, (b) x2 d 2y dx2 + x (x + 1) dy dx − y = 0, using the method of Frobenius. 1.7 Show that the indicial equation for x(1 − x)d 2y dx2 + (1 − 5x)dy dx − 4y = 0 has a double root. Obtain one series solution of the equation in the form y = A ∞∑ n=1 n2xn−1 = Au1(x). What is the radius of convergence of this series? Obtain the second solution of the equation in the form u2(x) = u1(x) log x + u1(x) (−4x + · · · ) . 1.8 (a) The points x = ±1 are singular points of the differential equation (x2 − 1)2 d 2y dx2 + (x + 1) dy dx − y = 0. Show that one of them is a regular singular point and that the other is an irregular singular point. (b) Find two linearly independent Frobenius solutions of x d 2y dx2 + 4 dy dx − xy = 0, which are valid for x > 0. 1.9 Find the general solution of the differential equation 2x d 2y dx2 + (1 + x) dy dx − ky = 0 (where k is a real constant) in power series form. For which values of k is there a polynomial solution? 1.10 Let α, β, γ denote real numbers and consider the hypergeometric equation x(1 − x)d 2y dx2 + {γ − (α + β + 1)x}dy dx − αβy = 0. Show that x = 0 and x = 1 are regular singular points and obtain the roots of the indicial equation at the point x = 0. Show that if γ is not an integer, there are two linearly independent solutions of Frobenius form. Express a1 30 VARIABLE COEFFICIENT, SECOND ORDER DIFFERENTIAL EQUATIONS and a2 in terms of a0 for each of the solutions. 1.11 Show that each of the equations (a) x3 d 2y dx2 + x2 dy dx + y = 0, (b) x2 d 2y dx2 + dy dx − 2y = 0, has an irregular singular point at x = 0. Show that equation (a) has no solution of Frobenius type but that equation (b) does. Obtain this solution and hence find the general solution of equation (b). 1.12 Show that x = 0 is a regular singular point of the differential equation 2x2 d 2y dx2 + x(1 − x)dy dx − y = 0. Find two linearly independent Frobenius solutions and show that one of these solutions can be expressed in terms of elementary functions. Verify directly that this function satisfies the differential equation. 1.13 Find the two linearly independent solutions of the ordinary differential equation x(x− 1)y′′ + 3xy′ + y = 0 in the form of a power series. Hint: It is straightforward to find one solution, but you will need to use the reduction of order formula to determine the structure of the second solution. 1.14 ∗ Show that, if f(x) and g(x) are nontrivial solutions of the differential equations u′′ + p(x)u = 0 and v′′ + q(x)v = 0, and p(x)  q(x), f(x) vanishes at least once between any two zeros of g(x) unless p ≡ q and f = µg, µ ∈ R (this is known as the Sturm comparison theorem). 
1.15 ∗ Show that, if q(x)  0, no nontrivial solution of u′′ + q(x)u = 0 can have more than one zero. 2.1 DEFINITION OF THE LEGENDRE POLYNOMIALS, Pn(x) 33 then u0(x), v1(x), u2(x) and v3(x) are the first four polynomial solutions. It is convenient to write the solution in the form y = APn(x) + BQn(x), ↑ ↑ Polynomial Infinite series, of degree n converges for |x| < 1 where we have defined Pn(x) = un(x)/un(1) for n even and Pn(x) = vn(x)/vn(1) for n odd. The polynomials Pn(x) are called the Legendre polynomials and can be written as Pn(x) = m∑ r=0 (−1)r (2n− 2r)!x n−2r 2nr!(n− r)!(n− 2r)! , where m is the integer part of n/2. Note that by definition Pn(1) = 1. The first five of these are P0(x) = 1, P1(x) = x, P2(x) = 1 2 (3x2 − 1), P3(x) = 1 2 (5x3 − 3x), P4(x) = 35 8 x4 − 15 4 x2 + 3 8 . Graphs of these Legendre polynomials are shown in Figure 2.1, which was generated using the MATLAB script    x = linspace(-1,1,500); pout = []; for k = 1:4 p = legendre(k,x); p=p(1,:); pout = [pout; p]; end plot(x,pout(1,:),x,pout(2,:),’--’,... x,pout(3,:),’-.’,x,pout(4,:),’:’) legend(’P_1(x)’,’P_2(x)’,’P_3(x)’,’P_4(x)’,4),xlabel(’x’) Note that the MATLAB function legendre(n,x) generates both the Legendre polynomial of order n and the associated Legendre functions of orders 1 to n, which we will meet later, so we have to pick off the Legendre polynomial as the first row of the matrix p. Simple expressions for the Qn(x) are available for n = 0, 1, 2, 3 using the reduc- tion of order formula, (1.3). In particular Q0(x) = 1 2 log ( 1 + x 1 − x ) , Q1(x) = 1 2 x log ( 1 + x 1 − x ) − 1, Q2(x) = 1 4 (3x2 − 1) log ( 1 + x 1 − x ) − 3 2 x, Q3(x) = 1 4 (5x3 − 3x) log ( 1 + x 1 − x ) − 5 2 x2 + 2 3 . These functions are singular at x = ±1. Notice that part of the infinite series has 34 LEGENDRE FUNCTIONS Fig. 2.1. The Legendre polynomials P1(x), P2(x), P3(x) and P4(x). been summed to give us the logarithmic terms. Graphs of these functions are shown in Figure 2.2. Example Let’s try to find the general solution of (1 − x2)y′′ − 2xy′ + 2y = 1 1 − x2 , for −1 < x < 1. This is just an inhomogeneous version of Legendre’s equation of order one. The complementary function is yh = AP1(x) + BQ1(x). The variation of parameters formula, (1.6), then shows that the particular integral is yp = ∫ x P1(s)Q1(x) − P1(x)Q1(s) (1 − s2){P1(s)Q′1(s) − P ′1(s)Q1(s)} ds. We can considerably simplify this rather complicated looking result. Firstly, Abel’s formula, (1.7), shows that the Wronskian is W = P1(s)Q′1(s) − P ′1(s)Q1(s) = W0 1 − s2 . 2.2 THE GENERATING FUNCTION FOR Pn(x) 35 Fig. 2.2. The first four Legendre functions, Q0(x), Q1(x), Q2(x) and Q3(x). We can determine the constant W0 by considering the behaviour of W as s → 0. Since P1(s) = s, Q1(s) = 1 2 s log ( 1 + s 1 − s ) − 1, W = 1 + s2 + · · · for s 1. From the binomial theorem, 1/(1− s2) = 1 + s2 + · · · for s 1. We conclude that W0 = 1. This means that yp = Q1(x) ∫ x P1(s) ds− P1(x) ∫ x Q1(s) ds = 1 2 x2Q1(x) − x { 1 4 (x2 − 1) log ( 1 + x 1 − x ) − 1 2 x } . The general solution is this particular integral plus the complementary function (yp(x) + yh(x)). 2.2 The Generating Function for Pn(x) In order to make a more systematic study of the Legendre polynomials, it is helpful to introduce a generating function, G(x, t). This function is defined in such a way that the coefficients of the Taylor series of G(x, t) around t = 0 are Pn(x). We start with the assertion that G(x, t) = (1 − 2xt + t2)−1/2 = ∞∑ n=0 Pn(x)tn. 
(2.3) 38 LEGENDRE FUNCTIONS 2.3 Differential and Recurrence Relations Between Legendre Polynomials The generating function can also be used to derive recurrence relations between the various Pn(x). Starting with (2.3), we differentiate with respect to t, and find that (x− t)(1 − 2xt + t2)−3/2 = ∞∑ n=0 Pn(x)ntn−1. We now multiply through by (1 − 2xt + t2) to obtain (x− t)(1 − 2xt + t2)−1/2 = (1 − 2xt + t2) ∞∑ n=0 Pn(x)ntn−1, which leads to x ∞∑ n=0 Pn(x)tn − ∞∑ n=0 Pn(x)tn+1 = n ∞∑ n=0 Pn(x)tn−1 − 2xn ∞∑ n=0 Pn(x)tn + n ∞∑ n=0 Pn(x)tn+1. Equating coefficients of tn on both sides shows that xPn(x) − Pn−1(x) = (n + 1)Pn+1(x) − 2xnPn(x) + (n− 1)Pn−1(x), and hence (n + 1)Pn+1(x) − (2n + 1)xPn(x) + nPn−1(x) = 0. (2.5) This is a recurrence relation between Pn+1(x), Pn(x) and Pn−1(x), which can be used to compute the polynomials Pn(x). Starting with P0(x) = 1 and P1(x) = x, we substitute n = 1 into (2.5), which gives 2P2(x) − 3x2 + 1 = 0, and hence P2(x) = 1 2 (3x2 − 1). By iterating this procedure, we can generate the Legendre polynomials Pn(x) for any n. In a rather similar manner we can generate a recurrence relation that involves the derivatives of the Legendre polynomials. Firstly, we differentiate the generating function, (2.3), with respect to x to get t(1 − 2xt + t2)−3/2 = ∞∑ n=0 tnP ′n(x). Differentiation of (2.3) with respect to t gives (x− t)(1 − 2xt + t2)−3/2 = ∞∑ n=0 ntn−1Pn(x). 2.4 RODRIGUES’ FORMULA 39 Combining these gives ∞∑ n=0 ntnPn(x) = (x− t) ∞∑ n=0 tnP ′n(x), and by equating coefficients of tn we obtain the recurrence relation nPn(x) = xP ′n(x) − P ′n−1(x). An Example From Electrostatics In electrostatics, the potential due to a unit point charge at r = r0 is V = 1 |r − r0| . If this unit charge lies on the z-axis, at x = y = 0, z = a, this becomes V = 1√ x2 + y2 + (z − a)2 . In terms of spherical polar coordinates, (r, θ, φ), x = r sin θ cosφ, y = r sin θ sinφ, z = r cos θ. This means that x2 + y2 + (z − a)2 = x2 + y2 + z2 − 2az + a2 = r2 + a2 − 2az, and hence V = 1√ r2 + a2 − 2ar cos θ = 1 a ( 1 − 2 cos θ r a + r2 a2 )−1/2 . As we would expect from the symmetry of the problem, there is no dependence upon the azimuthal angle, φ. We can now use the generating function to write this as a power series, V = 1 a ∞∑ n=0 Pn (cos θ) ( r a )n . 2.4 Rodrigues’ Formula There are other methods of generating the Legendre polynomials, and the most useful of these is Rodrigues’ formula, Pn(x) = 1 2nn! d n dxn {(x2 − 1)n}. For example, for n = 1, P1(x) = 1 211! d 1 dx1 {(x2 − 1)} = x, 40 LEGENDRE FUNCTIONS whilst for n = 2, P2(x) = 1 222! d 2 dx2 {(x2 − 1)2} = 1 4 · 2 d dx {4x(x2 − 1)} = 1 2 (3x2 − 1). The general proof of this result is by induction on n, which we leave as an exercise. Rodrigues’ formula can also be used to develop an integral representation of the Legendre polynomials. In order to show how this is done, it is convenient to switch from the real variable x to the complex variable z = x + iy. We define the finite complex Legendre polynomials in the same way as for a real variable. In particular Rodrigues’ formula, Pn(z) = 1 2nn! d n dzn ( z2 − 1 )n , will be useful to us here. Recall from complex variable theory (see Appendix 6) that, if f(z) is analytic and single-valued inside and on a simple closed curve C, f (n)(z) = n! 2πi ∫ C f(ξ) (ξ − z)n+1 dξ, for n  0, when z is an interior point of C. Now, using Rodrigues’ formula, we have Pn(z) = 1 2n+1πi ∫ C (ξ2 − 1)n (ξ − z)n+1 dξ, (2.6) which is known as Schläfli’s representation. 
The contour C must, of course, enclose the point ξ = z and be traversed in an anticlockwise sense. To simplify matters, we now choose C to be a circle, centred on ξ = z with radius | √ z2 − 1|, with z = 1. Putting ξ = z + √ z2 − 1eiθ gives, after some simple manipulation, Pn(z) = 1 2π ∫ 2π θ=0 ( z + √ z2 − 1 cos θ )n dθ. This is known as Laplace’s representation. In fact it is also valid when z = 1, since Pn(1) = 1 2π ∫ 2π 0 1dθ = 1. Laplace’s representation is useful, amongst other things, for providing a bound on the size of the Legendre polynomials of real argument. For z ∈ [−1, 1], we can write z = cosφ and use Laplace’s representation to show that |Pn(cosφ)|  1 2π ∫ 2π 0 | cosφ + i sinφ cos θ|n dθ. Now, since | cosφ + i sinφ cos θ| = √ cos2 φ + sin2 φ cos2 θ  √ cos2 φ + sin2 φ = 1, we have |Pn(cosφ)|  1. 2.5 ORTHOGONALITY OF THE LEGENDRE POLYNOMIALS 43 Interchanging the order of summation and integration leads to∫ 1 −1 f(x)Pm(x)dx = ∞∑ n=0 an {∫ 1 −1 Pn(x)Pm(x)dx } = ∞∑ n=0 an 2 2m + 1 δmn = 2am 2m + 1 , using the orthogonality property, (2.7). This means that am = 2m + 1 2 ∫ 1 −1 f(x)Pm(x)dx, (2.9) and we write the series as f(x) = ∞∑ n=0 { 2n + 1 2 ∫ 1 −1 f(x)Pn(x)dx } Pn(x). This is called a Fourier–Legendre series. Let’s consider a couple of examples. Firstly, when f(x) = x2, am = 2m + 1 2 ∫ 1 −1 x2Pm(x)dx, so that a0 = 1 2 ∫ 1 −1 x2 · 1dx = 1 2 [ x3 3 ]1 −1 = 1 2 2 3 = 1 3 , a1 = (2 · 1) + 1 2 ∫ 1 −1 x2 · xdx = 3 2 [ x4 4 ]1 −1 = 0, a2 = (2 · 2) + 1 2 ∫ 1 −1 x2 1 2 (3x2 − 1)dx = 5 2 [ 3x5 2 · 5 − x3 3 · 2 ]1 −1 = 2 3 . Also, (2.8) shows that am = 0 for m = 3, 4, . . . , and therefore, x2 = 1 3 P0(x) + 2 3 P2(x). A finite polynomial clearly has a finite Fourier–Legendre series. Secondly, consider f(x) = ex. In this case am = 2m + 1 2 ∫ 1 −1 exPm(x) dx, and hence a0 = 1 2 ∫ 1 −1 ex dx = 1 2 ( e− e−1 ) , 44 LEGENDRE FUNCTIONS a1 = 3 2 ∫ 1 −1 x ex dx = 3 e−1. To proceed with this calculation it is necessary to find a recurrence relation between the an. This is best done by using Rodrigues’ formula, which gives an = (2n + 1) ( an−2 2n− 3 − an−1 ) for n = 2, 3, . . . , (2.10) from which the values of a4, a5, . . . are easily computed. We will not examine the convergence of Fourier–Legendre series here as the de- tails are rather technical. Instead we content ourselves with a statement that the Fourier–Legendre series converges uniformly on any closed subinterval of (−1, 1) in which f is continuous and differentiable. An extension of this result to the space of piecewise continuous functions is that the series converges to the value 1 2 { f(x+0 ) + f(x − 0 ) } at each point x0 ∈ (−1, 1) where f has a right and left deriva- tive. We will prove a related theorem in Chapter 5. 2.6 Physical Applications of the Legendre Polynomials In this section we present some examples of Legendre polynomials as they arise in mathematical models of heat conduction and fluid flow in spherical geometries. In general, we will encounter the Legendre equation in situations where we have to solve partial differential equations containing the Laplacian in spherical polar coordinates. 2.6.1 Heat Conduction Let’s derive the equation that governs the evolution of an initial distribution of heat in a solid body with temperature T , density ρ, specific heat capacity c and thermal conductivity k. Recall that the specific heat capacity, c, is the amount of heat required to raise the temperature of a unit mass of a substance by one degree. 
The thermal conductivity, k, of a body appears in Fourier’s law, which states that the heat flux per unit area, per unit time, Q = (Qx, Qy, Qz), is related to the temperature gradient, ∇T , by the simple linear relationship Q = −k∇T . If we now consider a small element of our solid body at (x, y, z) with sides of length δx, δy and δz, the temperature change in this element over a time interval δt is determined by the difference between the amount of heat that flows into the element and the amount of heat that flows out, which gives ρc {T (x, y, z, t + δt) − T (x, y, z, t)} δxδyδz = {Qx (x, y, z, t) −Qx (x + δx, y, z, t)} δtδyδz + {Qy (x, y, z, t) −Qy (x, y + δy, z, t)} δtδxδz (2.11) + {Qz (x, y, z, t) −Qz (x, y, z + δz, t)} δtδxδy. 2.6 PHYSICAL APPLICATIONS OF THE LEGENDRE POLYNOMIALS 45 ρ k c K kg m−3 J m−1 s−1 K−1 J kg−1 K−1 m2 s−1 copper 8920 385 386 1.1 × 10−4 water 1000 254 4186 6.1 × 10−5 glass 2800 0.8 840 3.4 × 10−7 Table 2.1. Some typical physical properties of copper, water (at room temperature and pressure) and glass. Note that a typical term on the right hand side of this, for example, {Qx (x, y, z, t) −Qx (x + δx, y, z, t)} δtδyδz, is the amount of heat crossing the x-orientated faces of the element, each with area δyδz, during the time interval (t, t + δt). Taking the limit δt, δx, δy, δz → 0, we obtain ρc ∂T ∂t = − { ∂Qx ∂x + ∂Qy ∂y + ∂Qz ∂z } = −∇ · Q. Substituting in Fourier’s law, Q = −k∇T , gives the diffusion equation, ∂T ∂t = K∇2T, (2.12) where K = k/ρc is called the thermal diffusivity. Table 2.1 contains the values of relevant properties for three everyday materials. When the temperature reaches a steady state (∂T/∂t = 0), this equation takes the simple form ∇2T = 0, (2.13) which is known as Laplace’s equation. It must be solved in conjunction with appropriate boundary conditions, which drive the temperature gradients in the body. Example Let’s try to find the steady state temperature distribution in a solid, uniform sphere of unit radius, when the surface temperature is held at f(θ) = T0 sin4 θ in spherical polar coordinates, (r, θ, φ). This temperature distribution will satisfy Laplace’s equation, (2.13). Since the equation and boundary conditions do not depend upon the azimuthal angle, φ, neither does the solution, and hence Laplace’s equation takes the form 1 r2 ∂ ∂r ( r2 ∂T ∂r ) + 1 r2 sin θ ∂ ∂θ ( sin θ ∂T ∂θ ) = 0. Let’s look for a separable solution, T (r, θ) = R(r)Θ(θ). This gives Θ d dr ( r2 dR dr ) + R sin θ d dθ ( sin θ dΘ dθ ) = 0, 48 LEGENDRE FUNCTIONS The function ezmesh gives an easy way of plotting parametric surfaces like this. The first three arguments give the x, y and z coordinates as parametric functions of r and t = θ, whilst the fourth specifies the ranges of r and t. Fig. 2.3. The steady state temperature in a uniform sphere with surface temperature sin4 θ, given by (2.14). 2.6.2 Fluid Flow Consider a fixed volume V bounded by a surface S within an incompressible fluid. Although fluid flows into and out of V , the mass of fluid within V remains constant, since the fluid is incompressible, so any flux out of V at one place is balanced by a flux into V at another place. Mathematically, we can express this as∫ S u · n dS = 0, where u is the velocity of the fluid, n is the outward unit normal to S, and hence u·n is the normal component of the fluid velocity out through S. If the fluid velocity field, u, is smooth, we can use the divergence theorem to rewrite this statement of conservation of mass as ∫ V ∇ · u dV = 0. 
As this applies to an arbitrary volume V within the fluid, we must have ∇ · u = 0 throughout the flow. If we also suppose that the fluid is inviscid (there is no friction as one fluid element flows past another), it is possible to make the further assumption that the flow is irrotational (∇ × u = 0). Physically, this means that 2.6 PHYSICAL APPLICATIONS OF THE LEGENDRE POLYNOMIALS 49 there is no spin in any fluid element as it flows around. Inviscid, irrotational, incompressible flows are therefore governed by the two equations ∇ · u = 0 and ∇×u = 0.† For simply connected flow domains, ∇×u = 0 if and only if u = ∇φ for some scalar potential function φ, known as the velocity potential. Substituting into ∇ · u = 0 gives ∇2φ = 0. In other words, the velocity potential satisfies Laplace’s equation. As for boundary conditions, there can be no flux of fluid into a solid body so that u · n = n · ∇φ = ∂φ/∂n = 0 where the fluid is in contact with a solid body. As an example of such a flow, let’s consider what happens when a sphere of radius r = a is placed in a uniform stream, assuming that the flow is steady, inviscid and irrotational. The flow at infinity must be uniform, with u = U i where i is the unit vector in the x-direction. First of all, it is clear that the problem will be best treated in spherical polar coordinates. We know from the previous example that the bounded, axisymmetric solution of Laplace’s equation is φ = ∞∑ n=0 ( Anr n + Bnr−1−n ) Pn(cos θ). (2.15) The flow at infinity has potential φ = Ux = U r cos θ. Since P1(cos θ) = cos θ, we see that we must take A1 = 1 and An = 0 for n > 1. To fix the constants Bn, notice that there can be no flow through the surface of the sphere. This gives us the boundary condition on the radial velocity as ur = ∂φ ∂r = 0 at r = a. On substituting (2.15) into this boundary condition, we find that B1 = 12a 3 and Bn = 0 for n > 1. The solution is therefore φ = U ( r + a3 2 r2 ) cos θ. (2.16) The streamlines (continuous curves that are tangent to the velocity vector) are shown in Figure 2.4 when a = U = 1. In order to obtain this figure, we note that (see Section 7.2.2) the streamlines are given by ψ = U ( r − a 3 r2 ) sin θ = Uy ( 1 − a 3 (x2 + y2)3/2 ) = constant. We can then use the MATLAB script    x = linspace(-2,2,400); y = linspace(0,2,200); [X Y] = meshgrid(x,y); Z = Y.*(1-1./(X.^2+Y.^2).^(3/2)); v = linspace(0,2,15); contour(X,Y,Z,v) † We will consider an example of viscous flow in Section 6.4. 50 LEGENDRE FUNCTIONS The command meshgrid creates a grid suitable for use with the plotting command contour out of the two vectors, x and y. The vector v specifies the values of ψ for which a contour is to be plotted. Fig. 2.4. The streamlines for inviscid, irrotational flow about a unit sphere. In order to complete this example, we must consider the pressure in our ideal fluid. The force exerted by the fluid on a surface S by the fluid outside S is purely a pressure, p, which acts normally to S. In other words, the force on a surface element with area dS and outward unit normal n is −pn dS. We would now like to apply Newton’s second law to the motion of the fluid within V , the volume enclosed by S. In order to do this, we need an expression for the acceleration of the fluid. 
Let’s consider the change in the velocity of a fluid particle between times t and t + δt, which, for small δt, we can Taylor expand to give u(x(t + δt), t + δt) − u(x(t), t) = δt { ∂u ∂t (x, t) + dx dt · ∇u(x, t) } + · · · = δt { ∂u ∂t (x, t) + (u(x, t) · ∇)u(x, t) } + · · · , since u = dx/dt. This means that the fluid acceleration is Du Dt = ∂u ∂t + (u · ∇)u, where D/Dt is the usual notation for the material derivative, or time derivative following the fluid particle. We can now use Newton’s second law on the fluid within S to obtain∫ S −pn dS = ∫ V ρ Du Dt dV. After noting that∫ S pn dS = i ∫ S (pi) · n dS + j ∫ S (pj) · n dS + k ∫ S (pk) · n dS, 2.7 THE ASSOCIATED LEGENDRE EQUATION 53 and Qmn (x) is singular at x = ±1. It is straightforward to show from their definitions that P 11 (x) = (1 − x2)1/2 = sin θ, P 12 (x) = 3x(1 − x2)1/2 = 3 sin θ cos θ = 3 2 sin 2θ, P 22 (x) = 3(1 − x2) = 3 sin2 θ = 3 2 (1 − cos 2θ), P 13 (x) = 3 2 (5x2 − 1)(1 − x2)1/2 = 3 8 (sin θ + 5 sin 3θ), where x = cos θ. There are various recurrence formulae and orthogonality relations between the associated Legendre functions, which can be derived in much the same way as those for the ordinary Legendre functions. Example: Spherical harmonics Let’s try to find a representation of the solutions of Laplace’s equation in three dimensions, ∂2u ∂x2 + ∂2u ∂y2 + ∂2u ∂z2 = 0, that are homogeneous in x, y and z. By homogeneous we mean of the form xiyjzk. It is simplest to work in spherical polar coordinates, in terms of which Laplace’s equation takes the form ∂ ∂r ( r2 ∂u ∂r ) + 1 sin θ ∂ ∂θ ( sin θ ∂u ∂θ ) + 1 sin2 θ ∂2u ∂φ2 = 0. A homogeneous solution of order n = i + j + k in (x, y, z) will look, in the new coordinates, like u = rnSn(θ, φ). Substituting this expression for u in Laplace’s equation, we find that 1 sin θ ∂ ∂θ ( sin θ ∂Sn ∂θ ) + 1 sin2 θ ∂2Sn ∂φ2 + n(n + 1)Sn = 0. Separable solutions take the form Sn = (Am cosmφ + Bm sinmφ)F (θ), with m an integer. The function F (θ) satisfies 1 sin θ d dθ ( sin θ dF dθ ) + { n(n + 1) − m 2 sin2 θ } F = 0. If we now make the usual transformation, µ = cos θ, F (θ) = y(µ), we get (1 − µ2)y′′ − 2µy′ + { n(n + 1) − m 2 1 − µ2 } y = 0, 54 LEGENDRE FUNCTIONS which is the associated Legendre equation. Our solutions can therefore be written in the form u = rn(Am cosmφ + Bm sinmφ){Cn,mPmn (cos θ) + Dn,mQmn (cos θ)}. The quantities rn cosmφPmn (cos θ), which appear as typical terms in the solution, are called spherical harmonics and appear widely in the solution of linear bound- ary value problems in spherical geometry. We will see how they arise in a quantum mechanical description of the hydrogen atom in Section 4.3.6. Exercises 2.1 Solve Legendre’s equation of order two, (1− x2)y′′ − 2xy′ + 6y = 0, by the method of Frobenius. What is the simplest solution of this equation? By using this simple solution and the reduction of order method find a closed form expression for Q1(x). 2.2 Use the generating function to evaluate (a) P ′n(1), (b) Pn(0). 2.3 Prove that (a) P ′n+1(x) − P ′n−1(x) = (2n + 1)Pn(x), (b) (1 − x2)P ′n(x) = nPn−1(x) − nxPn(x). 2.4 Find the first four nonzero terms in the Fourier–Legendre expansion of the function f(x) = { 0 for −1 < x < 0, 1 for 0 < x < 1. What value will this series have at x = 0? 2.5 Establish the results (a) ∫ 1 −1 xPn(x)Pn−1(x)dx = 2n 4n2 − 1 for n = 1, 2, . . . , (b) ∫ 1 −1 Pn(x)P ′n+1(x)dx = 2 for n = 0, 1, . . . , (c) ∫ 1 −1 xP ′n(x)Pn(x)dx = 2n 2n + 1 for n = 0, 1, . . . . 2.6 Determine the Wronskian of Pn and Qn for n = 0, 1, 2, . . . . 
2.7 Solve the axisymmetric boundary value problem for Laplace’s equation, ∇2T = 0 for 0 < r < a, 0 < θ < π, T (a, θ) = 2 cos5 θ. 2.8 Show that (a) P 23 (x) = 15x(1 − x2), (b) P 14 (x) = 5 2 (7x3 − 3x)(1 − x2)1/2. EXERCISES 55 2.9 ∗ Prove that |P ′n(x)| < n2 and |P ′′n (x)| < n4 for −1 < x < 1. 2.10 Derive Equation (2.10). 2.11 ∗ Find the solution of the Dirichlet problem, ∇2Φ = 0 in r > 2 subject to Φ → 0 as r → ∞ and Φ(2, θ, φ) = sin2 θ cos 2φ. 2.12 ∗ The self-adjoint form of the associated Legendre equation is d dx { (1 − x2)Pmn (x) } + { n(n + 1) − m 2 1 − x2 } Pmn (x) = 0. Using this directly, prove the orthogonality property∫ 1 −1 Pml (x)P m n (x)dx = 0 for l = n. Evaluate ∫ 1 −1 [Pmm (x)] 2 dx. 2.13 (a) Suppose Pn(x0) = 0 for some x0 ∈ (−1, 1). Show that x0 is a simple zero. (b) Show that Pn with n  1 has n distinct zeros in (−1, 1). 2.14 Project A simplified model for the left ventricle of a human heart is pro- vided by a sphere of time-dependent radius R = R(t) with a circular aortic opening of constant area A, as shown in Figure 2.5. During contraction we suppose that the opening remains fixed whilst the centre of the sphere moves directly toward the centre of the opening and the radius R(t) de- creases accordingly. As a result, some of the blood filling the ventricle cavity is ejected though the opening with mean speed U = U(t) into the attached cylindrical aorta. This occurs sufficiently rapidly that we can as- sume that the flow is inviscid, irrotational, incompressible, and symmetric with respect to the aortal axis. (a) State a partial differential equation appropriate to the fluid flow for this situation. (b) During contraction, show that a point on the surface of the ventricle has velocity Ṙn − (Ṙa)i, where n is the outward unit normal at time t, i is the unit vector in the aortic flow direction and a = cosα(t). Show that having (R sinα)2 = R2(1 − a2) constant gives ∂φ ∂n = f(s) = { Us for a < s < 1, Ṙ(1 − s/a) for −1 < s < a, where s = cos θ with θ the usual spherical polar coordinate. CHAPTER THREE Bessel Functions In this chapter, we will discuss a class of functions known as Bessel functions. These are named after the German mathematician and astronomer Friedrich Bessel, who first used them to analyze planetary orbits, as we shall discuss later. Bessel functions occur in many other physical problems, usually in a cylindrical geometry, and we will discuss some examples of these at the end of this chapter. Bessel’s equation can be written in the form x2 d 2y dx2 + x dy dx + ( x2 − ν2 ) y = 0, (3.1) with ν real and positive. Note that (3.1) has a regular singular point at x = 0. Using the notation of Chapter 1, xQ P = x2 x2 = 1, x2R P = x2 ( x2 − ν2 ) x2 = x2 − ν2, both of which are polynomials and have Taylor expansions with infinite radii of convergence. Any series solution will therefore also have an infinite radius of con- vergence. 3.1 The Gamma Function and the Pockhammer Symbol Before we use the method of Frobenius to construct the solutions of Bessel’s equa- tion, it will be useful for us to make a couple of definitions. The gamma function is defined by Γ(x) = ∫ ∞ 0 e−qqx−1 dq, for x > 0. (3.2) Note that the integration is over the dummy variable q and x is treated as constant during the integration. We will start by considering the function evaluated at x = 1. By definition, Γ(1) = ∫ ∞ 0 e−q dq = 1. 
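Values such as this are easy to check numerically. The following sketch compares the integral in (3.2), evaluated by adaptive quadrature, with MATLAB's built-in function gamma (it assumes a version of MATLAB recent enough to provide integral; quadgk is an older alternative):

    f = @(q,x) exp(-q).*q.^(x-1);           % the integrand of (3.2)
    for x = [1 0.5 2.5]
        I = integral(@(q) f(q,x), 0, Inf);  % singularity at q = 0 is integrable for x > 0
        fprintf('x = %g: quadrature = %.6f, gamma = %.6f\n', x, I, gamma(x))
    end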
We also note that Γ(x + 1) = ∫ ∞ 0 e−qqx dq, 3.1 THE GAMMA FUNCTION AND THE POCKHAMMER SYMBOL 59 which can be integrated by parts to give Γ(x + 1) = [ −qxe−q ]∞ 0 + ∫ ∞ 0 e−qxqx−1 dq = x ∫ ∞ 0 e−qqx−1 dq = xΓ(x). We therefore have the recursion formula Γ(x + 1) = xΓ(x). (3.3) Suppose that x = n is a positive integer. Then Γ(n + 1) = nΓ(n) = n(n− 1)Γ(n− 1) = · · · = n(n− 1) . . . 2 · 1 = n!. (3.4) We therefore have the useful result, Γ(n + 1) = n! for n a positive integer. We will often need to know Γ (1/2). Firstly, consider the definition, Γ ( 1 2 ) = ∫ ∞ 0 e−qq−1/2dq. If we introduce the new variable Q = √ q, so that dQ = 12q −1/2dq, this integral becomes Γ ( 1 2 ) = 2 ∫ ∞ 0 e−Q 2 dQ. We can also write this integral in terms of another new variable, Q = Q̃, to obtain{ Γ ( 1 2 )}2 = ( 2 ∫ ∞ 0 e−Q 2 dQ )( 2 ∫ ∞ 0 e−Q̃ 2 dQ̃ ) . Since the limits are independent, we can combine the integrals as{ Γ ( 1 2 )}2 = 4 ∫ ∞ 0 ∫ ∞ 0 e−(Q 2+Q̃2) dQ dQ̃. If we now change to standard polar coordinates we have dQ dQ̃ = r dr dθ, where Q = r cos θ and Q̃ = r sin θ, and hence{ Γ ( 1 2 )}2 = 4 ∫ π/2 θ=0 ∫ ∞ r=0 e−r 2 r dr dθ. The limits of integration give us the positive quadrant of the (Q, Q̃)-plane, as re- quired. Performing the integration over θ we have{ Γ ( 1 2 )}2 = 2π ∫ ∞ 0 re−r 2 dr, and integrating with respect to r gives{ Γ ( 1 2 )}2 = 2π [ −1 2 e−r 2 ]∞ 0 = π. Finally, we have Γ ( 1 2 ) = 2 ∫ ∞ 0 e−Q 2 dQ = √ π. (3.5) 60 BESSEL FUNCTIONS We can use Γ(x) = Γ(x + 1)/x to define Γ(x) for negative values of x. For example, Γ ( −1 2 ) = Γ ( − 12 + 1 ) − 12 = −2Γ ( 1 2 ) = −2 √ π. We also find that Γ(x) is singular at x = 0. From the definition, (3.2), the integrand diverges like 1/q as q → 0, which is not integrable. Alternatively, Γ(x) = Γ(x+1)/x shows that Γ(x) ∼ 1/x as x → 0. Note that the gamma function is available in MATLAB as the function gamma. The Pockhammer symbol is a simple way of writing down long products. It is defined as (α)r = α(α + 1)(α + 2) . . . (α + r − 1), so that, for example, (α)1 = α and (α)2 = α(α + 1), and, in general, (α)r is a product of r terms. We also choose to define (α)0 = 1. Note that (1)n = n!. A relationship between the gamma function and the Pockhammer symbol that we will need later is Γ(x) (x)n = Γ(x + n) (3.6) for x real and n a positive integer. To derive this, we start with the definition of the Pockhammer symbol, (x)n = x(x + 1)(x + 2) . . . (x + n− 1). Now Γ(x) (x)n = Γ(x) {x(x + 1)(x + 2) . . . (x + n− 1)} = {Γ(x)x} {(x + 1)(x + 2) . . . (x + n− 1)} . Using the recursion relation (3.3), Γ(x) (x)n = Γ(x + 1) {(x + 1)(x + 2) . . . (x + n− 1)} . We can repeat this to give Γ(x) (x)n = Γ(x + n− 1) (x + n− 1) = Γ(x + n). 3.2 Series Solutions of Bessel’s Equation We can now proceed to consider a Frobenius solution, y(x) = ∞∑ n=0 anx n+c. When substituted into (3.1), this yields x2 ∞∑ n=0 an(n + c)(n + c− 1)xn+c−2 + x ∞∑ n=0 an(n + c)xn+c−1 3.2 SERIES SOLUTIONS OF BESSEL’S EQUATION 63 We considered this example in detail in Section 1.3.1, and found that y(x) = a0 cosx x1/2 + a1 sinx x1/2 . This means that (see Exercise 3.2) J1/2(x) = √ 2 πx sinx, J−1/2(x) = √ 2 πx cosx. The recurrence relations (3.21) and (3.22), which we will derive later, then show that Jn+1/2(x) and J−n−1/2(x) are products of finite polynomials with sinx and cosx. Finally we consider what happens when 2ν is an even integer, and hence ν is an integer. 
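Before doing so, we note in passing that results of this kind are easy to test numerically, since MATLAB's besselj accepts fractional orders. A minimal sketch:

    x = linspace(0.1, 20, 200);
    err = besselj(0.5, x) - sqrt(2./(pi*x)).*sin(x);
    max(abs(err))    % should be at the level of round-off error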
A rather lengthy calculation allows us to write the solution in the form y = AJν(x) + BYν(x), where Yν is Weber’s Bessel function of order ν defined as Yν(x) = Jν(x) cos νπ − J−ν(x) sin νπ . (3.10) Notice that the denominator of this expression is obviously zero when ν is an integer, so this case requires careful treatment. We note that the second solution of Bessel’s equation can also be determined using the method of reduction of order as y(x) = AJν(x) + BJν(x) ∫ x 1 q Jν(q)2 dq. In Figure 3.1 we show Ji(x) for i = 0 to 3. Note that J0(0) = 1, but that Ji(0) = 0 for i > 0, and that J (j)i (0) = 0 for j < i, i > 1. We generated Figure 3.1 using the MATLAB script    x=0:0.02:20; subplot(2,2,1), plot(x,besselj(0,x)), title(’J_0(x)’) subplot(2,2,2), plot(x,besselj(1,x)), title(’J_1(x)’) subplot(2,2,3), plot(x,besselj(2,x)), title(’J_2(x)’) subplot(2,2,4), plot(x,besselj(3,x)), title(’J_3(x)’) We produced Figures 3.2, 3.5 and 3.6 in a similar way, using the MATLAB functions bessely, besseli and besselk. In Figure 3.2 we show the first two Weber’s Bessel functions of integer order. Notice that as x → 0, Yn(x) → −∞. As you can see, all of these Bessel functions are oscillatory. The first three zeros of J0(x) are 2.4048, 5.5201 and 8.6537, whilst the first three nontrivial zeros of J1(x) are 3.8317, 7.0156 and 10.1735, all to four decimal places. 64 BESSEL FUNCTIONS Fig. 3.1. The functions J0(x), J1(x), J2(x) and J3(x). Fig. 3.2. The functions Y0(x) and Y1(x). 3.3 The Generating Function for Jn(x), n an integer Rather like the Legendre polynomials, there is a simple function that will generate all of the Bessel functions of integer order. In order to establish this, it is useful to manipulate the definition of the Jν(x). From (3.9), we have, for ν = n, Jn(x) = xn 2nΓ(1 + n) ∞∑ i=0 ( −x2/4 )i i!(1 + n)i = ∞∑ i=0 (−1)ix2i+n 22i+ni!(1 + n)iΓ(1 + n) . 3.3 THE GENERATING FUNCTION FOR Jn(x), n AN INTEGER 65 Using (3.6) we note that Γ(1 + n)(1 + n)i = Γ(1 + n + i), and using (3.4) we find that this is equal to (n + i)!. For n an integer, we can therefore write Jn(x) = ∞∑ i=0 (−1)ix2i+n 22i+ni!(n + i)! . (3.11) Let’s now consider the generating function g(x, t) = exp { 1 2 x ( t− 1 t )} . (3.12) The series expansions of each of the constituents of this are exp (xt/2) = ∞∑ j=0 (xt/2)j j! , exp ( − x 2t ) = ∞∑ i=0 (−x/2t)i i! , both from the Taylor series for ex. These summations can be combined to produce g(x, t) = ∞∑ j=0 ∞∑ i=0 (−1)ixi+jtj−i 2j+ii!j! . Now, putting j = i + n so that −∞  n  ∞, this becomes g(x, t) = ∞∑ n=−∞ { ∞∑ i=0 (−1)ix2i+n 22i+ni!(n + i)! } tn. Comparing the coefficients in this series with the expression (3.11) we find that g(x, t) = exp { 1 2 x ( t− 1 t )} = ∞∑ n=−∞ Jn(x)tn. (3.13) We can now exploit this relation to study Bessel functions. Using the fact that the generating function is invariant under t → −1/t, we have ∞∑ n=−∞ Jn(x)tn = ∞∑ n=−∞ Jn(x) ( −1 t )n = ∞∑ n=−∞ Jn(x)(−1)nt−n. Now putting m = −n, this is equal to −∞∑ m=∞ J−m(x)(−1)mtm. Now let n = m in the series on the right hand side, which gives ∞∑ n=−∞ Jn(x)tn = ∞∑ n=−∞ J−n(x)(−1)ntn. Comparing like terms in the series, we find that Jn(x) = (−1)nJ−n(x), and hence that Jn(x) and J−n(x) are linearly dependent over R (see Section 1.2.1). This explains why the solution of Bessel’s equation proceeds rather differently when ν is an integer, since Jν(x) and J−ν(x) cannot then be independent solutions. 68 BESSEL FUNCTIONS Fig. 3.4. The auxiliary circle and the projection of P onto Q. 
function of µ, which vanishes when P and Q are coincident with A or A′, that is when µ is an integer multiple of π. Hence we must be able to write φ− µ = ∞∑ n=1 An sinnµ. (3.20) As we shall see in Chapter 5, this is a Fourier series. In order to determine the constant coefficients An, we differentiate (3.20) with respect to µ to yield dφ dµ − 1 = ∞∑ n=1 nAn cosnµ. We can now exploit the orthogonality of the functions cosnµ to determine An. We multiply through by cosmµ and integrate from zero to π to obtain∫ π µ=0 ( dφ dµ − 1 ) cosmµdµ = ∫ π µ=0 cosmµ dφ dµ dµ = πm 2 Am. Since φ = 0 when µ = 0 and φ = π when µ = π, we can change the independent variable to give πm 2 Am = ∫ π φ=0 cosmµdφ. Substituting for µ from (3.19), we have that Am = 2 mπ ∫ π φ=0 cosm (φ− e sinφ) dφ. Finally, by direct comparison with (3.18), Am = 2mJm(me), so that φ = µ + 2 { J1(e) sinµ + 1 2 J2(2e) sin 2µ + 1 3 J3(3e) sin 3µ + · · · } . 3.4 DIFFERENTIAL AND RECURRENCE RELATIONS 69 Since µ is proportional to the time the planet takes to travel from A to P , this expression gives us the variation of the angle φ with time. 3.4 Differential and Recurrence Relations Between Bessel Functions It is often useful to find relationships between Bessel functions with different indices. We will derive two such relationships. We start with (3.9), multiply by xν and differentiate to obtain d dx [xνJν(x)] = d dx { ∞∑ n=0 (−1)n x2n+2ν 22n+ν n! Γ(1 + ν + n) } = ∞∑ n=0 (−1)n (2n + 2ν)x2n+2ν−1 22n+ν n! Γ(1 + ν + n) . Since Γ(1+ ν +n) = (n+ ν)Γ(n+ ν), this gives a factor that cancels with the term 2(n + ν) in the numerator to give d dx [xνJν(x)] = ∞∑ n=0 (−1)nx2n xν xν−1 22n+ν−1 n! Γ(ν + n) . We can rewrite this so that we have the series expansion for Jν−1(x), as d dx [xνJν(x)] = xνxν−1 2ν−1 ∞∑ n=0 (−1)n x2n 22n n! Γ(ν)(ν)n , so that d dx {xνJν(x)} = xνJν−1(x). (3.21) Later, we will use this expression to develop relations between general Bessel func- tions. Note that by putting ν equal to zero d dx {J0(x)} = J−1(x). However, we recall that Jn(x) = (−1)nJ−n(x) for n an integer, so that J1(x) = −J−1(x) and hence J ′0(x) = −J1(x), where we have used a prime to denote the derivative with respect to x. In the same vein as the derivation of (3.21), d dx (x−νJν(x)) = d dx ( x−νxν 2νΓ(1 + ν) ∞∑ n=0 ( −x2/4 )n n! (1 + ν)n ) = d dx ( ∞∑ n=0 (−1)n x2n 22n+ν n! Γ(1 + ν + n) ) = ∞∑ n=0 (−1)n x2n−1 n 22n+ν−1 n! Γ(1 + ν + n) . Notice that the first term in this series is zero (due to the factor of n in the numer- ator), so we can start the series at n = 1 and cancel the factors of n. The series 70 BESSEL FUNCTIONS can then be expressed in terms of the dummy variable m = n− 1 as d dx (x−νJν(x)) = 1 2ν−1 ∞∑ m=0 (−1)m+1 x2m+1 22m+2 m! Γ(ν + m + 2) = − 1 2ν+1 ∞∑ m=0 (−1)m x2m+1 22m m! Γ(ν + m + 2) . Using the fact that Γ(ν + m + 2) = Γ(ν + 2) (ν + 2)m and the series expansion of Jν+1(x), (3.11), we have d dx { x−νJν(x) } = − x 2ν+1Γ(ν + 1) ∞∑ m=0 ( −x2/4 )m m! (2 + ν)m , and consequently d dx { x−νJν(x) } = −x−νJν+1(x). (3.22) Notice that (3.21) and (3.22) both hold for Yν(x) as well. We can use these relationships to derive recurrence relations between the Bessel functions. We expand the differentials in each expression to give the equa- tions J ′ν(x) + ν x Jν(x) = Jν−1(x), where we have divided through by xν , and J ′ν(x) − ν x Jν(x) = −Jν+1(x), where this time we have multiplied by xν . 
By adding these expressions we find that J ′ν(x) = 1 2 {Jν−1(x) − Jν+1(x)} , and by subtracting them 2ν x Jν(x) = Jν−1(x) + Jν+1(x), which is a pure recurrence relationship. These results can also be used when integrating Bessel functions. For example, consider the integral I = ∫ xJ0(x) dx. This can be integrated using (3.21) with ν = 1, since I = ∫ xJ0(x) dx = ∫ (xJ1(x)) ′ dx = xJ1(x). 3.6 ORTHOGONALITY OF THE BESSEL FUNCTIONS 73 Notice that we could have also multiplied the differential equation for Jν(µx) by Jν(λx)/x and integrated to give∫ a 0 Jν(λx) { x d 2 dx2 Jν(µx) + d dx Jν(µx) + 1 x (µ2x2 − ν2)Jν (µx) } dx = 0. (3.26) We now subtract (3.25) from (3.26) to give∫ a 0 { Jν(µx) d dx ( x d dx Jν(λx) ) − Jν(λx) d dx ( x d dx Jν(µx) ) +x(λ2 − µ2)Jν(λx)Jν(µx) } dx = 0. (3.27) We have simplified these expressions slightly by observing that xJ ′′ν + J ′ ν = (xJ ′ ν) ′. Now consider the first term of the integrand and note that it can be integrated by parts to give ∫ a 0 Jν(µx) d dx ( x d dx Jν(λx) ) dx = [ Jν(µx)x d dx Jν(λx) ]a 0 − ∫ a 0 x d dx Jν(λx) d dx Jν(µx)dx. Similarly for the second term, which is effectively the same with µ and λ inter- changed, ∫ a 0 Jν(λx) d dx ( x d dx Jν(µx) ) dx = [ Jν(λx)x d dx Jν(µx) ]a 0 − ∫ a 0 x d dx Jν(µx) d dx Jν(λx)dx. Using these expressions in (3.27), we find that (λ2 − µ2) ∫ a 0 xJν(λx)Jν(µx)dx = Jν(λa)aµJ ′ν(µa) − Jν(µa)aλJ ′ν(λa). (3.28) Finally, since we chose λ and µ so that Jν(λa) = Jν(µa) = 0 and µ = λ,∫ a 0 xJν(µx)Jν(λx)dx = 0. This is an orthogonality relation for the Bessel functions, with weighting function w(x) = x. We now need to calculate the value of ∫ a 0 xJν(µx)2 dx. To this end, we substitute λ = µ +  into (3.28). For  1, neglecting terms in 2 and smaller, we find that −2µ ∫ a 0 xJν(µx)Jν(µx + x) dx = a [µJν(µa + a)J ′ν(µa) − (µ + )Jν(µa)J ′ν(µa + a)] . (3.29) In order to deal with the terms evaluated at x = a(µ+ ) we consider Taylor series 74 BESSEL FUNCTIONS expansions, Jν(a(µ + )) = Jν(aµ) + aJ ′ν(aµ) + · · · , J ′ν(a(µ + )) = J ′ν(aµ) + aJ ′′ν (aµ)+ · · · . These expansions can then be substituted into (3.29). On dividing through by  and considering the limit  → 0, we find that∫ a 0 x[Jν(µx)]2 dx = a2 2 [J ′ν(µa)] 2 − 1 2 a2Jν(µa)J ′′ν (µa) − a 2µ Jν(µa)J ′ν(µa). We now suppose that Jν(µa) = 0, which gives∫ a 0 x[Jν(µx)]2 dx = a2 2 [J ′ν(µa)] 2. In general, ∫ a 0 xJν(µx)Jν(λx) dx = a2 2 [J ′ν(µa)] 2δλµ, (3.30) where Jν(µa) = Jν(λa) = 0 and δλµ is the Kronecker delta function. We can now construct a series expansion of the form f(x) = ∞∑ i=1 CiJν(λix). (3.31) This is known as a Fourier–Bessel series, and the λi are chosen such that Jν(λia) = 0 for i = 1, 2, . . . , λ1 < λ2 < · · · . As we shall see later, both f(x) and f ′(x) must be piecewise continuous for this series to converge. After multiply- ing both sides of (3.31) by xJν(λjx) and integrating over the interval [0, a] we find that ∫ a 0 xJν(λjx)f(x) dx = ∫ a 0 xJν(λjx) ∞∑ i=1 CiJν(λix) dx. Assuming that the series converges, we can interchange the integral and the sum- mation to obtain∫ a 0 xJν(λjx)f(x) dx = ∞∑ i=1 Ci ∫ a 0 xJν(λjx)Jν(λix) dx. We can now use (3.30) to give∫ a 0 xJν(λjx)f(x) dx = ∞∑ i=1 Ci a2 2 [J ′ν(λja)] 2 δλjλi = Cj λja 2 2 [J ′ν(λja)] 2 , and hence Cj = 2 a2 [J ′ν(λja)] 2 ∫ a 0 xJν(λjx)f(x) dx. (3.32) Example Let’s try to expand the function f(x) = 1 on the interval 0  x  1, as a Fourier– Bessel series. Since J0(0) = 1 but Ji(0) = 0 for i > 0, we will choose ν to be zero for our expansion. 
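Since everything that follows rests on the orthogonality relation (3.30), a quick numerical check is reassuring. This sketch takes ν = 0 and a = 1, and uses the first two zeros of J0(x) quoted in Section 3.2; since these are given to only four decimal places, the agreement is correspondingly limited:

    lam = [2.4048 5.5201];    % first two zeros of J_0(x)
    % distinct zeros: the integral should be close to zero
    integral(@(x) x.*besselj(0,lam(1)*x).*besselj(0,lam(2)*x), 0, 1)
    % equal zeros: the integral should equal J_0'(lam(1))^2/2, and J_0' = -J_1
    integral(@(x) x.*besselj(0,lam(1)*x).^2, 0, 1)
    0.5*besselj(1,lam(1))^2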
We rely on the existence of a set of values λj such that J0(λj) = 0 3.6 ORTHOGONALITY OF THE BESSEL FUNCTIONS 75 for j = 1, 2, . . . (see Figure 3.1). We will need to determine these values either from tables or numerically. Using (3.32), we have Cj = 2 [J ′0(λj)] 2 ∫ 1 0 xJ0(λjx) dx. If we introduce the variable y = λjx (so that dy = λjdx), the integral becomes Cj = 2 [J ′0(λj)] 2 1 λ2j ∫ λj y=0 yJ0(y) dy. Using the expression (3.21) with ν = 1, we have Cj = 2 [J ′0(λj)] 2 1 λ2j ∫ λj y=0 d dy (yJ1(y)) dy = 2 λ2j [J ′ 0(λj)] 2 [yJ1(y)] λj 0 = 2J1(λj) λj [J ′0(λj)] 2 = 2 λjJ1(λj) , and hence ∞∑ i=1 2 λiJ1(λi) J0(λix) = 1 for 0  x < 1, where J0(λi) = 0 for i = 1, 2, . . . . In Figure 3.7 we show the sum of the first fifteen terms of the Fourier–Bessel series. Notice the oscillatory nature of the solution, which is more pronounced in the neighbourhood of the discontinuity at x = 1. This phenomenon always occurs in series expansions relative to sequences of orthogonal functions, and is called Gibbs’ phenomenon. Before we can give a MATLAB script that produces Figure 3.7, we need to be able to calculate the zeros of Bessel functions. Here we merely state a couple of simple results and explain how these are helpful. The interested reader is referred to Watson (1922, Chapter 15) for a full discussion of this problem. The Bessel– Lommel theorem on the location of the zeros of the Bessel functions Jν(x) states that when − 12 < ν  1 2 and mπ < x < (m + 1 2 )π, Jν(x) is positive for even m and negative for odd m. This implies that Jν(x) has an odd number of zeros in the intervals (2n− 1)π/2 < x < 2nπ for n an integer. In fact, it can be shown that the positive zeros of J0(x) lie in the intervals nπ + 3π/4 < x < nπ + 7π/8. This allows us to use the ends of these intervals as an initial bracketing interval for the roots of J0(x). A simple MATLAB script that uses this result is    global nu ze nu=0; for ir=1:20 start_int = (ir-1)*pi+3*pi/4; end_int = (ir-1)*pi+7*pi/8; ze(ir) = fzero(’bessel’,[start_int end_int]); end 78 BESSEL FUNCTIONS This can be determined by using (3.33) with f(x) = x2 and ν = 0, so that the general solution is y(x) = ∫ x πs 2 (J0(s)Y0(x) − J0(x)Y0(s)) ds + AJ0(x) + BY0(x). In order to integrate sJ0(s) we note that this can be written as (sJ1(s))′ and similarly for sY0(s), which gives y(x) = πx 2 (J1(x)Y0(x) − J0(x)Y1(x)) + AJ0(x) + BY0(x). But we note that J1(x) = −J ′0(x) and Y1(x) = −Y ′0(x), so that the expression in the brackets is merely the Wronskian, and hence y(x) = 1 + AJ0(x) + BY0(x). Although it is clear, with hindsight, that y(x) = 1 is the particular integral solution, simple solutions are not always easy to spot a priori. Example Let’s find the particular integral of x2 d 2y dx2 + x dy dx + (x2 − ν2)y = x. (3.34) We will look for a series solution as used in the method of Frobenius, namely y(x) = ∞∑ n=0 anx n+c. Substituting this into (3.34), we obtain an expression similar to (3.7), a0(c2 − ν2)xc + a1{(1 + c)2 − ν2}xc+1 + ∞∑ n=2 [an{(n + c)2 − ν2} + an−2]xn+c = x. Note that xc needs to match with the x on the right hand side so that c = 1 and a0 = 1/(1 − ν2). We will defer discussion of the case ν = 1. At next order we find that a1(22 − ν2) = 0, and consequently, unless ν = 2, we have a1 = 0. For the general terms in the summation we have an = − an−2 {(n + 1)2 − ν2} . Note that since a1 = 0, an = 0 for n odd. It now remains to determine the general term in the sequence. 
For n = 2 we find that a2 = − a0 32 − ν2 = − 1 (12 − ν2)(32 − ν2) and then with n = 4 and using the form of a2 we have a4 = 1 (12 − ν2)(32 − ν2)(52 − ν2) . 3.8 SOLUTIONS EXPRESSIBLE AS BESSEL FUNCTIONS 79 The general expression is a2n = (−1)n n+1∏ i=1 1 (2i− 1)2 − ν2 . This can be manipulated by factorizing the denominator and extracting a factor of 22 from each term in the product (of which there are n + 1). This gives a2n = (−1)n 22n+2 n+1∏ i=1 1 i− 12 + 1 2ν n+1∏ i=1 1 i− 12 − 1 2ν = (−1)n 22n+2 1( 1 2 + 1 2ν ) n+1 ( 1 2 − 1 2ν ) n+1 . Hence the particular integral of the differential equation is y(x) = x ∞∑ n=0 (−1)n 22n+2 x2n( 1 2 + 1 2ν ) n+1 ( 1 2 − 1 2ν ) n+1 . (3.35) In fact, solutions of the equation x2 d 2y dx2 + x dy dx + (x2 − ν2)y = xµ+1 (3.36) are commonly referred to as sµ,ν and are called Lommel’s functions. They are undefined when µ± ν is an odd negative integer, a case that is discussed in depth by Watson (1922). The series expansion of the solution of (3.36) is y(x) = xµ−1 ∞∑ m=0 (−1)m ( 1 2x )2m+2 Γ ( 12µ− 12ν + 12)Γ ( 12µ + 12ν + 12) Γ ( 1 2µ− 1 2ν + m + 3 2 ) Γ ( 1 2µ + 1 2ν + m + 3 2 ) . We can use this to check that (3.35) is correct. Note that we need to use (3.6). 3.8 Solutions Expressible as Bessel Functions There are many other differential equations whose solutions can be written in terms of Bessel functions. In order to determine some of these, we consider the transfor- mation y(x) = xαỹ(xβ). Since α and β could be fractional, we will restrict our attention to x  0. We substitute this expression into the differential equation and seek values of α and β which give Bessel’s equation for ỹ. Example Let’s try to express the solutions of the differential equation d 2y dx2 − xy = 0 in terms of Bessel functions. This is called Airy’s equation and has solutions Ai(x) and Bi(x), the Airy functions. We start by introducing the function ỹ. 80 BESSEL FUNCTIONS Differentiating with respect to x we have dy dx = αxα−1ỹ + βxα+β−1ỹ′. Differentiating again we obtain d 2y dx2 = α(α− 1)xα−2ỹ + (2αβ + β2 − β)xα+β−2ỹ′ + β2xα+2β−2ỹ′′. These expressions can now be substituted into Airy’s equation to give α(α− 1)xα−2ỹ + (2αβ + β2 − β)xα+β−2ỹ′ + β2xα+2β−2ỹ′′ − xα+1ỹ = 0. It is now convenient to multiply the entire equation by x−α+2/β2 (this means that the coefficient of ỹ′′ is x2β), which gives x2β ỹ′′ + (2α + β − 1) β xβ ỹ′ + { − 1 β2 x3 + α(α− 1) β2 } ỹ = 0. Considering the coefficient of ỹ we note that we require x3 ∝ x2β which gives β = 32 . The coefficient of ỹ′ gives us that α = 12 . The equation is now x2β ỹ′′ + xβ ỹ′ + { − ( 2xβ 3 )2 − 1 9 } ỹ = 0, which has solutions K1/3(2x3/2/3) and I1/3(2x3/2/3). The general solution of Airy’s equation in terms of Bessel’s functions is therefore y(x) = x1/2 { AK1/3 ( 2 3 x3/2 ) + BI1/3 ( 2 3 x3/2 )} . In fact, Ai(x) = √ x/3K1/3(2x3/2/3)/π. A graph of this Airy function is shown in Figure 11.12. The Airy functions Ai and Bi are available in MATLAB through the function airy. 3.9 Physical Applications of the Bessel Functions 3.9.1 Vibrations of an Elastic Membrane We will now derive the equation that governs small displacements, z = z(x, y, t), of an elastic membrane. We start by considering a small membrane with sides of length δSx in the x-direction and δSy in the y-direction, which makes angles ψx, ψx+δψx and ψy, ψy+δψy with the horizontal, as shown in Figure 3.8. 
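As a brief aside before continuing with this derivation, the relation Ai(x) = √(x/3) K1/3(2x^(3/2)/3)/π from Section 3.8 is easy to verify numerically; a minimal sketch:

    x = [0.5 1 2 5];
    lhs = airy(0, x);                                % Ai(x)
    rhs = sqrt(x/3).*besselk(1/3, 2*x.^(3/2)/3)/pi;
    max(abs(lhs - rhs))                              % round-off level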
Newton’s second law of motion in the vertical, z-direction gives ∂2z ∂t2 ρδSxδSy = δSy{T sin(ψx + δψx) − T sin(ψx)} + δSx{T sin(ψy + δψy) − T sin(ψy)}, where ρ is the constant density (mass per unit area) of the membrane and T is the tension, assumed constant for small vibrations of the membrane. We will eventually consider the angles ψx and ψy to be small, but at the outset we will consider the 3.9 PHYSICAL APPLICATIONS OF THE BESSEL FUNCTIONS 83 these come from). In terms of these variables, (3.39) becomes ∂2z ∂ξ∂η = 0. Integrating this with respect to η gives ∂z ∂ξ = F (ξ), and with respect to ξ z = ∫ ξ F (s) ds + g(η) = f(ξ) + f(η), and hence we have that z(x, t) = f(x− ct) + g(x + ct). (3.40) This represents the sum of two waves, one, represented by f(x − ct), propagating from left to right without change of form at speed c, and one, represented by g(x+ct), from right to left at speed c. To see this, consider, for example, the solution y = f(x − ct), and simply note that on the paths x = ct + constant, f(x − ct) is constant. Similarly, g(x + ct) is constant on the paths x = −ct + constant. The functions f and g can be determined from the initial displacement and velocity. If z(x, 0) = z0(x), ∂z ∂t (x, 0) = u0(x), then (3.40) with t = 0 gives us f(x) + g(x) = z0(x). (3.41) If we now differentiate (3.40) with respect to time, we have ∂z ∂t = −cf ′(x− ct) + cg′(x + ct), and hence when t = 0, −cf ′(x) + cg′(x) = u0(x). This can be integrated to yield −cf(x) + cg(x) = ∫ x a u0(s) ds, (3.42) where a is an arbitrary constant. Solving the simultaneous equations (3.41) and (3.42), we find that f(x) = 1 2 z0(x) − 1 2c ∫ x a u0(s) ds, g(x) = 1 2 z0(x) + 1 2c ∫ x a u0(s) ds. On substituting these into (3.40), we obtain d’Alembert’s solution of the one- dimensional wave equation, z(x, t) = 1 2 {z0(x− ct) + z0(x + ct)} + 1 2c ∫ x+ct x−ct u0(s) ds. (3.43) 84 BESSEL FUNCTIONS In particular, if u0 = 0, a string released from rest, the solution consists of a left-travelling and a right-travelling wave, each with the same shape but half the amplitude of the initial displacement. The solution when z0 = 0 for |x| > a and z0 = 1 for |x| < a, a top hat initial displacement, is shown in Figure 3.10. Fig. 3.10. D’Alembert’s solution for an initially stationary top hat displacement. Two-Dimensional Solutions of the Wave Equation Let’s now consider the solution of the two-dimensional wave equation, (3.38), for an elastic, disc-shaped membrane fixed to a circular support. In cylindrical polar coordinates, the two-dimensional wave equation becomes 1 c2 ∂2z ∂t2 = ∂2z ∂r2 + 1 r ∂z ∂r + 1 r2 ∂2z ∂θ2 . (3.44) We will look for solutions in the circular domain 0  r  a and 0  θ < 2π. Such solutions must be periodic in θ with period 2π. The boundary condition is z = 0 at r = a. We seek a separable solution, z = R(r)τ(t)Θ(θ). On substituting this into (3.44), we find that 1 c2 τ ′′ τ = rR′′ + R′ rR + 1 r2 Θ′′ Θ = −ω 2 c2 , 3.9 PHYSICAL APPLICATIONS OF THE BESSEL FUNCTIONS 85 where −ω2/c2 is the separation constant. An appropriate solution is τ = eiωt, which represents a time-periodic solution. This is what we would expect for the vibrations of an elastic membrane, which is, after all, a drum. The angular frequency, ω, is yet to be determined. We can now write r2R′′ + rR′ R + ω2r2 c2 = −Θ ′′ Θ = n2, where n2 is the next separation constant. This gives us Θ = A cosnθ + B sinnθ, with n a positive integer for 2π-periodicity. Finally, R(r) satisfies r2 d 2R dr2 + r dR dr + ( ω2 c2 r2 − n2 ) R = 0. 
We can simplify this by introducing a new coordinate s = λr where λ = ω/c, which gives s2 d 2R ds2 + s dR ds + ( s2 − n2 ) R = 0, which is Bessel’s equation with ν = n. Consequently the solutions can be written as R(s) = AJn(s) + BYn(s). We need a solution that is bounded at the origin so, since Yn(s) is unbounded at s = 0, we require B = 0. The other boundary condition is that the membrane is constrained not to move at the edge of the domain, so that R(s) = 0 at s = λa. This gives the condition Jn(λa) = 0, (3.45) which has an infinite number of solutions λ = λni. Specifying the value of λ = ω/c prescribes the frequency at which the membrane will oscillate. Consequently the functional form of the natural modes of oscillation of a circular membrane is z = Jn(λnir) (A cosnθ + B sinnθ) eiωit, (3.46) where ωi = cλni and the values of λni are solutions of (3.45). Figure 3.11 shows a few of these natural modes when a = 1, which we created using the MATLAB function ezmesh (see Section 2.6.1). Here we have considered the natural modes of oscillation. We could however have tackled an initial value problem. Let’s consider an example where the initial displacement of the membrane is specified to be z(r, θ, 0) = G(r, θ) and the mem- brane is released from rest, so that ∂z/∂t = 0, when t = 0. The fact that the membrane is released from rest implies that the temporal variation will be even, so we need only consider a solution proportional to cosωit. Consequently, using the linearity of the wave equation to add all the possible solutions of the form (3.46), the general form of the displacement of the membrane is z(r, θ, t) = ∞∑ i=0 ∞∑ n=0 Jn(λnir) (Ani cosnθ + Bni sinnθ) cosωit. 88 BESSEL FUNCTIONS Fig. 3.12. An example of frequency modulation, with β = 1.2, fm = 1.2 and fc = 1. and 2 sin(2πfct) sin {(2n + 1)2πfmt} = − cos {2πfct + (2n + 1)2πfmt} + cos {2πfct− (2n + 1)2πfmt} . Substituting these expressions back into (3.50), we find that Fc(t) can be written as Fc(t) = J0(β) cos(2πfct) + ∞∑ n=1 J2n(β) [cos {2πt (fc + 2nfm)} + cos {2πt (fc − 2nfm)}] − ∞∑ n=0 J2n+1(β) [cos {2πt (fc − (2n + 1)fm)} − cos {2πt (fc + (2n + 1)fm)}] , the first few terms of which are given by Fc(t) = J0(β) cos(2πfct) − J1(β) [cos {2π(fc − fm)t} − cos {2π(fc + fm)t}] +J2(β) [cos {2π(fc − 2fm)t} + cos {2π(fc + 2fm)t}] −J3(β) [cos {2π(fc − 3fm)t} − cos {2π(fc + 3fm)t}] + · · · . This means that the main carrier signal has frequency fc and amplitude J0(β), and that the other components, known as sidebands, have frequencies fc ± nfm for EXERCISES 89 Fig. 3.13. The frequency spectrum of a signal with β = 0.2, fm = 1.2 and fc = 1. n an integer, and amplitudes Jn(β). These amplitudes, known as the frequency spectrum of the signal, are shown in Figure 3.13. Exercises 3.1 The functions J0(x) and Y0(x) are solutions of the equation x d 2y dx2 + dy dx + xy = 0. Show that if Y0(x) is the singular solution then Y0(x) = J0(x) log x− ∞∑ i=1 φ(i) (i!)2 ( −x 2 4 )i , where φ(i) = i∑ k=1 1 k . 3.2 Using the definition Jν(x) = xν 2νΓ(1 + ν) ∞∑ i=0 (−x2/4)i i!(1 + ν)i , show that J1/2(x) = √ 2 πx sinx. 90 BESSEL FUNCTIONS 3.3 ∗ Show that (a) 2J ′ν(x) = Jν−1(x) − Jν+1(x), (b) 2νJν(x) = xJν+1(x) + xJν−1(x). 3.4 Determine the series expansions for the modified Bessel functions Iν(x) and I−ν(x). 3.5 Find the Wronskian of the functions (a) Jν(x) and J−ν(x), (b) Jν(x) and Yν(x) for ν not an integer. 3.6 Determine the series expansion of∫ xµJν(x) dx. When µ = 1 and ν = 0 show that this is equivalent to xJ1(x). 
3.7 Give the solutions, where possible in terms of Bessel functions, of the differential equations
(a) d²y/dx² − x²y = 0,
(b) x d²y/dx² + dy/dx + y = 0,
(c) x d²y/dx² + (x + 1)²y = 0,
(d) d²y/dx² + α²y = 0,
(e) d²y/dx² − α²y = 0,
(f) d²y/dx² + β dy/dx + γy = 0,
(g) (1 − x²) d²y/dx² − 2x dy/dx + n(n + 1)y = 0.
3.8 Using the expression
Jν(z) = (1/2π) ∫₀^2π cos(νθ − z sin θ) dθ,
show that Jν(0) = 0 for ν a nonzero integer and J0(0) = 1.
3.9 Determine the coefficients of the Fourier–Bessel series for the function
f(x) = { 1 for 0 ≤ x < 1, −1 for 1 ≤ x ≤ 2,
in terms of the Bessel function J0(x).
3.10 Determine the coefficients of the Fourier–Bessel series for the function f(x) = x on the interval 0 ≤ x ≤ 1 in terms of the Bessel function J1(x) (and repeat the exercise for J2(x)). Modify the MATLAB code used to generate Figure 3.7 to check your answers.
3.11 Calculate the Fourier–Bessel expansion for the functions