Numerical Methods for Engineers eBook

 
    Contents
  1. Multiphysics Modeling: Numerical Methods and Engineering Applications
  2. Numerical Methods for Engineers By Steven C. Chapra, Raymond P. Canale
  3. Numerical Analysis for Science, Engineering and Technology: Volume 1 | Bentham Science
  4. Numerical Methods for Engineers, 6th Edition

Editorial Reviews. About the Author: Steven C. Chapra (Medford, MA) is Professor of Civil and Environmental Engineering at Tufts University; Numerical Methods for Engineers is also available as a Kindle edition. Also covered here is Jaan Kiusalaas's Numerical Methods in Engineering with MATLAB; Kiusalaas has taught numerical methods, including finite element and boundary element methods.



Numerical Methods for Engineers eBook

Numerical Methods for Engineers / Steven C. Chapra, Berger Chair in Computing and Engineering, Tufts University, and Raymond P. Canale, Professor Emeritus of Civil Engineering, University of Michigan. A free PDF download link is provided for Numerical Methods for Engineers by Steven C. Chapra and Raymond P. Canale. Also listed: Numerical Methods for Engineers and Scientists, Second Edition, Revised and Expanded, by Joe D. Hoffman, Department of Mechanical Engineering.

Using Fortran 95 to solve a range of practical engineering problems, Numerical Methods for Engineers, Second Edition provides an introduction to numerical methods, incorporating theory with concrete computing exercises and programmed examples of the techniques presented. Covering a wide range of numerical applications that have immediate relevance for engineers, the book describes forty-nine programs in Fortran 95. In addition, there is a precision module that controls the precision of calculations. Well respected in their field, the authors discuss a variety of numerical topics related to engineering, including the numerical solution of sets of linear algebraic equations; roots of single nonlinear equations and sets of nonlinear equations; numerical quadrature, or the numerical evaluation of integrals; and an introduction to the solution of partial differential equations using finite difference and finite element approaches. Describing concise programs that are constructed from sub-programs wherever possible, the book presents many different contexts of numerical analysis and forms an excellent introduction to more comprehensive subroutine libraries such as those of the Numerical Algorithms Group (NAG).

Thus, the successful outcome of the previous example is not guaranteed. Despite this, we have found Solver useful enough to make it a feasible option for quickly obtaining roots in a wide range of engineering applications. MATLAB is superb at manipulating polynomials and locating their roots. The fzero function is designed to locate one root of a single function. A simplified representation of its syntax is fzero(f,x0,options), where f is the function you are analyzing, x0 is the initial guess, and options are the optimization parameters (these are changed using the function optimset).

If options are omitted, default values are employed. Note that either one guess or two bracketing guesses can be employed. The same applies to the components of the constant vector b. The algorithm for the elimination phase now almost writes itself. Therefore, Aik is not replaced by zero, but retains its original value (the multiplier). During back substitution b is overwritten by the solution vector x, so that b contains the solution upon exit.
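
For concreteness, here is a small usage sketch of fzero (the function handle and the starting values are invented for the example; TolX is one of the optimset parameters):

    % Find a root of f(x) = x^3 - 10x^2 + 5 near x = 0.7
    f = @(x) x.^3 - 10*x.^2 + 5;          % function to be analyzed
    options = optimset('TolX',1.0e-9);    % tighten the convergence tolerance
    root = fzero(f,0.7,options)           % one guess ...
    root = fzero(f,[0.6 0.8])             % ... or two bracketing guesses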

Let there be m such constant vectors, denoted by b1, b2, ..., bm. The solutions are then obtained by back substitution in the usual manner, one vector at a time. It would be quite easy to make the corresponding changes in gauss. However, the LU decomposition method, described in the next article, is more versatile in handling multiple constant vectors.
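
A minimal sketch of that modification (not the book's gauss listing; the name gaussMulti and the no-pivoting assumption are ours), with the m constant vectors stored as the columns of a matrix B:

    % Gauss elimination with several constant vectors;
    % A is n x n, B is n x m, and no pivoting is performed.
    function X = gaussMulti(A,B)
    n = size(A,1);
    for k = 1:n-1                      % elimination phase
        for i = k+1:n
            lambda = A(i,k)/A(k,k);
            A(i,k+1:n) = A(i,k+1:n) - lambda*A(k,k+1:n);
            B(i,:) = B(i,:) - lambda*B(k,:);
        end
    end
    for k = n:-1:1                     % back substitution, one row at a time
        B(k,:) = (B(k,:) - A(k,k+1:n)*B(k+1:n,:))/A(k,k);
    end
    X = B;                             % B now holds the solution vectors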

Solution: We used the program shown below. After constructing A and b, the output format was changed to long so that the solution would be printed to 14 decimal places. Here are the results. LU decomposition is not unique (the combinations of L and U for a prescribed A are endless) unless certain constraints are placed on L or U.

These constraints distinguish one type of decomposition from another. Three commonly used decompositions (Doolittle's, Crout's and Choleski's) are listed in Table 2. The cost of each additional solution is relatively small, since the forward and back substitution operations are much less time-consuming than the decomposition process.

The diagonal elements of L do not have to be stored, since it is understood that each of them is unity. The contents of b are replaced by y during forward substitution. Similarly, back substitution overwrites y with the solution x. We study it here because it is invaluable in certain other applications. By solving these equations in a certain order, it is possible to have only one unknown in each equation.
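
A sketch of this solution phase (assuming A already stores U above the diagonal and the multipliers of L below it, and that b is a column vector; the function name is ours):

    % Solution phase of Doolittle's decomposition: b <- y <- x
    function x = LUsolveSketch(A,b)
    n = length(b);
    for k = 2:n                        % forward substitution (unit diagonal of L)
        b(k) = b(k) - A(k,1:k-1)*b(1:k-1);
    end
    for k = n:-1:1                     % back substitution
        b(k) = (b(k) - A(k,k+1:n)*b(k+1:n))/A(k,k);
    end
    x = b;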

Consider the lower triangular portion of each matrix in Eq. Taking the term containing Lij outside the summation in Eq. Therefore, once Lij has been computed, Aij is no longer needed. This makes it possible to write the elements of L over the lower triangular portion of A as they are computed. The elements above the principal diagonal of A will remain untouched.

If a negative Ljj^2 is encountered during decomposition, an error message is printed and the program is terminated. Substituting the given matrix for A in Eq. Then LUsol is used to compute the solution one vector at a time. By evaluating the determinant, classify the following matrices as singular, ill-conditioned or well-conditioned.
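
A compact sketch of Choleski's decomposition as described above (A is assumed symmetric; the error test mirrors the one mentioned in the text, and the function name is ours):

    % Choleski's decomposition: L is written over the lower triangle of A
    function L = choleskiSketch(A)
    n = size(A,1);
    for j = 1:n
        temp = A(j,j) - A(j,1:j-1)*A(j,1:j-1)';
        if temp < 0                    % A is not positive definite
            error('Matrix is not positive definite')
        end
        A(j,j) = sqrt(temp);
        for i = j+1:n
            A(i,j) = (A(i,j) - A(i,1:j-1)*A(j,1:j-1)')/A(j,j);
        end
    end
    L = tril(A);                       % elements above the diagonal are untouched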

If all the nonzero terms are clustered about the leading diagonal, then the matrix is said to be banded. All the elements lying outside the band are zero. The matrix shown above has a bandwidth of three, since there are at most three nonzero elements in each row or column. Such a matrix is called tridiagonal. The original vectors c and d are destroyed and replaced by the vectors of the decomposed matrix. The vector y overwrites the constant vector b during the forward substitution.

Similarly, the solution vector x replaces y in the back substitution process. Thus Gauss elimination, which results in an upper triangular matrix of the form shown in Eq. There is an alternative storage scheme that can be employed during LU decomposition. If elimination has progressed to the stage where the kth row has become the pivot row, we have the following situation: the original vectors d, e and f are destroyed and replaced by the vectors of the decomposed matrix.
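
For the tridiagonal case, the decomposition and solution phases reduce to a few lines. The following sketch combines them (c, d and e are the sub-, main and superdiagonals, as in the text; the in-line form and the function name are ours):

    % LU decomposition and solution of a tridiagonal system
    function x = triSolveSketch(c,d,e,b)
    n = length(d);
    for k = 2:n                        % decomposition; c and d are overwritten
        lambda = c(k-1)/d(k-1);
        d(k) = d(k) - lambda*e(k-1);
        c(k-1) = lambda;               % store the multiplier
    end
    for k = 2:n                        % forward substitution: b <- y
        b(k) = b(k) - c(k-1)*b(k-1);
    end
    b(n) = b(n)/d(n);                  % back substitution: y <- x
    for k = n-1:-1:1
        b(k) = (b(k) - e(k)*b(k+1))/d(k);
    end
    x = b;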

As in LUsol3, the vector y overwrites the constant vector b during forward substitution and x replaces y during back substitution. However, Gauss elimination fails immediately due to the presence of a zero pivot element. The above example demonstrates that it is sometimes essential to reorder the equations during the elimination phase.

The reordering, or row pivoting, is also required if the pivot element is not zero but very small in comparison to other elements in the pivot row, as demonstrated by the following set of equations. This is the principle behind scaled row pivoting, discussed next. The vector s can be obtained with the following algorithm; note that the corresponding row interchange must also be carried out in the scale factor array s.
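
The following fragment sketches the scale factors and the pivot-row selection of scaled row pivoting (a simplified variant of the book's gaussPiv; the variable names are ours):

    % Scaled row pivoting: s(i) holds the largest |A(i,j)| in row i
    s = max(abs(A),[],2);
    for k = 1:n-1
        [~,p] = max(abs(A(k:n,k))./s(k:n));  % row with best relative pivot size
        p = p + k - 1;
        if p ~= k                            % swap rows of A, b and s
            A([k p],:) = A([p k],:);
            b([k p]) = b([p k]);
            s([k p]) = s([p k]);
        end
        for i = k+1:n                        % elimination proceeds as usual
            lambda = A(i,k)/A(k,k);
            A(i,k:n) = A(i,k:n) - lambda*A(k,k:n);
            b(i) = b(i) - lambda*b(k);
        end
    end

Back substitution is then identical to that of gauss.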

Apart from row swapping, the elimination and solution phases are identical to those of the function gauss in Art. The most important of these changes is keeping a record of the row interchanges during the decomposition phase.

In LUdecPiv this record is kept in the permutation array perm, initially set to [1, 2, ..., n]. Whenever two rows are interchanged, the corresponding interchange is also carried out in perm. Thus perm shows how the original rows were permuted. This information is then passed to the function LUsolPiv, which rearranges the elements of the constant vector in the same order before carrying out forward and back substitutions. There are no infallible rules for determining when pivoting should be used.

And we should not forget that pivoting is not the only means of controlling roundoff errors; there is also double precision arithmetic. It should be strongly emphasized that the above rules of thumb are only meant for equations that stem from real engineering problems. Therefore, it is excluded from further consideration. As r32 is larger than r22, the third row is the better pivot row. It should be noted that U is the matrix that would result in the LU decomposition of the following row-wise permutation of A (the ordering of rows is the same as achieved by pivoting). Alternate Solution: It is not necessary to physically exchange equations during pivoting.

The elimination would then proceed as follows (for the sake of brevity, we skip repeating the details of choosing the pivot equation). In hand computations this is not a problem, because we can determine the order by inspection. The contents of p indicate the order in which the pivot rows were chosen. The equations are solved by back substitution in the reverse order. By dispensing with swapping of equations, the scheme outlined above would probably result in a faster but more complex algorithm than gaussPiv, but the number of equations would have to be quite large before the difference becomes noticeable.

The spring stiffnesses are denoted by ki, the weights of the masses are Wi, and xi are the displacements of the masses measured from the positions where the springs are undeformed. Write a program that solves these equations, given k and W. The differences are: For the statically determinate truss shown, the equilibrium equations of the joints are: Write a program that solves these equations for any given n (pivoting is recommended). The proof is simple. Inversion of large matrices should be avoided whenever possible due to its high cost.


As seen from Eq., if LU decomposition is employed in the solution, the solution phase (forward and back substitution) must be repeated n times, once for each bi.
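
As a sketch, the inverse can be assembled one column at a time (LUdec and LUsol here stand for decomposition and solution routines of the kind described earlier; the exact names are an assumption):

    % Invert A by solving A*X = I with one decomposition and n solution phases
    n = size(A,1);
    A = LUdec(A);                 % decompose once
    X = zeros(n);
    for i = 1:n
        e = zeros(n,1); e(i) = 1; % i-th column of the identity matrix
        X(:,i) = LUsol(A,e);      % one forward/back substitution per column
    end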

However, the inverse of a triangular matrix remains triangular. Iterative, or indirect, methods start with an initial guess of the solution x and then repeatedly improve the solution until the change in x becomes negligible. Since the required number of iterations can be very large, the indirect methods are, in general, slower than their direct counterparts.

However, iterative methods do have the following advantages that make them attractive for certain problems: only the nonzero elements of the coefficient matrix need to be stored, which makes it possible to deal with very large matrices that are sparse, but not necessarily banded; and iterative procedures are self-correcting, meaning that roundoff errors (or even arithmetic mistakes) in one iterative cycle are corrected in subsequent cycles. A serious drawback of iterative methods is that they do not always converge to the solution.

The initial guess for x plays no role in determining whether convergence takes place: if the procedure converges for one starting vector, it will do so for any starting vector. The initial guess affects only the number of iterations that are required for convergence. If a good guess for the solution is not available, x can be chosen randomly. Equation 2. This completes one iteration cycle. Convergence of the Gauss-Seidel method can be improved by a technique known as relaxation. The idea is to take the new value of xi as a weighted average of its previous value and the value predicted by Eq. (see the sweep sketched below).
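
One sweep of Gauss-Seidel iteration with relaxation might be sketched as follows (x must be a column vector; omega is the relaxation factor, and the function name is ours):

    % One Gauss-Seidel sweep with relaxation factor omega
    function x = gaussSeidelSweep(A,b,x,omega)
    n = length(b);
    for i = 1:n
        xOld = x(i);
        x(i) = (b(i) - A(i,[1:i-1 i+1:n])*x([1:i-1 i+1:n]))/A(i,i);
        x(i) = omega*x(i) + (1 - omega)*xOld;   % weighted average
    end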

Taking the weighting factor smaller than one is called underrelaxation. The user must provide the function iterEqs that computes the improved x from the iterative formulas in Eq. When the search direction is taken as the negative gradient, the resulting procedure is known as the method of steepest descent. It is not a popular algorithm due to slow convergence. Now suppose that we have carried out enough iterations to have computed the whole set of n residual vectors. It thus appears that the conjugate gradient algorithm is not an iterative method at all, since it reaches the exact solution after n computational cycles.

In practice, however, convergence is usually achieved in less than n iterations.

Multiphysics Modeling: Numerical Methods and Engineering Applications

The conjugate gradient method is not competitive with direct methods in the solution of small sets of equations. Its strength lies in the handling of large, sparse systems where most elements of A are zero. It is important to note that A enters the algorithm only through its multiplication by a vector; i.e., only the product Av is ever needed.
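
A compact sketch of the conjugate gradient iteration (Av is a user-supplied handle returning the product A*v, so that A itself never has to be formed; the names and the tolerance test are ours):

    % Conjugate gradient method for A*x = b, A symmetric positive definite
    function x = conjGradSketch(Av,b,x,tol)
    n = length(b);
    r = b - Av(x);                  % initial residual
    s = r;                          % initial search direction
    for i = 1:n                     % at most n cycles in exact arithmetic
        u = Av(s);
        alpha = (r'*r)/(s'*u);      % step length
        x = x + alpha*s;
        rNew = r - alpha*u;         % updated residual
        if sqrt(rNew'*rNew) < tol, break, end
        beta = (rNew'*rNew)/(r'*r);
        s = rNew + beta*s;          % new direction, conjugate to the old one
        r = rNew;
    end

A call might look like x = conjGradSketch(@(v) A*v, b, zeros(n,1), 1.0e-9).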

The maximum allowable number of iterations is set to n. This function must be supplied by the user (see Example 2.). We must also supply the starting vector x and the constant right-hand-side vector b. Solution: The conjugate gradient method should converge after three iterations. The small discrepancy is caused by roundoff errors in the computations. Solution: In this case the iterative formulas are those in Eq. The solution vector x is initialized to zero in the program, which also sets up the constant vector b.

Invert the following matrices: If Eq. The inversion procedure should contain only forward substitution. Solve the following equations with the Gauss-Seidel method. If the equations are overdetermined (A has more rows than columns), the least-squares solution is computed. On return, U is an upper triangular matrix and L contains a row-wise permutation of the lower triangular matrix.

A banded matrix in sparse form can be created with the spdiags command, A = spdiags(B,d,n,n), where the columns of B hold the diagonals of A and the vector d lists the diagonal numbers (see the example below). The columns of B may be longer than the diagonals they represent. A diagonal in the upper part of A takes its elements from the lower part of a column of B, while a lower diagonal uses the upper part of B. The printout of a sparse matrix displays the values of the nonzero elements and their indices (row and column numbers) in parentheses. Almost all matrix functions, including the ones listed above, also work on sparse matrices.
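
For example, a sparse tridiagonal matrix can be built as follows:

    % Build a sparse n x n tridiagonal matrix with spdiags
    n = 5;
    B = [-ones(n,1) 2*ones(n,1) -ones(n,1)];  % columns of B hold the diagonals
    d = [-1 0 1];                             % diagonal numbers (0 = main)
    A = spdiags(B,d,n,n);
    full(A)                                   % display A as a dense matrix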

The source of the data may be experimental observations or numerical computations. In interpolation we construct a curve through the data points.

Numerical Methods for Engineers By Steven C. Chapra, Raymond P. Canale

In doing so, we make the implicit assumption that the data points are accurate and distinct. Curve fitting, by contrast, is applied to data that contain scatter, so the curve does not have to hit the data points. This property is illustrated in Fig. [Figure: example of quadratic cardinal functions.] It is instructive to note that the farther a data point is from x, the more it contributes to the error at x. Each pass through the for-loop generates the entries in the next column, which overwrite the corresponding elements of a. Therefore, a ends up containing the diagonal terms of Table 3.
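
Once the coefficient array a is available, the interpolant can be evaluated at any x by nested multiplication, as in this sketch (the function name is ours):

    % Evaluate Newton's interpolating polynomial at x (x may be an array)
    function p = newtonEval(a,xData,x)
    n = length(xData);
    p = a(n);
    for k = n-1:-1:1
        p = a(k) + (x - xData(k)).*p;
    end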

This works well if the interpolation is carried out repeatedly at different values of x using the same polynomial. Each pass through the for-loop computes the terms in the next column of the table, which overwrite the previous elements of y.

At the end of the procedure, y contains the diagonal terms of the table. Three to six nearest-neighbor points produce good results in most cases. An interpolant intersecting more than six points must be viewed with suspicion. The reason is that the data points that are far from the point of interest do not contribute to the accuracy of the interpolant. In fact, they can be detrimental. The danger of using too many points is illustrated in Fig.

There are 11 equally spaced data points represented by the circles. The solid line is the interpolant, a polynomial of degree ten, that intersects all the points. A much smoother result would be obtained by using a cubic interpolant spanning four nearest-neighbor points. [Figure: polynomial interpolant displaying oscillations.] As an example, consider Fig. There are six data points, shown as circles. [Figure: extrapolation may not follow the trend of the data.]

If extrapolation cannot be avoided, the following two measures can be useful. First, use a low-order polynomial: a linear or quadratic interpolant, for example, would yield a reasonable estimate of y(14) for the data in Fig. Second, work with a logarithmic plot of the data; frequently this plot is almost a straight line. This is illustrated in Fig. [Figure: logarithmic plot of the data in Fig.]

Determine the degree of this polynomial by constructing the divided difference table, similar to Table 3. Hence the polynomial is a cubic. Solution: This is an example of inverse interpolation, where the roles of x and y are interchanged. Employ the format of Table 3. [Figure 3.: mechanical model of a natural cubic spline; an elastic strip pinned at the data points.]

Numerical Analysis for Science, Engineering and Technology:: volume 1 | Bentham Science

The mechanical model of a cubic spline is shown in Fig. It is a thin, elastic strip that is attached with pins to the data points. At the pins, the slope and the bending moment (and hence the second derivative) are continuous. There is no bending moment at the two end pins; hence the second derivative of the spline is zero at the end points.

Since these end conditions occur naturally in the beam model, the resulting curve is known as the natural cubic spline. The pins, i.e., the data points, are called the knots of the spline. [Figure: cubic spline.] The last two terms in Eq. This task is carried out by the function splineCurv. It returns the segment number; that is, the value of the subscript i in Eq. The second derivatives at the other knots are obtained from Eq. The corresponding interpolant is obtained from Eq.

The interpolant can now be evaluated from Eq. The program must be able to evaluate the interpolant for more than one value of x. Find the zero of y(x) from the following data: The function y(x) represented by the data in Prob.

Given the data x 0 0. Use the method that you consider to be most convenient. Compute the zero of the function y(x) from the following data: Solve Example 3. Black, Z. Kreith, F. Determine the relative density of air at. The form of f(x) is determined beforehand, usually from the theory associated with the experiment from which the data is obtained.

This brings us to the question of what is meant by the best fit. The function S to be minimized is thus the sum of the squares of the residuals. Equations 3. In that case, both the numerator and the denominator in Eq. Substitution into Eq. The normal equations become progressively ill-conditioned with increasing m. Polynomials of high order are not recommended, because they tend to reproduce the noise inherent in the data.
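
A sketch of a degree-m polynomial fit by the normal equations described above (xData and yData are column vectors; the function name is ours, and for large m this system becomes ill-conditioned, as noted):

    % Least-squares fit of the polynomial c(1) + c(2)x + ... + c(m+1)x^m
    function c = polyFitSketch(xData,yData,m)
    s = zeros(2*m+1,1);
    for i = 0:2*m
        s(i+1) = sum(xData.^i);        % sums of powers of x
    end
    A = zeros(m+1);  b = zeros(m+1,1);
    for k = 0:m
        b(k+1) = sum(yData.*xData.^k);
        for j = 0:m
            A(k+1,j+1) = s(k+j+1);     % normal equation matrix
        end
    end
    c = A\b;                           % coefficients of the fit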

The polynomial evaluation in stdDev is carried out by the subfunction polyEval, which is described in Art. For example, the instrument taking the measurements may be more sensitive in a certain range of data. Sometimes the data represent the results of several experiments, each carried out under different circumstances.

We note from Eq. Compute the standard deviation in each case. Following the steps in Example 3. From Eqs. As expected, this result is somewhat different from that obtained in Part 1. The computations of the residuals and standard deviation are as follows: Three tensile tests were carried out on an aluminum bar.

In each test the strain was measured at the same values of stress. The results were tabulated as stress (MPa) against the measured strains. Solve Prob.

The results were: This problem was solved by interpolation in Prob. This problem was solved in Prob. The table shows the variation of the relative thermal conductivity k of sodium with temperature T. Singer, C. Knowing that radioactivity decays exponentially with time: If x is an array, y is computed for all elements of x.

If x is a matrix, s is computed for each column of x. If x is a matrix, xbar is computed for each column of x. Before proceeding further, it might be helpful to review the concept of a function. In numerical computing the rule is invariably a computer algorithm. The roots of equations may be real or complex. Complex zeroes of polynomials are treated near the end of this chapter.

There is no universal recipe for estimating the value of a root. If the equation is associated with a physical problem, then the context of the problem (physical insight) might suggest the approximate location of the root. Otherwise, the function must be plotted, or a systematic numerical search for the roots can be carried out.

One such search method is described in the next article. Prior bracketing is, in fact, mandatory in the methods described in this chapter. Another useful tool for detecting and bracketing roots is the incremental search method. The basic idea behind the incremental search method is simple: if the function changes sign over an interval, the interval is assumed to contain a root. If the interval is small enough, it is likely to contain a single root. There are several potential problems with the incremental search method: for example, an apparent sign change can also occur at a singularity, where the function jumps between negative and positive infinity. However, these locations are not true zeroes, since the function does not cross the x-axis.

[Figure: plot of tan x.] The search starts at a and proceeds in steps dx toward b. Once a zero is detected, rootsearch returns its bounds (x1, x2) to the calling program. This can be repeated as long as rootsearch detects a root; a sketch of the procedure follows.
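
A minimal sketch of such an incremental search (the function name is ours; NaNs are returned if no sign change is found):

    % Search for a zero of f from a toward b in increments dx
    function [x1,x2] = rootSearchSketch(f,a,b,dx)
    x1 = a;  f1 = f(x1);
    x2 = a + dx;  f2 = f(x2);
    while f1*f2 > 0                 % no sign change yet
        if x1 >= b
            x1 = NaN;  x2 = NaN;  return
        end
        x1 = x2;  f1 = f2;
        x2 = x1 + dx;  f2 = f(x2);
    end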

This procedure yields the following results. The technique described next is also known as the interval halving method. Bisection is not the fastest method available for computing roots, but it is the most reliable.

Once a root has been bracketed, bisection will always close in on it. The method of bisection uses the same principle as incremental search: Otherwise, the root lies in x1 , x3 , in which case x2 is replaced by x3. In either case, the new interval x1 , x2 is half the size of the original interval.

The number of bisections n required to reduce the interval to tol is computed from Eq.; since each bisection halves the interval, n = ceil(ln(|x2 - x1|/tol)/ln 2). Solution: The best way to implement the method is to use the table shown below. Note that the interval to be bisected is determined by the sign of f(x), not its magnitude.
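
A sketch of the bisection procedure (the root is assumed bracketed, f(x1)*f(x2) < 0; the function name is ours):

    % Bisection with a precomputed number of halvings
    function root = bisectSketch(f,x1,x2,tol)
    f1 = f(x1);
    n = ceil(log(abs(x2 - x1)/tol)/log(2));  % number of bisections needed
    for i = 1:n
        x3 = 0.5*(x1 + x2);  f3 = f(x3);
        if f1*f3 < 0
            x2 = x3;                 % root lies in (x1,x3)
        else
            x1 = x3;  f1 = f3;       % root lies in (x3,x2)
        end
    end
    root = 0.5*(x1 + x2);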

Utilize the functions rootsearch and bisect. Thus the input argument fex4_3 in rootsearch is a handle for the function fex4_3 listed below. In most problems the method is much faster than bisection alone, but it can become sluggish if the function is not smooth.

[Figure: inverse quadratic iteration.] These points allow us to carry out the next iteration of the root by inverse quadratic interpolation (viewing x as a quadratic function of f). If the result x of the interpolation falls inside the latest bracket (as is the case in Figs.), it is accepted. Otherwise, another round of bisection is applied. [Figure: relabeling points after an iteration.]

We have now recovered the original sequencing of points in Figs. First interpolation cycle: substituting the above values of x and f into the numerator of the quotient in Eq. Second interpolation cycle: applying the interpolation in Eq. Solution 2. The sensible approach is to avoid the potentially troublesome regions of the function by bracketing the root as tightly as possible from a visual inspection of the plot.

The Newton-Raphson formula can be derived from the Taylor series expansion of f(x) about x; truncating the series after the first derivative term and setting the result to zero yields xi+1 = xi - f(xi)/f'(xi). [Figure: graphical interpretation of the Newton-Raphson formula.] The formula approximates f(x) by the straight line that is tangent to the curve at xi. The algorithm for the Newton-Raphson method is simple: only the latest value of x has to be stored.

Here is the algorithm. [Figure: examples where the Newton-Raphson method diverges.] Although the Newton-Raphson method converges fast near the root, its global convergence characteristics are poor. The reason is that the tangent line is not always an acceptable approximation of the function, as illustrated in the two examples in Fig. The midpoint of the bracket is used as the initial guess of the root. The brackets are updated after each iteration. Since newtonRaphson uses the function f(x) as well as its derivative, function routines for both (denoted by func and dfunc in the listing) must be provided by the user; a bare-bones sketch follows.
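
A bare-bones sketch of the iteration (without the safeguarding brackets of the book's newtonRaphson; the names are ours):

    % Newton-Raphson iteration: x <- x - f(x)/f'(x)
    function root = newtonSketch(func,dfunc,x,tol)
    for i = 1:30                    % cap on the number of iterations
        dx = -func(x)/dfunc(x);
        x = x + dx;
        if abs(dx) < tol
            root = x;  return
        end
    end
    error('Too many iterations')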

Compute this root with the Newton—Raphson method. The same argument applies to the function newtonRaphson. We used the following program, which prints the number of iterations in addition to the root: After making the change in the above program, we obtained the result in 5 iterations. The trouble is the lack of a reliable method for bracketing the solution vector x. Therefore, we cannot provide the solution algorithm with a guaranteed good starting value of x, unless such a value is suggested by the physics of the problem.

The simplest and the most effective means of computing x is the Newton-Raphson method. It works well with simultaneous equations, provided that it is supplied with a good starting point. There are other methods that have better global convergence characteristics, but all of them are variants of the Newton-Raphson method.

Newton-Raphson Method: In order to derive the Newton-Raphson method for a system of equations, we start with the Taylor series expansion of fi(x) about the point x. The resulting procedure is:
  1. Estimate the solution vector x.
  2. Evaluate f(x).
  3. Compute the Jacobian matrix J(x) from Eq.
  4. Set up the simultaneous equations in Eq., solve for the correction dx, let x <- x + dx, and repeat from step 2.
As in the one-dimensional case, success of the Newton-Raphson procedure depends entirely on the initial estimate of x.

If a good starting point is used, convergence to the solution is very rapid. Otherwise, the results are unpredictable. This formula can be obtained from Eq. The simultaneous equations in Eq. The function subroutine func that returns the array f x must be supplied by the user.

It is often possible to save computer time by neglecting the changes in the Jacobian matrix between iterations, thus computing J(x) only once. From the plot we also get a rough estimate of the coordinates of an intersection point. Then we would be left with a single equation, which can be solved by the methods described in Arts. In this problem, we obtain from Eq.

Start with the point (1, 1, 1). Find this root with three-decimal-place accuracy by the method of bisection. Use the Newton-Raphson method. Determine this root with the Newton-Raphson method within four decimal places. Utilize the functions rootsearch and brent. You may use the program in Example 4. The maximum compressive stress in the column is given by the so-called secant formula. Start by estimating the locations of the points from a sketch of the circles, and then use the Newton-Raphson method to compute the coordinates.

If the coordinates of three points on the circle are x 8. Note that there are two solutions. But if complex roots are to be computed, it is best to use a method that specializes in polynomials. Here we present a method due to Laguerre, which is reliable and simple to implement. Evaluation of Polynomials It is tempting to evaluate the polynomial in Eq. But computational economy is not the prime reason why this algorithm should be used. Because the result of each multiplication is rounded off, the procedure with the least number of multiplications invariably accumulates the smallest roundoff error.

From Eq.

Moreover, by eliminating (deflating) the roots that have already been found, the chances of computing the same root more than once are avoided. It turns out that the result, which is exact for the special case considered here, works well as an iterative formula with any polynomial. Differentiating Eq. Compute G(x) and H(x) from Eqs.

Determine the improved root r from Eq. This process is repeated until all n roots have been found. If a computed root has a very small imaginary part, it is very likely that it represents roundoff error.

Therefore, polyRoots replaces a tiny imaginary part by zero. Hence the results should be viewed with caution when dealing with polynomials of high degree. Solution: Use the given estimate of the root as the starting value. Determine all the other zeroes of Pn(x) by using a calculator. Problems 10-16: Find all the zeroes of the given Pn(x). Thus the eigenvalues of A are the zeroes of Pn(x).

An equally effective tool is the Taylor series expansion of f(x) about the point xk. The latter has the advantage of providing us with information about the error involved in the approximation. Numerical differentiation is not a particularly accurate process.

For this reason, a derivative of a function can never be computed with the same precision as the function itself. We also record the sums and differences of the series: equations (a) through (h) can be viewed as simultaneous equations that can be solved for various derivatives of f(x).

The number of equations involved and the number of terms kept in each equation depend on the order of the derivative and the desired degree of accuracy. The term O(h^2) reminds us that the truncation error behaves as h^2.

Table 5. For example, consider the situation where the function is given at the n discrete points x1, x2, ..., xn. Since central differences use values of the function on each side of x, we would be unable to compute the derivatives at x1 and xn. Solving Eq. We can derive the approximations for higher derivatives in the same manner. For example, Eqs. The results are shown in Tables 5. The common practice is to use expressions of O(h^2). To obtain noncentral difference formulas of this order, we have to retain more terms in the Taylor series.
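
For reference, the two most used O(h^2) central difference formulas, with a small numerical check (the test function and the values of x and h are invented for the example):

    % First and second central difference approximations
    f = @(x) exp(-x).*sin(x);
    x = 1.0;  h = 1.0e-4;
    d1 = (f(x+h) - f(x-h))/(2*h)           % ~ f'(x),  error O(h^2)
    d2 = (f(x+h) - 2*f(x) + f(x-h))/h^2    % ~ f''(x), error O(h^2)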

Numerical Methods for Engineers, 6th Edition

We start with Eqs. As you can see, the computations for high-order derivatives can become rather tedious. If h is made too small, the difference between the function values is dominated by rounding, and the effect on the roundoff error can be profound. On the other hand, we cannot make h too large, because then the truncation error would become excessive. This unfortunate situation has no remedy, but we can obtain some relief by taking the following precautions: use double precision arithmetic, and employ finite difference formulas that are accurate to at least O(h^2). We carry out the calculations with six- and eight-digit precision, using different values of h.

Authors: Steven C. Chapra and Raymond P. Canale. Publisher: McGraw-Hill. Edition: Sixth. Book description: Students love it because it is written for them, with clear explanations and examples throughout. The text features a broad array of applications that span all engineering disciplines. The sixth edition retains the successful instructional techniques of earlier editions.

This prepares the student for upcoming problems in a motivating and engaging manner. Much more than a summary, the Epilogue deepens understanding of what has been learned and provides a peek into more advanced methods.
