What is a Taylor polynomial? For what purposes are Taylor polynomials used?
What is a Taylor series?
How do we determine the accuracy when we use a Taylor polynomial to approximate a function?
So far, each infinite series we have discussed has been a series of real numbers, such as
\[
1+\frac{1}{2}+\frac{1}{4}+\cdots+\frac{1}{2^k}+\cdots=\sum_{k=0}^{\infty} \frac{1}{2^k}
\]
In the remainder of this chapter, we will include series that involve a variable. For instance, if in the geometric series in Equation (8.5.1) we replace the ratio \(r=\frac{1}{2}\) with the variable \(x\), we have the infinite (still geometric) series
\[
1+x+x^2+\cdots+x^k+\cdots=\sum_{k=0}^{\infty} x^k
\]
Here we see something very interesting: because a geometric series converges whenever its ratio \(r\) satisfies \(|r|<1\), we can say that for \(|x|<1\),
\[
1+x+x^2+\cdots+x^k+\cdots=\frac{1}{1-x} .
\]
Equation (8.5.3) states that the non-polynomial function \(\frac{1}{1-x}\) on the right is equal to the infinite polynomial expression on the left. Because the terms on the left get very small as \(k\) gets large, we can truncate the series and say, for example, that
\[
1+x+x^2+x^3 \approx \frac{1}{1-x} \nonumber
\]
Preview Activity \(\PageIndex{1}\) illustrates the first steps in the process of approximating functions with polynomials. Using this process we can approximate trigonometric, exponential, logarithmic, and other nonpolynomial functions as closely as we like (for certain values of \(x\)) with polynomials. This is extraordinarily useful in that it allows us to calculate values of these functions to whatever precision we like using only the operations of addition, subtraction, multiplication, and division, which can be easily programmed in a computer.
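To make the last point concrete, here is a minimal Python sketch (the function name is illustrative) that approximates \(\frac{1}{1-x}\) by the truncated geometric series \(1+x+x^2+\cdots+x^n\), using nothing more than repeated multiplication and addition.

```python
def geometric_partial_sum(x, n):
    """Evaluate 1 + x + x^2 + ... + x^n using only addition and multiplication."""
    total, power = 0.0, 1.0
    for _ in range(n + 1):
        total += power   # add the current power of x
        power *= x       # form the next power of x
    return total

# For |x| < 1 the truncated series rapidly approaches 1/(1 - x).
x = 0.3
for n in (1, 3, 5, 10):
    print(n, geometric_partial_sum(x, n), 1 / (1 - x))
```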
We next extend the approach in Preview Activity 8.5.1 to arbitrary functions at arbitrary points. Let \(f\) be a function that has as many derivatives as we need at a point \(x=a\). Recall that \(P_1(x)\) is the tangent line to \(f\) at \((a, f(a))\) and is given by the formula
\[
P_1(x)=f(a)+f^{\prime}(a)(x-a) . \nonumber
\]
\(P_1(x)\) is the linear approximation to \(f\) near \(a\) that has the same slope and function value as \(f\) at the point \(x=a\).
We next want to find a quadratic approximation
\[
P_2(x)=P_1(x)+c_2(x-a)^2 \nonumber
\]
so that \(P_2(x)\) more closely models \(f(x)\) near \(x=a\). Consider the following calculations of the values and derivatives of \(P_2(x)\):
\[
\begin{aligned}
P_2(x) &= P_1(x)+c_2(x-a)^2, & \qquad P_2(a) &= P_1(a)=f(a), \\
P_2^{\prime}(x) &= P_1^{\prime}(x)+2 c_2(x-a), & \qquad P_2^{\prime}(a) &= P_1^{\prime}(a)=f^{\prime}(a), \\
P_2^{\prime \prime}(x) &= 2 c_2, & \qquad P_2^{\prime \prime}(a) &= 2 c_2 .
\end{aligned} \nonumber
\]
To make \(P_2(x)\) fit \(f(x)\) better than \(P_1(x)\), we want \(P_2(x)\) and \(f(x)\) to have the same concavity at \(x=a\), in addition to having the same slope and function value. That is, we want to have
\[
P_2^{\prime \prime}(a)=f^{\prime \prime}(a) . \nonumber
\]
This implies that
\[
2 c_2=f^{\prime \prime}(a) \nonumber
\]
and thus
\[
c_2=\frac{f^{\prime \prime}(a)}{2} .
\]
Therefore, the quadratic approximation \(P_2(x)\) to \(f\) centered at \(x=a\) is
\[
P_2(x)=f(a)+f^{\prime}(a)(x-a)+\frac{f^{\prime \prime}(a)}{2}(x-a)^2 .
\]
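For instance, if \(f(x)=\ln (x)\) and \(a=1\), then \(f(1)=0\), \(f^{\prime}(1)=1\), and \(f^{\prime \prime}(1)=-1\), so the quadratic approximation is
\[
P_2(x)=(x-1)-\frac{1}{2}(x-1)^2 . \nonumber
\]
In particular, \(P_2(1.1)=0.1-\frac{1}{2}(0.01)=0.095\), which compares well with \(\ln (1.1) \approx 0.0953\).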
This approach extends naturally to polynomials of higher degree. We define polynomials
\[
\begin{aligned}
P_3(x) &= P_2(x)+c_3(x-a)^3, \\
P_4(x) &= P_3(x)+c_4(x-a)^4, \\
P_5(x) &= P_4(x)+c_5(x-a)^5,
\end{aligned} \nonumber
\]
and in general
\[
P_n(x)=P_{n-1}(x)+c_n(x-a)^n . \nonumber
\]
The defining property of these polynomials is that for each \(n, P_n(x)\) and all its first \(n\) derivatives must agree with those of \(f\) at \(x=a\). In other words we require that
\[
P_n^{(k)}(a)=f^{(k)}(a) \nonumber
\]
for all \(k\) from 0 to \(n\).
To see the conditions under which this happens, suppose
\[
P_n(x)=c_0+c_1(x-a)+c_2(x-a)^2+\cdots+c_n(x-a)^n . \nonumber
\]
Then
\[
\begin{aligned}
P_n(a) &= c_0 \\
P_n^{\prime}(a) &= c_1 \\
P_n^{\prime \prime}(a) &= 2 c_2 \\
P_n^{\prime \prime \prime}(a) &= (2)(3) c_3 \\
P_n^{(4)}(a) &= (2)(3)(4) c_4 \\
P_n^{(5)}(a) &= (2)(3)(4)(5) c_5
\end{aligned} \nonumber
\]
and, in general,
\[
P_n^{(k)}(a)=(2)(3)(4) \cdots(k-1)(k) c_k=k ! c_k . \nonumber
\]
So having \(P_n^{(k)}(a)=f^{(k)}(a)\) means that \(k ! c_k=f^{(k)}(a)\) and therefore
\[
c_k=\frac{f^{(k)}(a)}{k !}
\]
for each value of \(k\). Using this expression for \(c_k\), we have found the formula for the polynomial approximation of \(f\) that we seek. Such a polynomial is called a Taylor polynomial.
The \(n\)th order Taylor polynomial of \(f\) centered at \(x=a\) is given by
\[
\begin{aligned}
P_n(x) &= f(a)+f^{\prime}(a)(x-a)+\frac{f^{\prime \prime}(a)}{2 !}(x-a)^2+\cdots+\frac{f^{(n)}(a)}{n !}(x-a)^n \\
&= \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k !}(x-a)^k .
\end{aligned} \nonumber
\]
This degree \(n\) polynomial approximates \(f(x)\) near \(x=a\) and has the property that \(P_n^{(k)}(a)=f^{(k)}(a)\) for \(k=0,1, \ldots, n\).
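Because the definition gives an explicit formula for each coefficient, a computer algebra system can construct these polynomials directly. The following minimal Python sketch (using the sympy library; the function name taylor_poly is illustrative) implements the formula above.

```python
import sympy as sp

x = sp.symbols('x')

def taylor_poly(f, a, n):
    """Build P_n(x) = sum_{k=0}^{n} f^(k)(a)/k! * (x - a)^k from the definition."""
    return sum(sp.diff(f, x, k).subs(x, a) / sp.factorial(k) * (x - a) ** k
               for k in range(n + 1))

# Third and fifth order Taylor polynomials of e^x centered at 0.
print(sp.expand(taylor_poly(sp.exp(x), 0, 3)))   # x**3/6 + x**2/2 + x + 1
print(sp.expand(taylor_poly(sp.exp(x), 0, 5)))   # x**5/120 + x**4/24 + x**3/6 + x**2/2 + x + 1
```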
Example 8.5.1. Determine the third order Taylor polynomial for \(f(x)=e^x\), as well as the general \(n\)th order Taylor polynomial for \(f\) centered at \(x=0\).
We know that \(f^{\prime}(x)=e^x\) and so \(f^{\prime \prime}(x)=e^x\) and \(f^{\prime \prime \prime}(x)=e^x\). Thus,
\[
f(0)=f^{\prime}(0)=f^{\prime \prime}(0)=f^{\prime \prime \prime}(0)=1 . \nonumber
\]
So the third order Taylor polynomial of \(f(x)=e^x\) centered at \(x=0\) is
\[
\begin{aligned}
P_3(x) &= f(0)+f^{\prime}(0)(x-0)+\frac{f^{\prime \prime}(0)}{2 !}(x-0)^2+\frac{f^{\prime \prime \prime}(0)}{3 !}(x-0)^3 \\
&= 1+x+\frac{x^2}{2 !}+\frac{x^3}{3 !} .
\end{aligned} \nonumber
\]
In general, for the exponential function \(f\) we have \(f^{(k)}(x)=e^x\) for every positive integer \(k\). Thus, the \(k\)th term in the \(n\)th order Taylor polynomial for \(f(x)\) centered at \(x=0\) is
\[
\frac{f^{(k)}(0)}{k !}(x-0)^k=\frac{1}{k !} x^k .
\]
Therefore, the \(n\)th order Taylor polynomial for \(f(x)=e^x\) centered at \(x=0\) is
\[
P_n(x)=1+x+\frac{x^2}{2 !}+\cdots+\frac{1}{n !} x^n=\sum_{k=0}^{n} \frac{x^k}{k !} . \nonumber
\]
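Since each term of this sum is the previous term multiplied by \(x / k\), the polynomial can be evaluated using ordinary arithmetic alone. A minimal Python sketch of this (the function name is illustrative) compares the result with the built-in exponential:

```python
import math

def exp_taylor(x, n):
    """Evaluate P_n(x) = sum_{k=0}^{n} x^k / k! using only arithmetic."""
    total, term = 0.0, 1.0
    for k in range(n + 1):
        total += term          # current term is x^k / k!
        term *= x / (k + 1)    # next term: multiply by x/(k+1)
    return total

# The approximations approach e = exp(1) as n increases.
for n in (2, 5, 10):
    print(n, exp_taylor(1.0, n), math.exp(1.0))
```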
We have just seen that the \(n\)th order Taylor polynomial centered at \(a=0\) for the exponential function \(e^x\) is
\[
\sum_^n \frac . \nonumber
\]
In this activity, we determine small order Taylor polynomials for several other familiar functions, and look for general patterns.
i. Calculate the first four derivatives of \(f(x)\) at \(x=0\). Then find the fourth order Taylor polynomial \(P_4(x)\) for \(f(x)=\frac{1}{1-x}\) centered at 0.
ii. Based on your results from part (i), determine a general formula for \(f^{(k)}(0)\).
i. Calculate the first four derivatives of \(f(x)\) at \(x=0\). Then find the fourth order Taylor polynomial \(P_4(x)\) for \(\cos (x)\) centered at 0.
ii. Based on your results from part (i), find a general formula for \(f^{(k)}(0)\). (Think about how \(k\) being even or odd affects the value of the \(k\)th derivative.)
i. Calculate the first four derivatives of \(f(x)\) at \(x=0\). Then find the fourth order Taylor polynomial \(P_4(x)\) for \(\sin (x)\) centered at 0.
ii. Based on your results from part (i), find a general formula for \(f^{(k)}(0)\). (Think about how \(k\) being even or odd affects the value of the \(k\)th derivative.)
It is possible that an \(n\)th order Taylor polynomial is not a polynomial of degree \(n\); that is, the order of the approximation can be different from the degree of the polynomial. For example, in Activity 8.5.3 we found that the second order Taylor polynomial \(P_2(x)\) centered at 0 for \(\sin (x)\) is \(P_2(x)=x\). In this case, the second order Taylor polynomial is a degree 1 polynomial.
In Activity 8.5.3 we saw that the fourth order Taylor polynomial \(P_4(x)\) for \(\sin (x)\) centered at 0 is
\[
P_4(x)=x-\frac{x^3}{3 !} . \nonumber
\]
The pattern we found for the derivatives \(f^{(k)}(0)\) describes the higher-order Taylor polynomials, e.g.,
\[
\begin{aligned}
P_5(x) &= x-\frac{x^3}{3 !}+\frac{x^5}{5 !}, \\
P_7(x) &= x-\frac{x^3}{3 !}+\frac{x^5}{5 !}-\frac{x^7}{7 !}, \\
P_9(x) &= x-\frac{x^3}{3 !}+\frac{x^5}{5 !}-\frac{x^7}{7 !}+\frac{x^9}{9 !},
\end{aligned} \nonumber
\]
and so on. It is instructive to consider the graphical behavior of these functions; Figure \(\PageIndex{1}\) shows the graphs of a few of the Taylor polynomials centered at 0 for the sine function.
Figure \(\PageIndex{1}\): The order \(1, 5, 7\), and 9 Taylor polynomials centered at \(x=0\) for \(f(x)=\sin (x)\).
Notice that \(P_1(x)\) is close to the sine function only for values of \(x\) that are close to 0, but as we increase the degree of the Taylor polynomial the Taylor polynomials provide a better fit to the graph of the sine function over larger intervals. This illustrates the general behavior of Taylor polynomials: for any sufficiently well-behaved function, the sequence \(\left\{P_n(x)\right\}\) of Taylor polynomials converges to the function \(f\) on larger and larger intervals (though those intervals may not necessarily increase without bound). If the Taylor polynomials ultimately converge to \(f\) on its entire domain, we write
\[
f(x)=\sum_{k=0}^{\infty} \frac{f^{(k)}(0)}{k !} x^k . \nonumber
\]
Definition 8.5.3. Let \(f\) be a function all of whose derivatives exist at \(x=a\). The Taylor series for \(f\) centered at \(x=a\) is the series \(T_f(x)\) defined by
\[
T_f(x)=\sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k !}(x-a)^k .
\]
In the special case where \(a=0\) in Definition 8.5.3, the Taylor series is also called the Maclaurin series for \(f\). From Example 8.5.1 we know the \(n\)th order Taylor polynomial centered at 0 for the exponential function \(e^x\); thus, the Maclaurin series for \(e^x\) is
\[
\sum_{k=0}^{\infty} \frac{x^k}{k !} . \nonumber
\]
In Activity 8.5.3 we determined small order Taylor polynomials for a few familiar functions, and also found general patterns in the derivatives evaluated at 0. Use that information to write the Taylor series centered at 0 for the following functions.
a. \(f(x)=\frac{1}{1-x}\)
b. \(f(x)=\cos (x)\) (You will need to carefully consider how to indicate that many of the coefficients are 0 . Think about a general way to represent an even integer.)
c. \(f(x)=\sin (x)\) (You will need to carefully consider how to indicate that many of the coefficients are 0 . Think about a general way to represent an odd integer.)
d. \(f(x)=\frac\)
The Maclaurin series for \(e^x\), \(\sin (x)\), \(\cos (x)\), and \(\frac{1}{1-x}\) will be used frequently, so we should be certain to know and recognize them well.
In the work above (in Figure \(\PageIndex{1}\) and the preceding activities) we observed that the Taylor polynomials centered at 0 for \(e^x\), \(\cos (x)\), and \(\sin (x)\) converged to these functions for all values of \(x\) in their domain, but that the Taylor polynomials centered at 0 for \(\frac{1}{1-x}\) converge to \(\frac{1}{1-x}\) on the interval \((-1,1)\) and diverge for all other values of \(x\). So the Taylor series for a function \(f(x)\) does not need to converge for all values of \(x\) in the domain of \(f\).
Our observations suggest two natural questions: can we determine the values of \(x\) for which a given Taylor series converges? And does the Taylor series for a function \(f\) actually converge to \(f(x)\) ?
Example 8.5.4. Graphical evidence suggests that the Taylor series centered at 0 for \(e^x\) converges for all values of \(x\). To verify this, use the Ratio Test to determine all values of \(x\) for which the Taylor series
\[
\sum_{k=0}^{\infty} \frac{x^k}{k !}
\]
converges absolutely.
Recall that the Ratio Test applies only to series of nonnegative terms. In this example, the variable \(x\) may have negative values. But we are interested in absolute convergence, so we apply the Ratio Test to the series
\[
\sum_{k=0}^{\infty}\left|\frac{x^k}{k !}\right|=\sum_{k=0}^{\infty} \frac{|x|^k}{k !} . \nonumber
\]
Now, observe that
\[
\begin{aligned}
\lim _{k \rightarrow \infty} \frac{a_{k+1}}{a_k} &=\lim _{k \rightarrow \infty} \frac{\frac{|x|^{k+1}}{(k+1) !}}{\frac{|x|^k}{k !}} \\
&=\lim _{k \rightarrow \infty} \frac{|x|^{k+1} k !}{|x|^k(k+1) !} \\
&=\lim _{k \rightarrow \infty} \frac{|x|}{k+1} \\
&=0
\end{aligned} \nonumber
\]
for any value of \(x\). So the Taylor series \(\sum_{k=0}^{\infty} \frac{x^k}{k !}\) converges absolutely for every value of \(x\), and thus converges for every value of \(x\).
One question still remains: while the Taylor series for \(e^x\) converges for all \(x\), what we have done does not tell us that this Taylor series actually converges to \(e^x\) for each \(x\). We'll return to this question when we consider the error in a Taylor approximation near the end of this section.
We can apply the main idea from Example 8.5.4 in general. To determine the values of \(x\) for which a Taylor series
\[
\sum_{k=0}^{\infty} c_k(x-a)^k \nonumber
\]
centered at \(x=a\) will converge, we apply the Ratio Test with \(a_k=\left|c_k(x-a)^k\right|\). The series converges if \(\lim _{k \rightarrow \infty} \frac{a_{k+1}}{a_k}<1\). Observe that
\[
\frac{a_{k+1}}{a_k}=|x-a| \frac{\left|c_{k+1}\right|}{\left|c_k\right|},
\]
so when we apply the Ratio Test, we get
\[
\lim _{k \rightarrow \infty} \frac{a_{k+1}}{a_k}=\lim _{k \rightarrow \infty}|x-a| \frac{\left|c_{k+1}\right|}{\left|c_k\right|} .
\]
Now suppose that
\[
\lim _{k \rightarrow \infty} \frac{\left|c_{k+1}\right|}{\left|c_k\right|}=L,
\]
so that
\[
\lim _{k \rightarrow \infty} \frac{a_{k+1}}{a_k}=|x-a| \cdot L . \nonumber
\]
There are three possibilities for \(L\): \(L\) can be 0, it can be a finite positive value, or it can be infinite. Based on this value of \(L\), we can determine for which values of \(x\) the original Taylor series converges. If \(L=0\), then \(|x-a| \cdot L=0<1\) for every \(x\), so the Taylor series converges for every value of \(x\). If \(L\) is infinite, then the Taylor series converges only at \(x=a\). If \(L\) is a finite positive value, then the Taylor series converges when \(|x-a| \cdot L<1\), that is, when \(|x-a|<\frac{1}{L}\), and diverges when \(|x-a|>\frac{1}{L}\).
Because the Ratio Test is inconclusive when \(|x-a| \cdot L=1\), the endpoints \(a \pm \frac{1}{L}\) have to be checked separately.
It is important to notice that the set of \(x\) values at which a Taylor series converges is always an interval centered at \(x=a\). For this reason, the set on which a Taylor series converges is called the interval of convergence. Half the length of the interval of convergence is called the radius of convergence. If the interval of convergence of a Taylor series is infinite, then we say that the radius of convergence is infinite.
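For instance, the geometric series \(\sum_{k=0}^{\infty} x^k\) from the beginning of this section is the Taylor series for \(\frac{1}{1-x}\) centered at 0. Here \(c_k=1\) for every \(k\), so
\[
L=\lim _{k \rightarrow \infty} \frac{\left|c_{k+1}\right|}{\left|c_k\right|}=1, \nonumber
\]
and the series converges when \(|x|<1\) and diverges when \(|x|>1\). At the endpoints \(x=1\) and \(x=-1\) the series becomes \(1+1+1+\cdots\) and \(1-1+1-\cdots\), respectively, both of which diverge. Thus the interval of convergence is \((-1,1)\) and the radius of convergence is 1, consistent with our earlier observations about \(\frac{1}{1-x}\).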
The Ratio Test allows us to determine the set of \(x\) values for which a Taylor series converges absolutely. However, just because a Taylor series for a function \(f\) converges, we cannot be certain that the Taylor series actually converges to \(f(x)\). To show why and where a Taylor series does in fact converge to the function \(f\), we next consider the error that is present in Taylor polynomials.
We now know how to find Taylor polynomials for functions such as \(\sin (x)\), and how to determine the interval of convergence of the corresponding Taylor series. We next develop an error bound that will tell us how well an \(n\)th order Taylor polynomial \(P_n(x)\) approximates its generating function \(f(x)\). This error bound will also allow us to determine whether a Taylor series on its interval of convergence actually equals the function \(f\) from which the Taylor series is derived. Finally, we will be able to use the error bound to determine the order of the Taylor polynomial \(P_n(x)\) that will ensure that \(P_n(x)\) approximates \(f(x)\) to the desired degree of accuracy.
For this argument, we assume throughout that we center our approximations at 0 (but a similar argument holds for approximations centered at \(a\) ). We define the exact error, \(E_n(x)\), that results from approximating \(f(x)\) with \(P_n(x)\) by
\[
E_n(x)=f(x)-P_n(x) . \nonumber
\]
We are particularly interested in \(\left|E_n(x)\right|\), the distance between \(P_n\) and \(f\).
Because
\[
P_n^{(k)}(0)=f^{(k)}(0) \nonumber
\]
for \(0 \leq k \leq n\), we know that
\[
E_n^{(k)}(0)=0 \nonumber
\]
for \(0 \leq k \leq n\). Furthermore, since \(P_n(x)\) is a polynomial of degree less than or equal to \(n\), we know that
\[
P_n^{(n+1)}(x)=0 . \nonumber
\]
Thus, since \(E_n^{(n+1)}(x)=f^{(n+1)}(x)-P_n^{(n+1)}(x)\), it follows that
\[
E_n^{(n+1)}(x)=f^{(n+1)}(x) \nonumber
\]
for all \(x\).
Suppose that we want to approximate \(f(x)\) at a number \(c\) close to 0 using \(P_n(c)\). If we assume \(\left|f^{(n+1)}(t)\right|\) is bounded by some number \(M\) on \([0, c]\), so that
\[
\left|f^{(n+1)}(t)\right| \leq M \nonumber
\]
for all \(0 \leq t \leq c\), then we can say that
\[
\left|E_n^{(n+1)}(t)\right|=\left|f^{(n+1)}(t)\right| \leq M \nonumber
\]
for all \(t\) between 0 and \(c\). Equivalently,
\[
-M \leq E_n^{(n+1)}(t) \leq M
\]
on \([0, c]\). Next, we integrate the three terms in this inequality from \(t=0\) to \(t=x\), and thus find that
\[
\int_0^x-M d t \leq \int_0^x E_n^{(n+1)}(t) d t \leq \int_0^x M d t \nonumber
\]
for every value of \(x\) in \([0, c]\). Since \(E_n^{(n)}(0)=0\), the First FTC tells us that
\[
-M x \leq E_n^{(n)}(x) \leq M x \nonumber
\]
for every \(x\) in \([0, c]\).
Integrating this last inequality, we obtain
\[
\int_0^x-M t d t \leq \int_0^x E_n^{(n)}(t) d t \leq \int_0^x M t d t \nonumber
\]
and thus
\[
-M \frac{x^2}{2} \leq E_n^{(n-1)}(x) \leq M \frac{x^2}{2} \nonumber
\]
for all \(x\) in \([0, c]\).
Continuing to integrate in the same way \(n-1\) more times, we arrive at
\[
-M \frac{x^{n+1}}{(n+1) !} \leq E_n(x) \leq M \frac{x^{n+1}}{(n+1) !} \nonumber
\]
for all \(x\) in \([0, c]\). This enables us to conclude that
\[
\left|E_n(x)\right| \leq M \frac{|x|^{n+1}}{(n+1) !}
\]
for all \(x\) in \([0, c]\), and we have found a bound on the approximation's error, \(E_n\).
Our work above was based on the approximation centered at \(a=0\); the argument may be generalized to hold for any value of \(a\), which results in the following theorem.
The Lagrange Error Bound for \(P_n(x)\).
Let \(f\) be a continuous function with \(n+1\) continuous derivatives. Suppose that \(M\) is a positive real number such that \(\left|f^{(n+1)}(x)\right| \leq M\) on the interval \([a, c]\). If \(P_n(x)\) is the \(n\)th order Taylor polynomial for \(f(x)\) centered at \(x=a\), then
\[
\left|P_n(c)-f(c)\right| \leq M \frac{|c-a|^{n+1}}{(n+1) !} .
\]
We can use this error bound to tell us important information about Taylor polynomials and Taylor series, as we see in the following examples and activities.
Determine how well the 10th order Taylor polynomial \(P_{10}(x)\) for \(\sin (x)\), centered at 0, approximates \(\sin (2)\).
To answer this question we use \(f(x)=\sin (x), c=2, a=0\), and \(n=10\) in the Lagrange error bound formula. We also need to find an appropriate value for \(M\). Note that the derivatives of \(f(x)=\sin (x)\) are all equal to \(\pm \sin (x)\) or \(\pm \cos (x)\). Thus,
\[
\left|f^{(n+1)}(x)\right| \leq 1 \nonumber
\]
for any \(n\) and \(x\). Therefore, we can choose \(M\) to be 1 . Then
\[
\left|P_{10}(2)-f(2)\right| \leq(1) \frac{|2-0|^{11}}{(11) !}=\frac{2^{11}}{11 !} \approx 0.00005130671797 . \nonumber
\]
So \(P_{10}(2)\) approximates \(\sin (2)\) to within at most \(0.00005130671797\). A computer algebra system tells us that
\[
P_{10}(2) \approx 0.9093474427 \text { and } \sin (2) \approx 0.9092974268 \nonumber
\]
with an actual difference of about \(0.0000500159\).
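As a quick check, a few lines of Python (a minimal sketch; the helper name is illustrative) reproduce both the polynomial value and the error bound above.

```python
import math

def sin_taylor(x, n):
    """nth order Taylor polynomial of sin(x) centered at 0:
    the sum of (-1)^j x^(2j+1) / (2j+1)! over all odd powers 2j+1 <= n."""
    return sum((-1) ** j * x ** (2 * j + 1) / math.factorial(2 * j + 1)
               for j in range(n // 2 + 1) if 2 * j + 1 <= n)

p10 = sin_taylor(2, 10)
bound = 2 ** 11 / math.factorial(11)   # Lagrange error bound with M = 1
print(p10, math.sin(2), abs(p10 - math.sin(2)), bound)
# The actual error (about 5.00e-5) is indeed no larger than the bound (about 5.13e-5).
```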
Let \(P_n(x)\) be the \(n\)th order Taylor polynomial for \(\sin (x)\) centered at \(x=0\). Determine how large we need to choose \(n\) so that \(P_n(2)\) approximates \(\sin (2)\) to 20 decimal places.
Show that the Taylor series for \(\sin (x)\) actually converges to \(\sin (x)\) for all \(x\).
Recall from the previous example that since \(f(x)=\sin (x)\), we know
\[
\left|f^{(n+1)}(x)\right| \leq 1 \nonumber
\]
for any \(n\) and \(x\). This allows us to choose \(M=1\) in the Lagrange error bound formula. Thus,
\[
\left|P_n(x)-\sin (x)\right| \leq \frac{|x|^{n+1}}{(n+1) !}
\]
for every \(x\).
We showed in earlier work that the Taylor series \(\sum_{k=0}^{\infty} \frac{x^k}{k !}\) converges for every value of \(x\). Because the terms of any convergent series must approach zero, it follows that
\[
\lim _{n \rightarrow \infty} \frac{|x|^{n+1}}{(n+1) !}=0 \nonumber
\]
for every value of \(x\). Thus, taking the limit as \(n \rightarrow \infty\) in the inequality above, it follows that
\[
\lim _{n \rightarrow \infty}\left|P_n(x)-\sin (x)\right|=0 . \nonumber
\]
As a result, we can now write
\[
\sin (x)=\sum_{n=0}^{\infty} \frac{(-1)^n x^{2 n+1}}{(2 n+1) !} \nonumber
\]
for every real number \(x\).
This page titled 8.5: Taylor Polynomials and Taylor Series is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Matthew Boelkins, David Austin & Steven Schlicker (ScholarWorks @Grand Valley State University) via source content that was edited to the style and standards of the LibreTexts platform.