
Tag: Induction

Prove by induction a formula for the sum from 1 to ∞ of n^k x^n

Four of the previous exercises (here, here, here, and here; Section 10.9, Exercises #11 – #14) suggest the formula

    \[ \sum_{n=1}^{\infty} n^k x^n = \frac{P_k (x) }{(1-x)^{k+1}}. \]

In this formula, P_k(x) denotes a polynomial of degree k with the term of lowest degree being x and the term of highest degree being x^k (we call a polynomial of degree k with highest term x^k a monic polynomial of degree k). Use mathematical induction to prove this formula, without justifying the manipulations with the series.


Proof. For the case k = 1, we have

    \[ \sum_{n=1}^{\infty} nx^n = \frac{x}{(1-x)^2}, \]

from the first exercise linked above (Section 10.7, Exercise #11). Letting P_1(x) = x, a polynomial of degree k = 1 whose term of lowest degree is x and whose term of highest degree is x^1 = x, we see the statement holds for the case k = 1. Assume then that the statement is true for some k = j \in \mathbb{Z}_{>0}, so

    \[ \sum_{n=1}^{\infty} n^j x^n = \frac{P_j(x)}{(1-x)^{j+1}}. \]

Differentiating both sides (assuming without justification that we can do this) we obtain

    \begin{align*}  && \sum_{n=1}^{\infty} n^{j+1} x^{n-1} &= \frac{D(P_j(x)) \cdot (1-x)^{j+1} + P_j(x)(j+1)(1-x)^j}{(1-x)^{2j+2}} \\[9pt]  \implies && \sum_{n=1}^{\infty} n^{j+1} x^{n-1} &= \frac{D(P_j(x))(1-x) + (j+1)P_j(x)}{(1-x)^{j+2}} \\[9pt]  \implies && \sum_{n=1}^{\infty} n^{j+1} x^{n-1} &= \frac{D(x^j + Q_{j-1}(x) + x)(1-x) + (j+1)(x^j + Q_{j-1}(x) + x)}{(1-x)^{j+2}}  \intertext{where $Q_{j-1}(x)$ is a polynomial of degree $j-1$,}  \implies && \sum_{n=1}^{\infty} n^{j+1}x^{n-1} &= \frac{(jx^{j-1} + Q_{j-2}(x) + 1) - (jx^j + Q_{j-1}(x) + x) + (j+1)x^j + (j+1)Q_{j-1}(x) + (j+1)x}{(1-x)^{j+2}} \\[9pt]  \implies && \sum_{n=1}^{\infty} n^{j+1} x^{n-1} &= \frac{x^j + Q^*_{j-1} (x) + 1}{(1-x)^{j+2}}  \intertext{where $Q^*_{j-1}(x)$ is a polynomial of degree $j-1$. Multiplying both sides by $x$,}  \implies && \sum_{n=1}^{\infty} n^{j+1}x^n &= \frac{P_{j+1}(x)}{(1-x)^{j+2}}, \end{align*}

where P_{j+1}(x) = x \left( x^j + Q^*_{j-1}(x) + 1 \right) is a polynomial of degree j+1 with the term of lowest degree being x and that of the highest degree being x^{j+1}. Hence, if the statement is true for j then it is true for j+1; thus, the statement is true for all positive integers k. \qquad \blacksquare
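
As a quick sanity check of the induction step (this computation is our own, not part of the exercise), differentiating the k = 1 formula and multiplying by x reproduces the k = 2 case, with P_2(x) = x + x^2 monic of degree 2 and lowest-degree term x:

    \[ \sum_{n=1}^{\infty} n^2 x^n = x \cdot D \left( \frac{x}{(1-x)^2} \right) = x \cdot \frac{1+x}{(1-x)^3} = \frac{x + x^2}{(1-x)^3}. \]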

Prove a formula for the (2n)th derivative of x sin(ax)

Define f(x) = x \sin (ax) and prove the following formula for the (2n)th derivative of f,

    \[ f^{(2n)}(x) = (-1)^n (a^{2n} x \sin (ax) - 2n a^{2n-1} \cos (ax)). \]


Proof. The proof is by induction. For the case n = 1 we need to take two derivatives of f (since f^{(2n)} = f^{(2)} = f'' when n = 1).

    \begin{align*}  && f(x) &= x \sin (ax) \\ \implies && f'(x) &= \sin (ax) + ax \cos (ax) \\ \implies && f''(x) &= a \cos (ax) + a \cos (ax) - a^2 x \sin (ax) \\  &&&= (-1)^1 \left( a^2 x \sin (ax) - 2 a \cos (ax) \right). \end{align*}

So, the formula holds for the case n = 1. Assume then that the formula is true for some positive integer k. Then the inductive hypothesis is that,

    \[ f^{(2k)}(x) = (-1)^k \left( a^{2k} x \sin (ax) - 2k a^{2k-1} \cos (ax) \right).\]

Now, we want to take two derivatives (to get a formula for f^{(2(k+1))}(x)).

    \begin{align*}  f^{(2k+1)}(x) = \left(f^{(2k)}(x)\right)' &= \left( (-1)^k \left( a^{2k} x \sin (ax) - 2k a^{2k-1} \cos (ax)\right)\right)' \\[9pt]  &= (-1)^k \big( a^{2k} \sin (ax) + a^{2k+1} x \cos (ax) + 2ka^{2k} \sin (ax) \big) \\[9pt]  &= (-1)^k \big( (2k+1)a^{2k} \sin (ax) + a^{2k+1} x \cos (ax) \big). \end{align*}

Now, we take another derivative,

    \begin{align*}  f^{(2(k+1))}(x) &= \left( f^{(2k+1)}(x)\right)' \\[9pt]  &= \left( (-1)^k \big( (2k+1)a^{2k} \sin (ax) + a^{2k+1} x \cos (ax) \big) \right)' \\[9pt]  &= (-1)^k \big( (2k+1)a^{2k+1} \cos (ax) + a^{2k+1} \cos (ax) - a^{2k+2} x \sin (ax) \big) \\[9pt]  &= (-1)^k \big( -a^{2(k+1)} x \sin (ax) + (2k+2)a^{2k+1} \cos (ax) \big) \\[9pt]  &= (-1)^{k+1} \big(a^{2(k+1)} x \sin (ax) - 2(k+1)a^{2(k+1) - 1} \cos (ax) \big). \end{align*}

Thus, the formula holds for k+1. Hence, it holds for all positive integers n. \qquad \blacksquare
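
For instance (a check of our own, not part of the exercise), the case n = 2 of the formula reads

    \[ f^{(4)}(x) = (-1)^2 \left( a^4 x \sin (ax) - 4 a^3 \cos (ax) \right) = a^4 x \sin (ax) - 4a^3 \cos (ax), \]

which agrees with differentiating f''(x) = 2a \cos (ax) - a^2 x \sin (ax) twice more directly.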

Prove some properties of a differentiable function satisfying a given functional equation

Let g be a function differentiable everywhere such that

    \[ g'(0) = 2 \quad \text{and} \quad g(x+y) = e^y g(x) + e^x g(y) \qquad \text{for all } x,y \in \mathbb{R}. \]

  1. Prove that g(2x) = 2e^x g(x) and conjecture and prove a similar formula for g(3x).
  2. Conjecture and prove a formula for g(nx) in terms of g(x) for all positive integers n.
  3. Prove that g(0) = 0 and compute

        \[ \lim_{h \to 0} \frac{g(h)}{h}. \]

  4. Prove that there exists a constant C such that

        \[ g'(x) = g(x) + Ce^x \]

    for all x. Find the value of the constant C.


  1. Proof. We can compute this using the functional equation:

        \[ g(2x) = g(x+x) = e^x g(x) + e^x g(x) = 2e^x g(x). \qquad \blacksquare \]

    Next, we conjecture

        \[ g(3x) = 3e^{2x} g(x). \]

    Proof. Again, we compute using the functional equation, and the above formula for g(2x),

        \begin{align*}  g(3x) &= g(2x + x) \\  &= e^x g(2x) + e^{2x}g(x) \\  &= e^x (2e^x g(x)) + e^{2x} g(x) \\  &= 2e^{2x}g(x) + e^{2x}g(x) \\  &= 3e^{2x}g(x). \qquad \blacksquare \end{align*}

  2. We conjecture

        \[ g(nx) = ne^{(n-1)x} g(x). \]

    Proof. The proof is by induction. We have already established the cases n = 2 and n = 3 (and the n = 1 case is the trivial g(x) = g(x)). Assume then that the formula holds for some integer k \geq 1. Then we have

        \begin{align*}  g((k+1)x) &= g(kx + x) \\  &= e^x g(kx) + e^{kx} g(x) &(\text{functional equation})\\  &= e^x (ke^{(k-1)x} g(x)) + e^{kx}g(x) &(\text{induction hypothesis}) \\  &= ke^{kx} g(x) + e^{kx} g(x) \\  &= (k+1)e^{kx} g(x). \end{align*}

    Thus, if the formula holds for k, it also holds for k + 1. Hence, by induction it holds for all positive integers n. \qquad \blacksquare

  3. Proof. Using the functional equation

        \[ g(0) = g(0+0) = e^0 g(0) + e^0 g(0) = 2g(0) \quad \implies \quad g(0) = 0. \]

    Then, since the derivative g'(x) exists for all x (by hypothesis) we know it must exist in particular at x = 0. Using the limit definition of derivative, and the facts that g(0) = 0 and g'(0) = 2 we have

        \[ g'(0) = \lim_{h \to 0} \frac{g(0+h) - g(0)}{h} = \lim_{h \to 0} \frac{g(h)}{h} = 2. \qquad \blacksquare\]

  4. Proof. Since the derivative g'(x) must exist for all x (by hypothesis) we know that the limit

        \[ \lim_{h \to 0} \frac{g(x+h) - g(x)}{h} \]

    must exist for all x. Using the functional equation for g we have

        \begin{align*}  \lim_{h \to 0} \frac{g(x+h) - g(x)}{h} &= \lim_{h \to 0} \frac{e^h g(x) + e^x g(h) - g(x)}{h} \\[9pt]  &= \lim_{h \to 0} \frac{g(x)(e^h - 1) + e^x g(h)}{h}. \end{align*}

    But then, from part (c) we know \lim_{h \to 0} \frac{g(h)}{h} = 2 and from this exercise (Section 6.17, Exercise #38) we know

        \[ \lim_{x \to 0} \frac{e^x - 1}{x} = 1. \]

    Therefore, we have

        \begin{align*}  \lim_{h \to 0} \frac{g(x)(e^h - 1) + e^x g(h)}{h} &= \lim_{h \to 0} \left( \frac{e^x g(h)}{h} \right) + \lim_{h \to 0} \left( \frac{g(x)(e^h - 1)}{h} \right) \\  &= e^x \lim_{h \to 0} \left(\frac{g(h)}{h} \right) + g(x) \lim_{h \to 0} \left( \frac{e^h-1}{h} \right) \\  &= 2e^x + g(x). \end{align*}

    Hence, g'(x) = g(x) + 2e^x for all x; that is, the constant is C = 2. \qquad \blacksquare
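
    Though the exercise does not ask for an example, it may be reassuring to check one (our own choice, not part of the original): g(x) = 2x e^x satisfies g'(0) = 2 and the functional equation, and it realizes all of the formulas above; in particular,

        \[ g(nx) = 2nx \, e^{nx} = n e^{(n-1)x} \left( 2x e^x \right) = n e^{(n-1)x} g(x), \qquad g'(x) = 2e^x + 2x e^x = g(x) + 2e^x. \]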

Prove some inequalities using the function e^x - 1 - x

  1. Define a function:

        \[ f(x) = e^x - 1 - x \qquad \text{for all } x \in \mathbb{R}. \]

    Prove that f'(x) \geq 0 for all x \geq 0 and f'(x) \leq 0 for all x \leq 0. Prove the following inequalities are valid for all x > 0,

        \[ e^x > 1 + x, \qquad e^{-x} > 1 -x. \]

  2. Using integration and part (a) prove

        \[ e^x > 1 + x + \frac{x^2}{2!}, \qquad e^{-x} < 1 - x + \frac{x^2}{2!}. \]

  3. Using integration and part (a) prove

        \[ e^x > 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!}, \qquad e^{-x} > 1 - x + \frac{x^2}{2!} - \frac{x^3}{3!}. \]

  4. State and prove a generalization of the inequalities above.

  1. First, we take the derivative of f(x),

        \[ f'(x) = e^x - 1 \quad \implies \quad f'(0) = 0. \]

    Since e^x - 1 is strictly increasing everywhere (since f''(x) = e^x > 0 for all x) we have f'(x) \geq 0 for x \geq 0 and f'(x) \leq 0 for x \leq 0. Thus, f(x) has a minimum at x = 0 and f(0) = 0 so f(x) > 0 for all x \neq 0. Therefore, f(x) > 0 when x > 0; hence,

        \[ e^x -1 - x > 0 \quad \implies \quad e^x > 1 + x. \]

    Furthermore, when x < 0 we have

        \[ e^x - 1 - x > 0 \quad \implies \quad e^x > 1 + x. \]

    Therefore, if x > 0 then -x < 0 and so we have

        \[ e^{-x} > 1 - x. \]

  2. Using the inequalities in part (a) we integrate over the interval 0 to x,

        \begin{align*}  e^x > 1 + x && \implies && \int_0^x e^t \, dt &> \int_0^x (1+t) \, dt \\  && \implies && e^x - 1 &> \left(t + \frac{t^2}{2}\right)\Bigr \rvert_0^x \\  && \implies && e^x - 1 &> x + \frac{x^2}{2!} \\  && \implies && e^x &> 1 + x + \frac{x^2}{2!}. \end{align*}

    For the other inequality we proceed similarly,

        \begin{align*}  e^{-x} > 1 - x && \implies && \int_0^x e^{-t} \, dt &> \int_0^x (1-t) \, dt \\  && \implies && -e^{-x} + 1 &> x - \frac{x^2}{2}\\  && \implies && e^{-x} &< 1 - x + \frac{x^2}{2!}. \qquad \blacksquare  \end{align*}

  3. We use the same strategy as before, starting with the inequalities we established in part (b),

        \begin{align*}  e^x > 1 + x + \frac{x^2}{2!} && \implies && \int_0^x e^t \, dt > \int_0^x \left (1 + t + \frac{t^2}{2!} \right) \, dt \\  && \implies && e^x - 1 > \left( t + \frac{t^2}{2!} + \frac{t^3}{3!}\right)\Bigr \rvert_0^x \\  && \implies && e^x > 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!}. \end{align*}

    Similarly, for the other inequality,

        \begin{align*}  e^{-x} < 1 - x + \frac{x^2}{2!} && \implies && \int_0^x e^{-t} \, dt &< \int_0^x \left( 1 - t + \frac{t^2}{2!} \right) \, dt \\  && \implies && -e^{-x} + 1& < x - \frac{x^2}{2!} + \frac{x^3}{3!} \\  && \implies && e^{-x} &> 1 - x +\frac{x^2}{2!} - \frac{x^3}{3!}. \qquad \blacksquare \end{align*}

  4. Claim: The following general inequalities hold for all x > 0:

        \begin{align*}  e^x &> \sum_{k=0}^n \frac{x^k}{k!} & \text{for all } n \in \mathbb{Z}_{>0} \\ e^{-x} &> \sum_{k=0}^n (-1)^k \frac{x^k}{k!} & \text{for all odd } n \in \mathbb{Z}_{>0} \\ e^{-x} &< \sum_{k=0}^n (-1)^k \frac{x^k}{k!} & \text{for all even } n \in \mathbb{Z}_{>0}. \end{align*}

    Proof. The proof is by induction. The base cases are already established: part (a) gives the first two inequalities for n = 1, and part (b) gives the third for n = 2. Now, to prove the first inequality assume

        \[ e^x > \sum_{k=0}^n \frac{x^k}{k!} \qquad \text{for some } n \in \mathbb{Z}_{>0}. \]

    Then we have

        \begin{align*}  && \int_0^x e^t \, dt &> \int_0^x \left( \sum_{k=0}^n \frac{t^k}{k!} \right) \, dt \\[9pt]  \implies && e^x - 1 &> \sum_{k=0}^n \int_0^x \frac{t^k}{k!} \, dt &(\text{linearity of integral}) \\[9pt]  \implies && e^x &> 1 + \sum_{k=0}^n \left( \frac{t^{k+1}}{(k+1)!} \right)\Bigr \rvert_0^x \\[9pt]  \implies && e^x &> 1 + \sum_{k=0}^n \frac{x^{k+1}}{(k+1)!} \\[9pt]  \implies && e^x &> 1 + \sum_{k=1}^{n+1} \frac{x^k}{k!} &(\text{Reindexing})\\  \implies && e^x &> \sum_{k=0}^{n+1} \frac{x^k}{k!}. \end{align*}

    This establishes the first inequality. For the second inequality we have proved the case n = 1 already. Assume

        \[ e^{-x} > \sum_{k=0}^n (-1)^k \frac{x^k}{k!} \qquad \text{for some odd } n \in \mathbb{Z}_{>0}. \]

    Since n is odd we may write n = 2m+1 for some nonnegative integer m. We want to show that the inequality must hold for the next odd integer, n+2. We have,

        \begin{align*}  &&\int_0^x e^{-t} \, dt&> \int_0^x \left( \sum_{k=0}^n (-1)^k \frac{t^k}{k!} \right) \, dt \\  \implies && -e^{-x} + 1 &> \sum_{k=0}^n (-1)^k \int_0^x \frac{t^k}{k!} \, dt \\  \implies && -e^{-x} &> -1 + \sum_{k=0}^n (-1)^k \frac{x^{k+1}}{(k+1)!} \\  \implies && e^{-x} &< 1 - \sum_{k=0}^n (-1)^k \frac{x^{k+1}}{(k+1)!} \\  \implies && e^{-x} &< 1 - \sum_{k=1}^{n+1} (-1)^{k-1} \frac{x^k}{k!} \\  \implies && e^{-x} &< 1 + \sum_{k=1}^{n+1} (-1)^k \frac{x^k}{k!} \\  \implies && e^{-x} &< \sum_{k=0}^{n+1} (-1)^k \frac{x^k}{k!} \\ \end{align*}

    We want to show that the inequality holds for n+2 so we integrate both sides again,

        \begin{align*}  && e^{-x} &< \sum_{k=0}^{n+1} (-1)^k \frac{x^k}{k!} \\  \implies && \int_0^x e^{-t} \, dt &< \int_0^x \left( \sum_{k=0}^{n+1} (-1)^k \frac{t^k}{k!} \right) \, dt \\  \implies && -e^{-x} + 1 &< \sum_{k=0}^{n+1} (-1)^k \int_0^x \frac{t^k}{k!} \, dt \\  \implies && -e^{-x} &< -1 + \sum_{k=0}^{n+1} (-1)^k \frac{x^{k+1}}{(k+1)!} \\  \implies && -e^{-x} &< -1 + \sum_{k=1}^{n+2} (-1)^{k-1} \frac{x^k}{k!} \\  \implies && e^{-x} &> 1 - \sum_{k=1}^{n+2} (-1)^{k-1} \frac{x^k}{k!} \\  \implies && e^{-x} &> 1 + \sum_{k=1}^{n+2} (-1)^k \frac{x^k}{k!} \\  \implies && e^{-x} &> \sum_{k=0}^{n+2} (-1)^k \frac{x^k}{k!}. \end{align*}

    Hence, if the inequality holds for odd n, then it also holds for the next odd number, n+2; therefore, it holds for all positive odd integers.
    The exact same induction argument works for all of the even integers, starting with the n = 2 case we proved in part (b). \qquad \blacksquare
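
    As a numerical illustration (ours, not part of the exercise), at x = 1 the first inequality with n = 3 reads

        \[ e > 1 + 1 + \frac{1}{2!} + \frac{1}{3!} = \frac{8}{3} \approx 2.6667, \]

    which indeed holds since e \approx 2.71828.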

Prove Leibniz’s formula

If h(x) = f(x) g(x) is a product of functions prove that the derivatives of h(x) are given by the formula

    \[ h^{(n)} (x) = \sum_{k=0}^n \binom{n}{k} f^{(k)} (x) g^{(n-k)}(x), \]

where

    \[ \binom{n}{k} = \frac{n!}{k! (n-k)!} \]

is the binomial coefficient. (See the first four exercises of Section I.4.10 on page 44 of Apostol, and in particular see Exercise #4, in which we prove the binomial theorem.)


Proof. The proof is by induction. Letting h(x) = f(x) g(x) and n = 1 we use the product rule for derivatives

    \[ h'(x) = f(x)g'(x) + f'(x)g(x) = \sum_{k=0}^1 \binom{1}{k} f^{(k)}(x) g^{(1-k)}(x). \]

So, the formula is true for the case n = 1. Assume then that it is true for n = m \in \mathbb{Z}_{> 0}. Then we consider the (m+1)st derivative h^{(m+1)}(x):

    \begin{align*}  h^{(m+1)}(x) &= \left( h^{(m)}(x) \right)' \\  &= \left( \sum_{k=0}^m \binom{m}{k} f^{(k)}(x) g^{(m-k)}(x) \right)' &(\text{Ind. Hyp.})\\  &= \sum_{k=0}^m \left( \binom{m}{k} f^{(k)}(x) g^{(m-k)}(x) \right)'. \end{align*}

Here, we use linearity of the derivative to differentiate term by term over this finite sum. This property was established in Theorem 4.1 (i) and the comments following the theorem on page 164 of Apostol. Continuing where we left off we apply the product rule,

    \begin{align*}  &= \sum_{k=0}^m \left( \binom{m}{k} \left(f^{(k+1)}(x) g^{(m-k)}(x) + f^{(k)}(x) g^{(m-k+1)}(x) \right) \right) \\[9pt]  &= \sum_{k=0}^m \binom{m}{k} f^{(k+1)}(x)g^{(m-k)}(x) + \sum_{k=0}^m \binom{m}{k} f^{(k)}(x) g^{(m-k+1)}(x) \\[9pt]  &= \sum_{k=1}^{m+1} \binom{m}{k-1} f^{(k)}(x) g^{(m-k+1)}(x) + \sum_{k=0}^m \binom{m}{k} f^{(k)}(x) g^{(m-k+1)}(x), \end{align*}

where we’ve reindexed the first sum to run from k = 1 to m + 1 instead of from k = 0 to m. Then, we pull out the k = m+1 term from the first sum and the k = 0 term from the second,

    \begin{align*}  &= f^{(m+1)}(x) g(x) + \sum_{k=1}^m \binom{m}{k-1} f^{(k)}(x) g^{(m-k+1)}(x) + \sum_{k=1}^m \binom{m}{k} f^{(k)}(x) g^{(m-k+1)}(x) + f(x) g^{(m+1)}(x) \\  &= f^{(m+1)}(x) g(x) + f(x)g^{(m+1)}(x) + \sum_{k=1}^m \left( \binom{m}{k-1} + \binom{m}{k} \right) f^{(k)}(x) g^{(m-k+1)} (x). \end{align*}

Now, we recall the law of Pascal’s triangle (which we proved in a previous exercise) which establishes that

    \[ \binom{m}{k} + \binom{m}{k-1} = \binom{m+1}{k}. \]

Therefore, we have

    \begin{align*}  &= f^{(m+1)}(x) g(x) + f(x) g^{(m+1)}(x) + \sum_{k=1}^m  \binom{m+1}{k} f^{(k)}(x)g^{(m-k+1)}(x) \\  &= \sum_{k=0}^{m+1} \binom{m+1}{k} f^{(k)}(x) g^{(m-k+1)}(x). \end{align*}

Hence, the formula holds for m+1 if it holds for m. Therefore, we have established that it holds for all positive integers n. \qquad \blacksquare
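
For example (an illustration of our own), the case n = 2 of Leibniz's formula is

    \[ h''(x) = \binom{2}{0} f(x) g''(x) + \binom{2}{1} f'(x) g'(x) + \binom{2}{2} f''(x) g(x) = f(x)g''(x) + 2f'(x)g'(x) + f''(x)g(x), \]

which is exactly what applying the product rule twice produces.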

Prove formulas for the nth derivative of sin and cos

Let

    \[ f(x) = \cos x, \qquad g(x) = \sin x. \]

Prove that

    \[ f^{(n)} (x) = \cos \left( x + \frac{1}{2} n \pi \right), \qquad g^{(n)}(x) = \sin \left( x + \frac{1}{2}n \pi \right).\]


Proof. The proof is by induction. For n = 1 we have

    \[ f'(x) = -\sin x = \cos \left ( x+ \frac{1}{2} \pi \right), \qquad g'(x) = \cos x = \sin \left( x + \frac{1}{2} \pi \right). \]

These equalities follow from the co-relations of sine and cosine (Theorem 2.3 part (d) on page 96 of Apostol). Thus, the formulas are true for the case n = 1. Assume then that they are true for some n = k \in \mathbb{Z}_{>0}. For f(x) we then have

    \begin{align*}  f^{(k+1)}(x) &= \left( f^{(k)} (x) \right)' \\  &= \left( \cos \left( x + \frac{1}{2}k \pi \right) \right)' \\  &= - \sin \left( x + \frac{1}{2}k \pi \right) \\  &= \cos \left( \left( x + \frac{1}{2} k \pi \right) + \frac{1}{2} \pi \right) &(\text{Co-relations}) \\  &= \cos \left( x + \frac{1}{2} (k+1) \pi \right). \end{align*}

Similarly, for g(x) we have

    \begin{align*}  g^{(k+1)}(x) &= \left( g^{(k)}(x) \right)' \\  &= \left( \sin \left( x + \frac{1}{2}k \pi \right) \right)' \\  &= \cos \left( x + \frac{1}{2} k \pi \right) \\  &= \sin \left( \left( x + \frac{1}{2} k \pi \right) + \frac{1}{2} \pi \right) &(\text{Co-relations})\\  &= \sin \left( x + \frac{1}{2} (k+1) \pi \right). \end{align*}

Therefore, the theorem follows by induction for all positive integers. \qquad \blacksquare
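
For instance (a check of our own), the case n = 2 gives

    \[ f''(x) = \cos ( x + \pi ) = -\cos x, \qquad g''(x) = \sin ( x + \pi ) = -\sin x, \]

in agreement with differentiating \cos x and \sin x twice directly.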

Compute the derivatives of g(x) = x^n f(x)

Assume f is a polynomial with f(0) = 1. Define

    \[ g(x) = x^n f(x). \]

Compute the values of g(0), g'(0), \ldots, g^{(n)}(0).


Assume n is a non-negative integer (otherwise g is undefined at x = 0). Then, we make the following claim:

Claim: The polynomial g(x) has derivatives at 0 given by the following

    \[ g^{(k)}(0) = \begin{cases} 0 & \text{if } 0 \leq k < n \\ n! & \text{if } k =n. \end{cases} \]

Proof. Since f is a polynomial we may write,

    \[ f(x) = a_m x^m + a_{m-1} x^{m-1} + \cdots + a_1 x + a_0. \]

Furthermore, since f(0) = 1 is given we know a_0 = 1. Now, multiplying by x^n we have

    \begin{align*}  g(x) &= x^n f(x) = x^n (a_m x^m + \cdots + a_1 x + 1) \\  &= a_m x^{m+n} + \cdots + a_1 x^{n+1} + x^n. \end{align*}

Next, we will use induction to prove that the kth derivative of g(x) for 0 \leq k \leq n is given by

    \[ g^{(k)}(x) = c_m x^{m+n-k} + c_{m-1} x^{m+n-k-1} + \cdots + c_1 x^{n+1-k} + \frac{n!}{(n-k)!} x^{n-k},\]

for constants c_1, \ldots, c_m. Since the derivative g'(x) is given by

    \begin{align*}   g'(x) &= (m+n) a_m x^{m+n-1} + (m+n-1) a_{m-1} x^{m+n-2} + \cdots + (n+1) a_1 x^n + n x^{n-1} \\  &= b_m x^{m+n-1} + b_{m-1} x^{m+n-2} + \cdots + b_1 x^n + \frac{n!}{(n-1)!}x^{n-1}, \end{align*}

for constants b_1, \ldots, b_m, we see that the formula holds for k = 1. Assume then that it holds for some k with 1 \leq k < n,

    \[ g^{(k)}(x) = b_m x^{m+n-k} + b_{m-1} x^{m+n-k-1} + \cdots + b_1 x^{n+1-k} + \frac{n!}{(n-k)!}x^{n-k}. \]

Then, taking the derivative of this we have,

    \begin{align*}   g^{(k+1)}(x) &= b_m (m+n-k) x^{m+n-k-1} + b_{m-1} (m+n-k-1) x^{m+n-k-2} + \cdots + b_1 (n+1-k) x^{n-k} + \frac{n!}{(n-k)!} (n-k) x^{n-k-1} \\  &= c_m x^{m+n-(k+1)} + c_{m-1} x^{m+n-(k+1)-1} + \cdots + c_1 x^{n+1-(k+1)} + \frac{n!}{(n-(k+1))!} x^{n-(k+1)}. \end{align*}

Hence, the formula holds for all 0 \leq k \leq n. But then, if 0 \leq k < n we have

    \begin{align*}   g^{(k)}(x) &= b_m x^{m+n-k} + b_{m-1} x^{m+n-k-1} + \cdots + b_1 x^{n+1-k} + \frac{n!}{(n-k)!}x^{n-k} \\ \implies g^{(k)}(0) &= 0. \end{align*}

If k = n then x^{n-k} = x^0 = 1 for all x, while every other term still contains a positive power of x and vanishes at x = 0; hence,

    \[ g^{(n)}(0) = \frac{n!}{0!} = n!. \qquad \blacksquare\]
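
A small instance of the claim (an example of our own): take n = 2 and f(x) = 1 + x, so that g(x) = x^2 + x^3. Then

    \[ g(0) = 0, \qquad g'(0) = \left( 2x + 3x^2 \right) \Bigr \rvert_0 = 0, \qquad g''(0) = \left( 2 + 6x \right) \Bigr \rvert_0 = 2 = 2!. \]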

Find and prove a formula for the derivative of a product of n differentiable functions

Let

    \[ g = f_1 f_2 \cdots f_n \]

be the product of n functions f_1, \ldots, f_n with derivatives f'_1, \ldots, f'_n. Find a formula for the derivative of g, and prove that it is correct by mathematical induction.

Furthermore, show that

    \[ \frac{g'(x)}{g(x)} = \frac{f'_1 (x)}{f_1(x)} + \cdots + \frac{f'_n(x)}{f_n (x)}, \]

for those x at which f_i(x) \neq 0 for each i = 1, \ldots, n.


Claim: If g = f_1 \cdots f_n, then

    \[g' = f'_1 (f_2 f_3 \cdots f_n) + f'_2 (f_1 f_3 \cdots f_n) + \cdots + f'_n(f_1 f_2 \cdots f_{n-1}). \]

Proof. For the case n = 2, from the usual product rule we have

    \[ g = f_1 f_2 \quad \implies \quad g' = f'_1 (f_2) + f'_2(f_1). \]

Thus, our claimed formula holds in this case. Assume then that it is true for some integer k \geq 2. Then, if g = f_1 \cdots f_{k+1} we have,

    \begin{align*}  &&g &= (f_1 \cdots f_k) f_{k+1}  \\  \implies && g' &= (f_1 \cdots f_k)' f_{k+1} + f'_{k+1} (f_1 \cdots f_k) & (\text{product rule}) \\  \implies && g' &= \big(f'_1 (f_2 \cdots f_k) + f'_2 (f_1 f_3 \cdots f_k) + \cdots + f'_k (f_1 \cdots f_{k-1})\big) f_{k+1} + f'_{k+1} (f_1 \cdots f_k) & (\text{Ind. Hyp.})\\  \implies && g' &= f'_1 (f_2 \cdots f_{k+1}) + f'_2 (f_1 f_3 \cdots f_{k+1}) + \cdots + f'_{k+1}(f_1 \cdots f_k). \end{align*}

Therefore, if the formula holds for k then it also holds for k+1; thus, it holds for all positive integers. \qquad \blacksquare

Next, we prove the formula for the quotient, \frac{g'(x)}{g(x)}.

Proof. Let g be defined as above, and let x be a point at which f_i(x) \neq 0 for any i = 1, \ldots, n. Then,

    \begin{align*}  \frac{g'(x)}{g(x)} &= \frac{f'_1(f_2 \cdots f_n) + \cdots + f'_n(f_1 \cdots f_{n-1})}{f_1 \cdots f_n} \\  &= \frac{f'_1 \cdots f_n}{f_1 \cdots f_n} + \cdots + \frac{f_1 \cdots f'_n}{f_1 \cdots f_n} \\  &= \frac{f'_1}{f_1} + \cdots + \frac{f'_n}{f_n}. \qquad \blacksquare \end{align*}
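
For example (our own illustration), with n = 3 the two formulas read

    \[ g' = f'_1 f_2 f_3 + f_1 f'_2 f_3 + f_1 f_2 f'_3, \qquad \frac{g'}{g} = \frac{f'_1}{f_1} + \frac{f'_2}{f_2} + \frac{f'_3}{f_3}. \]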

Find the zeros of sine and cosine

  1. Prove that \sin x = 0 if and only if x = n \pi for n \in \mathbb{Z}.
  2. Find all x \in \mathbb{R} such that \cos x = 0.

  1. Proof. First, since \sin(-x) = -\sin x, if \sin x = 0 then \sin (-x) = 0 as well; thus (noting also that \sin 0 = 0) it is sufficient to show the statement holds for all n \in \mathbb{Z}_{>0}.

    From Theorem 2.3 (Apostol, Section 2.5) we know \sin 0 = \sin \pi = 0. Hence, the statement holds for the cases n = 0 and n = 1. Now, we use induction twice: first for the even integers, and then for the odd integers.

    Assume the statement is true for some even m \in \mathbb{Z}_{\geq 0}, i.e., \sin (m \pi) = 0. Then, using the periodicity of the sine function

        \[ 0 = \sin (m \pi) = \sin (m \pi + 2 \pi) = \sin ((m+2)\pi). \]

    Hence, the statement is true for all even n \in \mathbb{Z}_{\geq 0}.

    Next, assume it is true for some odd m \in \mathbb{Z}_{\geq 0}. Then,

        \[ 0 = \sin m \pi = \sin (m\pi + 2 \pi) = \sin ((m+2)\pi). \]

    Hence, it is true for all odd n \in \mathbb{Z}_{\geq 0}. Therefore, it is true for all nonnegative integers n, and hence, for all n \in \mathbb{Z}.

    Conversely, we must show that these are the only real values at which sine is 0. By the periodicity of sine, it is sufficient to prove this on an interval of length 2\pi. We choose the interval (-\pi, \pi) and show that for x \in (-\pi, \pi) we have \sin x = 0 \iff x = 0. From above, we know \sin 0 = 0. Then, from the fundamental properties of the sine and cosine, we have the inequalities,

        \[ 0 < \cos x < \frac{\sin x }{x} < \frac{1}{\cos x} \qquad \text{for } x \in \left(0, \frac{\pi}{2} \right). \]

    Hence, both \sin x and \cos x are positive on \left(0, \frac{\pi}{2} \right). But, from the co-relation identities, we know

        \[ \sin \left( \frac{\pi}{2} + x \right) = \cos x. \]

    Thus, \sin x \neq 0 for x \in \left( \frac{\pi}{2}, \pi \right) (since \cos x \neq 0 for x \in \left( 0, \frac{\pi}{2} \right)). We also know \sin \frac{\pi}{2} = 1. Hence, for x \in (0, \pi) we have \sin x \neq 0. Since \sin is an odd function,

        \[ \sin (-x) = - \sin x \quad \implies \quad \sin (-x) \neq 0 \quad \text{for } x \in (0, \pi). \]

    Therefore, \sin x \neq 0 for x \in (-\pi, 0). Combining these results, \sin x = 0 \implies x = 0 for x \in (-\pi, \pi). \qquad \blacksquare

  2. Claim: \cos x = 0 if and only if x = \frac{\pi}{2} + n \pi for some n \in \mathbb{Z}.

    Proof. Since \cos x = 0 \implies \sin \left( x + \frac{\pi}{2} \right) =0 we apply part (a) to conclude

        \[ x + \frac{\pi}{2} = n \pi \quad \implies \quad x = n \pi - \frac{\pi}{2} = \frac{\pi}{2} + (n-1)\pi. \]

    Since n - 1 ranges over all of \mathbb{Z} as n does, the zeros of cosine are exactly x = \frac{\pi}{2} + n \pi for n \in \mathbb{Z} (matching the form of the answer in the book). \qquad \blacksquare
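
    As a quick check (a remark of our own), taking n = 0, 1, -1 in this formula gives the familiar zeros

        \[ \cos \frac{\pi}{2} = \cos \frac{3\pi}{2} = \cos \left( -\frac{\pi}{2} \right) = 0. \]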

Fundamental properties of polynomials

Consider a polynomial of degree n,

    \[ f(x) = \sum_{k=0}^n c_k x^k = c_0 + c_1 x + \cdots + c_n x^n. \]

  1. Prove that if f(0) = 0 and n \geq 1 then there is an n-1 degree polynomial g(x) such that f(x) = x g(x).
  2. Prove that p(x) = f(x+a) is a polynomial of degree n for any a \in \mathbb{R}.
  3. Prove that if n \geq 1 and f(a) = 0, then f(x) = (x-a)h(x), for a polynomial h(x) of degree n-1.
  4. If f(x) has n+1 distinct real zeros (i.e., there exist n+1 distinct numbers x \in \mathbb{R} such that f(x) = 0), then c_k = 0 for all k = 0, \ldots, n, and f(x) = 0 for all x \in \mathbb{R}.
  5. If

        \[ g(x) = \sum_{k=0}^m b_k x^k \]

    is a polynomial of degree m \geq n, and if

        \[ g(x) = f(x) \]

    for m+1 distinct x \in \mathbb{R}, then

        \[ m=n, \qquad b_k = c_k \quad \text{for each } k, \qquad g(x) = f(x) \quad \text{for all } x \in \mathbb{R}. \]


  1. Proof. First,

        \[ f(0) = 0 \quad \implies \quad c_0 + c_1 \cdot 0 + \cdots + c_{n-1} \cdot 0^{n-1} + c_n 0^n = 0 \quad \implies \quad c_0 = 0. \]

    Thus,

        \begin{align*}    f(x) &= c_1x + c_2 x^2 + \cdots + c_n x^n \\   &= x (c_1 + c_2 x + \cdots + c_n x^{n-1}) \\   &= x g(x) \end{align*}

    where g(x) = \sum_{k=0}^{n-1} c_{k+1} x^k is then an n-1 degree polynomial. \qquad \blacksquare

  2. Proof. We compute, using the binomial theorem,

        \begin{align*}  p(x) = f(x+a) &= \sum_{k=0}^n (x+a)^k c_k \\ &= c_0 + (x+a)c_1 + (x+a)^2 c_2 + \cdots + (x+a)^n c_n \\ &= c_0 + c_1 \left(\sum_{j=0}^1 \binom{1}{j} a^j x^{1-j}\right) + c_2 \left( \sum_{j=0}^2 \binom{2}{j} a^j x^{2-j} \right) + \cdots + c_n \left( \sum_{j=0}^n \binom{n}{j} a^j x^{n-j} \right) \end{align*}

    Now, we want to group together the coefficients on the like-powers of x. So we pull out the terms corresponding to x^0, x^1, etc. This gives us,

        \begin{align*} &= (c_0 + ac_1 + a^2 c_2 + \cdots + a^n c_n) + x(c_1 + 2ac_2 + \cdots + na^{n-1}c_n) \\ &\qquad \qquad + x^2\left(c_2 + 3ac_3 + \cdots + \binom{n}{n-2}a^{n-2} c_n \right) + \cdots + x^n c_n \\ &= \sum_{k=0}^n \left( x^k \left( \sum_{j=k}^n \binom{j}{j-k} c_j a^{j-k} \right) \right). \end{align*}

    In the final line, we rewrote the coefficients as sums to view them more concisely. Either way, since a and all of the c_i's are constants, \sum_{j=k}^n \binom{j}{j-k} c_j a^{j-k} is some constant for each k, say d_k, and we have,

        \[ p(x) = \sum_{k=0}^n d_k x^k. \]

    Hence, p(x) is a polynomial of degree n (its leading coefficient is d_n = c_n \neq 0). \qquad \blacksquare

  3. Proof. From part (b) we know that if f(x) is a polynomial of degree n, then p(x) = f(x+a) is also a polynomial of the same degree. Further,

        \[ f(a) = 0 \quad \implies \quad p(0) = f(a) = 0. \]

    So, by part (a), we have

        \[ p(x) = x \cdot g(x) \]

    where g(x) is a polynomial of degree n-1. Thus,

        \[ p(x-a) = f(x) = (x-a)\cdot g(x-a). \]

    But, if g(x) is a polynomial of degree n-1, then by part (b) again, so is h(x) = g(x+(-a))= g(x-a). Hence,

        \[ f(x) = (x-a) \cdot h(x) \]

    for h a degree n-1 polynomial, as requested. (A worked instance of this factorization appears at the end of this solution.) \qquad \blacksquare

  4. Proof. The proof is by induction. First, we consider the case in which n = 1. Then f(x) = c_0 + c_1 x. Since the assumption is that there are n+1 distinct real x such that f(x) = 0, we know there exist a_1, a_2 \in \mathbb{R} such that

        \[ f(a_1) = f(a_2) = 0, \qquad a_1 \neq a_2. \]

    Thus,

        \begin{align*}  c_0 + c_1 a_1 = c_0 + c_1 a_2 = 0 && \implies  && c_1 a_1 - c_1 a_2 &= 0 \\ && \implies && c_1 (a_1 - a_2) &= 0 \\ && \implies && c_1 &= 0 & (\text{since } a_1 \neq a_2)\\ && \implies && c_0 &= 0 & (\text{since } c_0 + c_1 a_1 = 0). \end{align*}

    Hence, the statement is true for n=1. Assume that it is true for some n = k \in \mathbb{Z}_{>0}. Then, let f(x) be a polynomial of degree k+1 with k+2 distinct real zeros, a_1, \ldots, a_{k+2}. Then, since f(a_{k+2}) = 0, using part (c), we have,

        \[ f(x) = (x - a_{k+2})h(x) \]

    where h(x) is a polynomial of degree k. Further, we know there are k+1 distinct values a_1, \ldots, a_{k+1} such that h(a_i) = 0. (Since f(a_i) = 0 for 1 \leq i \leq k+2 and (x - a_{k+2}) \neq 0 for x = a_i with 1 \leq i \leq k+1 since all of the a_i are distinct.) Thus, by the induction hypothesis, every coefficient of h is 0 and h(x) = 0 for all x \in \mathbb{R}. Thus,

        \begin{align*}   f(x) = (x - a_{k+2}) h(x) &= (x-a_{k+2}) \cdot \sum_{j=0}^k c_j x^j \\  &= \sum_{j=0}^k (x-a_{k+2})c_j x^j \\  &= c_k x^{k+1} + (c_{k-1} - a_{k+2} c_k)x^k + \cdots + (c_0 - a_{k+2} c_1) x - a_{k+2} c_0. \end{align*}

    But, since all of the coefficients of h(x) are zero, we have c_i = 0 for i = 0, 1, \ldots, k. Hence,

        \[ f(x) = 0 \cdot x^{k+1} + 0 \cdot x^k + \cdots + 0 \cdot x + 0 = 0 \qquad \text{for all } x \in \mathbb{R}. \]

    Thus, all of the coefficients of f are zero and f(x) = 0 for all x \in \mathbb{R}. Hence, the statement is true for the case k+1, and therefore, for all n \in \mathbb{Z}_{>0}. \qquad \blacksquare

  5. Proof. Let

        \[ p(x) = g(x) - f(x) = \sum_{k=0}^m b_k x^k - \sum_{k=0}^n c_k x^k = \sum_{k=0}^m (b_k - c_k)x^k \]

    where c_k = 0 for n < k \leq m (we have m \geq n by assumption).
    Then, there are m+1 distinct real x for which p(x) = 0. (Since there are m+1 distinct real values for which g(x) = f(x), and so at each of these values p(x) = g(x) - f(x) = 0.) Thus, by part (d), b_k - c_k = 0 for k = 0, \ldots, m and p(x) = 0 for all x \in \mathbb{R}. But then,

        \[ b_k - c_k = 0 \quad \implies \quad b_k = c_k \qquad \text{for }k =0, \ldots, m \]

    and

        \[ p(x) = 0 \quad \implies \quad g(x) - f(x) = 0 \quad \implies \quad f(x) = g(x), \]

    for all x \in \mathbb{R}. Further, since b_k - c_k = 0 for k = 0, \ldots, m, and by assumption c_k = 0 for k = n+1, \ldots, m, we have b_k = 0 for k = n+1, \ldots, m. But then,

        \[ g(x) = \sum_{k=0}^n b_k x^k + \sum_{k=n+1}^m 0 \cdot x^k = \sum_{k=0}^n b_k x^k \]

    shows g(x) has degree at most n; since g has degree m \geq n by assumption, this forces m = n. \qquad \blacksquare
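
    To illustrate part (c) with a concrete polynomial (an example of our own, not from the text): f(x) = x^2 + x - 2 satisfies f(1) = 0, and indeed

        \[ f(x) = x^2 + x - 2 = (x-1)(x+2) = (x-1) h(x), \]

    where h(x) = x + 2 is a polynomial of degree n - 1 = 1.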