Tag: exponential

Determine the interval of convergence of ∑ x^n / n! and show that it satisfies y′ = x + y

Consider the function f(x) defined by the power series

    \[ f(x) = \sum_{n=2}^{\infty} \frac{x^n}{n!}. \]

Determine the interval of convergence for f(x) and show that it satisfies the differential equation

    \[ y' = x + y. \]


(We might notice that this is almost the power series expansion for the exponential function e^x and deduce the interval of convergence and the differential equation from properties of the exponential that we already know. We can do it from scratch just as easily though.)

First, we apply the ratio test

    \begin{align*}  \lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| &= \lim_{n \to \infty} \left| \left( \frac{x^{n+1}}{(n+1)!} \right) \left( \frac{n!}{x^n} \right) \right| \\[9pt]  &= \lim_{n \to \infty} \frac{|x|}{n+1} \\[9pt]  &= 0. \end{align*}

Hence, the series converges for all x; the interval of convergence is (-\infty, +\infty). Next, we differentiate the series term by term (which is justified at every point of the interval of convergence)

    \begin{align*}  y &= \sum_{n=2}^{\infty} \frac{x^n}{n!} \\[9pt]  y' &= \sum_{n=2}^{\infty} \frac{nx^{n-1}}{n!} \\[9pt]     &= \sum_{n=2}^{\infty} \frac{x^{n-1}}{(n-1)!} \\[9pt]     &= \sum_{n=1}^{\infty} \frac{x^n}{n!}. \end{align*}

Then we have

    \begin{align*}  x + y &= x + \sum_{n=2}^{\infty} \frac{x^n}{n!} \\[9pt]  &= \sum_{n=1}^{\infty} \frac{x^n}{n!} \\[9pt]  &= y'. \end{align*}

Now, to compute the sum we can solve the given differential equation

    \[ y' = x + y \quad \implies \quad y' - y = x. \]

This is a first order linear differential equation of the form y' + P(x)y = Q(x) with P(x) = -1 and Q(x) = x. We also know that f(0) = 0; therefore, the equation has a unique solution satisfying this initial condition, given by

    \[  y = be^{-A(x)} + e^{-A(x)} \int_a^x Q(t) e^{A(t)} \, dt, \]

where a = b = 0 and

    \[ A(x) = \int_0^x P(t) \, dt = - \int_0^x dt = -x. \]

Therefore, we have

    \begin{align*}  f(x) &= e^{x} \int_0^x t e^{-t} \, dt \\[9pt]  &= e^x \left( -te^{-t} \Bigr \rvert_0^x + \int_0^x e^{-t} \, dt \right) \\[9pt]  &= e^x \left( -xe^{-x} - e^{-x} + 1 \right) \\[9pt]  &= e^x - x - 1. \end{align*}
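As a quick numerical sanity check (not part of the original solution; the helper names below are ours), we can compare partial sums of the series against the closed form e^x - x - 1:

```python
import math

def f_partial(x, terms=30):
    """Partial sum of sum_{n>=2} x^n / n!."""
    return sum(x**n / math.factorial(n) for n in range(2, terms + 2))

def f_closed(x):
    """Closed form derived above: e^x - x - 1."""
    return math.exp(x) - x - 1

# The partial sums and the closed form agree to machine precision.
for x in (-2.0, 0.0, 1.0, 3.5):
    assert math.isclose(f_partial(x), f_closed(x), rel_tol=1e-12, abs_tol=1e-12)
```

The agreement holds for every sampled x, consistent with the infinite interval of convergence found above.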

Prove some properties of complex numbers raised to complex powers

If z is a nonzero complex number and w \in \mathbb{C}, let

    \[ z^w = e^{w \operatorname{Log} z}, \]

where

    \[ \operatorname{Log} z = \log |z| + i \arg(z). \]

  1. Compute 1^i, i^i, and (-1)^i.
  2. Prove that z^a z^b = z^{a+b} if a,b, and z are in \mathbb{C} with z \neq 0.
  3. What conditions on z_1 and z_2 must we have for the equation

        \[ (z_1 z_2)^w = z_1^w z_2^w \]

    to hold? Show that the equation fails when z_1 = z_2 = -1 and w = i.


  1. The computations are as follows,

        \begin{align*}  1^i &= e^{i \operatorname{Log} 1} = e^0 = 1. \\[9pt]  i^i &= e^{i \operatorname{Log} i} = e^{i \left( \frac{\pi i}{2} \right)} = e^{-\frac{\pi}{2}}. \\[9pt]  (-1)^i &= e^{i \operatorname{Log} (-1)} = e^{i (\pi i)} = e^{- \pi}.  \end{align*}

  2. Proof. Using the definitions we compute,

        \begin{align*}  z^a z^b &= \left( e^{a \operatorname{Log} z} \right) \left( e^{b \operatorname{Log} z} \right) \\[9pt]  &= e^{a \operatorname{Log} z + b \operatorname{Log} z} \\[9pt]  &= e^{(a+b)\operatorname{Log} z} \\[9pt]  &= z^{a+b}. \qquad \blacksquare \end{align*}

  3. First, if z_1 = z_2 = -1 and w = i then we have

        \[ (z_1 z_2)^w = 1^i = 1, \]

    but

        \[ z_1^w z_2^w = (-1)^i \cdot (-1)^i = e^{-\pi} \cdot e^{-\pi} = e^{-2\pi}. \]

    Thus,

        \[ (z_1 z_2)^w \neq z_1^w z_2^w. \]

    In order for (z_1 z_2)^w = z_1^w z_2^w to hold we must have - \pi < \arg z_1 + \arg z_2 \leq \pi since

        \[ (z_1 z_2)^w = e^{w \operatorname{Log} (z_1 z_2)} = z_1^w z_2^w \cdot e^{2n \pi i w}, \]

    and n = 0 (so that the extra factor e^{2 n \pi i w} equals 1 for every w) only when - \pi < \arg z_1 + \arg z_2 \leq \pi.
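These computations are easy to confirm numerically with Python's cmath, whose log is exactly the principal branch Log z = log|z| + i arg(z) used above (the cpow helper is ours):

```python
import cmath
import math

def cpow(z, w):
    """Principal-branch complex power: z^w = exp(w * Log z)."""
    return cmath.exp(w * cmath.log(z))  # cmath.log is the principal logarithm

# Part 1: the three computed values.
assert abs(cpow(1, 1j) - 1) < 1e-12
assert abs(cpow(1j, 1j) - math.exp(-math.pi / 2)) < 1e-12
assert abs(cpow(-1, 1j) - math.exp(-math.pi)) < 1e-12

# Part 3: the identity (z1 z2)^w = z1^w z2^w fails for z1 = z2 = -1, w = i,
# because arg z1 + arg z2 = 2*pi falls outside (-pi, pi].
lhs = cpow(-1 * -1, 1j)            # 1^i = 1
rhs = cpow(-1, 1j) * cpow(-1, 1j)  # e^{-pi} * e^{-pi} = e^{-2 pi}
assert abs(lhs - rhs) > 0.9
```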

Prove DeMoivre’s theorem using complex numbers

  1. Prove DeMoivre’s theorem,

        \[ (\cos \theta + i \sin \theta)^n = \cos (n \theta) + i \sin (n \theta), \]

    for all \theta \in \mathbb{R} and all n \in \mathbb{Z}_{>0}.

  2. Prove the triple angle formulas for sine and cosine,

        \[ \sin (3 \theta) = 3 \cos^2 \theta \sin \theta - \sin^3 \theta, \qquad \cos (3 \theta) = \cos^3 \theta - 3 \cos \theta \sin^2 \theta, \]

    by letting n = 3 in part (a).


  1. Proof. Since \cos \theta + i \sin \theta = e^{i \theta} we have

        \[ (\cos \theta + i \sin \theta)^n = (e^{i \theta})^n = e^{ni \theta} = \cos (n \theta) + i \sin (n \theta). \qquad \blacksquare \]

  2. Letting n = 3, we first apply DeMoivre’s theorem to get

        \[ (\cos \theta + i \sin \theta)^3 = \cos (3 \theta) + i \sin (3 \theta). \]

    On the other hand, we can expand the product,

        \[  (\cos \theta + i \sin \theta)^3 = \cos^3 \theta + 3 i \cos^2 \theta \sin \theta - 3 \cos \theta \sin^2 \theta - i \sin^3 \theta. \]

    Equating real and imaginary parts from the two expressions we obtain the requested identities:

        \[ \cos  (3 \theta) = \cos^3 \theta - 3 \cos \theta \sin^2 \theta \quad \text{and} \quad \sin (3 \theta) = 3 \cos^2 \theta \sin \theta - \sin^3 \theta.  \]
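As a sketch (the helper names and sample angles are ours), the two triple angle formulas can be spot-checked numerically:

```python
import math

# sin(3t) = 3 cos^2(t) sin(t) - sin^3(t)
def triple_sin(theta):
    s, c = math.sin(theta), math.cos(theta)
    return 3 * c**2 * s - s**3

# cos(3t) = cos^3(t) - 3 cos(t) sin^2(t)
def triple_cos(theta):
    s, c = math.sin(theta), math.cos(theta)
    return c**3 - 3 * c * s**2

for theta in (0.0, 0.3, 1.0, 2.5, -1.7):
    assert math.isclose(math.sin(3 * theta), triple_sin(theta), abs_tol=1e-12)
    assert math.isclose(math.cos(3 * theta), triple_cos(theta), abs_tol=1e-12)
```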

Prove formula relating trig functions of real numbers to the complex exponential

  1. Prove that for \theta \in \mathbb{R} we have the following formulas,

        \[ \cos \theta = \frac{e^{i \theta} + e^{-i \theta}}{2}, \qquad \sin  \theta = \frac{e^{i \theta} - e^{-i\theta}}{2i}. \]

  2. Using part (a) prove that

        \[ \cos^2 \theta = \frac{1}{2}(1 + \cos (2 \theta)), \qquad \sin^2 \theta = \frac{1}{2} (1 - \cos (2 \theta)). \]


  1. Proof. We compute, using the definition of the complex exponential, e^{i \theta} = \cos \theta + i \sin \theta:

        \begin{align*}  \cos \theta &= \frac{1}{2} (2 \cos \theta) \\  &= \frac{1}{2} \left( \cos \theta + i \sin \theta + \cos \theta - i \sin \theta \right) \\  &= \frac{1}{2} \left( \cos \theta + i \sin \theta + \cos (-\theta) + i \sin (-\theta) \right) \\  &= \frac{e^{i \theta} + e^{-i \theta}}{2}.  \end{align*}

    (Where in the second to last line we used that cosine is an even function and sine is odd, i.e., \cos \theta = \cos (-\theta) and \sin (-\theta) = -\sin \theta.)
    For the second formula we compute similarly,

        \begin{align*}  \sin \theta &= \frac{1}{2i} (2i \sin \theta) \\  &= \frac{1}{2i} \left( \cos \theta + i \sin \theta - \cos (-\theta) - i \sin (-\theta) \right) \\  &= \frac{e^{i \theta} - e^{-i \theta}}{2i}. \qquad \blacksquare \end{align*}

  2. Proof. We can compute these directly using the expressions we obtained in part (a),

        \begin{align*}  \cos^2 \theta &= \left( \frac{e^{i \theta} + e^{-i \theta}}{2} \right)^2 \\  &= \frac{e^{2i \theta} + e^{-2i \theta} + 2}{4} \\  &= \frac{2 \cos (2 \theta) + 2}{4} \\  &= \frac{1}{2} (1 + \cos (2 \theta)). \\ \sin^2 \theta &= \left( \frac{e^{i \theta} - e^{-i \theta}}{2i} \right)^2 \\  &= \frac{e^{2 i \theta} + e^{-2i \theta} - 2}{-4} \\  &= \frac{2 \cos (2 \theta) - 2}{-4} \\  &= \frac{1}{2} (1 - \cos (2 \theta)). \qquad \blacksquare \end{align*}
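Both parts can be spot-checked numerically (a sketch; the sample angles are arbitrary):

```python
import cmath
import math

for theta in (0.0, 0.7, 2.0, -1.2):
    e_plus = cmath.exp(1j * theta)
    e_minus = cmath.exp(-1j * theta)
    # Part (a): cos and sin recovered from the complex exponential.
    assert abs((e_plus + e_minus) / 2 - math.cos(theta)) < 1e-12
    assert abs((e_plus - e_minus) / (2j) - math.sin(theta)) < 1e-12
    # Part (b): the squared identities.
    assert math.isclose(math.cos(theta)**2, (1 + math.cos(2 * theta)) / 2, abs_tol=1e-12)
    assert math.isclose(math.sin(theta)**2, (1 - math.cos(2 * theta)) / 2, abs_tol=1e-12)
```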

Prove that the complex exponential function is never zero

  1. Prove that e^z \neq 0 for all z \in \mathbb{C}.
  2. Find all z \in \mathbb{C} such that e^z = 1.

    1. Proof. Let z = a + bi for real numbers a, b. Then, from the definition of the complex exponential and the fact that e^{x+y} = e^x e^y for complex numbers x and y (Theorem 9.3, page 367 of Apostol), we have

          \[ e^z = e^{a+bi} = e^a (\cos b + i \sin b). \]

      But from the properties of the real exponential function we know e^a \neq 0 for any a \in \mathbb{R}. Furthermore, we know \cos b + i \sin b \neq 0 for all b \in \mathbb{R} since

          \[ \cos b + i \sin b = 0 \quad \implies \quad \cos b = \sin b = 0 \quad \implies \quad \sin^2 b + \cos^2 b = 0, \]

      which contradicts the Pythagorean identity (\cos^2 b + \sin^2 b = 1). Hence, e^z \neq 0 for any z \in \mathbb{C}. \qquad \blacksquare

    2. We compute

          \[  e^z = 1 \quad \implies \quad e^a \cos b + i (e^a \sin b) = 1. \]

      Then, setting real and imaginary parts equal this implies

          \[ e^a \cos b = 1 \quad \text{and} \quad e^a \sin b = 0. \]

      Since e^a \neq 0, the second equation forces \sin b = 0, so b = n \pi for some n \in \mathbb{Z}; the first equation then forces \cos b = 1 (so b = 2 n \pi) and e^a = 1, i.e., a = 0. Thus,

          \[ z = 2n \pi i. \]
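Both conclusions can be checked numerically with cmath (the sample points are ours):

```python
import cmath

# Part 1: e^z is never zero; sample a few points across the plane.
for z in (0, 1 + 1j, -5 - 3j, 10j, -40):
    assert cmath.exp(z) != 0

# Part 2: e^z = 1 exactly when z = 2*n*pi*i with n an integer.
for n in (-3, -1, 0, 2, 5):
    assert abs(cmath.exp(2 * n * cmath.pi * 1j) - 1) < 1e-9
```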

Compute some properties of compound interest rates

Consider a bank account which starts with P dollars and pays an interest rate r per year, compounding m times per year.

  1. Prove that the balance in the bank account at the end of n years is

        \[ P \left( 1 + \frac{r}{m} \right)^{mn}. \]

For fixed values of r and n, the balance at the end of n years as m \to +\infty is given by

    \[ \lim_{m \to +\infty} P \left( 1 + \frac{r}{m} \right)^{mn} = Pe^{rn}. \]
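The convergence to this limit is easy to observe numerically (a sketch; the sample values P, r, n are ours):

```python
import math

P, r, n = 1000.0, 0.06, 10
continuous = P * math.exp(r * n)  # the limiting balance P e^{rn}

# The balance increases with the compounding frequency m, toward the limit.
prev = 0.0
for m in (1, 4, 12, 365, 10_000):
    balance = P * (1 + r / m) ** (m * n)
    assert prev < balance < continuous
    prev = balance

# With very frequent compounding the balance is within a cent of the limit.
assert abs(P * (1 + r / 1_000_000) ** (1_000_000 * n) - continuous) < 0.01
```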

We say that money grows at the annual rate r with continuous compounding if the amount of money after t years, denoted f(t), is given by

    \[ f(t) = f(0)e^{rt}. \]

Give an approximate length of time for the money in a bank account to double if r = 0.06 and the interest compounds:

  1. continuously;
  2. four times per year.

Incomplete. Sorry, I’ll try to get back to this soon(ish).
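A sketch of the computation in the meantime: the balance doubles when e^{rt} = 2 (continuous) or (1 + r/4)^{4t} = 2 (quarterly), giving t = \ln 2 / r and t = \ln 2 / (4 \ln (1 + r/4)) respectively:

```python
import math

r = 0.06
# Continuous compounding doubles when e^{r t} = 2.
t_continuous = math.log(2) / r
# Quarterly compounding doubles when (1 + r/4)^{4t} = 2.
t_quarterly = math.log(2) / (4 * math.log(1 + r / 4))

assert math.isclose(t_continuous, 11.55, abs_tol=0.01)  # about 11.55 years
assert math.isclose(t_quarterly, 11.64, abs_tol=0.01)   # about 11.64 years
```

So continuous compounding shaves only about a month off the doubling time compared with quarterly compounding.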

Prove some properties of the function e^{-1/x^2}

Consider the function

    \[ f(x) = e^{-\frac{1}{x^2}} \qquad \text{when} \quad x \neq 0 \]

and f(0) = 0.

  1. Prove that for every positive number m we have

        \[ \lim_{x \to 0} \frac{f(x)}{x^m} = 0. \]

  2. Prove that if x \neq 0 then

        \[ f^{(n)}(x) = f(x) P \left( \frac{1}{x} \right) \]

    where P(t) is a polynomial in t.

  3. Prove that

        \[ f^{(n)}(0) = 0 \qquad \text{for all } n \geq 1. \]


  1. Proof. (A specific case of this general theorem appeared as the first problem of this section; this proof just generalizes that particular case.) We make the substitution t = \frac{1}{x^2}, so that t \to +\infty as x \to 0 and \frac{1}{|x|^m} = t^{\frac{m}{2}}; then we have

        \begin{align*}  \lim_{x \to 0} \left| \frac{e^{-\frac{1}{x^2}}}{x^m} \right| &= \lim_{t \to +\infty} \frac{t^{\frac{m}{2}}}{e^t} \\[9pt]  &= 0 \end{align*}

    by Theorem 7.11 (page 301 of Apostol) since m > 0 implies \frac{m}{2} > 0 as well. \qquad \blacksquare

  2. Proof. The proof is by induction on n. In the case n = 1 we have

        \begin{align*}  f'(x) &= \left( \frac{2}{x^3} \right) e^{-\frac{1}{x^2}} \\[9pt]  &= 2 \left( \frac{1}{x} \right)^3 e^{-\frac{1}{x^2}} \\[9pt]  &= P \left(\frac{1}{x} \right) e^{-\frac{1}{x^2}}. \end{align*}

    So, indeed the formula is valid in the case n = 1. Assume then that the formula holds for some positive integer k. We want to show this implies the formula holds for the case k + 1.

        \begin{align*}  f^{(k+1)}(x) = \left( f^{(k)}(x) \right)' &= \left( P\left( \frac{1}{x} \right) e^{-\frac{1}{x^2}} \right)' \\[9pt]  &= -\frac{1}{x^2} P' \left( \frac{1}{x} \right) e^{-\frac{1}{x^2}} + \frac{2}{x^3} P \left( \frac{1}{x} \right) e^{-\frac{1}{x^2}} \\[9pt]  &= \left( -\frac{1}{x^2} P' \left( \frac{1}{x} \right) + \frac{2}{x^3} P \left( \frac{1}{x} \right) \right)e^{-\frac{1}{x^2}}, \end{align*}

    using the chain rule on both factors. But then the factor

        \[ -\frac{1}{x^2} P' \left( \frac{1}{x} \right) + 2 \left( \frac{1}{x} \right)^3 P \left( \frac{1}{x} \right) \]

    is still a polynomial in \frac{1}{x}, since each term is a product of polynomials in \frac{1}{x}, and the sum of two polynomials in \frac{1}{x} is again a polynomial in \frac{1}{x}. Therefore, the formula holds for the case k+1; hence, it holds for all positive integers n. \qquad \blacksquare

  3. Proof. The proof is by induction on n. If n = 1 then we use the limit definition of the derivative to compute the derivative at 0,

        \begin{align*}  f'(0) &= \lim_{h \to 0} \frac{f(0+h) - f(0)}{h} \\[9pt]  &= \lim_{h \to 0} \frac{f(h)}{h} &(f(0) = 0 \text{ by def of } f)\\[9pt]  &= 0 & \text{by part (a)}. \end{align*}

    So, indeed f'(0) = 0 and the statement is true for the case n = 1. Assume then that f^{(k)} (0) = 0 for some positive integer k. Then, we use the limit definition of the derivative again to compute the derivative f^{(k+1)}(0),

        \begin{align*}  f^{(k+1)}(0) &= \lim_{h \to 0} \frac{f^{(k)}(0+h) - f^{(k)}(0)}{h} \\[9pt]  &= \lim_{h \to 0} \frac{f^{(k)}(h)}{h} &(f^{(k)}(0) = 0 \text{ by Ind. Hyp.}) \\[9pt]  &= \lim_{h \to 0} f(h) P \left( \frac{1}{h} \right) \cdot \frac{1}{h} &(\text{part (b)}) \\[9pt]  &= \lim_{h \to 0} P \left( \frac{1}{h} \right) e^{-\frac{1}{h^2}}. \end{align*}

    This follows since \frac{1}{h} \cdot P \left( \frac{1}{h} \right) is still a polynomial in \frac{1}{h} (which we again denote by P), and by the definition of f(x) for x \neq 0. But then, by part (a) we know

        \[ \lim_{h \to 0} \frac{f(h)}{h^m} = 0 \qquad \text{for all } m \in \mathbb{Z}^+. \]

    Therefore,

        \[ f^{(k+1)}(0) = \lim_{h \to 0} P \left( \frac{1}{h} \right) e^{-\frac{1}{h^2}} = 0. \]

    Thus, the formula holds for the case k+1, and hence, for all positive integers n. \qquad \blacksquare
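The key facts in this solution, the limit in part (a) and the n = 1 formula in part (b), can be spot-checked numerically (a sketch; the tolerances and sample points are ours):

```python
import math

def f(x):
    """f(x) = exp(-1/x^2) for x != 0, with f(0) = 0."""
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# Part (a): f(x)/x^m shrinks toward 0 as x -> 0, for several positive m.
for m in (1, 2, 5, 10):
    assert f(0.05) / 0.05**m < f(0.5) / 0.5**m
    assert f(0.01) / 0.01**m < 1e-100

# Part (b), base case n = 1: f'(x) = (2/x^3) e^{-1/x^2}, compared against
# a central difference approximation of the derivative.
for x in (0.7, 1.0, 2.0):
    h = 1e-6
    central = (f(x + h) - f(x - h)) / (2 * h)
    assert math.isclose(central, (2 / x**3) * f(x), rel_tol=1e-6)
```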