Prove that the error term in the Taylor expansion of $\arctan x$ satisfies the following inequality: for $0 \leq x \leq 1$,
$$\arctan x = \sum_{k=1}^{n} \frac{(-1)^{k-1} x^{2k-1}}{2k-1} + E_{2n}(x), \qquad \text{where} \quad |E_{2n}(x)| \leq \frac{x^{2n+1}}{2n+1}.$$

*Proof.* To prove this we will work directly from the definition of the error as an integral. Since $\arctan x = \int_0^x \frac{dt}{1+t^2}$, the error term is
$$E_{2n}(x) = \arctan x - \sum_{k=1}^{n} \frac{(-1)^{k-1} x^{2k-1}}{2k-1}.$$

We know for $x \in [0,1]$ we have (we need $x \in [0,1]$ for the expansion of $\frac{1}{1+x^2}$ to be valid)
$$\frac{1}{1+t^2} = \sum_{k=1}^{n} (-1)^{k-1} t^{2k-2} + \frac{(-1)^n t^{2n}}{1+t^2}.$$

Therefore, integrating term by term from $0$ to $x$, we have
$$\arctan x = \int_0^x \frac{dt}{1+t^2} = \sum_{k=1}^{n} \frac{(-1)^{k-1} x^{2k-1}}{2k-1} + (-1)^n \int_0^x \frac{t^{2n}}{1+t^2}\, dt, \qquad \text{so} \quad E_{2n}(x) = (-1)^n \int_0^x \frac{t^{2n}}{1+t^2}\, dt.$$

So, we can bound the error term by bounding the integral. Since $\frac{1}{1+t^2} \leq 1$ for all $t$,
$$|E_{2n}(x)| = \int_0^x \frac{t^{2n}}{1+t^2}\, dt \leq \int_0^x t^{2n}\, dt = \frac{x^{2n+1}}{2n+1}. \qquad \blacksquare$$

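As a numerical sanity check (not part of the proof), here is a short Python sketch comparing the actual error of the partial sums of $\arctan x$ against the bound $x^{2n+1}/(2n+1)$; the function names are my own.

```python
import math

def arctan_partial_sum(x, n):
    """Sum_{k=1}^{n} (-1)^(k-1) x^(2k-1) / (2k-1)."""
    return sum((-1) ** (k - 1) * x ** (2 * k - 1) / (2 * k - 1)
               for k in range(1, n + 1))

def error_and_bound(x, n):
    """Return (|E_2n(x)|, x^(2n+1)/(2n+1))."""
    error = abs(math.atan(x) - arctan_partial_sum(x, n))
    bound = x ** (2 * n + 1) / (2 * n + 1)
    return error, bound

# Check the bound at several x in [0, 1] and several n.
for x in (0.25, 0.5, 1.0):
    for n in (1, 3, 5):
        err, bnd = error_and_bound(x, n)
        assert err <= bnd + 1e-15
```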
Yeah, that’s what I suspected: that you were thinking of an infinite series. I figured it out after browsing through Courant. I understand your thinking, but we haven’t officially gotten to infinite series yet!

You stated we need $x \in [0,1]$ for the expansion of $\frac{1}{1+x^2}$. Can you please elaborate on this? Apostol expands $\frac{1}{1-x}$ with the restriction $x \neq 1$, then substitutes $-x^2$ for $x$ in the polynomial, but then $x$ will never be $1$. Expanding $\frac{1}{1+x^2}$ directly also seems reasonable. When $|x| > 1$ I see the error term becomes unreasonable. Did I miss something?
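To illustrate the observation about $|x| > 1$ numerically (a sketch of my own, not from the original discussion): the bound $x^{2n+1}/(2n+1)$ grows with $n$ once $x > 1$, so adding more terms makes the guarantee worse, not better.

```python
def bound(x, n):
    """The error bound x^(2n+1)/(2n+1) from the proof above."""
    return x ** (2 * n + 1) / (2 * n + 1)

# For x = 1.5 the bound grows without limit as n increases...
bounds_big = [bound(1.5, n) for n in (1, 5, 10, 20)]
assert all(a < b for a, b in zip(bounds_big, bounds_big[1:]))

# ...while for x = 0.5 it shrinks rapidly toward zero.
bounds_small = [bound(0.5, n) for n in (1, 5, 10, 20)]
assert all(a > b for a, b in zip(bounds_small, bounds_small[1:]))
```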

So, perhaps the restriction |x| infinity.

My comment seems to have gotten messed up… What I said was: perhaps the restriction on $x$ is to ensure convergence if the Taylor series went to infinity? (I’m assuming the decomposed polynomial acts like $\frac{1-r^n}{1-r}$ when $n \to \infty$, rather than $\frac{1}{1-r} + E_n(r)$.)

I’ll need to think about this. You’re right that the expansion is valid for all $x$ (since it is just finitely many terms… I think when I wrote this I was thinking about the expansion for the geometric series, which is only valid for $|x| < 1$).
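The point that the expansion with error term is a finite algebraic identity, valid for all $x$, can be checked numerically. This sketch (my own, under the expansion used in the proof above) verifies $\frac{1}{1+t^2} = \sum_{k=1}^{n} (-1)^{k-1} t^{2k-2} + \frac{(-1)^n t^{2n}}{1+t^2}$ even at values with $|t| > 1$, where the infinite geometric series would diverge.

```python
def lhs(t):
    return 1 / (1 + t ** 2)

def rhs(t, n):
    """Finite expansion of 1/(1+t^2) plus its exact remainder term."""
    poly = sum((-1) ** (k - 1) * t ** (2 * k - 2) for k in range(1, n + 1))
    remainder = (-1) ** n * t ** (2 * n) / (1 + t ** 2)
    return poly + remainder

# The identity holds for |t| < 1 and |t| > 1 alike, for every n.
for t in (0.5, 2.0, 3.0):
    for n in (1, 3, 7):
        assert abs(lhs(t) - rhs(t, n)) < 1e-9
```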