When I was first introduced to a derivation of the Taylor series representation of the exponential function here (p. 25): http://paginas.fisica.uson.mx/horacio.munguia/Personal/Documentos/Libros/Euler The_Master of Us.pdf
I noted that the author, Dunham, mentions that the argument is not rigorous. I suspected it had something to do with the step that simplifies expressions like $\frac{k(k-1)(k-2)}{k^3}$ to $1$ for infinitely large $k$. Later, Baby Rudin's section on $e$ in Chapter 3 clarified the matter. However, when faced with sums like $\sum\limits_{k=0}^{n} {n \choose k}\left(\frac{x}{n}\right)^{k}$, I still feel the urge to simply argue that factors like $\frac{n(n-1)(n-2)}{n^3}$ go to $1$ for sufficiently large $n$ and then just consider the sum $\sum\limits_{k=0}^\infty \frac{x^k}{k!}$. Is there some middle ground between Rudin's thorough treatment and Euler's intuitive but perhaps loose argument?
The best I could think of is as follows:
Letting $f_n(x)=\sum\limits_{k=0}^n {n \choose k}(\frac{x}{n})^k$
I don't think it is hard to show that $\left|f_n(x)-\sum\limits_{k=0}^n \frac{x^k}{k!}\right| = o(1)$ as $n \to \infty$,
by writing terms such as ${n \choose 2}\frac{x^2}{n^2}$ as $\left(\frac{1}{2}-\frac{1}{2n}\right)x^2$ and then summing up all the error terms like $-\frac{x^2}{2n}$.
This leads me to ask about this sum:
$ 1+M(\frac{f}{M})^{\alpha}+\frac{M(M-1)(\frac{f}{M})^{2\alpha}}{2!}+\frac{M(M-1)(M-2)(\frac{f}{M})^{3\alpha}}{3!}+...=\sum\limits_{k=0}^M {M \choose k}(\frac{f}{M})^{k\alpha}$
How would you find the limit of this sum as $M \to \infty$?
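To get a numerical feel for how this sum behaves before attempting a proof, one could evaluate it for growing $M$ at a few values of $\alpha$ (a rough sketch; the helper name `S` and the sample values of $f$, $\alpha$, $M$ are my own choices):

```python
from math import comb

def S(f, alpha, M):
    # sum_{k=0}^{M} C(M,k) * (f/M)^(k*alpha);
    # for alpha = 1 this is exactly (1 + f/M)^M by the binomial theorem
    return sum(comb(M, k) * (f / M) ** (k * alpha) for k in range(M + 1))

f = 1.0
for alpha in (0.5, 1.0, 2.0):
    # The behaviour as M grows appears to depend sharply on alpha
    print(alpha, [S(f, alpha, M) for M in (10, 100, 1000)])
```

For $\alpha = 1$ the values approach $e^f$, as expected from the binomial-theorem identity; for other $\alpha$ the table suggests very different limiting behaviour, which is what makes the question interesting.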