Why e is "natural", and why we use radians.

In summary: the thread discusses why base $e$ and radians are the natural choices in calculus. The explanation offered is that the derivative formulas for exponential, logarithmic, and trigonometric functions are at their simplest precisely for base $e$ and radian measure, while non-natural choices introduce extra constant factors. The thread also touches on defining $e$ through the integral of $1/t$ and on what $b^x$ even means for irrational $x$.
  • #1
ThePerfectHacker
Many books give bad answers or no answers at all to why we work with base $e$ and measure angles according to radians in calculus.

Here is what I tell my students. As far as I know, this is the best explanation I have seen, because it is essentially a one-line explanation: short and to the point.

Why $e$ is natural: Let $b>0$, $b\not = 1$, and consider the function $f(x) = b^x$. This is an exponential function with base $b$, and we can show that $f'(x) = c\cdot b^x$ where $c$ is some constant. In a similar manner we can consider the function $g(x) = \log_b x$, the logarithmic function to base $b$, and show that $g'(x) = k\cdot x^{-1}$ where $k$ is some constant. The derivative formulas would be at their best and simplest if those constants, $c$ and $k$, were both equal to one. This happens exactly when $b=e$. Thus, $e$ is the natural base for exponentials because the derivative formulas for the exponential and logarithmic functions are as simple as they can be.
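This can be checked numerically. Below is a minimal sketch (my own, not from the post) that estimates the constant $c$ in $f'(x) = c\cdot b^x$ with a central difference; it comes out to $\ln b$, which equals $1$ exactly when $b = e$:

```python
import math

def numerical_derivative(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

for b in [2.0, math.e, 10.0]:
    f = lambda x, b=b: b ** x
    # The ratio f'(x) / f(x) is the constant c, independent of x
    c = numerical_derivative(f, 1.5) / f(1.5)
    print(f"b = {b:.5f}:  c ≈ {c:.6f}  (ln b = {math.log(b):.6f})")
```

For $b = e$ the printed constant is $1$ to within the accuracy of the difference quotient; for other bases the extra factor $\ln b$ appears.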

Why radians are natural: A "degree" is defined by declaring a full revolution to be $360^{\circ}$; a "radian" is defined by declaring a full revolution to be $2\pi$ radians. More generally, we can define a new angle measurement by declaring a full revolution to be $R$ (for instance, $R=400$ results in what is known as gradians). Let us denote by $\sin_R x$ and $\cos_R x$ the sine and cosine functions of $x$ measured in $R$-units of angle. It can be shown that $(\sin_R x)' = C\cdot(\cos_R x)$ and $(\cos_R x)' = -C\cdot(\sin_R x)$ where $C$ is some constant. It would be best if this constant could be made $C=1$, which happens precisely when $R=2\pi$.
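The same numerical check works for the angle constant. Writing $\sin_R x = \sin(2\pi x/R)$ (an assumption consistent with the definitions above), the constant in $(\sin_R x)' = C\cdot\cos_R x$ comes out to $C = 2\pi/R$, which is $1$ exactly when $R = 2\pi$:

```python
import math

def sin_R(x, R):
    # Sine of x, where angles are measured so a full revolution is R units
    return math.sin(2 * math.pi * x / R)

def cos_R(x, R):
    return math.cos(2 * math.pi * x / R)

def numerical_derivative(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

for R in [360.0, 400.0, 2 * math.pi]:  # degrees, gradians, radians
    x = 0.3 * R                        # a fixed fraction of a revolution
    C = numerical_derivative(lambda t: sin_R(t, R), x) / cos_R(x, R)
    print(f"R = {R:9.4f}:  C ≈ {C:.6f}  (2*pi/R = {2 * math.pi / R:.6f})")
```

Only for $R = 2\pi$ does the constant drop out entirely.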
 
  • #2
ThePerfectHacker said:
Many books give bad answers or no answers at all to why we work with base $e$ and measure angles according to radians in calculus.

Here is what I tell my students. As far as I know, this is the best explanation I have seen, because it is essentially a one-line explanation: short and to the point.

Why $e$ is natural: Let $b>0$, $b\not = 1$, and consider the function $f(x) = b^x$. This is an exponential function with base $b$, and we can show that $f'(x) = c\cdot b^x$ where $c$ is some constant. In a similar manner we can consider the function $g(x) = \log_b x$, the logarithmic function to base $b$, and show that $g'(x) = k\cdot x^{-1}$ where $k$ is some constant. The derivative formulas would be at their best and simplest if those constants, $c$ and $k$, were both equal to one. This happens exactly when $b=e$. Thus, $e$ is the natural base for exponentials because the derivative formulas for the exponential and logarithmic functions are as simple as they can be.

Why radians are natural: A "degree" is defined by declaring a full revolution to be $360^{\circ}$; a "radian" is defined by declaring a full revolution to be $2\pi$ radians. More generally, we can define a new angle measurement by declaring a full revolution to be $R$ (for instance, $R=400$ results in what is known as gradians). Let us denote by $\sin_R x$ and $\cos_R x$ the sine and cosine functions of $x$ measured in $R$-units of angle. It can be shown that $(\sin_R x)' = C\cdot(\cos_R x)$ and $(\cos_R x)' = -C\cdot(\sin_R x)$ where $C$ is some constant. It would be best if this constant could be made $C=1$, which happens precisely when $R=2\pi$.

Suppose we were to use 'non-natural' bases for exponents and angles... in that case the following 'beautiful formula' ...

$\displaystyle e^{i\ \theta} = \cos \theta + i\ \sin \theta\ (1)$

... would probably not yet be known ...

Kind regards

$\chi$ $\sigma$
 
  • #3
chisigma said:
$\displaystyle e^{i\ \theta} = \cos \theta + i\ \sin \theta\ (1)$

$$\pi^{i\theta} = \cos \theta + i\sin \theta $$
Provided that we measure angles with a full revolution of $2\pi / \log \pi$.

So you can still have Euler's formula for different bases and angle units, provided they match together nicely. But the derivative formulas will change into something ugly.
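This matching can be checked numerically: since $\pi^{ix} = e^{ix\ln\pi}$, an angle of $\theta$ radians corresponds to $x = \theta/\ln\pi$ in units where a full revolution is $2\pi/\ln\pi$. A quick sketch (my own check, not from the post):

```python
import cmath
import math

theta = 0.7                        # an angle in radians
x = theta / math.log(math.pi)      # the same angle in the rescaled units
lhs = math.pi ** (1j * x)          # base-pi "Euler formula", left-hand side
rhs = cmath.cos(theta) + 1j * cmath.sin(theta)
print(lhs, rhs, abs(lhs - rhs))    # the two agree up to rounding error
```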
 
  • #4
Forgive me for being obtuse but your statement:

...consider the function $f(x) = b^x$. This is an exponential function with base $b$...

is completely mystifying to me.

In particular, I mean: how does one define such a function for, say: $b = \sqrt{2}$?

(I know the answer, someone who is taking calculus for the first time probably does NOT.)

In fact, proving the "base laws" for the functions $\log_b$ and $\exp_b$ depends on somehow producing the "natural base", which depends on producing the number $e$. So how do you do this?

If, in fact, you define:

\(\displaystyle e = \text{the unique } x \in \Bbb R \text{ such that } \int_1^x \frac{1}{t}\ dt = 1\)

and show that for:

\(\displaystyle F(x) = \int_1^x \frac{1}{t}\ dt\)

$F(a) + F(b) = F(ab)$

I am prepared to believe that (given a proof that $F$ is injective):

$F^{-1}(a+b) = F^{-1}(a)F^{-1}(b)$

and that such a function $F^{-1}$ has all the properties of $b^x$ when $x$ is rational.

It follows (in my mind, anyway) that we have:

$e = F^{-1}(1)$.

This, to me, is the reason why $e$ is "natural", I find the definition:

\(\displaystyle e = \lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^n\)

nearly impossible to motivate, whereas a differentiable function $f$ satisfying:

$f(a+b) = f(a)f(b)$ clearly has derivative:

$f'(x) = f'(0)\cdot f(x)$.

At this point, we are in a bit of a fix, which is why considering $f^{-1}$ proves to be more amenable to attack:

$(f^{-1})'(x) = \dfrac{1}{f'(f^{-1}(x))} = \dfrac{1}{f'(0)\cdot f(f^{-1}(x))}$

$= \dfrac{1}{f'(0)x}$

This is a function of the form $g(x) = \dfrac{1}{\alpha x}$, and we can use the Fundamental Theorem of Calculus to find values for $f^{-1}(x)$.

Taking $\alpha = 1$ leads, of course, to the above definition of $e$.
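The definition above can even be carried out numerically: approximate $F(x)=\int_1^x \frac{1}{t}\,dt$ with a midpoint rule and bisect for the solution of $F(x)=1$. A rough sketch (step count and tolerances are my own choices):

```python
def F(x, n=20_000):
    # Midpoint-rule approximation of the integral of 1/t from 1 to x
    h = (x - 1.0) / n
    return h * sum(1.0 / (1.0 + (i + 0.5) * h) for i in range(n))

# F is increasing with F(2) < 1 < F(3), so bisect for the x with F(x) = 1
lo, hi = 2.0, 3.0
for _ in range(50):
    mid = (lo + hi) / 2
    if F(mid) < 1.0:
        lo = mid
    else:
        hi = mid
print(lo)  # approaches e = 2.71828...
```

The bisection converges to $e = F^{-1}(1)$, exactly as in the definition above.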
 
  • #5
Deveno said:
Forgive me for being obtuse but your statement is completely mystifying to me.

It is not mystifying at all. It is just not rigorous. But that is fine. I doubt that Euler himself thought about it the way we think about it today. And what you wrote will have little value to calculus students. You might as well stop teaching trigonometry with that attitude and define sine and cosine as power series, because that is, after all, the formal way of doing it, which is how Rudin does it in the first pages of his book.

The important part is to motivate the idea then provide the rigor later. So defining exponentials $b^x$ for arbitrary real numbers is okay because it can at least motivate why $e$ is the best number to choose for $b$.
 
  • #6
Deveno said:
In particular, I mean: how does one define such a function for, say: $b = \sqrt{2}$?
I believe, the most natural definition of $\left(\sqrt{2}\right)^a$ is $\lim_{x\to\sqrt{2}}\lim_{y\to a}x^y$ where $x$ and $y$ range over rational numbers.

Deveno said:
In fact, proving the "base laws" for the functions $\log_b$ and $\exp_b$ depends on somehow producing the "natural base", which depends on producing the number $e$.
It should be possible to prove $x^{a+b}=x^ax^b$ with the definition above, and this does not require $e$.
 
  • #7
ThePerfectHacker said:
It is not mystifying at all. It is just not rigorous. But that is fine. I doubt that Euler himself thought about it the way we think about it today. And what you wrote will have little value to calculus students. You might as well stop teaching trigonometry with that attitude and define sine and cosine as power series, because that is, after all, the formal way of doing it, which is how Rudin does it in the first pages of his book.

The important part is to motivate the idea then provide the rigor later. So defining exponentials $b^x$ for arbitrary real numbers is okay because it can at least motivate why $e$ is the best number to choose for $b$.

I'm not sure you understand my point. Let me re-phrase it:

"What does $b^x$ mean for irrational $x$?"

I do not believe rigor is necessary to teach facts, even if the justification cannot be given at that time. It is common for young children to be told that the area of a circle is pi times its radius squared without any justification given. Given the "assumption" that the formula is true, one can still use it profitably, and I see no reason not to under those circumstances.

In a similar vein, I see no problem with teaching the "SOHCAHTOA" approach to trigonometry, either. While that is perfectly adequate for calculating many trig functions of well-established ratios, it is somewhat inadequate for calculating the values of the *continuous* function $\sin(x)$, and when one acquires the mathematical sophistication for a *better* definition, one ought to employ that instead. Power series approximations will do this, but one can use integrals of algebraic functions to obtain inverse trig functions, which also accomplishes the same purpose. A third, and also perfectly acceptable approach, is to use differential equations to define these transcendental functions.

I am not arguing that one *has* to define them this way, just that one *can* and the tools learned in a first-year calculus course allow this. One need not take this path if one feels that the students "aren't ready".

But back to my main point, which I feel is *somewhat* important: how do you define the function:

$f(x) = b^x$

and how do you justify that it is differentiable?

I honestly don't know your answers, but if I was one of your students, I would feel they are pertinent.

Evgeny.Makarov said:
I believe, the most natural definition of $\left(\sqrt{2}\right)^a$ is $\lim_{x\to\sqrt{2}}\lim_{y\to a}x^y$ where $x$ and $y$ range over rational numbers.

I'm OK with this. Computing this (without a computer) might prove problematic. This does address continuity satisfactorily; differentiability is another story.

It should be possible to prove $x^{a+b}=x^ax^b$ with the definition above, and this does not require $e$.

I shouldn't think that it would. That is not a "change of base" formula. However, it *is* a good place to start. That said, where does $e$ come in?
 
  • #8
Deveno said:
That said, where does $e$ come in?
I agree with the OP that $e$ is such that $\left(e^x\right)'=e^x$.
 
  • #9
Evgeny.Makarov said:
I agree with the OP that $e$ is such that $\left(e^x\right)'=e^x$.

I think we all agree this is true. How do you arrive at that fact?
 
  • #10
$e$ is defined to be the constant such that $\left(e^x\right)'=e^x$.

Now, I have not looked into the issue of defining $e^x$ for a long time. This is how I believe it was introduced to me, and if this approach works, I would consider it very natural.
 
  • #11
Deveno said:
"What does $b^x$ mean for irrational $x$?"

Exactly what most people think it means. You replace $x$ by a rational number $q$ which is close to $x$ and then compute $b^q$. That is what every student would do if you asked them to find $2^{\pi}$. That is how Euler and the mathematicians of his time thought of it. So I can write the $b^x$ function without rigorously defining it.

and how do you justify that it is differentiable?

I do not. I just write it and then ask students what will the derivative of this function be?

All that stuff you wrote before is useless for making a first-time student who sees the natural exponential understand what makes it special compared to other exponential functions. He will get lost in the middle and not understand what you are doing. My approach does not suffer from that. Now if you tell a student that the natural exponential is its own derivative, while other exponentials are only almost their own derivatives, they see what makes the natural exponential the nicer base to use.
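For what it's worth, the recipe above (replace $\pi$ by a nearby rational $q$ and compute $2^q$) is easy to carry out. A small sketch of my own, using continued-fraction convergents of $\pi$ as the rationals: for $q = a/b$, $2^q$ is just the $b$-th root of the integer $2^a$, so no calculus is needed.

```python
import math

# Continued-fraction convergents of pi: 3, 22/7, 333/106, 355/113
for a, b in [(3, 1), (22, 7), (333, 106), (355, 113)]:
    # For rational a/b, 2**(a/b) is the b-th root of the integer 2**a
    value = (2 ** a) ** (1.0 / b)
    print(f"2^({a}/{b}) ≈ {value:.10f}")
print(f"2^pi       ≈ {2.0 ** math.pi:.10f}")
```

Each successive rational exponent pins down more digits of $2^{\pi}$.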
 
  • #12
Let me see if I have this straight: you differentiate a function without any justification that it is differentiable. Why do I think this is not a good idea?

But...granted; establishing the conditions needed to apply some of the theorems of elementary calculus might be "more technical stuff" than a given student may need or want. Let's move on.

How does one establish that:

$b^x = e^{x\log(b)}$?

I mean, I'm willing to accept that as a DEFINITION of $b^x$, but then I'd sure like to know what $e$ is.

Now, there is "that other definition" of $e$ (which is actually the first definition I ever encountered), but...

Again, you're asking people to take a lot "on faith". There is nothing wrong with this, per se, but calculus affords an opportunity to actually PROVE things that previously they were only TOLD. This moves the source of their knowledge from: "external authority" (books, teacher, expert in field) to: "internal understanding" (I know it's true because of other things I know are true).

In one view, math is another form of intellectual slavery. In another, it is intellectual freedom. I suppose it's obvious which one I favor.

Let me put this in another light: it is (or so I am arguing) preferable to have the things we reason about be on "firm footing" (because arguments based on incorrect assumptions can be invalid, even if the reasoning itself is sound). University students are (by and large) people who have reached the age of majority, and as such are responsible for their own actions. This means (among other things) they will be expected to think for themselves. I feel it is irresponsible, therefore, to teach "think what you're told to think is true" in a classroom setting, as it delays the level of responsibility they will be held to by society.

Engineers, for example, will be held accountable for the safety of structures they design or certify. Not making them aware of the mathematical principles that lay beneath their calculations could conceivably lead to fatalities. If I were an engineer, I would want to be **** certain I had a minimum of assumptions, to limit my own potential liability. If there were a typo in a formula in a handbook of mathematical expressions I had used, it is possible that would not help me much in a civil lawsuit.

Granted, few of your students may face such an extreme situation, and such a situation is even less likely to rest upon a definition of an exponential function. Nevertheless, I believe in rigor as a principle, not for its own sake as a form of intellectual prowess, but because of what it accomplishes: showing what we derive is a consequence of simpler facts.

Ignoring the logical priority of mathematical conclusions is not sound mathematics. It neatly sweeps "hard stuff" under the rug, as a matter of expedience. It's like fast-food convenience for the mind, in short: a sham.

Now, it would be one thing if the Fundamental Theorem of Calculus and the Inverse Function Theorem were "abstract generalities" not bothered with in a beginning calculus course. However, this is not so, and I believe (perhaps I am being "too esoteric" in this belief?) that the best way to understand something is to become acquainted with it through USE.

So, if we have these tools available to define certain functions: why don't we USE them? ESPECIALLY since transcendental functions are difficult to understand EXCEPT through some form of limiting process (and isn't calculus all about the nifty things we can do with limits?).

I believe that teaching people to think for themselves only after they reach graduate school is "too late". Unless they have already acquired the skill independently on their own, they are set up for failure, or at best, a very difficult time of it.

I do understand, however, that as a matter of pragmatism, you may have found that devoting too much time to these sorts of issues has been non-productive. So, you are passing the buck to some other teacher. One can only hope that somewhere, this comes to an abrupt halt.

**************

I didn't mean to obliterate your cogent observation that $e$ is indeed a "natural" base. It is, and for the reasons you indicate. However, something "deep" is hiding behind it, something important (a fact about this slippery real number $e$). Transcendental real numbers *ought* to be "hard to define"; in fact it took centuries before we were in a position to even conclude they existed (that is, that the algebraic numbers are not "Cauchy-closed").

Personally, if all one takes calculus for is to calculate derivatives and integrals, meh: Wolfram|Alpha and a CRC handbook, skip the class. This business about limits and continuity, on the other hand, and the exploration of the "infinitesimal", this sly attempt to touch the face of infinity; if that does not make one blush, I humbly submit one has no soul.
 
  • #13
Deveno said:
Let me see if I have this straight: you differentiate a function without any justification that it is differentiable. Why do I think this is not a good idea?

I only read your first paragraph. I did not read anything else you wrote because I think your teaching standards are abysmal if you have a problem with writing down $b^x$ in the way that makes sense to most people when they actually think about it.

Historically speaking, all of calculus developed in a non-precise way, but it was motivated, and my presentation most closely parallels that. Go ahead, why not spend the first two months of class defining Dedekind cuts and what it means to even add two real numbers together? But you do not do that. You say that is too much. So if you are willing to accept that it is okay to teach calculus without defining what addition means, rather appealing to students' intuitive feeling about what it is, then why not define exponentials in the most intuitive way possible?

I am willing to bet that if you explained to a student why $e$ is natural using my explanation, they would have a much better chance of repeating it back to you than with yours.
 
  • #14
I'm not sure what it is you think I am getting at. It might be a bit more illuminating to read my entire post, if only to provide more context, but I cannot force you to do this.

Not every function is differentiable. Students should not, even if they do not get all the details, think this is "mostly true". Continuity is *special*, and differentiability even more so.

But again, I do not hear an answer to my original question: assuming, for the time being that $b^x$ *is* differentiable (and we certainly HOPE it would be so), how do you actually differentiate it?

What I imagine is something invoking the chain rule, and writing:

$b^x = e^{x\log(b)} = \exp \circ (x \cdot \log(b))$

and using the product (or multiplying by a constant) rule for the $x \log(b)$ part, and then re-writing:

$(b^x)' = (\exp)'(x \log(b))\cdot \log(b) = \log(b)\cdot b^x$.

To get to that conclusion, we need to know at least these two facts:

1. $(e^x)' = e^x$
2. what the function $\log$ is (in particular that it is a functional *inverse* of $\exp$).

Perhaps you show (or merely indicate the "plausibility" of) the fact that $e^0 = 1$. I would be *interested* in how you derive fact 1. There are some different ways to do this, and you haven't indicated which one you prefer. You also have not indicated how the number $e$ (rather than "some exponential function $\exp$") enters into this. I think that this is critical, since the whole POINT of your initial post is that $e$ represents a "special base".

Fact 2 also needs prior establishment of at least SOME facts about $e^x$ (like, that it is 1-1, so it *has* an inverse, contrast this with the contortions we must undergo for inverse trig functions).

I know that you feel that it is somehow more complicated to start with a way of describing $\log$ and deriving $\exp$ as ITS inverse function, but I really don't see this. The crucial property of $\log$:

$\log(xy) = \log(x) + \log(y)$

is an EASY consequence of the definition:

\(\displaystyle \log(x) = \int_1^x \frac{1}{t}\ dt\), by taking the interval:

$[1,xy]$ and splitting it into the two sub-intervals of $[1,x]$ and $[x,xy]$, and using the linearity of the integral (presumably you find this property not so challenging as to withhold it from your students).
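Incidentally, the additivity can be seen numerically without invoking any logarithm function at all: approximate the three integrals directly and compare. A small sketch (midpoint rule; step count is my own choice):

```python
def integral_reciprocal(a, b, n=50_000):
    # Midpoint-rule approximation of the integral of 1/t over [a, b]
    h = (b - a) / n
    return h * sum(1.0 / (a + (i + 0.5) * h) for i in range(n))

x, y = 2.0, 3.5
log_x  = integral_reciprocal(1.0, x)
log_y  = integral_reciprocal(1.0, y)
log_xy = integral_reciprocal(1.0, x * y)  # or split it as [1, x] plus [x, x*y]
print(log_x + log_y, log_xy)              # the two sums agree
```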

At least you answered my original question of "what is $b^x$?" by answering it via some kind of "rational approximation". As I indicated to Evgeny, in his reply, I am perfectly OK with that. The real number system was designed to let us do things like that, so that's fair.

How you derive the fact:

$b^x = e^{x\log(b)}$ is a more subtle question. If in fact, you already have the inverse-pair $\exp$ and $\log$ previously established (you don't say if this is true or not), one could use THIS property of (rational, and in approximation, real) exponents:

$(a^b)^c = a^{bc}$.

Perhaps "most" (or even possibly "all") of your students do not care. I cannot change that, nor do I have the singular pleasure of addressing them. I feel you should care, especially since the tone of this thread indicates a pedagogical approach of choosing a "simple case" as desirable.

Historically, the logical foundations of calculus were seen as something of a "crisis", and provided the impetus for some truly worthwhile mathematics by Cauchy, Weierstrass, Riemann and Dedekind (to name a few noteworthy examples). The "soundness" of Newton's (and Leibniz's) original justifications was viewed with some suspicion well into the 20th century, although with the advent of "non-standard analysis" these fears have been mostly allayed. Not all mathematical progress is made at "the outer reaches"; sometimes thinking about "the very beginnings" proves fruitful as well.

If this is not so, then tell me: why do we even "bother" with "epsilon-delta" proofs? I bet even a beginning calculus text makes some (perhaps brief) mention of these strange animals (or so it seems to me with the wealth of questions asked about them on sites such as this one).

Personally, I think talking about Dedekind cuts or Cauchy sequences (or even the oft-maligned "infinite decimal" approach) is a worthwhile undertaking, if only to show that these "real numbers" have some "concrete" realization as something we can use rational numbers to approximate (because, underneath it all, the rationals are what we "really" use). I understand if the syllabus you follow does not allow time for this, but it certainly would make "some theorems" less confusing (anything that uses glb's or lub's to establish the existence of some desired real number, for example).

*******************

I do get it, you know, that a "simple" explanation is easier to remember, and that perhaps as much as 80% of what a first-year calculus student learns will be forgotten in 3 months time (unless they *immediately* take a refresher course, or a "building-upon-it" continuation).

If I were a student of yours, asking these same questions, would I get the very same answers?

*******************

I have talked at perhaps too much length about this particular subject. It's hard for me to judge whether this becomes a purely "ad hominem" discussion, so let us abandon it. What I have neglected to say in all of this is that the SECOND part of your post is something I WHOLE-HEARTEDLY agree with, and that thinking in terms of "turns" is the "natural" way to measure angle (and very profitable for physicists and engineers).
 
  • #15
Deveno said:
I'm not sure what it is you think I am getting at. It might be a bit more illuminating to read my entire post, if only to provide more context, but I cannot force you to do this.

Not every function is differentiable. Students should not, even if they do not get all the details think this is "mostly true". Continuity is *special* and differentiability even more so.

But again, I do not hear an answer to my original question: assuming, for the time being that $b^x$ *is* differentiable (and we certainly HOPE it would be so), how do you actually differentiate it?

How do you "prove" that $\sin x$ and $\cos x$ are differentiable? You draw a picture illustrating the important limits. That ain't a proof at all. However, the picture is more illuminating for understanding the analysis of trigonometric functions than a boring, uninspired, unmotivated (but rigorous) power series definition.

How do you "define" $\sqrt{x}$? You have no problem saying that it is the positive number which, when squared, results in $x$. This definition is not exactly rigorous, as it assumes the reals have positive square roots, the proof of which requires the completeness property, something which is never even mentioned in a calculus course. You have no problem using that definition at all.

But the moment I write $b^x$ and say that it means raising $b$ to the power of $x$, you suddenly have a problem. You ask how I even define it. I "define" it in the most natural way most people would expect: $b^q$, where $q$ is a rational number approaching $x$. There is nothing natural about writing $\exp(x\log b)$; it is confusing and entirely unmotivated for anyone who sees it for the first time. Now if you applied this standard consistently, you would have a problem with how one "defines" $\sin x$ and $\sqrt{x}$. But you do not. You are okay with how they are presented in a calculus class and assume the necessary properties, including differentiability, when needed.

In the end you apply selective criticisms and make the concept of exponential functions exponentially more difficult for first-time students because you think rigor demands it. Calculus is not supposed to be about justifying analytic concepts. It is supposed to introduce one to them and to motivate why it works and why it is true.
 
  • #16
ThePerfectHacker said:
How do you "prove" that $\sin x$ and $\cos x$ are differentiable? You draw a picture illustrating the important limits. That ain't a proof at all. However, the picture is more illuminating for understanding the analysis of trigonometric functions than a boring, uninspired, unmotivated (but rigorous) power series definition.

How do you "define" $\sqrt{x}$? You have no problem saying that it is the positive number which, when squared, results in $x$. This definition is not exactly rigorous, as it assumes the reals have positive square roots, the proof of which requires the completeness property, something which is never even mentioned in a calculus course. You have no problem using that definition at all.

But the moment I write $b^x$ and say that it means raising $b$ to the power of $x$, you suddenly have a problem. You ask how I even define it. I "define" it in the most natural way most people would expect: $b^q$, where $q$ is a rational number approaching $x$. Now if you applied this standard consistently, you would have a problem with how one "defines" $\sin x$ and $\sqrt{x}$. But you do not. You are okay with how they are presented in a calculus class and assume the necessary properties, including differentiability, when needed.

In the end you apply selective criticisms and make the concept of exponential functions exponentially more difficult for first-time students because you think rigor demands it. Calculus is not supposed to be about justifying analytic concepts. It is supposed to introduce one to them and to motivate why it works and why it is true.

I think differentiating trig functions is likewise somewhat thorny. Similar glib use of $\pi$ is made in this case, usually without any justification. Again, a "geometric" limit argument is the usual "first look" approach taken, and this is not out of character with the geometric flavor that much of calculus has.

I think that transcendental functions are "subtler" than many people realize...indeed, that even the completeness property of the reals is a subtle concept. These things are usually "glossed over" in favor of expediency. I remain unconvinced that this is a NECESSARY evil. It's convenient, yes, and perhaps even "motivational" (whatever that means), but the truth is kept "out of view".

Many calculus texts start with the assertion that "they will assume the reader is familiar with the basic properties of real numbers" and dispense with 100 years of deep (and HARD) mathematics in a few words.

Square roots are a bit thorny, as well. Many students know (or have at least seen a proof) that $\sqrt{2}$ is irrational, but they have no idea what a big can of worms that really is. It is easy to write $\sqrt{x}$ and think you know what it means. I'm sure it comes as a shock to some math majors when they realize that actually, they do NOT (and maybe non-math-majors who never take anything more demanding than calculus ever realize there is "more to it than that").

I believe in DEMONSTRATING that a function is differentiable, by showing the limit that defines its derivative EXISTS. I think this is good "practice" for later when one encounters a function whose differentiability status is unknown. Not all "real problems" have answer keys in the back of some book.

Of course, you have me at a disadvantage: you know your students, and I do not. I ask you for an explanation of what you mean in your original post, and you reply with remarks about "selective criticisms".

I cannot recall the particular thread, but there was a similar discussion some time back about defining the area of a circle, and arriving at a value for $\pi$. Some argued that a "proper definition" of trig functions would have to be delayed until after power series, as that was logically more satisfying. I made the point that it could be done with the tools of integral calculus (although this might not be very "direct").

One often sees T-shirts with:

$e^{i\pi} + 1 = 0$

printed on them. The simplicity of this formula belies the depth of information it summarizes: how the circle ties together exponentials, logarithms, trigonometry, and an intrinsic notion of distance in 7 scant symbols. I'm not even confident I understand entirely everything it says...but the small portion I DO get, fills me with a kind of awe.
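The identity is also a one-liner to check in floating point, hedged only by rounding error:

```python
import cmath
import math

z = cmath.exp(1j * math.pi) + 1
print(z, abs(z))  # the modulus is on the order of 1e-16: zero, up to rounding
```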

If, in the final analysis, you feel that I am looking "too deeply" at what you think should be taken more lightly, I have no ready answer. I have my reasons for thinking the way I do, but if my explanations are not sufficient, I suppose that is my failing. I *try* to be clear, but I have no assurances I ever am, for I only see the world with my own perceptions, and never that of another human being.

Indeed, I often fear I might be embarrassing my comrades here at MHB, for being too outspoken, or having views well outside of the conventional wisdom, or even worse, perhaps just plain being wrong. I have only a vague notion of INTERNAL consistency to guide me, and I find it enormously difficult to communicate this to other people. Surely, I reason to myself, Evgeny.Makarov or Ackbach or Jameson or MarkFL or chisigma (or some other soul who I have in my ignorance failed to mention) will point out to me what is wrong with my views, so I may improve my ability to express math to other people...but alas, I hear only silence.
 

FAQ: Why e is "natural", and why we use radians.

Why is e considered a "natural" base?

e is called "natural" because it is the base for which the calculus of exponentials is simplest: the function e^x is its own derivative, with no extra constant factor. It is also the base of the natural logarithm, which is used to model continuous growth and decay.

What does it mean to use radians?

Using radians means measuring angles in terms of the radius of a circle: one radian is the angle subtended at the center of a circle by an arc equal in length to the radius. This makes radians a convenient and universal unit for measuring angles, especially in calculus and other advanced mathematics.

Why do we use radians instead of degrees?

Radians are preferred over degrees in mathematics because they simplify many equations and calculations. They also have a more intuitive connection to the unit circle and trigonometric functions, making them more useful in advanced math and physics.

How is e related to radians?

e and radians are "natural" in the same sense: each is the choice that removes extra constants from derivative formulas. With base e, the derivative of e^x is e^x itself; with angles measured in radians, the derivative of sin(x) is cos(x). The two meet in Euler's formula, e^(iθ) = cos(θ) + i sin(θ), which holds when θ is measured in radians.

Can radians be used in real-life applications?

Yes, radians can be used in real-life applications, especially in the fields of mathematics, physics, and engineering. They are particularly useful for modeling and analyzing continuous and cyclical phenomena, such as waves and oscillations.
