Can adding two undefined limits result in a defined limit?

azatkgz
I answered the test in this way:

1) If ##\lim_{x\rightarrow a}f(x)## and ##\lim_{x\rightarrow a}g(x)## do not exist, then ##\lim_{x\rightarrow a}(f(x)+g(x))## may or may not exist.
2) If ##\lim_{x\rightarrow a}f(x)## and ##\lim_{x\rightarrow a}(f(x)+g(x))## exist, then ##\lim_{x\rightarrow a}g(x)## must exist.
 
And what are your thoughts on the problems?
 
1) I think it usually does not exist, but the sum of the limits of some functions may be any number, like ##\frac{|x|}{x}+\frac{|x|}{x}##.
2) Here, I thought that if ##\lim_{x\rightarrow a}g(x)## does not exist, then ##\lim_{x\rightarrow a}(f(x)+g(x))## does not exist either.
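For instance, one concrete pair for 1) (a sketch taking ##a=0##; note that ##\frac{|x|}{x}+\frac{|x|}{x}=\frac{2|x|}{x}## still has no limit there, so the signs need to be opposite) is
$$f(x)=\frac{|x|}{x},\qquad g(x)=-\frac{|x|}{x}.$$
Neither ##\lim_{x\rightarrow 0}f(x)## nor ##\lim_{x\rightarrow 0}g(x)## exists, since the one-sided limits are ##1## and ##-1##, but ##f(x)+g(x)=0## for all ##x\neq 0##, so ##\lim_{x\rightarrow 0}\left(f(x)+g(x)\right)=0##.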
 
azatkgz said:
1) I think it usually does not exist, but the sum of the limits of some functions may be any number, like ##\frac{|x|}{x}+\frac{|x|}{x}##.
2) Here, I thought that if ##\lim_{x\rightarrow a}g(x)## does not exist, then ##\lim_{x\rightarrow a}(f(x)+g(x))## does not exist either.

You didn't give a solution to the limit in 1). You basically said the limit of a function of x is another function of x as x approaches something. I find that hard to believe.

You're adding two functions whose limits don't exist. Is it possible for the sum to have a limit even though neither of the two individual limits exists? Think in terms of graphs and what the graph looks like when a limit does not exist.

Using the reasoning from 1), you should be able to handle 2).
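For 2), one way to flesh out this hint with the standard limit laws (a sketch, not necessarily the argument the posts above intend) is to write
$$g(x)=\left(f(x)+g(x)\right)-f(x).$$
If ##\lim_{x\rightarrow a}f(x)=L## and ##\lim_{x\rightarrow a}\left(f(x)+g(x)\right)=M## both exist, the difference rule gives ##\lim_{x\rightarrow a}g(x)=M-L##, so the limit of ##g## must exist.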
 