Several unknowns but only one equation

In summary, the thread discusses the rule of thumb that one equation is needed for each unknown, and notes that this really holds only for independent (nonsingular) systems of linear equations. In some settings, however, notably perturbation expansions, a single equation determines several unknowns at once: expanding in powers of a variable and requiring the whole expression to vanish identically yields one equation per coefficient. Textbooks treat this as a routine deduction, and it is not a cause for concern.
  • #1
stabu
I have a query, if anybody can shed some light on it, thanks:

So from an early age we get this idea of needing one equation for each unknown variable whose unique value we need to discover. Three unknowns? Well, you need three nonsingular equations: it's a kind of rule of thumb, I guess.

However, I have noticed that in some areas, notably perturbation expansions, one arrives at a single equation and can actually discover not one but several variables from that one equation. Smells of a free lunch, eh? Of course, it can't happen just like that. In perturbation theory, a very common step is to expand a Taylor polynomial, rearrange it into coefficients of rising powers of the key variable (say x), and then say (drums rolling ...) that if the expression equals zero for all x, then each coefficient (with its own unknowns) must also equal zero.

This enables us to pull out three, four, even more equations from the original expansion. Golly.
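For concreteness, here's a rough sketch of the coefficient-matching step using sympy (my own toy example, solving x^2 - 3x + 2 + eps = 0 perturbatively; not taken from any particular textbook):

Code:
# Perturbative solution of x**2 - 3*x + 2 + eps = 0 with x = x0 + x1*eps + x2*eps**2.
# The single identity in eps splits into one equation per power of eps.
from sympy import symbols, Poly, solve

eps, x0, x1, x2 = symbols('epsilon x0 x1 x2')

x = x0 + x1*eps + x2*eps**2
f = (x**2 - 3*x + 2 + eps).expand()

# Coefficients of eps**0, eps**1, eps**2; each must vanish separately.
eqs = Poly(f, eps).all_coeffs()[::-1][:3]
print(solve(eqs, [x0, x1, x2], dict=True))
# Two solutions: {x0: 1, x1: 1, x2: 1} and {x0: 2, x1: -1, x2: -1},
# matching the exact roots (3 +/- sqrt(1 - 4*eps))/2 expanded to second order.

One equation in, three equations (and three unknowns) out: no free lunch, because "holds for every power of eps" is really several conditions in disguise.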

I've been over a few textbooks on this ... and they seem to treat it as a routine deduction. No flicker of the eyelids!

I admit this is a rough description ... I'll try to put more flesh on it later. Initially, however, I wanted to post about it to see if anybody recognises what I'm describing.

Thanks!
 
  • #2
As you said yourself, the rule only applies to a nonsingular system of linear equations, not to arbitrary systems of equations (this is one of the most basic facts in linear algebra). Consider a^2 + b^2 = 0 with a, b unknown real variables. We have two unknowns, one equation, and one unique solution, (0, 0): since squares of reals are nonnegative, the sum can only vanish if a = b = 0.

I don't really see the problem. Some people may state "you need n equations to determine n unknowns", but they either implicitly take "equations" to mean nonsingular linear equations, or they are stating something that is often false, though true in many simple cases. You already know that the rule only applies in a special case, and you can come up with cases where it doesn't.
 
  • #3
"Nonsingular" is the wrong word here. You mean "independent" equations.
 
  • #4
stabu said:
I have a query, if anybody can shed some light on it, thanks:

So from an early age we get this idea of needing one equation for each unknown variable whose unique value we need to discover. Three unknowns? Well, you need three nonsingular equations: it's a kind of rule of thumb, I guess.

However, I have noticed that in some areas, notably perturbation expansions, one arrives at a single equation and can actually discover not one but several variables from that one equation. Smells of a free lunch, eh? Of course, it can't happen just like that. In perturbation theory, a very common step is to expand a Taylor polynomial, rearrange it into coefficients of rising powers of the key variable (say x), and then say (drums rolling ...) that if the expression equals zero for all x, then each coefficient (with its own unknowns) must also equal zero.
Yes, it is certainly true that if a polynomial (or power series) is equal to 0 for all x, then every coefficient must be 0. That's not one equation; it is an infinite number of equations, one for each value of x.
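Spelled out for a quadratic (my own illustration, using evaluation and differentiation at x = 0):

p(x) = a_0 + a_1 x + a_2 x^2 = 0 for all x
=> p(0) = a_0 = 0, p'(0) = a_1 = 0, p''(0) = 2 a_2 = 0.

So the single statement "p(x) = 0 identically" carries one equation per coefficient, which is exactly what the perturbation step exploits.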

 

FAQ: Several unknowns but only one equation

What is the concept of "Several unknowns but only one equation"?

The concept refers to a mathematical problem in which there are multiple variables or unknowns, but only one equation is provided to solve for them.

How do you solve a problem with several unknowns and only one equation?

With only one equation you generally cannot pin down every unknown. What you can do is manipulate the equation algebraically to isolate one unknown, expressing it in terms of the others; this describes a whole family of solutions rather than a single one. Unique values require extra information, such as additional equations or constraints on the variables.
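For instance, here is a toy sketch with sympy (the equation x + 2y = 10 is invented for illustration):

Code:
# One equation, two unknowns: isolate x in terms of y.
from sympy import symbols, Eq, solve

x, y = symbols('x y')
print(solve(Eq(x + 2*y, 10), x))  # [10 - 2*y]: one solution for each choice of y

The result is a one-parameter family of solutions, one for each value of y, not a single point.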

Can you provide an example of a problem with several unknowns and only one equation?

One example is finding the perimeter of a rectangle given one side length and the area. The equation area = length x width lets you solve for the unknown side, and the perimeter follows from there.
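For example, with area A = 12 and given side l = 4 (numbers invented for illustration):

w = A/l = 12/4 = 3, so P = 2(l + w) = 2(4 + 3) = 14.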

Is it possible to solve a problem with several unknowns and only one equation?

Sometimes, yes. If the single equation actually bundles several independent conditions, as with a polynomial identity that must hold for every x, or a sum of squares of real numbers equal to zero, it can determine several unknowns at once. In general, though, one equation in several unknowns has infinitely many solutions and does not yield a unique answer.

What are the limitations of solving a problem with several unknowns and only one equation?

The main limitation is that the solution is usually not unique: typically infinitely many combinations of values satisfy the equation, and sometimes none do. Problems that genuinely impose several independent conditions on the unknowns require a system of equations rather than a single one.
