- #1
kaotak
Of course it's not a contradiction; it's really just my misunderstanding. But here's what I don't get:
(N.B.: I'll be using the terms "electric potential" and "voltage" interchangeably in this post.)
Consider a charge moving in the direction of an electric field. If it moves a distance of x meters in the direction of the field, then it goes to a lower electric potential. This makes sense, since it's analogous to gravitational potential energy. What doesn't make sense is this fact paired with the definition of voltage:
The voltage at any arbitrary point P is the amount of work per unit charge it takes to move a positive test charge from infinity to that point. Consider this diagram.
~~~~~~~~~~~~| <--x--> |~~~~~~~~~~~~~~~~
Infinity ------------ A --------- B ------------ Electric Field
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
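To pin that definition down in symbols (my own notation, but it's the standard one):

$$V(P) = \frac{W_{\infty \to P}}{q} = -\int_{\infty}^{P} \vec{E} \cdot d\vec{\ell}$$

where $W_{\infty \to P}$ is the work an external agent does against the field to move the test charge $q$ from infinity to P.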
A charge that goes from A to B goes to a lower potential: the voltage at B is lower than the voltage at A. HOWEVER, using the definition of electric potential above, you could say that the electric potential at B is actually greater than (or equal to) the electric potential at A, since it takes more work (or equal work, if the electric field is doing the work for you) to move a positive test charge from infinity to B than to move it from infinity to A, because B is simply farther away than A is.
You could argue that A and B are both infinitely far from the point at infinity where the electric potential is defined to be zero, so the fact that B appears farther from that zero point doesn't matter. But then I still don't get why it would take more work to move a positive test charge to B than to A, since the electric field is weaker at A than it is at B.
So can someone tell me why there's a lower potential at B than there is at A using the DEFINITION of voltage (work per unit charge from infinity to P) to explain this?
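For what it's worth, here's how I've been picturing the definition numerically. This is only a sketch under my own assumptions: a single positive point charge as the field source (with k·q set to 1), A at r = 1 and B at r = 2 along the field direction, and a large finite radius standing in for "infinity" — none of those numbers come from the setup above.

```python
import numpy as np

K_Q = 1.0  # k * q_source in arbitrary units (assumed positive point source at origin)

def field(r):
    # Magnitude of the point-charge field at distance r; it points away from the source.
    return K_Q / r**2

def work_per_charge_from_far(r_point, r_far=1e6, n=200_000):
    # Work per unit charge an external agent does against the field to bring
    # a positive test charge from r_far (standing in for "infinity") to r_point:
    # V(r_point) ~ -integral of E dr, taken from r_far in to r_point.
    r = np.geomspace(r_far, r_point, n)   # log spacing resolves the 1/r^2 falloff
    e = field(r)
    integral = np.sum((e[:-1] + e[1:]) / 2 * np.diff(r))  # trapezoid rule
    return -integral

v_a = work_per_charge_from_far(1.0)  # point A, closer to the source
v_b = work_per_charge_from_far(2.0)  # point B, farther out along the field
print(v_a, v_b)  # v_a comes out about twice v_b: potential drops along the field
```

The positions and units here are made up purely for illustration; the only point is that the definition is a line integral of the field along the path, not a measure of how far the point sits from wherever "infinity" is drawn in the diagram.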