Proton in a magnetic field, numerical simulation

  • #1
Appelros
TL;DR Summary
The magnetic field does no work, but translating its force into a velocity update does; my workaround is neither elegant nor exact.
Hi! I am developing a fusion reactor simulator. When a proton spins in a perpendicular magnetic field, the Lorentz force qv×B is applied at a right angle to the velocity; however, whenever you add a value to a vector in this way, its magnitude increases, which contradicts the fact that the proton's speed should remain constant, since the magnetic field does no work. To solve this I simply scaled the vector back to its original size after adding the acceleration. All my units are set to 1, so the proton should spin in a circle with radius 1, which it does, but with two problems.

1. There are now two sets of acceleration vectors in my code, those that need scaling and those that don't. This feels very ad hoc.

2. While the proton spins in a circle of radius 1, it does so from 0.995 to -1.005 in one coordinate and from 1.9999 to -1e-5 in the other. While the latter is a somewhat acceptable error, the former is not (I'm assuming it should spin from 1 to -1). It continues to spin within these bounds for many rotations and under different calculation methods, so it is not error accumulation from my numerical algorithms.

Is there a more elegant solution that solves these problems?
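To put the magnitude issue in symbols: with the magnetic acceleration ##\mathbf a## perpendicular to ##\mathbf v##, a plain Euler step gives
$$|\mathbf v + \mathbf a\,\Delta t| = \sqrt{|\mathbf v|^2 + |\mathbf a|^2\,\Delta t^2} > |\mathbf v|,$$
so each step inflates the speed by a relative factor of roughly ##(\omega\,\Delta t)^2/2## with ##\omega = qB/m##, no matter how small the timestep.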
 
  • #2
Welcome to PF.

Appelros said:
When a proton spins in a perpendicular magnetic field, the Lorentz force qv×B is applied at a right angle to the velocity; however, whenever you add a value to a vector in this way, its magnitude increases,
When you add a value to a vector in what way? The centripetal force from the Lorentz force is perpendicular to the velocity. Can you explain in more detail how you are calculating the changes in the velocity vector at each time step in your simulation?

Appelros said:
I am developing a fusion reactor simulator
A fusion simulator will not be making calculations on each proton in the simulation. You should be working more along the lines of a plasma physics simulation, no?

https://www.amazon.com/Introduction...olled-Fusion/dp/3319793918/?tag=pfamazon01-20
 
  • #3
berkeman said:
Welcome to PF.
Thanks!

When you add a value to a vector in what way? The centripetal force from the Lorentz force is perpendicular to the velocity. Can you explain in more detail how you are calculating the changes in the velocity vector at each time step in your simulation?
Say one vector is along x; if we add along y we form a right-angle triangle with the new vector as the hypotenuse, and the hypotenuse is always the longest side of a right-angle triangle.

In my simulation I first get F from the Lorentz formula above, then divide by m, then multiply by the timestep and add it to the velocity of the particle. Finally I scale it to match the magnitude it had before adding to it.
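For concreteness, here is a stripped-down sketch of that loop (the names and initial conditions are illustrative, not my actual code), with q = m = |B| = 1:

```
using LinearAlgebra  # cross, norm

# Explicit Euler step on v, then rescale back to the previous speed.
function euler_rescale_step(x, v, B, q, m, dt)
    a = q * cross(v, B) / m          # Lorentz acceleration, E = 0
    v_new = v + a * dt               # this alone increases |v|
    v_new *= norm(v) / norm(v_new)   # ad hoc rescale to the old magnitude
    x_new = x + v_new * dt
    return x_new, v_new
end

# Unit-circle test: proton at (1,0,0) moving along +y in B = (0,0,-1).
let x = [1.0, 0.0, 0.0], v = [0.0, 1.0, 0.0], B = [0.0, 0.0, -1.0]
    for _ in 1:10_000
        x, v = euler_rescale_step(x, v, B, 1.0, 1.0, 0.001)
    end
    println(x, "  speed = ", norm(v))  # speed is 1 by construction
end
```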

A fusion simulator will not be making calculations on each proton in the simulation. You should be working more along the lines of a plasma physics simulation, no?

https://www.amazon.com/Introduction...olled-Fusion/dp/3319793918/?tag=pfamazon01-20
Well, fusion reactors use very little fuel and there is development of very small fusion reactors, so running simulations at the particle level isn't too far-fetched. Also I want to simulate atomic-level reactors that might be buildable in the far future. Plus atomic simulations help corroborate results from fluid/gas approximations.

Thanks for the book recommendation! I'll check it out.
 
  • #4
What on earth are you trying to do?

Even at the LHC, where bunches weigh picograms, they don't simulate every particle.

Further, nobody is seriously talking about pp fusion reactors. It is far too slow to be useful.
 
  • #5
Vanadium 50 said:
What on earth are you trying to do?

Even at the LHC, where bunches weigh picograms, they don't simulate every particle.
As I said above, I'd like to simulate futuristic reactors that are extremely small, maybe with a high score for who can make the smallest one. Also, if an approximation of one million particles agrees with the particle-by-particle simulation of a million particles, it helps validate the approximation.

Further, nobody is seriously talking about pp fusion reactors. It is far too slow to be useful.
Yes, obviously D-T is the usual reaction. I use protons for calibration.
 
  • #6
This seems like you are putting the cart before the horse. Getting a puff of gas that is only a million particles is not simple. At LHC-quality vacuum, it's a volume of about a quarter of a mm on a side.

At homemade-quality vacuum, this would be even smaller.

And you should absolutely not be messing with tritium.
 
  • #7
Vanadium 50 said:
This seems like you are putting the cart before the horse. Getting a puff of gas that is only a million particles is not simple. At LHC-quality vacuum, it's a volume of about a quarter of a mm on a side.
Loop Quantum Gravity has even harder experimental setups and that is still an interesting field of study.

And you should absolutely not be messing with tritium.
Not even in a computer simulation?
 
  • #8
Appelros said:
Loop Quantum Gravity has even harder experimental setups and that is still an interesting field of study.
There's a non sequitur.

If you are trying to simulate something that can't be built, well, it's your time to waste. But there is a reason that people who need to simulate real devices don't do it this way.
 
  • #9
Vanadium 50 said:
There's a non sequitur.
No.

If you are trying to simulate something that can't be built, well, it's your time to waste. But there is a reason that people who need to simulate real devices don't do it this way.
If you don't see the value in being able to simulate individual particles for fusion research then I don't know what to tell you.

If you have any input on the problems I listed I'd be glad to hear it.
 
  • #10
Appelros said:
TL;DR Summary: The magnetic field does no work, but translating its force into a velocity update does; my workaround is neither elegant nor exact.

Is there a more elegant solution that solves these problems?
There are far too many options to detail here. The motion of a proton in a uniform field is exactly soluble. Is it your intent to put further complications into your model that will demand a simulation?
Your most important error in what you have detailed is how you obtain the new position. You need to use the average velocity during the timestep, which will be much more accurate. Please work it out for a single step and show (us) why it is a vast improvement. You are correct that rescaling v each timestep is a very unsatisfactory solution.
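Something along these lines (a sketch only, with illustrative names):

```
using LinearAlgebra  # cross

# Advance v, then move x with the average of the old and new velocities.
function average_velocity_step(x, v, B, q, m, dt)
    v_new = v + (q / m) * cross(v, B) * dt
    x_new = x + 0.5 * (v + v_new) * dt
    return x_new, v_new
end
```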
 
  • #11
If you have ever done large-particle-number N-body simulations, you would quickly recognize how fast a PC can get bogged down in calculations.
If you want your simulation to have any useful processing time, using averages and currents would be far quicker.
 
  • #12
hutchphd said:
There are far too many options to detail here. The motion of a proton in a uniform field is exactly soluble. Is it your intent to put further complications into your model that will demand a simulation?
Yes.

Your most important error in what you have detailed is how you obtain the new position. You need to use the average velocity during the timestep, which will be much more accurate. Please work it out for a single step and show (us) why it is a vast improvement. You are correct that rescaling v each timestep is a very unsatisfactory solution.
Ok, if I understand you correctly this is what you mean:
```
v=[1,0,0]
d=[0,0,0.1]
vavg=(v+(v+d))/2 -> [1,0,0.05]
```
Sure it's an improvement but the magnitude still increases. Scaling coincides exactly with the Boris algorithm and I don't think taking an accuracy loss to be more "elegant" is a good idea.
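For reference, here is a minimal sketch of the standard Boris push with E = 0 (illustrative names). Its velocity update is a pure rotation, so the speed is conserved to machine precision without any rescaling:

```
using LinearAlgebra  # cross, dot

# Boris push with the electric field set to zero; the v update is a rotation.
function boris_step(x, v, B, q, m, dt)
    t = (q * dt / (2m)) * B
    s = 2t / (1 + dot(t, t))
    v_rot = v + cross(v, t)
    v_new = v + cross(v_rot, s)
    x_new = x + v_new * dt
    return x_new, v_new
end
```

It rotates v by 2·atan(ωΔt/2) per step rather than exactly ωΔt, so there is a small phase error but no error in the speed.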
 
  • #13
Mordred said:
If you have ever done large-particle-number N-body simulations, you would quickly recognize how fast a PC can get bogged down in calculations.
If you want your simulation to have any useful processing time, using averages and currents would be far quicker.
Every plasma course I have taken has covered the mechanics of individual particles, so it seems like a good place to start. Training a neural network on the physics is also something I can see speeding things up a lot.
 
  • #14
Those courses should have also included the related current equations. Using individual particles when you're first learning is useful, as it's often easier to understand than a field treatment, but it quickly becomes far more practical to apply weighted averages in large multi-particle systems.
 
  • #15
If you are simulating one individual particle, there is an analytic solution.
If you are simulating a million, a) it is not enough to capture the bulk physics, and b) it is too many for a PC. There are half a trillion particle-particle pairs that need to be calculated. @Mordred is absolutely right here.

Pooh-pooh others' experience if you want, but there is a reason that people who do this use voxels and/or small numbers of individual particles.
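For reference, the standard result for charge ##q## and mass ##m## in a uniform field ##\mathbf B = B\hat z## (signs and phase depend on the initial velocity and the sign of ##q##):
$$\omega_c = \frac{qB}{m}, \qquad r_L = \frac{m v_\perp}{|q|\,B},$$
where ##v_\perp## is the speed perpendicular to ##\mathbf B##. The speed is constant, the perpendicular velocity rotates uniformly at ##\omega_c##, and the position traces a circle (a helix if ##v_\parallel \neq 0##) of radius ##r_L##, e.g.
$$x(t) = x_0 + r_L \sin(\omega_c t), \qquad y(t) = y_0 + r_L \cos(\omega_c t).$$
With every constant set to 1 this is exactly the unit circle described in the opening post.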
 
  • #16
As you haven't described your fusion reactor simulation, you may also find it practical to build your coordinate system around the beam itself. This is typically done in cyclotrons, where the beam becomes the reference frame.

For cyclotrons the Frenet-Serret frame/coordinates are typically employed.

Scattering, however, is another detail that will quickly bog things down, particularly when you try using Breit-Wigner. There are methods to simplify scattering. I don't know how far you're taking the simulation.
 
  • #17
Mordred said:
Those courses should have also included the related current equations. Using individual particles when you're first learning is useful, as it's often easier to understand than a field treatment, but it quickly becomes far more practical to apply weighted averages in large multi-particle systems.
Yes, approximate models will take up a large part of my project. The particle-resolution sims are, I guess, motivated by my mathematical side seeking an "analytical" solution.
 
  • #18
Vanadium 50 said:
If you are simulating one individual particle, there is an analytic solution.
If you are simulating a million, a) it is not enough to capture the bulk physics, and b) it is too many for a PC. There are half a trillion particle-particle pairs that need to be calculated. @Mordred is absolutely right here.
You only have to pair particles that are close to each other.
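For what it's worth, a minimal sketch of one common way to do that, a uniform-grid cell list (all names here are illustrative):

```
using LinearAlgebra  # norm

# Bin particles into cubic cells of side `cutoff`, then only test pairs that
# fall in the same or an adjacent cell.  `positions` is a Vector of 3-vectors.
function neighbour_pairs(positions, cutoff)
    cells = Dict{NTuple{3,Int},Vector{Int}}()
    for (i, p) in enumerate(positions)
        key = Tuple(floor.(Int, p ./ cutoff))
        push!(get!(cells, key, Int[]), i)
    end
    pairs = Tuple{Int,Int}[]
    for (key, ids) in cells, i in ids
        for dx in -1:1, dy in -1:1, dz in -1:1
            for j in get(cells, (key[1] + dx, key[2] + dy, key[3] + dz), Int[])
                if j > i && norm(positions[i] - positions[j]) <= cutoff
                    push!(pairs, (i, j))
                end
            end
        end
    end
    return pairs
end
```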

Pooh-pooh others' experience if you want, but there is a reason that people who do this use voxels and/or small numbers of individual particles.
It seems you are projecting; I'm not doing that. You are the one discrediting my decades of mathematical knowledge and coding experience. I'm sure you are very knowledgeable in many fields, but please try to be more polite.
 
  • #19
Mordred said:
As you haven't described your fusion reactor simulation, you may also find it practical to build your coordinate system around the beam itself. This is typically done in cyclotrons, where the beam becomes the reference frame.

For cyclotrons the Frenet-Serret frame/coordinates are typically employed.

Scattering, however, is another detail that will quickly bog things down, particularly when you try using Breit-Wigner. There are methods to simplify scattering.
Funny you should mention cyclotrons; I just finished a simulation of one today.

That is an interesting coordinate system approach, I'll keep that in mind in my further studies.

I don't know how far you're taking the simulation.
During COVID there was an app that gamified folding molecules to help vaccine research, I'm thinking of something similar.
 
  • #20
One other piece of advice, though you may have already considered it: use pointers instead of variables; you save on clock cycles and hence processing time.
 
  • #21
Mordred said:
One other piece of advice, though you may have already considered it: use pointers instead of variables; you save on clock cycles and hence processing time.
Well I use Julia and it doesn't really expose pointers.
 
  • #22
Fair enough. I would still look into any method to reduce clock cycles. For example, in binary operations a bit shift left multiplies by two and a bit shift right divides by two, which is far faster than employing the ALU's multiplier or divider.
 
  • #23
Appelros said:
```
v=[1,0,0]
d=[0,0,0.1]
vavg=(v+(v+d))/2 -> [1,0,0.05]
```
Sure it's an improvement but the magnitude still increases. Scaling coincides exactly with the Boris algorithm and I don't think taking an accuracy loss to be more "elegant" is a good idea.
Are you calculating in 1D? I haven't a clue what this shows. Sorry
 
  • #24
Mordred said:
Fair enough. I would still look into any method to reduce clock cycles. For example, in binary operations a bit shift left multiplies by two and a bit shift right divides by two, which is far faster than employing the ALU's multiplier or divider.
Yeah, optimization is certainly a major consideration, but first I have to get things working. Premature optimization is a waste; for example, I recently scrapped almost all of one of my core files.
 
  • #25
hutchphd said:
Are you calculating in 1D? I haven't a clue what this shows. Sorry
No, these are 3D vectors as they are written in Julia; Python and MATLAB have similar notation. v is a vector with x value 1 and y/z 0. Why don't you write out your solution? Does it have an error less than dt/2?
 
