Proton in a magnetic field, numerical simulation

In summary, the study of a proton in a magnetic field involves the analysis of its behavior and dynamics under the influence of magnetic forces. Numerical simulations are employed to model and visualize the interactions, allowing for the examination of key parameters such as the proton's trajectory, spin precession, and energy levels. These simulations provide valuable insights into fundamental physics concepts and have applications in fields like nuclear magnetic resonance and particle physics.
  • #1
Appelros
TL;DR Summary
The magnetic field does no work, but translating its force into a velocity update does; my workaround is inelegant and inexact.
Hi! I am developing a fusion reactor simulator. When a proton spins in a perpendicular magnetic field, the Lorentz force qv×B is applied at a right angle; however, whenever you add a value to a vector in this way, its magnitude increases, which contradicts the fact that the proton's speed should remain constant since the magnetic field does no work. To solve this I simply scaled the vector back to its original size after adding the acceleration. All my units are set to 1, so the proton should spin in a circle of radius 1, which it does, but with two problems.

1. There are now two sets of acceleration vectors in my code: those that need scaling and those that don't. This feels very ad hoc.

2. While the proton spins in a circle of radius 1, it does so from 0.995 to -1.005 along one axis and from 1.9999 to -1e-5 along the other. While the latter is a somewhat acceptable error, the former is not (I'm assuming it should span 1 to -1). It continues to spin within these bounds for many rotations and under different calculation methods, so it is not error accumulation from my numerical algorithms.
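For concreteness, here is a minimal sketch of the update I'm describing (Python for illustration rather than my actual code, with q = m = B = 1); without the rescaling, the speed grows every step:

```python
import numpy as np

def euler_push(v, B, dt):
    # Plain Euler push with q = m = 1: a = v x B
    a = np.cross(v, B)
    return v + a * dt  # the added increment is perpendicular, so |v| grows

v = np.array([1.0, 0.0, 0.0])  # initial velocity along x
B = np.array([0.0, 0.0, 1.0])  # field along z
dt = 0.01
for _ in range(1000):
    v = euler_push(v, B, dt)

speed = np.linalg.norm(v)
print(speed)  # ≈ 1.051 instead of the exact value 1
```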

Is there a more elegant solution that solves these problems?
 
  • #2
Welcome to PF.

Appelros said:
when a proton spins in a perpendicular magnetic field, the Lorentz force qv×B is applied at a right angle; however, whenever you add a value to a vector in this way, its magnitude increases,
When you add a value to a vector in what way? The centripetal force from the Lorentz force is perpendicular to the velocity. Can you explain in more detail how you are calculating the changes in the velocity vector at each time step in your simulation?

Appelros said:
I am developing a fusion reactor simulator
A fusion simulator will not be making calculations on each proton in the simulation. You should be working more along the lines of a plasma physics simulation, no?

https://www.amazon.com/Introduction...olled-Fusion/dp/3319793918/?tag=pfamazon01-20
 
  • Like
Likes Vanadium 50
  • #3
berkeman said:
Welcome to PF.
Thanks!

When you add a value to a vector in what way? The centripetal force from the Lorentz force is perpendicular to the velocity. Can you explain in more detail how you are calculating the changes in the velocity vector at each time step in your simulation?
Say one vector is along x; if we add a component along y, we form a right triangle with the new vector as the hypotenuse, and the hypotenuse is always the longest side of a right triangle.

In my simulation I first get F from the Lorentz formula above, then divide by m, multiply by the timestep, and add the result to the particle's velocity. Finally I scale the velocity back to the magnitude it had before the addition.

A fusion simulator will not be making calculations on each proton in the simulation. You should be working more along the lines of a plasma physics simulation, no?

https://www.amazon.com/Introduction...olled-Fusion/dp/3319793918/?tag=pfamazon01-20
Well, fusion reactors use very little fuel and very small fusion reactors are under development, so running simulations at the particle level isn't too far-fetched. Also, I want to simulate atomic-scale reactors that might be buildable in the far future. Plus, atomic simulations help corroborate results from fluid/gas approximations.

Thanks for the book recommendation! I'll check it out.
 
  • #4
What on earth are you trying to do?

Even at the LHC, where bunches weigh picograms they don't simulate every particle.

Further, nobody is seriously talking about pp fusion reactors. It is far too slow to be useful.
 
  • #5
Vanadium 50 said:
What on earth are you trying to do?

Even at the LHC, where bunches weigh picograms they don't simulate every particle.
As I said above, I'd like to simulate futuristic reactors that are extremely small, and maybe keep a high score of who can make the smallest one. Also, if an approximation of one million particles agrees with a particle-by-particle simulation of a million particles, it helps validate the approximation.

Further, nobody is seriously talking about pp fusion reactors. It is far too slow to be useful.
Yes, obviously D-T is the usual reaction. I use protons for calibration.
 
  • #6
This seems like you are putting the cart before the horse. Getting a puff of gas that is only a million particles is not simple. At LHC-quality vacuum, it's a volume of about a quarter of a mm on a side.

At homemade-quality vacuum, this would be even smaller.

And you should absolutely not be messing with tritium.
 
  • #7
Vanadium 50 said:
This seems like you are putting the cart before the horse. Getting a puff of gas that is only a million particles is not simple. At LHC-quality vacuum, it's a volume of about a quarter of a mm on a side.
Loop Quantum Gravity has even harder experimental setups and that is still an interesting field of study.

And you should absolutely not be messing with tritium.
Not even in a computer simulation?
 
  • #8
Appelros said:
Loop Quantum Gravity has even harder experimental setups and that is still an interesting field of study.
There's a non sequitur.

If you are trying to simulate something that can't be built, well, it's your time to waste. But there is a reason that people who need to simulate real devices don't do it this way.
 
  • #9
Vanadium 50 said:
There's a non sequitur.
No.

If you are trying to simulate something that can't be built, well, it's your time to waste. But there is a reason that people who need to simulate real devices don't do it this way.
If you don't see the value in being able to simulate individual particles for fusion research then I don't know what to tell you.

If you have any input on the problems I listed, I'd be glad to hear it.
 
  • #10
Appelros said:
TL;DR Summary: The magnetic field does no work, but translating its force into a velocity update does; my workaround is inelegant and inexact.

Is there a more elegant solution that solves these problems?
There are far too many to detail here. The motion of a proton in a uniform field is exactly soluble... is it your intent to put further complications into your model that will demand a simulation?
Your most important error in what you have detailed is how you obtain the new position. You need to use the average velocity during the timestep, which will be much more accurate. Please work it out for a single step and show us why it is a vast improvement. You are correct that rescaling v each timestep is a very unsatisfactory solution.
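One way to read that suggestion, sketched in Python (an illustration, not necessarily the exact scheme intended): keep whatever velocity update you have, but advance the position with the average of the old and new velocities over the step:

```python
import numpy as np

def step(x, v, B, dt):
    # velocity update (a plain Euler push, just for illustration)
    v_new = v + np.cross(v, B) * dt
    # position update with the average velocity over the timestep,
    # which is second-order accurate in dt instead of first-order
    x_new = x + 0.5 * (v + v_new) * dt
    return x_new, v_new

x = np.array([0.0, 1.0, 0.0])  # start on the unit circle
v = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
x, v = step(x, v, B, dt=0.01)
```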
 
  • Like
Likes Vanadium 50
  • #11
If you have ever done large-N N-body simulations, you will quickly recognize how fast a PC can get bogged down in calculations.
If you want your simulation to have any useful processing time, using averages and currents will be far quicker.
 
  • Like
Likes Vanadium 50
  • #12
hutchphd said:
There are far too many to detail here. The motion of a proton in a uniform field is exactly soluble... is it your intent to put further complications into your model that will demand a simulation?
Yes.

Your most important error in what you have detailed is how you obtain the new position. You need to use the average velocity during the timestep, which will be much more accurate. Please work it out for a single step and show us why it is a vast improvement. You are correct that rescaling v each timestep is a very unsatisfactory solution.
Ok, if I understand you correctly this is what you mean:
```
v=[1,0,0]
d=[0,0,0.1]
vavg=(v+(v+d))/2 -> [1,0,0.05]
```
Sure, it's an improvement, but the magnitude still increases. Scaling coincides exactly with the Boris algorithm, and I don't think taking an accuracy loss to be more "elegant" is a good idea.
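A quick numerical check of that averaged vector (a Python equivalent of the snippet above) confirms the magnitude still grows:

```python
import numpy as np

v = np.array([1.0, 0.0, 0.0])
d = np.array([0.0, 0.0, 0.1])
v_avg = (v + (v + d)) / 2      # -> [1, 0, 0.05]
speed = np.linalg.norm(v_avg)  # sqrt(1 + 0.05**2) ≈ 1.00125, still > 1
```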
 
  • #13
Mordred said:
If you have ever done large-N N-body simulations, you will quickly recognize how fast a PC can get bogged down in calculations.
If you want your simulation to have any useful processing time, using averages and currents will be far quicker.
Every plasma course I have taken has covered the mechanics of individual particles, so it seems like a good place to start. Training a neural network on the physics is also something I can see speeding things up a lot.
 
  • #14
Those courses should have also included the related current equations. Using individual particles when you're first learning is useful, as it's often easier to understand than a field treatment, but it quickly becomes far more practical to apply weighted averages in large multiparticle systems.
 
  • #15
If you are simulating one individual particle, there is an analytic solution.
If you are simulating a million, a) it is not enough to capture the bulk physics, and b) it is too many for a PC. There are half a trillion particle-particle pairs that need to be calculated. @Mordred is absolutely right here.

Pooh-pooh others' experience if you want, but there is a reason that people who do this use voxels and/or small numbers of individual particles.
 
  • Like
Likes Astronuc
  • #16
As you haven't described your fusion reactor simulation, you may also find it practical to build your coordinate system around the beam itself. This is typically done in cyclotrons, where the beam becomes the reference frame.

For cyclotrons, the Frenet-Serret frame/coordinates are typically employed.

Scattering, however, is another detail that will quickly bog things down, particularly when you try using Breit-Wigner. There are methods to simplify scattering. I don't know how far you're taking the simulation.
 
  • #17
Mordred said:
Those courses should have also included the related current equations. Using individual particles when you're first learning is useful, as it's often easier to understand than a field treatment, but it quickly becomes far more practical to apply weighted averages in large multiparticle systems.
Yes, approximate models will take up a large part of my project. The particle-resolution sims are, I guess, motivated by my mathematical side seeking an "analytical" solution.
 
  • #18
Vanadium 50 said:
If you are simulating one individual particle, there is an analytic solution.
If you are simulating a million, a) it is not enough to capture the bulk physics, and b) it is too many for a PC. There are half a trillion particle-particle pairs that need to be calculated. @Mordred is absolutely right here.
You only have to pair particles that are close to each other.

Pooh-pooh others' experience if you want, but there is a reason that people who do this use voxels and/or small numbers of individual particles.
It seems you are projecting; I'm not doing that. You are the one discrediting my decades of mathematical knowledge and coding experience. I'm sure you are very knowledgeable in many fields, but please try to be more polite.
 
  • #19
Mordred said:
As you haven't described your fusion reactor simulation, you may also find it practical to build your coordinate system around the beam itself. This is typically done in cyclotrons, where the beam becomes the reference frame.

For cyclotrons, the Frenet-Serret frame/coordinates are typically employed.

Scattering, however, is another detail that will quickly bog things down, particularly when you try using Breit-Wigner. There are methods to simplify scattering.
Funny you should mention cyclotrons, I just today finished a simulation of one.

That is an interesting coordinate system approach, I'll keep that in mind in my further studies.

I don't know how far you're taking the simulation.
During COVID there was an app that gamified folding molecules to help vaccine research; I'm thinking of something similar.
 
  • #20
One other piece of advice, though you may have already considered it: use pointers instead of copying variables; you save on clock cycles and hence processing time.
 
  • #21
Mordred said:
One other piece of advice, though you may have already considered it: use pointers instead of copying variables; you save on clock cycles and hence processing time.
Well I use Julia and it doesn't really expose pointers.
 
  • #22
Fair enough. I would still look into any method of reducing clock cycles. For example, in binary operations a bit shift left multiplies by two and a bit shift right divides by two, which is far faster than invoking the ALU's multiplier.
 
  • #23
Appelros said:
```
v=[1,0,0]
d=[0,0,0.1]
vavg=(v+(v+d))/2 -> [1,0,0.05]
```
Sure it's an improvement but the magnitude still increases. Scaling coincides exactly with the Boris algorithm and I don't think taking an accuracy loss to be more "elegant" is a good idea.
Are you calculating in 1D? I haven't a clue what this shows. Sorry
 
  • #24
Mordred said:
Fair enough. I would still look into any method of reducing clock cycles. For example, in binary operations a bit shift left multiplies by two and a bit shift right divides by two, which is far faster than invoking the ALU's multiplier.
Yeah, optimization is certainly a major consideration, but first I have to get things working. Premature optimization is a waste; for example, I recently scrapped almost all of one of my core files.
 
  • #25
hutchphd said:
Are you calculating in 1D? I haven't a clue what this shows. Sorry
No, these are 3D vectors as written in Julia; Python and MATLAB have similar notation. v is a vector with x-component 1 and y/z components 0. Why don't you write out your solution? Does it have an error less than dt/2?
 
  • #26
This is an exactly solvable problem found in many textbooks; it is called cyclotron motion. I need not recapitulate it here. Please look it up.
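For readers who do look it up: in a uniform field B along z, the velocity simply rotates at the cyclotron frequency ω = qB/m. A minimal sketch of that textbook result (illustrative units), useful as a check for any numerical scheme:

```python
import numpy as np

def cyclotron_velocity(v0, omega, t):
    # Exact solution for a charge in a uniform B field along z:
    # the in-plane velocity rotates at omega = q*B/m, no integration needed.
    c, s = np.cos(omega * t), np.sin(omega * t)
    return np.array([c * v0[0] + s * v0[1],
                     -s * v0[0] + c * v0[1],
                     v0[2]])

v0 = np.array([1.0, 0.0, 0.0])
v = cyclotron_velocity(v0, 1.0, 2 * np.pi)  # one full period: back to v0
```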
 
  • #27
hutchphd said:
This is an exactly solvable problem found in many textbooks; it is called cyclotron motion. I need not recapitulate it here. Please look it up.
The thread title clearly states this is about numerical simulations. Euler, midpoint, and RK, with scaling, match the Boris algorithm at an error of dt/2. You said there was a better way without scaling and hinted at averages, then ignored my reply interpreting your solution and showing it was worse, and now you're condescendingly dismissing the entire topic by hand-wavingly saying complex computer simulations should be done by analytically solving the smallest parts.
 
  • #28
You said you wanted accuracy. Toward that end, simulations should be done analytically if they can be.
Your microscopic simulation is not time reversal invariant. That should worry you. You introduce that asymmetry artificially by your second difference technique. (Hand-waving finished.)
If you wish to communicate math, please do it in LaTeX and not Julia.
Remember who is the supplicant here and do not characterise my attempts to help you. Clearly they were not helpful. They have ended. (*******)
 
  • Like
Likes Astronuc, Vanadium 50 and berkeman
  • #29
Appelros said:
my decades of mathematical knowledge
Then why are you shocked that ## \sqrt{x^2 + \Delta y^2} > x ##? You are approximating dy by Δy in a region where the approximation does not have the property you want and are shocked to find the outcome doesn't have the property you want either? Of course it doesn't!

As has been said multiple times, if you can use an analytic solution, do so.
 
  • Like
Likes hutchphd
  • #30
Appelros said:
Funny you should mention cyclotrons, I just today finished a simulation of one.
Really? And you verified that it gives the correct answer? I suspect that if you ran it assuming no driving voltage you'd still see the particle gain energy, for the same reason you see this system gain energy. If you use the same code, you'll get the same result.
 
  • #31
hutchphd said:
You said you wanted accuracy. Toward that end, simulations should be done analytically if they can be.
In complex simulations, the best that can be done analytically is solving for the force vectors at each time step, and whenever the perpendicular magnetic force vector is added to a velocity, the speed increases, which it shouldn't.

Your microscopic simulation is not time reversal invariant. That should worry you. You introduce that asymmetry artificially by your second difference technique.
That's why I came here for help.

If you wish to communicate math, please do it in LaTeX and not Julia.
I tried to use code ticks ``` to signify code, which seemed appropriate given the thread is about coding, but apparently they don't work here. Double $ or # don't seem to work either... so I don't know how to input LaTeX on this site; if anyone has a solution, that would be great.
 
  • #32
Vanadium 50 said:
Then why are you shocked that ## \sqrt{x^2 + \Delta y^2} > x ##?
Uh, I'm not... You seem to misunderstand/misrepresent everything I say and frankly it is not pleasant communicating with you. The primary objective of any communication should be building rapport.
 
  • #33
Appelros said:
You seem to misunderstand/misrepresent everything I say and frankly it is not pleasant communicating with you. The primary objective of any communication should be building rapport.
Or in this case, pointing out your errors so that you can improve... :wink:
 
  • Like
Likes Vanadium 50
  • #34
No one has provided an improved alternative to scaling. Rotation would be a candidate.
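As an illustration of that candidate (my sketch, assuming a uniform B along z with ω = qB/m), a rotation-based push could rotate the in-plane velocity by the exact angle ωΔt each step, so |v| is preserved by construction:

```python
import numpy as np

def rotation_push(v, omega, dt):
    # Rotate the in-plane velocity by the exact cyclotron angle omega*dt;
    # a rotation never changes |v|, so no rescaling is needed.
    c, s = np.cos(omega * dt), np.sin(omega * dt)
    return np.array([c * v[0] + s * v[1],
                     -s * v[0] + c * v[1],
                     v[2]])

v = np.array([1.0, 0.0, 0.0])
for _ in range(10000):
    v = rotation_push(v, 1.0, 0.01)
# |v| stays 1 up to floating-point roundoff after any number of steps
```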
 
  • #35
Appelros said:
Hi! I am developing a fusion reactor simulator. When a proton spins in a perpendicular magnetic field, the Lorentz force qv×B is applied at a right angle; however, whenever you add a value to a vector in this way, its magnitude increases, which contradicts the fact that the proton's speed should remain constant since the magnetic field does no work. To solve this I simply scaled the vector back to its original size after adding the acceleration.
I created a basic particles-in-a-magnetic-field simulation a little over a year ago using C# and Unity. I believe I ended up using a symplectic integrator such as the Boris method. See part 3 here: https://www.particleincell.com/2011/vxb-rotation/

This should fix your issue with the magnitude increasing with each time step, but note that there really isn't a way of simplifying things enough to directly simulate more than a few thousand to a few hundred thousand particles at the same time. At least not with personal computers you find at home. With a parallelized program running on my video card I was able to simulate about 100k particles at a time without my program falling below 30 fps, but that was purely a particle-mover without any way to store or analyze the data. It also wasn't computing particle-particle forces, just forces from an external field on each particle. And even at the rate it was running I'm not sure it was fast enough to do anything useful.
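A minimal sketch of the Boris push described at that link (my paraphrase; the function name and q/m parameter are illustrative) looks like this; with E = 0 the magnetic update is a pure rotation, so the speed is conserved exactly and no rescaling is needed:

```python
import numpy as np

def boris_push(v, E, B, q_over_m, dt):
    # Boris scheme: half electric kick, magnitude-preserving magnetic
    # rotation, half electric kick.
    v_minus = v + q_over_m * E * (dt / 2)
    t = q_over_m * B * (dt / 2)
    s = 2 * t / (1 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    return v_plus + q_over_m * E * (dt / 2)

v = np.array([1.0, 0.0, 0.0])
E = np.zeros(3)
B = np.array([0.0, 0.0, 1.0])
for _ in range(10000):
    v = boris_push(v, E, B, 1.0, 0.01)
# with E = 0, |v| is conserved to roundoff over arbitrarily many steps
```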

Direct simulation is the most accurate, but by far the most resource-intensive. You can go further up the scale to particle-in-cell, particle meshes, charged fluid, etc. Each step generally gets more abstract and less accurate while gaining the ability to simulate larger time frames and/or more particles.

Here's my thread asking for resources and posting some insights I gained working on my simulation: https://www.physicsforums.com/threads/resources-on-simulating-charges-in-magnetic-fields.1047935/
 
  • Like
Likes berkeman