What are the challenges faced by LHC in the initial data-taking of 2017?

In summary, stable beams were declared 30 minutes ago, but the initial collision rate is low, at only 0.2% of the design rate. This is because the machine operators must ensure safety and check for potential problems before injecting more protons. It will likely take a few weeks to reach the same collision rates as last year. Meanwhile, the experiments are starting to collect initial data, and the low collision rate of 0.2% is actually ideal for certain analyses. At design values, however, 99.9975% of collisions are discarded due to limitations in data processing. The beam dump is a block roughly 70 cm x 70 cm x 7 m in size.
  • #36
Thanks for the explanations and links, most interesting.

If money weren't a factor, what would be the optimal configuration to get the beam up to energy, and how is this determined? I guess I could ask the same about rocket stages - are the same physics principles, rooted in thermodynamics, at work?

How does the 20% figure come about?

LHC fanboy here.
 
  • #37
20%? Do you mean the factor 20? The magnets have to adjust their magnetic field to the beam energy very accurately (##10^{-5}## precision) to keep the particles on track; at very low fields (relative to the maximum) that can be challenging. You also have to take into account whether the particle speed still changes notably during the acceleration process.

If money wasn't a factor, you could build a 15 km long linear accelerator leading directly into the LHC. Then you could fill it in two steps (one per ring), in seconds instead of 20 minutes, and with more bunches. Or, if we get rid of any realism, make the linear accelerator ~300 km long and inject the protons directly at their maximal energy. Then you also save the 20 minutes of ramping up and 20 minutes of ramping down.
The beam dump would need some serious upgrades to handle a higher turnaround.
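As a rough sanity check of that ~300 km figure (a sketch assuming an average accelerating gradient of ~25 MV/m, a typical superconducting-RF number; the real answer depends on the technology):

```python
# Back-of-the-envelope length of a linac that brings protons to LHC energy.
target_energy_eV = 6.5e12    # 6.5 TeV per proton
gradient_eV_per_m = 25e6     # assumed: ~25 MeV energy gain per metre

length_m = target_energy_eV / gradient_eV_per_m
print(f"required linac length: {length_m/1000:.0f} km")   # ~260 km
```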
 
  • #38
Construction, design, beam steering, beam intensity, collision geometry, etc. would all be optimal with a linac in a world of no constraints?

Rings are the compromise solution to real-world constraints?

Is there any possibility of building a research facility that would then become an alternative structure post-research, e.g. build a big linac straight through the Alps north-south, which could then become a commercial transport tunnel when the research is completed?
 
  • #39
houlahound said:
Construction, design, beam steering, beam intensity, collision geometry, etc. would all be optimal with a linac in a world of no constraints?

Rings are the compromise solution to real-world constraints?

Is there any possibility of building a research facility that would then become an alternative structure post-research, e.g. build a big linac straight through the Alps north-south, which could then become a commercial transport tunnel when the research is completed?
The rock temperature reaches up to 50°C or so in the new Gotthard base tunnel. I'm just trying to imagine how you would cool an entire 57 km tunnel to 0.3 K or so! And this is just one mountain. My guess is it would be easier to construct a linear accelerator in Death Valley than under the Alps.
 
  • #40
A circular machine has two advantages over a linac. The first is cost - it lets you use the small part that actually accelerates again and again on the same proton. Superconducting magnets are expensive, but accelerating structures are even more expensive. The second is beam quality - by requiring each proton to return to the same spot (within microns) every orbit you get a very high quality beam. This is done by setting up a complex negative feedback scheme: if a particle drifts to the left, it feels a force to the right, and vice versa. Linacs don't do this - a beam particle that drifts to the left keeps going to the left, and if your accelerator is long enough to be useful, it's likely that this drifting particle hits a wall.
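A toy illustration of that restoring-force argument (an assumed linear focusing channel integrated step by step, not real accelerator optics):

```python
# Toy model of transverse motion for a particle injected with a tiny angle.
# With a linear restoring force (the "negative feedback" of the focusing
# magnets) the offset stays bounded; in a pure drift it grows without limit.
# All numbers are made up for illustration, not real LHC optics.

k = 0.01           # restoring strength per metre^2 (assumed)
angle = 1e-6       # initial angle: 1 microradian
ds = 1.0           # integration step: 1 metre

x_f, xp_f = 0.0, angle    # focused channel
x_d = 0.0                 # pure drift
max_offset_focused = 0.0

for _ in range(100_000):  # follow the particle for 100 km
    xp_f -= k * x_f * ds          # x'' = -k x  (restoring force)
    x_f += xp_f * ds
    x_d += angle * ds             # x'' = 0     (no focusing)
    max_offset_focused = max(max_offset_focused, abs(x_f))

print(f"max offset with focusing: {max_offset_focused*1e3:.3f} mm")  # ~0.01 mm
print(f"offset after a pure drift: {x_d*1e3:.0f} mm")                # 100 mm
```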

Proposals for future linacs include something called "damping rings" so that before the final acceleration, you can get the beam going in a very, very straight line.

The factor of ~20 comes about for several reasons. One is, as mfb said, problems with persistent fields. If your magnets are good to 10 ppm at flattop, and the ring has an injection energy 10% of flattop, at injection it's only good to 100 ppm. Make that 5% and now it's 200 ppm. The lower the energy, the harder it is to inject. And even without this problem, it would still be harder to inject because the beam is physically larger (we say it has more emittance). Finally, there is some accelerator physics that makes you want to keep this ratio small. There is something called "transition", where you essentially go from pulling on the beam to pushing on it. At the exact moment of transition, you can't steer the beam, so you lose it after a fraction of a second. The bigger the energy range, the more likely you have to go through transition. The LHC is above transition, but if you injected at a low enough energy, you'd have to go through transition. That number is probably of order 50-100 GeV.
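To make the ppm arithmetic explicit (a minimal sketch, assuming the absolute field error is fixed by the magnet quality at flattop):

```python
# If the magnets have a fixed absolute field error set at flattop,
# the *relative* error at injection scales with flattop/injection.
flattop_error_ppm = 10        # relative error at full field

for injection_fraction in (0.10, 0.05):   # injection energy / flattop energy
    error_at_injection = flattop_error_ppm / injection_fraction
    print(f"inject at {injection_fraction:.0%} of flattop -> "
          f"~{error_at_injection:.0f} ppm relative field error")
# 10% -> ~100 ppm, 5% -> ~200 ppm, as quoted above
```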
 
  • #41
Vanadium 50 said:
This is done by setting up a complex negative feedback scheme: if a particle drifts to the left, it feels a force to the right, and vice versa. Linacs don't do this - a beam particle that drifts to the left keeps going to the left, and if your accelerator is long enough to be useful, it's likely that this drifting particle hits a wall.
I guess you mean quadrupole (plus potentially higher-order) magnets? Long linear accelerators do this as well.
They just keep the beam together; they don't reduce the emittance (like damping rings do for electrons), but the LHC doesn't reduce that either.
houlahound said:
Is there any possibility of building a research facility that would then become an alternative structure post-research, e.g. build a big linac straight through the Alps north-south, which could then become a commercial transport tunnel when the research is completed?
500 km through the Alps to replace about 35 km of LHC plus preaccelerators, built at convenient spots near Lake Geneva? Even if the tunnels were wide enough to be used for transport afterwards (they are not), and even if there were demand for a 500 km tunnel, that project would be way too expensive for particle physics or for transportation. And that is just the tunnel - you also need 500 km of accelerating structures. There is absolutely no way to fund that.
 
  • #42
mfb said:
I guess you mean quadrupole (plus potentially higher-order) magnets?

That, plus things like stochastic cooling. Yes, you can add correctors to linear accelerators, but the ratio of corrector length to acceleration length is much higher in a circular accelerator. Perhaps the two most directly comparable accelerators are LEP and SLC at the Z pole. Despite the fact that the electrons underwent significant synchrotron radiation, LEP still ended up with a smaller beam energy spread than SLC.

So I think my statement is borne out: requiring the beam to come back around the ring to the same point where it started gives you better beam quality, and that is an advantage a circular design has over a linear one.
 
  • #43
For electrons, synchrotron radiation is a great cooling method. For protons it is not - protons in the LHC have a cooling time of days, but they don't stay in the machine that long. The FCC would be the first proton machine where synchrotron cooling becomes relevant.
They tried to get collisions with 600 bunches overnight, but didn't achieve it due to powering issues. The plan is to get 600 bunches tomorrow night.
 
  • #44
What does a simple conservation of energy equation look like at the LHC at the point of collision?

Proton energy = ionisation energy + rest mass + bremsstrahlung losses + relativistic energy + coulomb energy + nuclear binding energy + ...?
 
  • #45
Apart from the rest mass, none of these things apply to protons colliding in a vacuum. The rest mass contributes 0.94 GeV to the total proton energy of 6500 GeV. You can call the remaining 6499.06 GeV "kinetic energy" if you like.

The machine operators are preparing the machine for collisions with 600 bunches now.
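A quick numerical check of those numbers, using nothing beyond special relativity (0.938 GeV is the proton rest energy):

```python
# Total energy vs. rest energy for a 6.5 TeV LHC proton.
import math

E_total = 6500.0      # GeV
m_proton = 0.938      # GeV (rest energy)

E_kinetic = E_total - m_proton
gamma = E_total / m_proton
beta = math.sqrt(1.0 - 1.0 / gamma**2)

print(f"kinetic energy ~ {E_kinetic:.2f} GeV")       # ~6499.06 GeV
print(f"Lorentz factor gamma ~ {gamma:.0f}")          # ~6930
print(f"speed = {beta:.9f} c")                        # ~0.999999990 c
```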
 
  • #46
Huh?

To make the protons smash, surely you need to overcome both the Coulomb and the binding energy at least?
They are not zero.
 
  • #47
Binding energy of what? There is nothing bound.

The Coulomb potential between the protons is of the order of 0.001 GeV, completely negligible. Nuclear binding energies, if anything were bound, would be of the same order of magnitude.
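A rough estimate of where that 0.001 GeV comes from (my assumption: two protons separated by roughly a proton radius, ~1 fm):

```python
# Coulomb potential energy of two protons ~1 fm apart.
# Uses e^2/(4*pi*eps0) = alpha * hbar * c ~ 1.44 MeV*fm.
coulomb_constant_MeV_fm = 1.44   # alpha * hbar * c
separation_fm = 1.0              # assumed: roughly a proton radius

V_MeV = coulomb_constant_MeV_fm / separation_fm
print(f"Coulomb potential ~ {V_MeV:.1f} MeV ~ {V_MeV/1000:.4f} GeV")
# ~1.4 MeV ~ 0.0014 GeV, negligible next to 6500 GeV
```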
 
  • #48
Binding energy to break the nucleus apart in collision.
 
  • #49
There is just one nucleon in the nucleus, so there is nothing to break apart.
The protons are not broken into pieces in any meaningful way; completely new particles are created in the collision.
Stable beams with 600 bunches, 30% of the design luminosity for ATLAS/CMS, 125% of the (lower) design luminosity for LHCb.
 
  • #50
houlahound said:
Binding energy to break the nucleus apart in collision.

This would apply only to ion beams (Pb).
But anyway, please realize that at energies of 3-7 TeV per nucleon, any binding energy of the nucleus is utterly insignificant. Even the entire rest energy of the nucleus is much lower than a "kinetic" energy of that magnitude (it's about 0.03% of it).
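Putting numbers on that (per-nucleon figures; the ~8 MeV per nucleon binding energy is a standard textbook value, the beam energies are the ones quoted above):

```python
# Rest energy and binding energy per nucleon vs. the beam energy per nucleon.
rest_energy = 0.94        # GeV per nucleon
binding_energy = 0.008    # GeV per nucleon (~8 MeV, typical for heavy nuclei)

for beam_energy in (3000.0, 6500.0):   # GeV per nucleon
    print(f"at {beam_energy:.0f} GeV/nucleon: "
          f"rest energy is {rest_energy/beam_energy:.3%}, "
          f"binding energy is {binding_energy/beam_energy:.5%} of the beam energy")
# rest energy: ~0.03% at 3 TeV, ~0.014% at 6.5 TeV; binding energy far smaller
```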
 
  • #51
mfb said:
Stable beams with 600 bunches, 30% of the design luminosity for ATLAS/CMS, 125% of the (lower) design luminosity for LHCb.
Just want to say: I love your running commentary. :oldbiggrin:
 
  • #52
Thanks.
I add it when something has happened since the last post - or make a new post if it is a major milestone.

The luminosity plots (https://lpc.web.cern.ch/lumiplots_2017_pp.htm) are now online. LHCb data seems to be missing.

ATLAS had some issues with its magnet in the muon system, it was switched off last night. As long as the luminosity is low, that is not a large loss, and analyses that don't need muons can probably use the data. The magnet is running again now.

We'll probably get some more collisions with 600 bunches next night.
Edit: There they are. 30% design luminosity again.

Edit2: Now we get more scrubbing. Afterwards hopefully 900 and 1200 bunches.
 
  • #53
Edit2: Now we get more scrubbing. Afterwards hopefully 900 and 1200 bunches.

Was just wondering since I noticed the beam went up:

[Attached plot: integrated pp luminosity vs. date, 7 June 2017]
 
  • #54
That should be the run from the night to Monday.
 
  • #55
What units is luminosity measured in?

The graph shows fb - a physical explanation, please.

And what is a fill number?
 
  • #56
Inverse femtobarn (##\text{fb}^{-1}##). I wrote an Insights article about it.
1/fb corresponds to roughly ##10^{14}## collisions.

Fill number is just counting how often protons have been put in the machine. After beams are dumped, the number is increased by 1 for the next protons to go in. "Fill 5750" is more convenient than "the protons we had in the machine at June 5 from 8:23 to 13:34".
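The ~##10^{14}## figure follows from multiplying the integrated luminosity by the inelastic proton-proton cross section (I'm assuming ~80 mb at 13 TeV, roughly the measured value):

```python
# Number of inelastic collisions in 1/fb of integrated luminosity:
# N = integrated luminosity * cross section.
sigma_inelastic_mb = 80.0                        # ~80 millibarn at 13 TeV (assumed)
sigma_inelastic_fb = sigma_inelastic_mb * 1e12   # 1 mb = 10^12 fb

integrated_luminosity_inv_fb = 1.0
n_collisions = integrated_luminosity_inv_fb * sigma_inelastic_fb
print(f"collisions per 1/fb: {n_collisions:.1e}")   # ~8e13, i.e. ~10^14
```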
 
  • #57
After a few days of scrubbing, we are back to data-taking.

Scrubbing went well. We had a record number of ##3.37\times10^{14}## protons per beam, with 2820 bunches per beam (slightly exceeding the design value of 2808).
The heating of the magnets went down by ~40% in the most problematic region, enough to continue operation with higher beam intensities.
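Dividing those record numbers out gives the bunch intensity (the ~##1.15\times10^{11}## value in the comment is the nominal LHC design bunch population):

```python
# Protons per bunch during the scrubbing record.
protons_per_beam = 3.37e14
bunches_per_beam = 2820

protons_per_bunch = protons_per_beam / bunches_per_beam
print(f"~{protons_per_bunch:.2e} protons per bunch")   # ~1.2e11
# close to the nominal design bunch intensity of ~1.15e11
```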

Currently there is a short (1 hour) run with 10 bunches per beam; then they'll complete the 600 bunch step (~3 hours), and then go on with 900 and 1200 bunches. Each step gets 20 hours of stable beams to verify nothing goes wrong. These two steps combined should deliver about 0.5/fb worth of data. Progress in the first weeks is always a bit slow, but the dataset is starting to get interesting.

Edit: We got 900 bunches. 68% design luminosity, about 50 inelastic ("destructive") proton-proton collisions per bunch crossing (design: ~25). Unfortunately the beam was dumped after just 20 minutes of data-taking for safety reasons. Now they are working on a cooling issue; that will take several hours.

Edit2: More 900 bunches (980 actually), nearly 0.15/fb of data collected on Wednesday. We'll probably get 1200 late Thursday to Friday.
 
  • #58
We had some runs with 900-980 bunches in the last two days, at about 65% of the design luminosity. Each step gets 20 hours before the number of bunches is increased. 900 is done now; the next step is 1200 bunches, probably this evening.
Edit in the evening: Stable beams with 1225 bunches, 75% the design luminosity. A bit lower than expected.

ATLAS and CMS both reached 0.5/fb of data. Not much compared to last year's 40/fb, but we are still in the very early phase of data-taking.
The machine operators found another way to increase the number of collisions a bit. The bunches have to hit each other with a minimal crossing angle to avoid additional collisions outside the design point. That means the bunches don't overlap completely (see this image). With the HL-LHC in 2025+ it is planned to "rotate" the bunches, but that needs additional hardware not available now.
In long runs (many hours), the number of protons per bunch goes down over time - some are collided, some are lost elsewhere in the machine. That means the long-range interactions get less problematic, and the crossing angle can be reduced. This increases the number of collisions by a few percent. It does not change the maximal luminosity, but it reduces the drop of the luminosity over time.
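For the curious, the usual geometric reduction factor from a crossing angle looks like this (a sketch with roughly nominal LHC numbers that I'm assuming here, not the actual 2017 settings):

```python
# Geometric luminosity reduction from the crossing angle:
# F = 1 / sqrt(1 + (theta_c * sigma_z / (2 * sigma_star))^2)
# The numbers below are roughly the nominal LHC design values (assumed).
import math

theta_c = 285e-6      # full crossing angle in rad
sigma_z = 0.0755      # bunch length in m
sigma_star = 16.7e-6  # transverse beam size at the collision point in m

def reduction_factor(theta):
    piwinski = theta * sigma_z / (2 * sigma_star)
    return 1.0 / math.sqrt(1.0 + piwinski**2)

print(f"F at full angle:    {reduction_factor(theta_c):.2f}")       # ~0.84
print(f"F at reduced angle: {reduction_factor(0.8*theta_c):.2f}")   # ~0.89
```

Reducing the angle as the bunches lose protons pushes F back towards 1, which is the few-percent gain mentioned above.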
The LHC could get a very unusual record this year: The luminosity record for any type of collider.
Electrons and positrons are much lighter than protons. That means they emit more synchrotron radiation when they travel around in a circular collider. Faster electrons radiate more and get slower in the process. That is a very effective "cooling" mechanism; as a result you can put the electrons very close together, increasing the luminosity. KEKB set the world record of ##2.11\times10^{34}\,\text{cm}^{-2}\text{s}^{-1}## in 2009 - with electrons/positrons.
The LHC could reach this value in 2017 - with protons, where it is much harder. As far as I know, it would be the first proton-collider ever to set the absolute luminosity record.
KEKB is currently being upgraded, and the new version (SuperKEKB) is supposed to reach ##100\times10^{34}\,\text{cm}^{-2}\text{s}^{-1}##, way above everything the LHC can achieve, but it will probably take until late 2018 to beat its old record, and several more years to reach its design value. There is a small time window where the LHC could hold the record for a while.
 
  • #59
The LHC is progressing fast this week. The 20 hours at 1200 bunches were completed today, and the machine switched to 1550 bunches. Collisions in ATLAS and CMS reached ~100% the design luminosity this evening. If everything goes well, we get 1800 bunches on Monday, clearly exceeding the design luminosity.

The luminosity record last year was 140% the design value, with a naive scaling we need 2150 bunches to reach this, and 2820 bunches will give 180% the design luminosity. Similar to last year, I expect that the luminosity goes up more, as they'll implement more and more improvements. The absolute luminosity record is certainly realistic.
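The naive scaling is just proportionality between luminosity and the number of colliding bunches, everything else held fixed:

```python
# Naive scaling: luminosity proportional to the number of colliding bunches.
bunches_at_design = 1550     # roughly where ~100% design luminosity was reached

for target in (1.40, 1.80):  # 140% (last year's record) and 180%
    print(f"{target:.0%} design luminosity needs ~{target*bunches_at_design:.0f} bunches")
# ~2170 and ~2790 bunches, in line with the estimates above
```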

Both experiments have collected 1.25/fb of data now, and the integrated luminosity (https://lpc.web.cern.ch/lumiplots_2017_pp.htm) is going up rapidly.

Edit: They shortened the 1500 bunch step and went directly to 1740 after just ~10 hours. Initial luminosity ~110% the design value.
 
  • #60
2029 bunches!
125% of the design luminosity. Approaching the 2016 record.

ATLAS and CMS now have 2.2/fb, twice the data they had three days ago. It is also half the total 2015 dataset.

The machine operators expect that they can go to 2300 bunches without issues. Afterwards the heat load from the electrons in the beam pipe could get too high. Then we need more scrubbing or simply more data-taking at 2300 bunches - that acts as scrubbing as well.
 
  • #61
mfb said:
Then we need more scrubbing or simply more data-taking at 2300 bunches - that acts as scrubbing as well.

What do you mean? What does data-taking have to do with scrubbing?
 
  • #62
Scrubbing = have as many protons as possible circling in the machine.
Data-taking = have as many protons as possible at high energy circling in the machine.
The second approach has fewer protons, as higher energies mean the magnets get more heat (that is the problem reduced by scrubbing).

Scrubbing runs have 2820 bunches, while data-taking might be limited to 2300. The latter is not so bad - especially as it means more data keeps coming in. And that is what counts.

2.3/fb, 10 hours in stable beams already. We might get 2300 bunches as early as Wednesday evening.

Edit 13:00 CERN time: 21 hours in stable beams, 0.5/fb in less than a day, the beam will get dumped soon. New records for this year. And enough time to go up another step, to 2173 bunches.

Edit on Thursday 12:00: 2173 bunches, initial luminosity was about 140% the design value. At the level of the 2016 record. 2.8/fb in total. We'll get 2317 bunches later today, probably with a new luminosity record.
 
  • #63
We have a new all-time luminosity record! Over the night, a fill with 2317 bunches had more than 140% the design luminosity. Close to 150%.

Unfortunately, the LHC encountered multiple issues in the last day, so the overall number of collisions collected was very low (just 1 hour of stable beams since yesterday afternoon). One of these issues led to a lower number of protons in the ring than usual - we can get new luminosity records once that is fixed.

The heat in the magnets is now close to its limit; I expect that we will get data-taking at 2300 bunches for a while before the beam pipe is "clean" enough to put even more protons in.

Edit: They decided that the margin is large enough. 2460 bunches! And a bit more than 150% of the design luminosity.
 
  • #64
The current fill has 2556 bunches. Is this a record? I looked but didn't find the max from 2016.

Also, earlier in the fill there was a notice about a failed 'Hobbit scan'. What's a Hobbit scan?
 
  • #65
websterling said:
The current fill has 2556 bunches. Is this a record? I looked but didn't find the max from 2016.
It is a record at 13 TeV. I don't know if we had more at the end of 2012, when they did some first tests with 25 ns bunch spacing (most of 2012 ran with 50 ns spacing, where you are limited to ~1400 bunches).
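The rough origin of those bunch-count limits is just counting the 25 ns (or 50 ns) slots around the ring (a sketch; the real filling scheme with its injection and abort gaps is more involved):

```python
# Why 50 ns spacing limits you to roughly half as many bunches:
# count the available slots around the ring.
circumference_m = 26_659
c = 299_792_458          # m/s
revolution_time_s = circumference_m / c    # ~89 microseconds

for spacing_ns in (25, 50):
    slots = revolution_time_s / (spacing_ns * 1e-9)
    print(f"{spacing_ns} ns spacing: ~{slots:.0f} slots around the ring")
# ~3560 slots at 25 ns and ~1780 at 50 ns; after the mandatory gaps the
# usable maxima are about 2808 and ~1400 bunches respectively
```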

No idea about the Hobbit scan. They are not new, but apart from the LHC status page I only find amused Twitter users.

The LHC has started its first phase of machine development, followed by a week of technical stop. Data-taking will probably resume July 10th. https://beams.web.cern.ch/sites/beams.web.cern.ch/files/schedules/LHC_Schedule_2017.pdf.

In the last few days we had a couple of runs starting at ~150% of the design luminosity, with a record of 158%. The initial phase of rapid luminosity increase is over. While the machine operators will try to increase the luminosity a bit more, this is basically how most of the year will look now.

ATLAS and CMS have collected 6.3/fb so far. For comparison: last year they collected about 40/fb.
LHCb collected 0.24/fb. Last year it was 1.9/fb.
https://lpc.web.cern.ch/lumiplots_2017_pp.htm
In both cases, the final 2017 dataset will probably be similar to the 2016 dataset. In 2018 we will get a bit more than that, 2019 and 2020 are reserved for machine and detector upgrades. Long-term schedule. With the 2017 dataset you can improve some limits a bit, you can improve the precision of some measurements a bit, but many studies will aim for an update after 2018.

LHC report: full house for the LHC
 
  • #67
A nice start for EPS.

The first baryon with two heavy quarks. It needs the production of two charm/anticharm pairs in the collision, with the two charm quarks having similar momenta; that makes the production very rare.

Now that ##ccu## (=quark content) is found (with a very clear signal peak), ##ccd## should be possible to find as well - the mass should be extremely similar, but it has a shorter lifetime and a larger background. ##ccs## will be much more challenging - it needs an additional strange quark, that makes it even rarer. In addition, its lifetime should be even shorter.

Baryons with bottom and charm together: A naive estimate would suggest one such baryon per 20 double-charm baryons. That is probably too optimistic. The lifetime could be a bit longer. Maybe with data from 2016-2018?
Two bottom: Another factor 20, maybe more. Good luck.

--------------------

ATLAS showed updated results for Higgs decays to bottom/antibottom. Consistent with the Standard Model expectation, the significance went up a bit, now at 3.6 standard deviations. If CMS also shows new results, we can probably get 5 standard deviations in a combination. It is not surprising, but it would still be nice to have a good measurement of how often this decay happens.

I'll have to look through the EPS talks for more, but I didn't find the time for it today.
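Coming back to the 5-sigma estimate above: it is roughly what you get from a naive quadrature combination of two independent results of comparable sensitivity (the 3.5 sigma for CMS is my assumption; real combinations are done with full likelihoods):

```python
# Naive combination of two independent significances: add in quadrature.
import math

atlas_sigma = 3.6
cms_sigma = 3.5        # assumed: a CMS result of comparable sensitivity

combined = math.sqrt(atlas_sigma**2 + cms_sigma**2)
print(f"naive combined significance: ~{combined:.1f} sigma")   # ~5.0
```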
The technical stop is nearly done, the machine will continue operation tomorrow.
 
  • #68
mfb said:
In addition, its lifetime should be even shorter.

Why? The strange lifetime is much, much longer than the charm lifetime. I'd expect that, up to final-state effects, the lifetimes would be about the same.
 
  • #69
When they find those mesons or hadrons, and claim a discovery, I am really shocked that it took so long to discover them... it's only 3.something GeV (~half+ the mass of the B mesons) and not so "extraordinary" (just a ccu)...
 
  • #70
I don't know how large the effect would be, but it should have a larger overall mass, although its decay products could have a higher mass as well. I didn't draw Feynman diagrams and I certainly didn't calculate it.
ChrisVer said:
When they find those mesons or hadrons, and claim a discovery, I am really shocked that it took so long to discover them... it's only 3.something GeV (~half+ the mass of the B mesons) and not so "extraordinary" (just a ccu)...
They found 300 in the whole Run 2 dataset. Double charm production is rare, and having both charm quarks end up in the same hadron is rare even for this rare process.
 
