Data Collection Begins: Monitoring the Diphoton Excess

  • #36
Dr.AbeNikIanEdL said:
What is the integrated luminosity people aim/realistically hope for analyses presented this summer?
I don't think the collaborations made that public, and it can also depend on the individual analyses. Two numbers as comparison:

  • ATLAS and CMS showed first results of last year 6 weeks after the end of (proton collision) data-taking. The date was set in advance, so the experiments were confident they could get their fast analyses done within 6 weeks.*
  • The Higgs boson discovery in July 2012: The high-priority analysis of 2012. At the time the discovery was announced, they had data up to 2-3 weeks before the presentations. Both collaborations were really pushing to include as much data as possible, so that is probably a lower limit.

6 weeks before ICHEP would be the 21st of June, or three weeks from now; 2-3 weeks before ICHEP would be 6-7 weeks from now. A good week of data-taking is probably worth 1/fb. If we have the technical stop but not the machine development block, the lower estimate gives 2 weeks of data-taking and ~3.5/fb in total. If the technical stop is shortened and the machine development gets moved to the end of July (and merged with the next block), we might get 6 weeks of data-taking and ~7.5/fb - maybe even a bit more if everything runs very smoothly. Probably something between those values, unless some problem comes up.

*the analyses start earlier, usually even before data-taking, with simulated events, it's not like the whole analysis could be done within a few weeks. But some parts of the analysis (in particular, all final results...) need the full dataset, and that determines the timescale.
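The scenarios above amount to a quick back-of-envelope calculation. A minimal sketch (the ~1.5/fb starting point is inferred from the post's totals, and the 1/fb-per-good-week rate is the post's own rough figure):

```python
# Back-of-envelope luminosity scenarios for ICHEP.
# Starting point (~1.5/fb already on tape) and the ~1/fb per good week
# of data-taking are rough numbers taken/inferred from the post above.
already_collected = 1.5   # /fb, approximate total so far
rate_per_week = 1.0       # /fb per good week of data-taking

for label, weeks in [("short scenario (2 weeks of data-taking)", 2),
                     ("long scenario (6 weeks of data-taking)", 6)]:
    total = already_collected + rate_per_week * weeks
    print(f"{label}: ~{total:.1f}/fb")
```

This reproduces the ~3.5/fb and ~7.5/fb figures quoted in the post.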

A decision about the technical stop will probably be made later today.
 
  • #37
News:
  • The technical stop will be shortened as much as possible, from 5-7 days to something like 2-2.5, starting Tuesday.
  • The machine development block originally scheduled for this week gets shifted significantly. A second block is scheduled for July 25 - July 29, those two might merge - data collected that late won't be included in results shown at ICHEP in August anyway.
  • We are now at 2040 bunches. Further steps will probably take much more time.
  • Initial luminosity this afternoon was shown as ~80% of the design value for ATLAS and ~73% for CMS; the truth is probably somewhere in between. This is at the level of the record set in 2012, when we had 77% of the design luminosity, but at a lower energy. The collision rate per unit luminosity rises with energy, so we certainly have a new record in terms of collision rate.
Integrated luminosity for ATLAS/CMS: 1.7/fb, 0.23/fb of that from Wednesday.
 
  • #38
The current record for stable-beams duration (and the previous record) both occurred when upstream problems prevented a refill. Assuming nothing prevents refilling the LHC, what (if any) are the criteria for doing a dump and refill?
 
  • #39
Most fills end due to machine protection - some heat load is too high, some connection got lost, and so on. Apart from that, the number of protons in the ring goes down over time and the beam quality gets worse. Both lead to a decreasing luminosity for ATLAS and CMS, typically with a half-life of ~20-30 hours this year. After a while it becomes more efficient to dump the beam and refill. Ideally that takes 2-3 hours, sometimes more.
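A rough way to see why fills get dumped deliberately: with an exponential luminosity decay and a fixed turnaround time, there is a fill length beyond which refilling wins. A minimal sketch, with the half-life and turnaround values assumed from the ranges quoted above:

```python
import math

def avg_rate(fill_hours, half_life=25.0, turnaround=3.0):
    """Average luminosity delivered per hour of wall-clock time,
    assuming L(t) = L0 * 2**(-t / half_life) during the fill."""
    lam = math.log(2) / half_life
    # Integral of L(t)/L0 over the fill, in units of L0 * hours.
    integrated = (1 - math.exp(-lam * fill_hours)) / lam
    return integrated / (fill_hours + turnaround)

# Scan integer fill lengths to find the most efficient one.
best = max(range(1, 73), key=avg_rate)
print(f"most efficient fill length under these assumptions: ~{best} h")
```

Longer turnaround or a longer luminosity half-life pushes the optimum to longer fills, which is consistent with the real operational choices described later in the thread.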

2.1/fb in total, 0.35/fb from Thursday. Two fills today got lost quickly, the next attempt is on the way.
 
  • #40
how many protons go in each fill?
 
  • #41
Typically 115 billion protons per bunch, 2040 bunches per beam, and 2 beams => 4.7×10^14 protons, or 0.8 nanograms - about the mass of a white blood cell.
The stored energy in that small amount of matter is ~500 MJ, twice the kinetic energy of an 85-ton Boeing 737 at takeoff.
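A quick check of these numbers (the constants are the usual approximate values for the proton mass and the eV-to-joule conversion):

```python
# Rough check of the proton count, mass, and stored energy quoted above.
protons = 115e9 * 2040 * 2               # protons/bunch * bunches/beam * beams
mass_kg = protons * 1.673e-27            # proton mass ~1.673e-27 kg
energy_j = protons * 6500e9 * 1.602e-19  # 6500 GeV per proton, in joules

print(f"{protons:.1e} protons")     # ~4.7e14
print(f"{mass_kg * 1e12:.2f} ng")   # ~0.8 ng
print(f"{energy_j / 1e6:.0f} MJ")   # ~490 MJ
```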
 
  • #42
The LHC is back after the technical stop. 2040 bunches as before; we won't get more until the SPS vacuum leak is fixed. That still allows improving the beam focus a bit, so the initial luminosity an hour ago was around 85% of the design value.

ATLAS and CMS are at 3.0/fb integrated luminosity now - nearly the size of the full 2015 dataset - and that should increase fast now.
 
  • #43
Thanks for the updates.

I've been looking at the beam status page (Vistars) every now and then, and it seems to take about 2-3 hours from when the beam is dumped until the next beam gets going again. If that's correct (and not just me misinterpreting), why the long down-time?
 
  • #44
At least 2-3 hours, sometimes longer.

Here is a description from 2010. The main parts that need time:

- the magnets have to be ramped down to allow injection at 450 GeV (~20 min)
- the magnets have some hysteresis, their current state depends on what happened in the past. The curvature of the proton beam has to be correct to 1 part in a million, so you really want to be sure the magnets have the right magnetic field. If there was an issue with the magnets in the previous run, the magnets have to be brought to a known state again, which means they have to be ramped up and down once (~40 min, if necessary).
- the machine operators have to verify everything is in the expected state - for the machine, for the preaccelerators (same control room) and for the experiments (different control rooms, they have to call the experiments and those have to give permission for injection) - a few minutes.
- a "probe beam" is injected - very few protons, to verify that they cycle as expected and that the beam doesn't get lost - a few minutes.
- the 2040 bunches have to be made and accelerated by the preaccelerators. This happens in steps of 72 bunches now, and every group needs about a minute, if nothing goes wrong this takes ~30 minutes.
- the energy is ramped up from 450 GeV to 6500 GeV. Ramping up the dipole magnets needs about 20 minutes.
- the beams have to get focused, which involves ramping up superconducting quadrupole magnets. About 20 minutes again.
- once the machine operators verify again that everything is as expected, they let the beams collide at the experiments (before that point they are kept separated) and find the ideal spot for the highest collision rate for ATLAS and CMS, and a lower rate for LHCb and ALICE. That takes about 10 minutes.

If you add those things, even in the ideal case it needs 2 hours. Usually something needs longer for various reasons.
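Adding up the step times listed above makes the 2-hour floor explicit (minutes are the approximate values from the list; the 40-minute magnet pre-cycle only happens when the magnets need to be brought to a known state):

```python
# Approximate turnaround steps from the list above, in minutes.
steps = {
    "ramp magnets down to 450 GeV": 20,
    "checks + injection permission": 5,
    "probe beam": 5,
    "inject 2040 bunches": 30,
    "ramp up to 6500 GeV": 20,
    "squeeze (focus beams)": 20,
    "adjust collisions": 10,
}
ideal = sum(steps.values())
with_precycle = ideal + 40  # add the optional magnet pre-cycle

print(f"ideal turnaround: ~{ideal} min")           # ~110 min, i.e. ~2 h
print(f"with magnet pre-cycle: ~{with_precycle} min")
```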
The run that started last night is at 0.32/fb, adding another 10% to the total dataset this year. It is still ongoing; chances are good it will break some record later.
As comparison: The LHC produced more Higgs bosons today (literally: this Sunday) than the Tevatron did in 20 years.
 
  • #45
mfb said:
chances are good it will break some record later.

The plan is to terminate it in about four hours. If it goes that long, it may be the first intentional termination this year.
 
  • #46
They had programmed dumps before this year (in particular, to go to more bunches), but I don't know if that included fills with 2040 bunches.

0.40/fb now, a new record for "per fill", "per 24 hours" and "per day". Will probably rise to ~0.45/fb for ATLAS.
 
  • #47
Thanks a lot mfb for the detailed response, very interesting.
 
  • #48
A CERN article about the recent data collection and records

The last run, from yesterday to this morning, delivered 0.50/fb of data. ATLAS and CMS now have 4/fb in total.

Edit: Next run started ~7 pm, initial luminosity was about 93% design luminosity.

There is just one week of planned interruption until September (https://espace.cern.ch/be-dep/BEDepartmentalDocuments/BE/LHC_Schedule_2016.pdf), so let's extrapolate a bit (optimistically)...

[Plot: integrated luminosity in 2016 with extrapolation]
 
  • #49
~6.3/fb integrated luminosity (average over ATLAS and CMS), more than 1.5 times the size of the 2015 dataset. Combining both years, both experiments now have more than 10/fb. For most analyses, this gives better statistics than the 20/fb at the lower energy in 2012, and more collisions are coming in.

The LHC operators made a large table as an overview of the runs of the last month: slides, table alone. The last run, which collected a bit more than 0.5/fb, is not included there yet.
SB = stable beams, needed for data-taking
B1, B2 = number of protons in beam 1 and 2
Unit conversions:
L peak: 10 would be the LHC design luminosity here.
1000/pb = 1/fb.
 
  • #50
Thanks for the update.

Roughly how much data would they need to say something definitive about the 750 GeV bump? I'm thinking "yeah, something is there" vs. "nope, just fluctuations".
 
  • #51
6/fb is very interesting already - significantly more than the 2015 dataset which produced the excess. In May I speculated a bit, with way too pessimistic estimates for the luminosity evolution (the schedule had less time for data-taking, and the collision rate was expected to grow significantly more slowly).
If it was just a fluctuation, we'll probably know; if it is a particle, we'll probably know as well. The amount of data shown at ICHEP will depend on how fast ATLAS and CMS can do their analyses, but I would expect at least 6/fb, probably more.
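One way to see why 6/fb is decisive: for a real signal on a smooth background, the significance (S/√B) grows roughly like the square root of the integrated luminosity. A minimal sketch - the ~3.9 sigma in ~3.2/fb figures are roughly the numbers quoted for the 2015 ATLAS excess, used here only for illustration:

```python
import math

# Naive sqrt(luminosity) scaling of a significance, purely illustrative.
sig_2015 = 3.9   # sigma (local), roughly the 2015 excess
lumi_2015 = 3.2  # /fb of 2015 data

for lumi_2016 in (6.0, 10.0):
    projected = sig_2015 * math.sqrt(lumi_2016 / lumi_2015)
    print(f"{lumi_2016:.0f}/fb alone: ~{projected:.1f} sigma if the signal is real")
```

If the excess was a fluctuation, the new data should instead regress toward zero rather than grow.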
 
  • #52
mfb said:
The amount of data shown at ICHEP will depend on how fast ATLAS and CMS can do their analyses, but I would expect at least 6/fb, probably more.

A little bird tells me that the ATLAS analysis is being performed with the first ~3/fb and will then be "topped up" with all data up to some cutoff date in about 2 weeks' time. By my reckoning it could be about 10/fb.
 
  • #53
Aww, now it feels like I've put the commentator's curse on the LHC :(

How bad is it? I just saw there was some issue with a power supply and now they're talking about reconnecting the 400kv line.
 
  • #54
Various issues prevented data-taking over the last four days, apart from a short run this morning (0.09/fb): power supplies, water leaks, cabling problems, ...
The current estimate for the high-voltage line is 20:00 (in 90 minutes). These things happen - it is an extremely complex machine, and not all the components work all the time.
 
  • #55
Bloody trees :(
 
  • #56
It is running again, and it looks like a new luminosity record this morning: the displayed ATLAS value exceeded 95% of the LHC design luminosity.
0.2/fb collected already.

Edit: Peak luminosity was shown as 97.7% design luminosity for ATLAS (87.4% for CMS). Delivered luminosity was recorded as 0.576/fb and 0.548/fb respectively.

ATLAS now shows 7.08/fb of collected data, CMS 6.77/fb. Two more days and we might have twice the 2015 dataset.

They modified the injection scheme from the preaccelerators a bit: instead of 30 injections with 72 bunches each, they now do 23 injections with 96 bunches each. Apparently that's still fine with the SPS vacuum, and it leads to slightly better beams.
 
  • #57
The luminosity delivered to LHCb this year has now surpassed 2015.
 
  • #58
The LHC reached its design luminosity! The ATLAS value is shown as a bit more, the CMS value as a bit less; that is within the uncertainties of those values. The average is slightly above the design value of 10,000 (in the plot's units of 10^30 cm^-2 s^-1, i.e. 10^34 cm^-2 s^-1).

[Plot: luminosity reaching the LHC design value]
 
  • #59
The run from Sunday 17:30 to Tuesday 6:30 broke all records.

- initial luminosity: see previous post, first time the design value has been reached
- stored energy: 293 MJ in 5×10^14 protons
- time in stable beams: 37 hours
- delivered luminosity in a run: 0.737/fb for ATLAS, 0.711/fb for CMS, 0.042/fb for LHCb
- delivered luminosity in 24 hours: don't know, but it is a new record as well.
- about 50% of the accelerated protons were destroyed during the run, most of them in the experiments. That's 0.4 nanograms of hydrogen.

Final luminosity was about 30% of the initial value. The machine operators dumped the beam to refill, and the next run has already started, with a slightly lower luminosity than the previous record run.

7.7/fb of data for ATLAS and CMS so far, twice the 2015 value.
 
  • #60
More than 10/fb for ATLAS and CMS, approaching three times the 2015 dataset size.

On this page they showed an updated plot of the luminosity evolution. I extrapolated wildly again, and we are still on track for >30/fb by November 1st.

Probably a bit too optimistic, as longer machine development and technical stops will come later. On the other hand, if we get lucky, the preaccelerator vacuum problem gets fixed and allows higher luminosities.

Dotted green: the original plan for 2016.

[Plot: luminosity evolution with extrapolations]
 
  • #61
Performance over the last two weeks has been amazing. Monday to Monday set a new record with 3.1/fb collected in one week (nearly as much data as in the whole of last year), with the LHC experiments able to take data 80% of the time. About 12 weeks of data-taking remain this year, and 13.5/fb has been collected already (more than 3 times the 2015 dataset).

The LHC is now reliably reaching the design luminosity at the start of a run, and most runs last long enough that the operators dump them deliberately (rather than losing them to a technical issue) to start a new one - the number of protons decreases over time, and after about 24 hours it becomes more efficient to dump the rest and accelerate new protons.

The SPS vacuum issue won't get resolved this year, but there are still some clever ideas how to increase the collision rate a bit more.
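A crude extrapolation from the numbers in this post - the weekly rates below are my assumed scenarios, not official projections, and real running has technical stops and machine development blocks in between:

```python
# Crude end-of-run extrapolation from the figures above.
collected = 13.5   # /fb collected so far this year
weeks_left = 12    # approximate weeks of data-taking remaining

for label, per_week in [("pessimistic (1.5/fb per week)", 1.5),
                        ("steady (2.0/fb per week)", 2.0),
                        ("record pace (3.1/fb per week)", 3.1)]:
    total = collected + per_week * weeks_left
    print(f"{label}: ~{total:.0f}/fb by end of run")
```

Even the pessimistic scenario lands above the >30/fb figure mentioned earlier in the thread.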

[Plot: luminosity evolution with extrapolations]
 
  • #62
mfb said:
The SPS vacuum issue won't get resolved this year, but there are still some clever ideas how to increase the collision rate a bit more.

Slide 14 of the first set of slides from Monday's LPC meeting (http://lpc.web.cern.ch/lpc-minutes/2016-07-11.htm) has a table of the max. possible bunches (total and colliding at each point) given a train length, with and without moving the Abort Gap Keeper.

If the SPS can handle 144 bunches per injection, we could see up to 2532 bunches per beam, which translates to a 24% increase in the number of colliding bunches at ATLAS & CMS and a 38% increase at ALICE and LHCb.
 
  • #63
18.3/fb collected for ATLAS and CMS (or 19.0, 19.2 or 19.4 according to other sources - I'll keep the previous one for consistent comparisons; compare this to ~4/fb for the whole of last year). The first machine development block of this year started today; stable beams will probably resume on Monday.

In the last few days, the LHC implemented a "BCMS" scheme ("Batch Compression, Merging and Splitting") - a different way to prepare the bunches in the preaccelerators. As a result, the beams are better focused, leading to a higher luminosity. We had several runs starting at 120% of the design luminosity. The availability of the machine was great as well, so the LHC experiments could collect a huge amount of data.

I updated the luminosity plot, including a Paint-based extrapolation and taking into account planned downtimes. The light green line was a very early official estimate, the red line an earlier extrapolation of mine.
If the LHC keeps running as well as in the last two weeks, it will beat even the optimistic red extrapolation significantly.

The ICHEP conference starts Wednesday next week; on Friday there are talks about the diphoton mass spectrum, where we can learn more about the 750 GeV bump seen last year.
[Plot: luminosity evolution with extrapolations]
 
  • #64
Are any of the parallel sessions recorded and published by any chance?
 
  • #65
I was also thinking about the Higgs boson, any hints that there might be something interesting or unexpected there?
 
  • #66
Lord Crc said:
Are any of the parallel sessions recorded and published by any chance?

I didn't see anything on the conference website (38th International Conference on High Energy Physics) about the sessions being recorded or streamed. But it did say the proceedings will be publicly available. Also, presented papers tend to show up on arXiv.
 
  • #67
Typically the slides are made available quickly after the sessions (sometimes before, but then people go through the slides instead of listening to the talks even more), with the corresponding arXiv uploads a bit before or after that.

websterling said:
But it did say that the Proceedings will be publicly available.
Proceedings appear several months later - at a point where all those analyses have already been updated to the full 2016 dataset, and no one cares about proceedings any more.
 
  • #68
Yeah, I think they recorded only the plenary speakers at the last ICHEP; our parallel sessions were not recorded. But even that video usually takes time to appear on the website. A lot of the slides, especially those with preliminary experimental results, might not be made public either.
 
  • #69
Thanks for the info, guess I've been a bit spoiled by pirsa :)
 
  • #70
Since the Abort Gap Keeper was moved and verified before the MD period, the filling scheme should change at some point this week to one with 2200 bunches per beam (up from 2076).

This will mean a 9% increase in the number of colliding bunches at ATLAS and CMS, 16% at ALICE, and 18% at LHCb.
 
