The experiments start with a 40 MHz bunch crossing rate. At ~2 MB/event (ATLAS/CMS, less for LHCb) that corresponds to 80 TB/s. A data rate like that cannot even be read out of the detector. Instead, the experiments read out a small part of the detector and look for the most interesting collisions there (mainly signatures of high-energy processes). This first trigger stage reduces the event rate to ~100 kHz (ATLAS/CMS) or 1 MHz (LHCb). The resulting ~200 GB/s are then fed into computer farms and analyzed in more detail. Again the data is reduced to the most interesting events, ~1 kHz for ATLAS/CMS and ~10 kHz for LHCb, and those are stored permanently. Information about which physics signatures were found (e.g. "the reconstruction found two high-energy electrons") is stored alongside them.
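To make the arithmetic explicit, here is a minimal Python sketch of that reduction chain, using the rough ATLAS/CMS-style numbers from above (the event size and trigger rates are order-of-magnitude assumptions, not exact experiment figures):

```python
# Back-of-envelope data rates for the trigger chain (ATLAS/CMS-like values).
bunch_crossing_rate_hz = 40e6   # 40 MHz collision rate
event_size_bytes = 2e6          # ~2 MB per event (rough assumption)

raw = bunch_crossing_rate_hz * event_size_bytes
print(f"raw data rate:       {raw / 1e12:.0f} TB/s")        # ~80 TB/s

l1_rate_hz = 100e3              # after the hardware trigger
print(f"after first trigger: {l1_rate_hz * event_size_bytes / 1e9:.0f} GB/s")   # ~200 GB/s

hlt_rate_hz = 1e3               # after the software trigger farm
print(f"written to storage:  {hlt_rate_hz * event_size_bytes / 1e9:.0f} GB/s")  # ~2 GB/s
```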
Individual analyses can then access those datasets. As an example, an analysis could look for events with two high-energy electrons: those might come at a rate of 3 Hz during data-taking, which means something like 12 million events (~20 TB for ATLAS/CMS). That number varies a lot between analyses; some have just a few thousand events, some have hundreds of millions. These events are processed on the computing grid, typically producing a much smaller dataset (gigabytes) with just the information the analysis cares about. The GB-sized files are typically .root files and are studied with C++ or Python on a single computer or a few computers at a time, as in the sketch below. Everything before that step uses code and data formats developed for the individual experiments.
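A minimal sketch of what that last step might look like in Python with uproot; the file name and branch names (NanoAOD-style "Electron_pt") are made up for illustration, since the real formats are experiment- and analysis-specific:

```python
# Select events with two electrons above some momentum threshold
# from a GB-sized .root file (file and branch names are illustrative).
import uproot
import awkward as ak

with uproot.open("dielectron_skim.root") as f:
    events = f["Events"].arrays(["Electron_pt", "Electron_eta"])

# count electrons above 25 GeV in each event, keep events with at least two
n_good = ak.sum(events["Electron_pt"] > 25.0, axis=1)
selected = events[n_good >= 2]

print(f"{len(selected)} of {len(events)} events pass the dielectron selection")
```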
ALICE has much lower event rates, so the earlier steps are easier there; the later steps look very similar.