Advances in computing technology and networking have been a saving grace for the world during the COVID-19 pandemic, as video conferencing and high-speed internet have made telework a huge success. It was therefore entirely appropriate that, when the in-person meeting of Computing in High-Energy Physics (CHEP), scheduled to be held in Norfolk, VA, had to be delayed for a year, a virtual edition was organised to take its place. Organised by CERN, the 25th International Conference on Computing in High-Energy and Nuclear Physics (dubbed vCHEP) took place online from 17 to 21 May 2021 [1].

For this virtual event, the organisers put a different focus on the content. Usually, authors submit abstracts to the parallel sessions of the conference and plenary programme speakers are invited. This time we invited papers of up to 10 pages to be submitted and chose a plenary programme from the most interesting and innovative of those. Over 200 papers were submitted, twice as many as expected, covering the R&D under way to tackle the huge challenges of data rate and event complexity in future experiments in Nuclear and High-Energy Physics.

From this array of papers, the Programme Committee had the hard job of selecting just 30 that would form the vCHEP plenary programme, but the outcome was a broad programme with contributions from diverse areas and experiments. Scheduling the conference also proved a challenge, as participants spanned 20 hours of time zones, from Brisbane to Honolulu. Morning sessions, for Europe and Asia–Pacific, plus afternoon sessions, for Europe and the Americas, allowed a rich programme to fit within these constraints.

Almost 500 people attended the most popular session, and in total more than 1100 people were registered. The conference was a showcase for the excellent work going on in the field, and 11 of the contributors were invited to submit their papers to this special edition of Computing and Software for Big Science for review and subsequent publication. These selections not only give a flavour of the breadth of topics addressed but also reflect the current open questions in software and computing in High Energy and Nuclear Physics.

One of the most popular topics addressed at this conference was the use of machine learning and artificial intelligence across very broad areas of application. More papers were submitted to that theme than to any other, showing that the field is continuing to innovate in this domain. Interest in using graph neural networks for the problem of charged particle tracking was very high, with three plenary talks. In the paper selected here [2], the use of edge-classifying interaction networks is discussed for high pile-up environments such as that expected at the HL-LHC, with architectures that may also suit constrained computing environments such as the high-level trigger. In another example of machine learning [3], the ProtoDUNE collaboration, which is testing technologies for the next-generation DUNE neutrino experiment, presents how deep learning algorithms, again based on graph neural networks, are being leveraged to denoise raw data and improve image reconstruction in the very difficult environment of a high-occupancy TPC. Third, in preparation for the Phase-II upgrade of ATLAS, the liquid–argon calorimeters must deal with a pile-up of 200 overlapping p–p interactions [4]. Neural network algorithms implemented directly in hardware on FPGAs were shown to be in good agreement with software calculations, with a performance suitable for the online environment.
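
For readers less familiar with the approach, a minimal sketch of the edge-classification idea (scoring candidate hit-to-hit connections on a graph of detector hits) might look as follows in PyTorch; the feature dimensions, layer sizes and toy data are placeholder assumptions, not the published interaction-network architecture.

```python
# Minimal sketch of edge classification on a hit graph (illustrative only,
# not the published interaction-network architecture). Each node is a
# detector hit with 3 spatial features; all sizes are placeholder choices.
import torch
import torch.nn as nn

class EdgeClassifier(nn.Module):
    def __init__(self, node_dim=3, hidden=64):
        super().__init__()
        # Embed each hit independently, then score each candidate edge
        # from the concatenated embeddings of its two endpoints.
        self.node_net = nn.Sequential(
            nn.Linear(node_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.edge_net = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, edge_index):
        # x: (num_hits, node_dim) hit features
        # edge_index: (2, num_edges) indices of candidate hit pairs
        h = self.node_net(x)
        src, dst = edge_index
        edge_features = torch.cat([h[src], h[dst]], dim=1)
        return torch.sigmoid(self.edge_net(edge_features)).squeeze(-1)

# Toy usage: 100 hits, 500 candidate edges; edges scored near 1 would be
# kept as track segments, those near 0 rejected as fake connections.
hits = torch.randn(100, 3)
edges = torch.randint(0, 100, (2, 500))
scores = EdgeClassifier()(hits, edges)
```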

Many other experiment codes are also being rewritten to take advantage of new architectures. IceCube are simulating photon transport in the Antarctic ice on GPUs [5] and presented detailed work on performance analysis that led to recent significant speed-ups. Benchmarking and accounting for these heterogeneous resources is an important topic for many experiments, and a report from the HEPiX Benchmarking group pointed the way towards evaluating modern CPUs and GPUs with a variety of real-world HEP applications [6]. Also related to computing and storage facilities, R&D was presented on how to deliver reliable, affordable storage for HEP based on CephFS and the CERN-developed EOS storage system [7], which will be critical to providing the massive storage needed in the future.
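
As a rough illustration of the kind of measurement involved in such benchmarking (and emphatically not the HEPiX benchmark suite itself), one could time the same numerical workload on a CPU and a GPU; the matrix-multiply workload and sizes below are arbitrary stand-ins.

```python
# Toy comparison of one numerical workload on CPU and GPU (illustrative of
# the kind of measurement involved; this is not the HEPiX benchmark suite).
import time
import torch

def time_workload(device, size=2048, repeats=10):
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    # Warm-up run so one-off initialisation costs are not timed.
    torch.matmul(a, b)
    if device.type == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        torch.matmul(a, b)
    if device.type == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / repeats

print("CPU:", time_workload(torch.device("cpu")), "s per run")
if torch.cuda.is_available():
    print("GPU:", time_workload(torch.device("cuda")), "s per run")
```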

vCHEP was also the first event of its kind with a dedicated parallel session on quantum computing. Meshing very well with quantum computing initiatives in the community, the session showed that serious investigations of how to use this technology in the future are being undertaken; among the highlights were interesting results on using Quantum Support Vector Machines [8] to train classifiers for signal/background separation in B-meson decays.
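
To sketch the kernel-based classification idea that underlies a quantum SVM, the toy example below uses scikit-learn with a precomputed kernel; the kernel_matrix function here is a classical Gaussian stand-in for the quantum-kernel evaluation (state overlaps computed on a quantum device or simulator), and the data and labels are invented.

```python
# Conceptual sketch of a kernel-based SVM classifier of the kind used in
# quantum SVM studies. kernel_matrix() is a classical Gaussian kernel acting
# as a stand-in for the quantum kernel; the data are random placeholders.
import numpy as np
from sklearn.svm import SVC

def kernel_matrix(x1, x2, gamma=1.0):
    # In a quantum SVM this entry would be |<phi(a)|phi(b)>|^2 for a quantum
    # feature map phi; here a classical RBF kernel is substituted.
    diff = x1[:, None, :] - x2[None, :, :]
    return np.exp(-gamma * np.sum(diff**2, axis=-1))

rng = np.random.default_rng(0)
x_train = rng.normal(size=(200, 4))                         # toy kinematic features
y_train = (x_train[:, 0] * x_train[:, 1] > 0).astype(int)   # toy signal/background label
x_test = rng.normal(size=(50, 4))

clf = SVC(kernel="precomputed")
clf.fit(kernel_matrix(x_train, x_train), y_train)
predictions = clf.predict(kernel_matrix(x_test, x_train))
```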

Advances in software, particularly related to simulation and detector reconstruction, have always been prominent topics at these conferences. ATLAS presented their new fast simulation framework [9], which combines traditional parametric simulation with Generative Adversarial Networks (GANs) to provide the 'best of breed' for each piece of the simulation, achieving even better agreement with Geant4 than before. The sPHENIX experiment at RHIC showed the results of implementing the experiment-independent, open-source tools of the ACTS (A Common Tracking Software) package [10] to address the need for fast and accurate track reconstruction in the experiment's high-occupancy TPC, with a per-event detector-hit multiplicity of order 100,000, while keeping within a constrained memory and reconstruction-time budget. Looking further into the future, muon colliders have an interesting discovery potential, but the experiments will face a very large beam-induced background in the detectors, making full detector simulation and reconstruction extremely challenging. In the paper featured here [11], the authors discuss strategies to optimise the track reconstruction workflows, leveraging work done at CLIC, from the detector design to the use of a Conformal Tracking algorithm to manage the high-background environment.
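
As a schematic of the generator/discriminator pair at the heart of any GAN-based fast simulation (not the ATLAS networks themselves), the following PyTorch sketch trains a generator to produce flattened 'shower' vectors; all shapes, layer sizes and the toy data are assumptions made for illustration.

```python
# Skeletal generator/discriminator pair of the kind used in GAN-based fast
# simulation (illustrative only; the layer sizes and the flattened 256-cell
# "shower" representation are invented for this sketch).
import torch
import torch.nn as nn

LATENT_DIM, SHOWER_CELLS = 32, 256

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, SHOWER_CELLS), nn.ReLU(),   # non-negative cell energies
)
discriminator = nn.Sequential(
    nn.Linear(SHOWER_CELLS, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),           # probability "real shower"
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def training_step(real_showers):
    batch = real_showers.size(0)
    noise = torch.randn(batch, LATENT_DIM)
    fake = generator(noise)

    # Discriminator: label real showers 1, generated showers 0.
    d_opt.zero_grad()
    d_loss = bce(discriminator(real_showers), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    d_opt.step()

    # Generator: try to make the discriminator call its showers real.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

# Toy usage with random "showers" standing in for detailed-simulation output.
training_step(torch.rand(64, SHOWER_CELLS))
```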

Finally, sustainability of skills and knowledge is particularly important for the future, and broad training in modern and evolving software techniques is essential. The paper on software training [12] deals with community efforts to build domain-specific software skills, as well as the use of online training to equip the next generation of physicists with the advanced software skills necessary to sustain careers both within and outside HEP.