I have been involved in the development and management of the central
computing services at CERN since 1974, taking a very active part in the
evolution from supercomputers through general-purpose mainframes to
clusters of RISC workstations and finally (at least for now) to PC-based
computing fabrics. Recently my specialisation has been in the area of
large-scale data handling for physics analysis - particularly the hardware
and software of storage systems, and cluster management. I was involved from
an early stage in planning the data handling services for the
experiments that will use CERN's new accelerator - the
Large Hadron Collider (LHC). The very large computing requirements of the
LHC experiments have led to a current interest in Computational and Data
Grids - which we hope will enable us to unite the power of giant computing
clusters and storage facilities at CERN and other High Energy Physics data
centres around the world. I currently lead the
LHC Computing Grid Project, set up in
2001 to prepare the computing environment for LHC. The project includes the
development and support of the common tools, libraries and frameworks
required by the physics applications, the preparation of the computing
facility at CERN, and the coordination and operation of the global LHC
Grid.