Scientific programme

Preliminary Scientific Programme 1999

The School comprises approximately 27 hours of lectures and 20 hours of related practical work on PCs or workstations.

The programme of the School is organised around four themes:

The following lectures are now confirmed. Any additional lectures will be announced later.

Advanced Topics

Quantum Technologies: quantum computing and quantum cryptography

Quantum computation is a computational paradigm proposed by Richard Feynman and others in the early 1980s, in which numbers are represented by the quantum mechanical states of suitable atomic-scale systems. This brings a new feature to computation: the ability to compute with quantum mechanical superpositions of numbers. For certain types of computational problems this feature would make quantum computation very much more efficient than classical computation. Important examples are the quantum algorithms for integer factorization and the discrete logarithm problem invented by P. Shor in 1994. Because the security of modern public-key cryptosystems rests on the difficulty of solving these problems with conventional computers, it is very important to determine whether practical quantum computation is feasible. An introductory exposition of quantum computation will be given, together with a description of how quantum computation with laser-cooled trapped calcium ions is being developed at Los Alamos. The lecturer will define the resources (number of ions, computation time, laser properties, etc.) required to carry out simple computations such as the factorization of small integers, and will also discuss the explorations of the foundations of quantum mechanics that a quantum computer would make possible.

For further details see:


The Mass Storage Challenge for LHC

The lectures will survey potential technologies for storing and managing the many petabytes of data that will be collected and processed at LHC. They will cover current mainline hardware technologies (their capacity, performance and reliability characteristics), their likely evolution in the period from now to LHC, and their fundamental limits. They will also cover promising new technologies, including both products emerging from the home and office computing environment (such as DVDs) and more exotic techniques. The importance of market acceptance and production volume as cost factors will be mentioned, and robotic handling systems for mass storage media will also be discussed. After summarising the mass storage requirements of LHC, some suggestions will be made as to how these requirements may be met with the technology that will then be available.

For further information, see:


Current Experience and Plans for System Managed Mass Storage Systems at CERN

The lectures will cover the results of the CERN investigation and survey of market offerings and of technology evolution in this field. They will contain a summary of practical experience with CERN-developed systems (SHIFT and STAGE), the HPSS activity, and a survey of existing alternatives (Nstore, Eurostore, SHIFT++, etc.), and will conclude with a discussion of CERN's current plans for the future.


LHC Experiments Data Communication and Data Processing Systems

The design of a trigger and data acquisition system for a general-purpose experiment at the Large Hadron Collider poses a challenge that has no precedent in the history of experimental physics. The physics requirements, the detector characteristics and the high collision rate expected at LHC luminosities of 10^32 to 10^34 cm^-2 s^-1 inherently constrain many aspects of the architecture of a high-efficiency data acquisition system. The detector signals must be amplified, shaped and eventually digitized. The analog or digital information for each channel must be held in local buffers during the decision time of the event selection system, operating at the bunch crossing frequency of 40 MHz. Then the data fragments must be synchronized, collected and compressed to form a full event while the rate of storable events is reduced by subsequent trigger levels. Telecommunication and computing networks will be extensively used to interconnect the signal digitizers to a large complex of computing elements used to analyze event data and select the collisions of physical interest. The course introduces the requirements and the basic concepts of trigger and readout systems at LHC experiments. The current status of the system design and the prototype programme will be described as well.


Software Building

The goal of this track is to combine exposure to software engineering principles with the software technologies and packages relevant to LHC experiments, and to give students a taste of working on the large software projects typical of LHC experiments. The idea is to use the software engineering and LHC++ lectures as a framework, with a single case study, based on LHC++, covering all phases of the software process. The lectures explain the different phases of the software process and the components of the LHC++ software suite, and the exercises use an LHC++ application as a case study. The exercises will involve some programming (in C++). From the software engineering point of view, the emphasis is on developing applications with LHC++, not on LHC++ itself.



Students are given a problem statement which they must analyse; they then design, implement and test software in C++ using the LHC++ suite. Essentially, the case study will require students to develop three programs in succession during the practical exercises:

  1. Populate an event database using HepODBMS according to a defined object model.
  2. Build an event tag database from the data prepared in step 1: select some interesting event attributes and copy them to the event tag database.
  3. Read the event tag database built in step 2 and display its contents, using the LHC++ interactive graphical tools to apply further cuts.

For further information on LHC++ see:

and software engineering:


Internet Software Technologies

This track explores the general field of software-based technologies in use or planned on the Internet and on intranets. The track is composed of three distinct though connected topics: Distributed Computing using Agents, Transaction Technologies, and Advanced Web Software Topics.

The first course focuses on a promising technique for supporting distributed computing: the use of agents written in Java, applied here to the specific field of distributed physics analysis. The course comprises three lectures, in which agent technology is introduced and Java programming is presented. Students then move on to exercises where they write physics analysis algorithms in Java and an agent-based job submission system, finally merging the two into a global system.

The second course, "Web-based Transaction Technologies", describes the mechanisms and techniques for supporting client-server interactions based on web forms. It starts with a presentation of the HTML language, then moves on to scripting languages (JavaScript) and the CGI interface. Two hours of exercises, in which students develop a simple transaction system based on forms and associated CGI programs, complement the four hours of lectures.

The third course is devoted to a selection of more advanced web-based software topics. This includes a presentation of the Cascading Style Sheets (CSS) system and of Dynamic HTML (DHTML). The course also addresses the XML language, as well as the SMIL language for the support of synchronised multimedia documents.

Lecturers and lectures:

Text: Jackie Turner
Web: Mario Ruggier
Last update: 21 Jan 1999