CERN School of Computing 1997

Outlines of lectures and tutorials



GEANT4 Experience

J.Allison

GEANT4 is a large project that uses modern programming methods, namely object-oriented techniques and the C++ language. It is probably the largest project of its kind in progress in particle physics today. The developers are almost all particle physicists, or programmers with a particle physics background, who have had to learn these new techniques "on the job" - not only new programming methods, but also new methods of code management. This could have been (could be?) a recipe for disaster. How has it worked? What problems were encountered? How did we design the structure of the program? Does it work?

By the time of the School, GEANT4 will have issued an alpha (at-your-own-risk) release, and undergone a Review by the LHC Committee Review Board. We should have a good idea of its successes and shortcomings.

Reading list:


OO Databases

P. Binko

  1. Introduction to OO databases
  2. Persistent objects in the LHC era
  3. Objectivity/DB architecture
  4. Data model
  5. Mass storage system: interface between MSS and OODBMS

Reading list:


Application of the STL to Reconstruction of High-energy Physics Data

T. Burnett

An extremely important consideration in designing analysis and reconstruction code is the use of efficient data structures. The basic theme of these lectures will be the application of techniques based on the Standard Template Library (STL) to problems encountered in the reconstruction of data from detectors. Objective: to become familiar enough with the STL to apply it to data-handling problems. Assumptions: laboratory computers have C++ compilers supporting the STL (preferably Microsoft VC++ 4.2 or later), and students have some familiarity with C++ and object-oriented design.

Basic Syllabus:

Laboratory exercise:

For various detector types and geometries, design appropriate containers to apply some simple reconstruction algorithms. The lecturer will provide a framework allowing simple control and visualisation.
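
As a flavour of what such an exercise might look like - a minimal sketch only, in which the Hit type, the layer-centroid "algorithm" and all numbers are invented for illustration and are not the lecturer's framework - STL containers and algorithms can be used to hold, order and then process the hits of an event:

    // Illustrative sketch only: the Hit type and the "reconstruction"
    // below are invented for this example, not the course framework.
    #include <algorithm>
    #include <iostream>
    #include <vector>

    struct Hit {
        int    layer;   // detector layer number
        double x;       // measured coordinate in that layer
    };

    // Order hits by layer, then by coordinate, so that hits belonging
    // together end up adjacent in the container.
    bool lessHit(const Hit& a, const Hit& b)
    {
        if (a.layer != b.layer) return a.layer < b.layer;
        return a.x < b.x;
    }

    int main()
    {
        Hit raw[] = { {2, 3.1}, {1, 0.9}, {1, 1.1}, {2, 3.0} };
        std::vector<Hit> hits(raw, raw + 4);     // one container per event

        std::sort(hits.begin(), hits.end(), lessHit);

        // A very simple "reconstruction": the centroid of the hits in
        // each layer (a one-dimensional cluster position).
        std::vector<Hit>::const_iterator it = hits.begin();
        while (it != hits.end()) {
            std::vector<Hit>::const_iterator end = it;
            double sum = 0.0;
            while (end != hits.end() && end->layer == it->layer) {
                sum += end->x;
                ++end;
            }
            std::cout << "layer " << it->layer
                      << " centroid " << sum / (end - it) << std::endl;
            it = end;
        }
        return 0;
    }

The same pattern - a standard container plus standard algorithms and a small comparison function - carries over directly to sorting hits in time, grouping them by readout channel, or feeding them to a track-finding step.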

Reading list:


Human Aspects of Computing in Large Physics Collaborations

V. Chaloupka

In the last several years, dramatic progress and profound changes have occurred in the use of computers. Far from being limited to "computing", i.e. "number-crunching", computers are used for an ever-broadening variety of purposes: communication, documentation, visualisation. The enormously increased power of desktop workstations has practically eliminated the whole category of "mainframes". Once a sufficiently large number of computers were connected together, the initial communication and data transfer tools (EMAIL, TELNET, FTP etc.) culminated in the invention and almost explosive growth of the World-Wide Web. At the same time, software programming methodologies were significantly enriched by the concept of Object-Orientation, and available tools range from Computer-Assisted Software Engineering to various structured techniques.

Many of these changes originated in large organisations, with critical needs for communication, documentation and coherence within a large and heterogeneous group of scientists, engineers and computer programmers. It is not by accident that the World-Wide Web was invented at CERN, where there are severe demands on all aspects of data and information processing and sharing. However, in spite of the phenomenal success of the WWW, the original promise of a fundamental improvement in the effectiveness of large scientific collaborations remains largely unfulfilled. It appears that all the new hardware and software tools are just that - tools; to realise their potential fully, the human tendencies and attitudes at work in large collaborations must be understood and modified before real progress can occur. In these lectures, we will discuss the link between human interactions in large communities on the one hand, and modern, state-of-the-art computing and communications tools on the other. Some specific aspects covered include:

Focusing this study on physics is timely and appropriate. There seems to be a growing realisation that, in addition to purely technological progress, there has to be an increased emphasis on the human aspects of our technological endeavours. Present-day High Energy Physics is an excellent candidate for a study of these issues: the geographically distributed nature of the collaborations, which is also mirrored by current trends in other "Big Science" projects and in industry, necessitates increased attention to the human factors involved.

Reading list:


Information Systems for Physics Experiments

M. Donszelmann and B. Rousseau

  1. Information Systems: definitions and generalities
  2. HEP Application Domains
  3. Overview of the technology, with examples of applications in HEP
  4. Specific Applications:
     - WIRED: WWW Interactive Remote Event Display
     - CEDAR: CERN EDMS for Detectors and AcceleratoR
     - LIGHT: LIfe cycle Global HyperText

Exercises

Reading list:

If you know C++ or an Object Oriented language and want to learn Java:

If you are a true beginner in the OO field:

For JavaScript:

and for making Web sites:


Making Links in Unstructured Data: an Introduction to Hypermedia

W. Hall


Making Links in Structured Data: an Introduction to Databases

S. Malaika


Making Links in Web Database Applications

S. Malaika


Making Links in the Future

W. Hall

Reading list:


Visualisation of Multidimensional and Multivariate Data

W.T. Hewitt

This talk will review methods for visualisation of multidimensional and multivariate data, covering techniques such as scatter plots, Chernoff faces, Andrews plots and parallel coordinates. Further examples of scalar, vector and tensor fields will be shown using fluid flow as a case study.
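
For reference, the standard Andrews plot construction (a textbook definition, not material specific to this talk) maps each observation x = (x_1, ..., x_d) onto a curve over -\pi \le t \le \pi, so that similar observations produce similar curves:

    f_x(t) = \frac{x_1}{\sqrt{2}} + x_2 \sin t + x_3 \cos t + x_4 \sin 2t + x_5 \cos 2t + \cdots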


Systems and Architectures for Visualisation

W.T. Hewitt

This talk will review a number of general-purpose visualisation systems, such as AVS/Express, Explorer and IBM Data Explorer. A number of exemplars will be presented and compared. The final part of the presentation will assess the architecture of these systems for use in a distributed and collaborative working environment.

Reading list:

Specific Systems


LHC Trigger Design

J.R. Hubbard

Trigger design and trigger architectures will be discussed in the context of the LHC experiments. These lectures will present a "top-down" analysis of the LHC trigger requirements and design, based on the physics requirements of the LHC experiments. The LHC Level-1 trigger algorithms, based on specific trigger hardware, will be described and compared. Higher-level trigger algorithms, based on commercial switching networks and processor farms, will be presented, as well as the expected algorithm execution times. Full trigger menus and expected trigger rates will also be presented. Trigger architectures and implementations under consideration for the LHC experiments will be compared, first using very simple "paper models", then using complete modelling based on fully simulated events.

  1. Trigger design issues and trigger architectures. Trigger design depends on the data volumes and event topologies expected at the LHC. The front-end readout should be designed to facilitate trigger implementation. This lecture will discuss the event buffers, switching networks, processor farms, and supervisors required for different trigger strategies. Data transfer bandwidths will be discussed, including the possible use of regions-of-interest and pre-processing. Interfaces from the data buffers to the switches and from the switches to the processor farms will also be discussed.

  2. Physics requirements for LHC triggers. The first step in determining a trigger strategy is to review the physics requirements of the system. This lecture is not meant as a "physics" lecture. The objective is to review the expected physics channels to determine which trigger algorithms are needed at Level 1 and at the higher trigger levels. The catalogue of physics processes would include Higgs decays, SUSY particles, gauge bosons, heavy vector bosons, top quarks, and B physics. Inclusive triggers would also be considered in order to satisfy the requirements of unexpected new physics.

  3. Trigger algorithms and rates. This lecture will describe the trigger algorithms foreseen for the LHC experiments. Level-1 trigger rates (muon, electron/gamma, hadron, jet, and missing-Et) will be presented, as a function of threshold, at low luminosity (10^33 /cm2/s) and at high luminosity (10^34 /cm2/s). Higher-level trigger algorithms will be described, together with the data required for each algorithm, the trigger rate expected, and an estimate of the algorithm execution time. The LHC trigger strategies will be compared to the strategies followed by the collider experiments at the Fermilab Tevatron.

  4. Trigger menus. Full trigger menus, based on the physics requirements, will be presented for luminosities of 10^33, 3x10^33, and 10^34 /cm2/s. These "sample" trigger menus are meant as "existence proofs"; the final allocation of trigger bandwidth will be made just before data taking begins. Estimated rates will be given for each of the trigger items. Options for the boundary between level-2 processing and level-3 processing will be discussed, especially as concerns B physics, B-jet tags, and missing-Et. There will be a short discussion of the possible use of neural networks in the LHC triggers.

  5. Trigger modelling. Trigger modelling can be used to determine the influence of different trigger strategies on physics performance and on cost. This lecture will describe a "paper model" technique which uses full trigger menus, but takes (estimated) average values for parameters such as data transfer volumes and rates, algorithm execution times, and processing overheads. The "paper model" results can be used to guide the full modelling studies and switching-network emulation (using MACRAME). Sequential and parallel processing schemes will be compared, as well as single-farm and multiple-farm architectures; a simple numerical sketch of the average-value approach is given below.
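
As an illustration of the average-value style of estimate used in a paper model - a minimal sketch in which all rates, sizes and times are invented placeholders, not the LHC numbers presented in the lecture - the required bandwidth and farm size follow from simple arithmetic on the mean values:

    // "Paper model" sketch: average-value estimate of trigger-farm needs.
    // All input numbers are invented placeholders, not real LHC parameters.
    #include <iostream>

    int main()
    {
        const double inputRate      = 75.0e3;   // accept rate into this level [Hz]
        const double eventFragment  = 1.0e6;    // data moved per event [bytes]
        const double meanExecTime   = 10.0e-3;  // average algorithm time per event [s]
        const double overheadPerEvt = 1.0e-3;   // average control/transfer overhead [s]

        // Average bandwidth into the processor farm.
        const double bandwidth = inputRate * eventFragment;      // [bytes/s]

        // Processors needed so that, on average, the farm keeps up
        // (offered work per second divided by one processor's capacity).
        const double nCPU = inputRate * (meanExecTime + overheadPerEvt);

        std::cout << "bandwidth  : " << bandwidth / 1.0e9 << " GB/s" << std::endl;
        std::cout << "processors : " << nCPU << " at full occupancy" << std::endl;
        return 0;
    }

A full model replaces these single averages with the complete trigger menu, per-item rates and measured execution-time distributions, which is where the detailed simulation and MACRAME emulation studies take over.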

Reading list:


Network-based Remote Instrument and Experiment Control

W. E. Johnston

This set of lectures will cover some of the basic computing technology and current issues for using the Internet for collaborative remote instrument and experiment control. A set of related case studies reflecting experience in this area will be part of the presentation.

Lecture 1:

Lecture 2:

Lecture 3:


Software Process and Quality (Organisational aspects)

A. Khodabandeh

The production of software is a labour-intensive activity. This is certainly the case in the field of Particle Physics, given the scale of the software projects related to current and future experiments.

For most of the scientists and engineers involved in software production, the business is science or engineering, not computing. As the scope of software continues to grow, so does the feeling that its development and maintenance are out of control. The situation is made even worse by a lack of software engineers and by an uneven software culture.

To be able to control the production of software it is essential to improve (a) the knowledge of the PEOPLE involved, (b) the organisation of the software development PROCESS and its improvement (SPI), and (c) the TECHNOLOGY used in the various aspects of this activity. The goal is better systems at lower cost, and happier users of the software.

After putting the three aspects of software production (people, process, technology) into perspective, we will look in depth at one particular aspect of the technology: software metrics as a component of software quality improvement, using a tool called Logiscope. This close-up of the tool will also include live demonstrations.
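
As a reminder of the kind of measurement such a tool makes - a generic, textbook metric given here for illustration, not a description of Logiscope's own output - McCabe's cyclomatic complexity of a function is the number of decision points plus one:

    // Generic illustration of a code metric (invented example function).
    // Cyclomatic complexity = number of decision points + 1.
    int classify(int charge, double pt)
    {
        int code = 0;
        if (charge != 0) {              // decision 1
            if (pt > 1.0)               // decision 2
                code = 1;
            else
                code = 2;
        }
        for (int i = 0; i < 3; ++i)     // decision 3 (loop condition)
            code += i;
        return code;                    // complexity = 3 + 1 = 4
    }

A function whose complexity grows much beyond roughly ten is a typical candidate for restructuring or extra testing; thresholds of this kind are exactly what a metrics tool lets a project define and monitor.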

Software Metrics Laboratory Exercises:

In the hands-on sessions, students will practise with Logiscope, a tool for software metrics, on a SUN Solaris computer. Logiscope is a toolbox for improving programming quality and test coverage. It can analyse more than 80 language variations, including C, C++ and Fortran.

Logiscope features include:

The instructors will provide some sample code in C++ as the starting point for the exercises. For the remaining part of the session, students should bring examples of code (their own, or code they use) for analysis (further instructions will follow).

Reading list:


The LHC Computing Model

M. Mazzucato

Reading list:


Modern Object-Oriented Software Development

S. Smith


Discrete-Event Simulations

R. Spiwoks

Simulations are a widespread method for understanding and designing complex systems. They are applied where the complexity of a system inhibits a closed-form description, or where the cost of experiments or of prototypes inhibits measurements. Simulations are based on an abstract model of a real system, described in terms of objects and their behaviour. In discrete-event simulations the objects' behaviour is expressed in terms of state changes which can occur only at discrete events in time. This method is very well suited to computers, and a wide variety of programming languages is available for the purpose. As an example of such a language, MODSIM II will be described in some detail. The design of data acquisition systems for future experiments in high energy physics will be given as an example of an application of discrete-event simulations.
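
MODSIM II is a commercial simulation language, so as a language-neutral illustration here is a minimal sketch, in C++, of the mechanism that underlies any discrete-event simulation: an event list ordered by time stamp, from which the earliest event is repeatedly removed and executed, possibly scheduling further events (all names and numbers are invented for this example):

    // Minimal discrete-event simulation core: a time-ordered event list.
    // All names and numbers are invented; this is not MODSIM II code.
    #include <cstdio>
    #include <queue>
    #include <vector>

    struct Event {
        double time;   // simulated time at which the event occurs
        int    kind;   // what happens: 0 = fragment arrives, 1 = processing done
    };

    // std::priority_queue returns the "largest" element, so reverse the
    // comparison to get the earliest event first.
    struct Later {
        bool operator()(const Event& a, const Event& b) const { return a.time > b.time; }
    };

    int main()
    {
        std::priority_queue<Event, std::vector<Event>, Later> agenda;

        // Schedule the initial events: data fragments arriving in a buffer.
        for (int i = 0; i < 3; ++i) {
            Event e = { i * 2.0, 0 };
            agenda.push(e);
        }

        while (!agenda.empty()) {
            Event e = agenda.top();
            agenda.pop();                         // the clock jumps to this event's time
            if (e.kind == 0) {
                std::printf("t=%.1f  fragment arrives\n", e.time);
                Event done = { e.time + 1.5, 1 }; // state change: schedule end of processing
                agenda.push(done);
            } else {
                std::printf("t=%.1f  processing finished\n", e.time);
            }
        }
        return 0;
    }

In a data-acquisition model the events would represent buffer arrivals, switch transfers and processor completions, and the quantities of interest are the resulting queue lengths, occupancies and dead time.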

The lectures will give answers to the following questions:

Reading list:


Visualisation in High Energy Physics

L. Taylor

Visualisation plays a crucial role in enabling physicists to understand complex multi-dimensional data. With the advent of powerful computing hardware, sophisticated scientific visualisation software has become a standard part of the analysis toolkit of High Energy Physics experiments.

We describe the concepts underlying successful HEP visualisation systems, both for statistical analyses of multi-event data sets and for the display of individual events in the detectors, using examples from current experiments. Future HEP experiments, such as those at the CERN Large Hadron Collider, are considerably more complex than those currently running. We discuss how new computing technologies will facilitate the difficult visualisation tasks of these experiments.

Reading list:


M.Ruggier, J.Turner - Last update: 3 JUN 1997