CSC2007
Physics Computing Theme
Coordinators:
Rudi Frühwirth,
HEPHY Vienna
Pere Mato,
CERN
The track will introduce the fundamental concepts of Physics Computing and
will then address ROOT Technologies and On-line Data Acquisition.
The first series of lectures gives
an overview of the software and hardware components required for the
processing of the experimental data, from the source - the detector - to
the physics analysis. The emphasis is on the concepts, but some
implementation details are discussed as well. The key concept is data
reduction, both in terms of rate and in terms of information density.
The various algorithms used for data reduction, both online and offline,
are described. The flow of the real data is the main topic, but the need
for and the production of simulated data is discussed as well.
The
second series of lectures introduces the data analysis framework ROOT,
covering all basic parts that are needed for a future LHC data analysis.
The lectures will present by example how key requirements like
performance, reliability, flexibility, platform independence,
ease-of-use, and support for extensions are put into practice. Combined
with the accompanying tutorials they will give an overview of the
software techniques ROOT brings to life and hands-on experience of using
ROOT.
The third lecture series
focuses on on-line Data Acquisition Techniques.
Glossary of the different acronyms:
http://www.gridpp.ac.uk/gas/ |
Overview
Series | Type | Lecture | Description | Lecturer |
General
Introduction to Physics Computing
|
Lectures |
Series |
General
Introduction to Physics Computing
The two
lectures give an overview of the software and hardware
components required for the processing of the experimental
data, from the source - the detector - to the physics
analysis. The emphasis is on the concepts, but some
implementation details are discussed as well. The key
concept is data reduction, both in terms of rate and in
terms of information density. The various algorithms used
for data reduction, both online and offline, are described.
The flow of the real data is the main topic, but the need
for and the production of simulated data is discussed as
well. |
Rudi Frühwirth |
Lecture 1 |
Event filtering
The
first lecture deals with the multi-level event filters
(triggers) that are used to select the physically
interesting events and to bring down the event rate to an
acceptable figure. Some examples of the hardware and
software that is deployed by the LHC experiments are
presented. |
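The cascade of event filters described above can be sketched in plain C++. The event contents, cut values, and acceptance logic below are purely illustrative, not taken from any LHC experiment; the point is only the structure: a fast, coarse first level followed by a slower, more selective high-level filter that sees only the survivors.

```cpp
#include <cstddef>
#include <vector>

// Toy event: just the quantities a trigger might look at.
// All names and thresholds here are illustrative.
struct Event {
    double calorimeterEnergy;  // GeV, summed energy deposit
    int    muonHits;           // hits in the muon chambers
};

// Level-1: a fast, coarse hardware-style cut on simple quantities.
bool level1Accept(const Event& e) {
    return e.calorimeterEnergy > 20.0 || e.muonHits >= 2;
}

// High-level trigger: a slower, more selective software cut,
// only run on events that already passed Level-1.
bool highLevelAccept(const Event& e) {
    return e.calorimeterEnergy > 50.0 && e.muonHits >= 2;
}

// Run the cascade and count how many events survive each stage.
struct FilterStats { std::size_t input = 0, afterL1 = 0, afterHLT = 0; };

FilterStats runFilters(const std::vector<Event>& events) {
    FilterStats s;
    s.input = events.size();
    for (const Event& e : events) {
        if (!level1Accept(e)) continue;    // rejected at Level-1
        ++s.afterL1;
        if (!highLevelAccept(e)) continue; // rejected at the HLT
        ++s.afterHLT;
    }
    return s;
}
```

Each stage reduces the rate, so the expensive selection only ever runs on a small fraction of the input — the data-reduction principle the lecture describes.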
Lecture 2 |
Reconstruction
and simulation
The
second lecture describes the various stages of event
reconstruction, including calibration and alignment. The
emphasis is on algorithms and data structures. The need for
large amounts of simulated data is explained. The lecture
concludes with a brief resume of the principles of physics
analysis and the tools that are currently employed. |
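To make the idea of turning raw hits into track parameters concrete, here is a minimal sketch: a least-squares fit of a straight track y = a + b·x through detector hits. Real reconstruction uses far more elaborate algorithms (e.g. Kalman filters) and realistic error treatment; this only illustrates the step from hits to parameters.

```cpp
#include <utility>
#include <vector>

// A detector hit reduced to a point in the tracking plane.
struct Hit { double x, y; };

// Least-squares fit of y = a + b*x; returns {intercept a, slope b}.
std::pair<double, double> fitLine(const std::vector<Hit>& hits) {
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    const double n = static_cast<double>(hits.size());
    for (const Hit& h : hits) {
        sx  += h.x;
        sy  += h.y;
        sxx += h.x * h.x;
        sxy += h.x * h.y;
    }
    const double b = (n * sxy - sx * sy) / (n * sxx - sx * sx); // slope
    const double a = (sy - b * sx) / n;                          // intercept
    return {a, b};
}
```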
ROOT Technologies
|
Lectures |
Lecture 1 |
Basics
To lay the foundation for the lectures of the coming days,
we start by introducing the purpose of ROOT and its primary
contexts of use. This will cover e.g. the C++ interpreter
CINT and the just-in-time compiler ACLiC. |
Axel Naumann
Bertrand Bellenot |
Lecture 2 |
Persistency
As ROOT is non-democratic, we will have to answer the
question of who owns whom (object ownership). The exabytes
of LHC data will be saved using ROOT's I/O. We will explain
how ROOT persistency is integrated into C++ and the basics
of ROOT's storage structure. As things change, modified
classes must be taken into account for persistency. With
ROOT, this mechanism is called schema evolution. |
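The core idea behind schema evolution can be shown without ROOT at all. ROOT performs this migration automatically via its dictionaries and class versions; the hand-rolled structs and migration function below are purely illustrative of what has to happen when data written with an old class layout is read back into a new one.

```cpp
// Hand-rolled sketch of schema evolution: data written with an old
// class layout must remain readable after the class changes.
// Names and members are illustrative, not ROOT's actual mechanism.
struct ParticleV1 {            // old layout, as written to storage
    double px, py, pz;
};

struct ParticleV2 {            // new layout: a member was added
    double px, py, pz;
    double charge;             // not present in V1 data
};

// "Read" a V1 record into the V2 in-memory layout, filling the new
// member with a default - the essence of an automatic migration.
ParticleV2 migrate(const ParticleV1& old) {
    return ParticleV2{old.px, old.py, old.pz, /*charge=*/0.0};
}
```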
Lecture 3 |
Tree
For huge amounts of data and within the special context of
high energy physics (event-based data, write-once,
read-many), TTrees combined with TClonesArrays provide ideal
collections for data storage.
We will explain why in this context they are superior to
e.g. STL collections, and which efficiency optimizations
they provide for processing data (splitting, data access
without library). Two mechanisms for combining TTrees,
friends and chains, will be introduced. |
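Why split storage pays off can be sketched by contrasting the two memory layouts. A split TTree stores each data member in its own branch, so an analysis touching one variable reads only that column. The classes below are an in-memory caricature of that contrast; ROOT's actual on-disk format is far more involved.

```cpp
#include <vector>

// Row-wise: one object per event, all members together.
struct EventRow {
    double energy;
    double time;
    std::vector<double> hits;  // bulky payload, often not needed
};

// Column-wise ("split"): one vector per member, as in a split TTree.
struct EventColumns {
    std::vector<double> energy;
    std::vector<double> time;
    std::vector<std::vector<double>> hits;
};

// Summing one variable touches only one contiguous column - the
// bulky 'hits' payloads are never loaded at all.
double sumEnergy(const EventColumns& c) {
    double s = 0;
    for (double e : c.energy) s += e;
    return s;
}
```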
Lecture 4 |
Analysis
Typical
ingredients of analyses are data selection and histogramming
of values. ROOT provides facilities and tools for that; we
will present TTree::Draw(), TSelector, and the highly
efficient, interactive, parallel analysis facility PROOF. |
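A plain-C++ analogue of what an expression like TTree::Draw("energy", "energy > 10") does — loop over entries, apply a selection, histogram the survivors — looks roughly like this. The bin layout and cut value are arbitrary examples, and the real TTree::Draw of course offers a much richer expression syntax.

```cpp
#include <cstddef>
#include <vector>

// Histogram the values passing a cut: the essence of a selection
// plus histogramming step. Under-/overflow entries are dropped here;
// ROOT keeps them in dedicated under-/overflow bins.
std::vector<int> histogramWithCut(const std::vector<double>& energy,
                                  double cut,
                                  double lo, double hi,
                                  std::size_t nbins) {
    std::vector<int> bins(nbins, 0);
    const double width = (hi - lo) / static_cast<double>(nbins);
    for (double e : energy) {
        if (e <= cut) continue;            // the selection
        if (e < lo || e >= hi) continue;   // out of histogram range
        bins[static_cast<std::size_t>((e - lo) / width)] += 1;
    }
    return bins;
}
```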
Exercises |
Exercise 1 |
We will
play with a few example macros, to get a feeling for the
pros and cons of compiled versus interpreted mode. We will
try to come up with code that is beyond CINT's abilities. |
Exercise 2 |
There is no
LHC data yet, so we will create our own. We will modify the
class layout used to store data to see schema evolution in
action. We will go through the steps of building a library
including a dictionary. |
Exercise 3 |
We will
store millions of your own objects into a TTree, and compare
its performance with STL. |
Exercise 4 |
Some data in the exercises' TTree is not
under your control.
You will determine its underlying
probability distribution. We will close with a wrap-up. |
Pre-requisite Knowledge |
Mandatory
pre-requisite |
Install ROOT if you don't have it; start it up.
Create a one-dimensional histogram with 10 bins spanning the
range 0..5.
Fill it with the values 4., 4.2, 5.8, 3.8, 4.7, and 2.7.
Draw it.
Fit it with a Gaussian using the default options.
Check that the mean of the fit is 4.0 - otherwise
you've done something wrong. |
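In ROOT itself the pre-requisite above is a few lines, roughly: TH1F h("h", "h", 10, 0., 5.); h.Fill(4.); … h.Draw(); h.Fit("gaus");. As a standalone sketch of what the filling step does (the fit has no simple ROOT-free equivalent), a 10-bin histogram over 0..5 with bin width 0.5 can be hand-rolled like this:

```cpp
#include <cstddef>
#include <vector>

// A minimal fixed-range 1-D histogram, sketching what TH1F::Fill
// does with each value. This is an illustration only, not ROOT code.
struct Hist1D {
    double lo, hi;
    std::vector<int> bins;
    Hist1D(std::size_t n, double lo_, double hi_)
        : lo(lo_), hi(hi_), bins(n, 0) {}
    void fill(double v) {
        if (v < lo || v >= hi) return;  // ROOT keeps under/overflow bins
        const double width = (hi - lo) / static_cast<double>(bins.size());
        bins[static_cast<std::size_t>((v - lo) / width)] += 1;
    }
};
```

Filling the six values of the exercise puts two entries in the bin covering 4.0..4.5, one each in the bins around 2.7, 3.8, and 4.7, and sends 5.8 to overflow.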
Desirable pre-requisite
and
references to further
information |
If you need to install ROOT, use the recommended version
mentioned at:
http://root.cern.ch/root/Availability.html
To learn how to create, fill, draw, and fit histograms look
at the User's Guide at:
http://root.cern.ch/root/doc/RootDoc.html
chapters "Histograms" and "Fitting Histograms".
To see examples on how to create, fill, draw, and fit
histograms look at the macros in $ROOTSYS/tutorials, esp.
hsimple.C and fit1.C.
The reference guide for ROOT's histogram base class TH1 is
located at:
http://root.cern.ch/root/html/TH1.html |
On-line Data Acquisition
|
Lectures |
Lecture 1 |
A general
introduction to data acquisition systems will be given by
focusing on the four LHC experiments. The principal data
flow, the qualitative/quantitative requirements and the
architecture of these data acquisition systems will be
discussed. Their relations with the other on-line systems
for triggering, high-level filtering, and control will be
explained. |
Klaus Schossmaier |
Lecture 2 |
The functional
elements of data acquisition systems (e.g. readout, event
building, control, interfaces) will be addressed in terms of
components, concepts, and technologies. In addition, testing
techniques, performance measurements as well as some
practical aspects of running on-line systems will be
covered.
|
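One of the functional elements named above, event building, can be sketched as pure bookkeeping: fragments arriving from independent readout links are grouped by event number, and an event is complete once every expected source has contributed. Real event builders do this over a network with flow control and error handling; the struct and function names below are illustrative.

```cpp
#include <cstddef>
#include <map>
#include <vector>

// A data fragment as delivered by one readout link.
struct Fragment {
    int sourceId;                       // which readout link sent it
    int eventId;                        // which collision it belongs to
    std::vector<unsigned char> payload; // the raw data itself
};

// Return the eventIds for which all 'nSources' fragments have arrived,
// i.e. the events that are fully built.
std::vector<int> completeEvents(const std::vector<Fragment>& frags,
                                std::size_t nSources) {
    std::map<int, std::size_t> counts;   // eventId -> fragments seen
    for (const Fragment& f : frags) counts[f.eventId] += 1;
    std::vector<int> done;
    for (const auto& [eventId, n] : counts)
        if (n == nSources) done.push_back(eventId);
    return done;
}
```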
Lecture 3 |
A case study of the ALICE data acquisition
system will be presented. The chosen technologies will be
discussed and the software framework (called DATE) including
the add-on software for performance monitoring and data
quality monitoring will be introduced. Also some simulation
results will be shown. |
Exercises |
Exercise 1 |
A demonstration of the ALICE data
acquisition system framework DATE will be given. |
Prerequisite
Knowledge |
Desirable
prerequisite
and
references
to further information
|
A good knowledge of programming languages, the Linux operating
system, and computing technologies is useful in order to benefit
most from this series of lectures.
References:
CERN Summer Student Lecture Programme - 2005
- Trigger and Data Acquisition Systems, by Paris Sphicas, Part 1
- Trigger and Data Acquisition Systems, by Paris Sphicas, Part 2
CERN Summer Student Lecture Programme - 2002
- Trigger and Data Acquisition Systems, by Clara Gaspar, Part 1
- Trigger and Data Acquisition Systems, by Clara Gaspar, Part 2
- Trigger and Data Acquisition Systems, by Clara Gaspar, Part 3
Additional material will be available in the CSC handbook provided
at the school.