iCSC2008
Special Topics
Details of all lectures
Transition between HPC and Data Analysis themes:
Using HPC concepts in Data Analysis software
Date: Tuesday 4 March
Time: 15:10 - 16:25
Session: HPC Theme closing
Title: Using HPC concepts in data analysis software
Lecturer: Alfio Lazzaro
Summary
A short session (15 minutes) to connect to the data analysis software and techniques lecture. Some possible applications of parallel processing in data analysis code are briefly presented (e.g. how to speed up maximum likelihood fits).
Audience and Pre-requisite
Attendees are expected to have some experience in data analysis. Having followed part of the HPC theme will help.
Overview of advanced aspects of data analysis software and techniques

Date: Wednesday 5 March
Time: 09:00 - 09:55
Session: Lecture 1
Title: Overview of advanced aspects of data analysis software and techniques
Lecturer: Alfio Lazzaro
Summary
In this lecture we give an overview of advanced data analysis methods based on multivariate techniques, which have recently been adopted in many High Energy Physics data analyses. The topic is relevant to many Particle Physics analyses, as well as to several other fields. We will give an overview of the different techniques and their relative merits.
Audience
This lecture targets an audience with experience in data analysis, in particular those interested in techniques for signal/background discrimination.
Pre-requisite
This lecture can be reasonably followed without having attended the other lectures of this school.
Keywords
Details
In recent years, many advanced techniques in statistical data analysis have been used in High Energy Physics, such as maximum likelihood fits, neural networks, and decision trees. In the past, the most common approach was the simple cut-and-count analysis, which consists of the following steps: several cuts are applied to well-studied discriminating variables, the background is estimated using Monte Carlo simulation samples or events outside the signal region, and the final measurement is obtained by counting the events remaining after the cuts and subtracting the estimated background events.
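
As an illustration, the whole cut-and-count procedure fits in a few lines. The following Python sketch uses a toy dataset; the variable names, signal window, and scale factor are invented for the example and do not come from any real analysis:

    import numpy as np

    rng = np.random.default_rng(42)

    # Toy data: a narrow "signal" peak on top of a flat "background".
    signal_region = (5.2, 5.36)                    # hypothetical cut window
    data = np.concatenate([rng.normal(5.28, 0.02, 200),
                           rng.uniform(5.0, 5.5, 2000)])
    mc_background = rng.uniform(5.0, 5.5, 10000)   # Monte Carlo background sample

    def count_in_window(values, window):
        """Count events passing the cut, i.e. falling inside the window."""
        lo, hi = window
        return np.count_nonzero((values > lo) & (values < hi))

    n_before = len(data)
    n_after = count_in_window(data, signal_region)
    efficiency = n_after / n_before                # N_after / N_before

    # Scale the MC estimate to the size of the data sample (toy factor).
    mc_scale = 2000 / len(mc_background)
    n_background = mc_scale * count_in_window(mc_background, signal_region)

    # Final measurement: events after the cuts minus the estimated background.
    n_signal = n_after - n_background
    print(f"efficiency = {efficiency:.2f}, estimated signal = {n_signal:.1f}")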
This simple technique is hampered by its low efficiency (defined as the ratio between the number of events after and before the cuts) and does not provide good discrimination between signal and background events. For this reason it was replaced by more sophisticated techniques, such as the multivariate maximum likelihood used for the measurements done at the BaBar experiment, running at the Stanford Linear Accelerator Center (SLAC) in California.
The maximum likelihood (ML) technique makes it possible to achieve higher efficiency, to take errors into account with better precision, and to consider correlations between the discriminating variables used in the analysis. However, in future experiments, such as the LHC experiments at CERN, better discrimination between signal and background events may be crucial for discovering new phenomena, which suffer from higher background. Neural networks and decision trees are good techniques for reaching this goal. Another important issue to take into account is that these techniques are in most cases very CPU-time consuming. It is possible to speed them up using concepts of High Performance Computing (HPC), as sketched below.
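
As a minimal illustration of both points, the Python sketch below fits a toy Gaussian-signal-plus-exponential-background sample by minimizing a negative log-likelihood, and splits the per-event sum across worker processes. The model, parameter values, and chunking scheme are invented for the example and are not taken from any production fitting package:

    import numpy as np
    from multiprocessing import Pool
    from scipy.optimize import minimize
    from scipy.stats import norm, expon

    rng = np.random.default_rng(0)
    # Toy sample: Gaussian "signal" mixed with exponential "background".
    data = np.concatenate([rng.normal(5.28, 0.03, 500),
                           5.0 + rng.exponential(0.2, 2000)])

    def chunk_nll(args):
        """Negative log-likelihood contribution of one chunk of events."""
        events, (f_sig, mean, sigma) = args
        f_sig = min(max(f_sig, 0.0), 1.0)   # keep the fraction physical
        sigma = abs(sigma)                  # keep the width positive
        # Background shape is held fixed here for simplicity.
        pdf = (f_sig * norm.pdf(events, mean, sigma)
               + (1.0 - f_sig) * expon.pdf(events - 5.0, scale=0.2))
        return -np.sum(np.log(np.maximum(pdf, 1e-300)))  # guard log(0)

    def nll(params, pool, chunks):
        """Total NLL: the data-parallel sum over all chunks."""
        return sum(pool.map(chunk_nll, [(c, params) for c in chunks]))

    if __name__ == "__main__":
        chunks = np.array_split(data, 4)    # one chunk per worker
        with Pool(4) as pool:
            result = minimize(nll, x0=[0.3, 5.27, 0.05],
                              args=(pool, chunks), method="Nelder-Mead")
        print(result.x)   # fitted signal fraction, mean, width

Because each event contributes independently to the likelihood sum, the evaluation parallelizes naturally over the events; this kind of data-parallel optimization is the sort of HPC application the short transition session touches on.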
In this lecture we will give an overview of the advanced data analysis techniques mentioned above, introducing some software packages commonly used in HEP. This will be preceded by a short session at the end of the previous theme, briefly giving examples of possible HPC optimizations.
Scalable Image and Video coding

Date: Wednesday 5 March
Time: 10:30 - 12:00
Session: Lecture 2
Title: Scalable Image and Video coding
Lecturer: Jose Dana Perez
Summary
The aim of this lecture is to describe the basics of image and video coding and compression, with a special emphasis on the latest developments. We will see how to encode and compress this particular type of data using lossy algorithms that take advantage of the limitations of the human visual system.
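
To make the idea concrete, here is a minimal Python sketch of a JPEG-style transform-and-quantize step, where coarse quantization discards the high-frequency detail the eye is least sensitive to. The quantization matrix and block content are invented for the illustration:

    import numpy as np
    from scipy.fft import dctn, idctn

    block = np.arange(64, dtype=float).reshape(8, 8)   # stand-in image block

    # Coarser steps toward the bottom-right = stronger loss at high frequencies.
    q = 1.0 + np.add.outer(np.arange(8), np.arange(8)) * 4.0

    coeffs = dctn(block, norm="ortho")        # 2-D DCT of the block
    quantized = np.round(coeffs / q)          # lossy step: many coefficients become 0
    reconstructed = idctn(quantized * q, norm="ortho")

    print("nonzero coefficients:", np.count_nonzero(quantized), "of 64")
    print("max reconstruction error:", np.abs(block - reconstructed).max())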
We will focus on scalable image and video coding, which is a cutting-edge area of research, an area where few fully recognized standards have emerged yet.
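
The following Python sketch illustrates what "scalable" means in this context, assuming a simple averaging pyramid rather than the wavelet transforms used by real codecs: the stream is split into a base layer plus enhancement layers, so a decoder can stop after any layer and still reconstruct a coarser image:

    import numpy as np

    def downsample(img):
        """Halve the resolution by 2x2 averaging."""
        h, w = img.shape
        return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    def upsample(img):
        """Double the resolution by pixel replication."""
        return img.repeat(2, axis=0).repeat(2, axis=1)

    image = np.random.default_rng(1).random((16, 16))

    base = downsample(downsample(image))      # lowest-resolution layer
    mid = downsample(image)
    layers = [base,
              mid - upsample(base),           # enhancement layer 1
              image - upsample(mid)]          # enhancement layer 2

    # Decoder: each additional layer refines the previous reconstruction.
    recon = layers[0]
    for residual in layers[1:]:
        recon = upsample(recon) + residual
    print("lossless with all layers:", np.allclose(recon, image))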
Sometimes, specialized developers need to design systems which require an image or video (de)coder. Understanding the internals of some coding systems may help them to select the most appropriate approach (streaming systems, pattern recognition systems, etc.) and algorithm (JPEG, JPEG2000, MPEG-2, MPEG-4, WMV, etc.).
We will present techniques used in well-known algorithms, and the audience will have the opportunity to learn the fundamentals through practical examples.
Audience
TBW

Pre-requisite
TBW