2002
CERN School of Computing
Vico Equense, Italy
15-28 September 2002
Final Programme
(Bulletin 2)
The 2002 CERN School of Computing is, like every year, a good occasion for high-level tutorials on recent developments in computing. This year the CSC 2002 programme focuses on four main themes: Grid Computing; From Detectors to Physics Papers; Security and Networks; and Tools and Methods. These themes, although particularly relevant to High Energy Physics experiments, are of broad importance in many other fields of applied computing. The Grid is an extremely exciting technological promise that is about to become reality and, if successfully adopted, will dramatically change the way we access computing resources. The HEP community is investing heavily in Grids, and CERN is playing a major role in this programme. Security and networking is also a very important topic nowadays: many companies are experiencing the tremendous impact of these issues on their computing infrastructure. HEP has traditionally had an "open network", born of the need of many thousands of researchers around the world to communicate freely amongst themselves. This openness has had to be tempered by the necessity of shielding the community from hackers and malicious programs, without blocking the free circulation of scientific information that is now part of HEP researchers' lives. The other two themes are relevant for understanding how deeply computing technologies and methods are woven into an HEP experiment. From data acquisition to the word processor, by way of simulation and analysis programs and histogramming and visualization packages, all the fundamental steps from the conception of an experimental measurement to the publication of a physics result are deeply tied to computing tools and resources. This explains why HEP has always been at the forefront of computing technologies. The 2002 CERN School of Computing will present the state of the art in all these fields.
F. Ruggieri
Chairman of the Advisory Committee
CERN, the European Organisation for Nuclear Research, undertakes research into the structure of matter. This research is largely performed by teams of visiting scientists based at their parent university or research centre. CERN staff are drawn from 20 European Member States, but scientists from any country may be invited to spend a limited period at CERN. CERN has 2,700 staff members and some 7,000 visiting scientists from all over the world carry out research at the Laboratory each year.
Research is carried
out with the help of two large accelerators, a Proton Synchrotron (PS) of 28
GeV and a Super Proton Synchrotron (SPS) of 450 GeV. The SPS has a large
experimental area for fixed-target experiments at some distance from the accelerator.
CERN's flagship from 1989 to 2000 was the Large Electron Positron collider,
LEP, which came into operation at 45 GeV per beam in July 1989, moved up to 70
GeV per beam towards the end of 1995, and to over 100 GeV per beam in July 2000.
LEP had four associated large experimental facilities, which have established,
among other things, that there are three light neutrino types. LEP was switched
off definitively in November 2000 but data analysis continues.
CERN is preparing
to install a proton-proton collider inside the tunnel until recently occupied
by LEP. This is the Large Hadron Collider (LHC), which was approved at the end
of 1994. The LHC will produce collisions at 14 TeV using superconducting
magnets to keep the particle beams in orbit. It is scheduled to be switched on
in 2007.
The prime function
of CERN is to provide Europe's particle physicists with world-class research
facilities, which could not be obtained within the resources of individual countries.
Development work goes on continuously to improve the accelerators, detection
systems and other auxiliary installations, and the range of equipment available
at CERN is among the finest assembled at any site in the world.
The Institute for the Technology of Composite Materials (ITMC), established in 1993, is located in the Department of Materials and Production Engineering of the University of Naples "Federico II".
The main research activities of ITMC focus on the development and application of composite materials, on polymer technologies and the diffusion and rheological properties of polymers, and on materials for biomedical applications.
The ITMC Technological Transfer Area acts as an interface between industry's needs and the Institute's research activities: it promotes the diffusion of innovation and supports industry in the preparation, financing and management of R&D projects.
In recent years a qualified external service has been set up in the field of the analysis and reduction of asbestos risk, and ITMC has been included among the three national laboratories responsible for the accreditation of public and private laboratories operating in this sector.
HOTEL
ADDRESS:
Hotel Oriente
Corso L. Serio 7/9
80061 Vico Equense
(NA)
Italy
Tel. +39 081 801 5522
Fax +39 081 879 03 85
Participants will be lodged, with full board, at the Hotel Oriente in comfortable double bedrooms equipped with bath/shower and W.C. Meal times will be as follows:
Breakfast | 07.00 - 08.30 |
Lunch | 13.00 |
Dinner | 20.00 |
Lunch and Dinner will be served
in one sitting and participants are therefore requested to be on time.
Morning and afternoon coffee
will be at the following times:
Mornings | 11.00 – 11.30 |
Afternoons | 16.30 – 17.00 |
Drinks at the hotel bar, whether alcoholic
or non-alcoholic, are not included in the fee and must be paid for at the time
of purchase. The only exceptions are
coffee, tea and soft drinks during the morning and afternoon breaks.
One free beverage is provided at lunch and
at dinner.
All Lectures and exercises will take place
at the Hotel Oriente.
Lectures will take place in the Lecture room
on the main floor near the Reception.
Exercises will take place in the Tutorials room one floor down from the main floor.
The complete schedule and lecturers' biographies can be found at this URL: http://cern.ch/csc-internal
Any changes to the programme
will be announced verbally and notices will be posted on a notice board, which
will be placed outside the lecture room.
F. Gagliardi, CERN, Geneva, will act as Director of the School and will be assisted by M. Montanino, IMCB-CNR. Administrative matters will be handled by J.M. Franco-Turner, CERN, together with C. Del Barone, C. De Rosa and R. Saviano.
P. Martucci, CERN, will be the
Systems Manager in charge of the computing infrastructure.
The School Secretariat will be located in a
room adjacent to the Tutorials room.
The Secretariat will be open during lecture hours as well as approximately
30 minutes before and after. Any
additional information will be displayed on a pin board.
All those participating in the
School will be provided with name badges.
These badges should be worn at all times on the Hotel premises to enable
all participants (lecturers and students) to get to know each other more
easily.
The organisers of the School
decline all responsibility for the possible loss of personal belongings of
participants. It is advisable to take
out an appropriate baggage insurance.
A welcome drink will be offered
on Monday, 16 September at 19.00. An evening banquet will be held at the end of
the School.
Excursions are planned as follows:
Wednesday, 18 September - afternoon
Sunday, 22 September - all-day excursion
Wednesday, 25 September - afternoon
Further details will be given at the School.
The Hotel is a base for the Ramblers Holidays Association, which organizes rambles/walks on the Sorrento Peninsula and in the Lattari Mountains. Any interested students should come equipped for such activity (walking shoes, etc.).
The proceedings of the School will be published on a CD-ROM. One copy will be distributed to each participant, free of charge.
We would like to thank the following for their help and support of the 2002 CERN School of Computing:
Assessorato all'Università ed alla Ricerca Scientifica, Innovazione Tecnologica e Nuova Economia, Sistemi Informativi e Statistica, Musei e Biblioteche
Associazione Albergatori di Vico Equense
ATP s.r.l Materie Plastiche
Banco di Napoli Spa, Vico Equense
CNR - Progetto Finalizzato "Materiali Speciali per Tecnologie Avanzate"
Comune di Vico Equense
Istituto Nazionale di Fisica Nucleare (INFN)
Provincia di Napoli
Regione Campania
SELFIN Spa, Naples
Telecom Italia
Ufficio del Turismo di Vico Equense
Predrag Buncic, CERN, Geneva
Predrag Buncic obtained a degree in physics from Zagreb University in 1989. He then worked on tracking algorithms for the NA35 experiment and obtained a master's degree in particle physics from Belgrade University in 1994. In the period 1995-1999 he worked for the NA49 experiment on the development of a persistent, object-oriented I/O system and data manager (DSPACK) designed to handle data volumes on the 100 TB scale, and coordinated the NA49 computing efforts at CERN. At present he works for the Institut für Kernphysik, Frankfurt, in the ALICE experiment, on the ALICE production environment (AliEn). He is leader of the database section in the ALICE Offline Team.
Federico Carminati, CERN, Geneva
Federico Carminati obtained an Italian doctor's degree in High Energy Physics at the University of Pavia in 1981. After working as an experimental physicist at CERN, Los Alamos and Caltech, he was hired by CERN, where he has been responsible for the development and support of the CERN Program Library and the GEANT3 detector simulation Monte Carlo. From 1994 to 1998 he participated in the design of the Energy Amplifier under the guidance of Prof. C. Rubbia (1984 Nobel Physics Laureate), developing innovative Monte Carlo techniques for the simulation of accelerator-driven fission machines and of the related fuel cycle. In January 1998 he joined the ALICE collaboration at the LHC, assuming the leadership of the ALICE software and computing project. Since January 2001 he has held the position of Work Package Manager in the European DataGRID project. He is responsible for the High Energy Physics Application Work Package, whose aim is to deploy large-scale distributed HEP applications using GRID technology.
Robert Cowles, SLAC, Stanford
With more
than 30 years of experience in computing and as the Computer Security Officer
at SLAC, the lecturer can ground the more abstract discussions with practical,
real-world examples. In addition to seminars in the US and Europe, he has
taught regular classes on Internet and web security for the University of
California and Hong Kong University.
Education: BS Physics from University of Kansas, 1969; MS Computer
Science from Cornell University, 1971.
Ian Foster, Argonne National Laboratory,
Argonne
Dr. Ian
Foster is Senior Scientist and Associate Director of the Mathematics and
Computer Science Division at Argonne National Laboratory, Professor of Computer
Science at the University of Chicago, and Senior Fellow in the
Argonne/U.Chicago Computation Institute.
He currently co-leads the Globus project with Dr. Carl Kesselman of
USC/ISI as well as a number of other major Grid initiatives, including the
DOE-funded Earth System Grid and the NSF-funded GriPhyN and GRIDS Center
projects. He co-edited the book "The Grid: Blueprint for a New Computing Infrastructure".
Bob Jacobsen, University of California,
Berkeley
Bob Jacobsen is
an experimental high-energy physicist and a faculty member at the University of
California, Berkeley. He's a member of
the BaBar collaboration, where he led the effort to create the reconstruction
software and the offline system. He has
previously been a member of the ALEPH (LEP) and MarkII (SLC) collaborations.
His original academic training was in computer engineering, and he worked in
the computing industry before becoming a physicist.
Bob Jones, CERN, Geneva
After studying computer science at university, Bob joined CERN and has been working on online systems for the LEP and LHC experiments. Databases, communication systems, graphical user interfaces and the application of these technologies to data acquisition systems were the basis of his thesis. He is currently responsible for the control and configuration sub-system of the ATLAS data acquisition prototype project.
Carl Kesselman, University of Southern California, Marina del Rey
Dr. Carl Kesselman is a Senior Project Leader at the University of Southern California's Information Sciences Institute and a Research Associate Professor of Computer Science, also at the University of Southern California. Prior to joining USC, Dr. Kesselman was a Member of the Beckman Institute and a Senior Research Fellow at the California Institute of Technology. He holds a Ph.D. in Computer Science from the University of California at Los Angeles. Dr. Kesselman's research interests are in high-performance distributed computing, or Grid Computing. He is the Co-leader of the Globus project, and along with Dr. Ian Foster, edited a widely referenced text on Grid computing.
Pascale Primet, ENS, Lyon
Pascale Primet is an assistant professor in Computer Science. She has been lecturing on advanced networks, quality of service and operating systems for more than ten years, and is a member of the INRIA RESO project. She is Manager of the Network Work Package (WP7) of the EU DataGRID project and scientific coordinator of the French Grid project E-TOILE.
Fons Rademakers, CERN, Geneva
Fons Rademakers received a Ph.D. in particle physics from the University of Amsterdam. Since 1990 he has been working on large-scale data analysis systems at CERN. He is one of the main authors of the PAW and ROOT data analysis frameworks, and since July 2000 he has worked in the offline computing group of the ALICE collaboration, where he is in charge of framework development.
Paolo Tonella, Istituto Trentino di Cultura, Trento
Paolo Tonella received his laurea degree cum laude in Electronic Engineering from the University of Padua, Italy, in 1992, and his PhD degree in Software Engineering from the same university in 1999, with the thesis "Code Analysis in Support to Software Maintenance". Since 1994 he has been a full-time researcher in the Software Engineering group at IRST (Institute for Scientific and Technological Research), Trento, Italy. He has participated in several industrial and European Community projects on software analysis and testing. He is now the technical person responsible for a project with the ALICE, ATLAS and LHCb experiments at CERN on the automatic verification of coding standards and on the extraction of high-level UML views from the code. In 2000-2001 he gave a course on Software Engineering at the University of Brescia. He now teaches Software Analysis and Testing at the University of Trento. His current research interests include reverse engineering, object-oriented programming, web applications and static code analysis.
Robert Harakaly – ENS, Lyon
Robert Harakaly holds a Ph.D. in Physics from Safarik University, Kosice, Slovak Republic, and is presently a research engineer at CNRS-UREC, Lyon, France. He is a member of the Networking work package (WP7) of the European DataGrid project. His interests include Grid networking, network monitoring, reliable multicast and network security, and he has eight years' experience with different UNIX systems and computer networks.
Erwin Laure – CERN,
Geneva
Paul Messina -
Caltech, Pasadena
E. Burattini, Cybernetics Institute,
Naples
“Grid” computing has emerged as an important new field, distinguished
from conventional distributed computing by its focus on large-scale resource
sharing and innovative applications. In
this track, we provide an in-depth introduction to Grid technologies and
applications. We review the “Grid
problem,” which we define as flexible, secure, coordinated resource sharing
among dynamic collections of individuals, institutions, and resources—what we
refer to as virtual organizations. In such settings, we encounter unique
authentication, authorization, resource access, resource discovery, and other
challenges. It is this class of problem
that is addressed by Grid technologies.
We present an extensible and open Grid architecture, in which protocols,
services, application programming interfaces, and software development kits are
categorized according to their roles in enabling resource sharing. We review major Grid projects worldwide and
describe how they are contributing to the realization of this architecture. Then, we describe specific Grid technologies
in considerable detail, focusing in particular on the Globus Toolkit and on
Data Grid technologies being developed by the EU Data Grid, GriPhyN, and PPDG
projects in Europe and the U.S. The hands-on exercises will give participants practical experience with the Globus Toolkit for basic Grid activities.
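To give a concrete flavour of the resource discovery problem described above, the following toy C++ sketch (our own illustration, not the Globus API; all names are hypothetical) matches a job's requirements against resources advertised by an MDS-style information service:

    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical resource record, as an information service
    // might advertise it for a computing element.
    struct Resource {
        std::string name;
        int freeCPUs;
        double freeDiskGB;
    };

    // Requirements extracted from a hypothetical job description.
    struct JobRequirements {
        int minCPUs;
        double minDiskGB;
    };

    // Return the advertised resources that can accept the job.
    std::vector<Resource> match(const std::vector<Resource>& pool,
                                const JobRequirements& req) {
        std::vector<Resource> ok;
        for (std::size_t i = 0; i < pool.size(); ++i)
            if (pool[i].freeCPUs >= req.minCPUs &&
                pool[i].freeDiskGB >= req.minDiskGB)
                ok.push_back(pool[i]);
        return ok;
    }

    int main() {
        std::vector<Resource> pool;
        pool.push_back(Resource{"ce01.cern.ch", 16, 120.0});
        pool.push_back(Resource{"ce02.infn.it", 2, 10.0});

        JobRequirements req = {4, 50.0};
        std::vector<Resource> candidates = match(pool, req);
        for (std::size_t i = 0; i < candidates.size(); ++i)
            std::cout << candidates[i].name << " can run the job\n";
        return 0;
    }

A real resource broker must of course also rank the matches, negotiate access through the security infrastructure, and cope with advertised information that is stale or incomplete.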
The track addresses the Information Technology challenges encountered in the transformation of the raw data coming from HEP experiments into physics results. Particular attention will be devoted to the problems and opportunities arising from the distributed environment in which both the development of the programs and the analysis of the data will take place. This novel situation calls for the application of innovative technologies at the levels of software engineering, computing infrastructure and data processing. Software engineering has to be rethought in the context of a highly dynamic environment, where the communication between the different actors is mainly remote. Computing techniques, such as the choice of programming language and the adoption of advanced software tools, have to support this approach. Management, planning and evaluation of the performance of a project have to be adapted to this situation of loosely coupled and dispersed human resources, which is not uncommon in other fields, but extreme in HEP. Data processing also has to adapt to this situation of distributed computing resources, where the ultimate goal is to access them transparently without having to confront the underlying dynamics and complexity of the system. These lectures will address how the different leading-edge technologies, both in software engineering and in distributed computing, can be successfully applied to a large collaboration of users who are not computing professionals but whose computing demands are intensive. Finally, the different solutions to the problem adopted by other experiments will be presented.
The development of modern distributed computing and complex data management systems, as exemplified by the Grid, relies increasingly on two areas in which specific advances are necessary to satisfy their stringent requirements: computer security and network performance. This track addresses each of them, in the form of two series of lectures, via a selection of topics at the forefront of the technology. The security part starts with background knowledge and moves to specific technologies such as cryptography and authentication, and their use in the Grid context. The networking part focuses on two aspects that are of primary importance in a Grid context: TCP/IP enhancements and network monitoring. The aim is to present the fundamentals and the evolution of the TCP/IP stack and to explore advanced network measurement and analysis tools and services for end-to-end performance measurement and prediction.
This track presents modern techniques for software design and modern tools for understanding and improving existing software. The emphasis will be placed on the large software projects and large executables that are common in HEP. The track will consist of lectures, exercises and discussions. The first discussion session will occur after several hours of exercises have been completed. The last discussion session will be held at the end of the track.
The first three lectures will cover software engineering, design, methodology and testing. These are followed by three lectures on working with large software systems, including methods for analysing their structure and improving it. The final two lectures will focus on the tools that are commonly used in software design and testing.
In the exercise sessions, the students will have a
chance to use the tools that are described in the lectures. They will work with CVS and configuration
management tools. They will be asked to
use the test and debugging tools on some simple examples. By showing how these tools can locate known
problems, students will learn how to use them on new problems. Students will then be given a functional program
and a brief description of what it does.
The goal is to extend the program to handle a larger problem
domain. It is expected that the example
programs and exercises will be primarily in C++.
Lecture GC.1/L: | Introduction to Grids |
Lecture GC.2/L: | Overview of the Globus toolkit |
Lecture GC.3/L: | Globus components |
Lecture GC.4/L: | Globus components |
Lecture GC.5/L: | Globus components |
Lecture GC.6/L: | Globus components |
Lecture GC.7/L: | Other issues and future |
Lecture GC.8/L: | Wrap-up & Feedback session |
Exercises:
Ex. GC.1/E: last-minute modification - now the lecture "Introduction for hands-on exercises"
Ex. GC.2/E: basic job submission
Ex. GC.3/E: advanced jobs - exploring the MDS information service for SEs and CEs, submitting more complicated jobs making use of the replica catalogue, MPI/MPICH-G, etc.
Ex. GC.4/E: advanced jobs
Ex. GC.5/E: project work - students work in groups on a mini-project using the Globus toolkit and related software to solve a physics-related problem. The students should build on the knowledge gained in the lectures and previous exercises to develop an application capable of solving a given physics problem.
Ex. GC.6/E: project work
Ex. GC.7/E: project work
Ex. GC.8/E: project work
From data to analysis
DP.1.1/L: Introduction
The challenges of data processing in an LHC collaboration. Main parameters involved and orders of magnitude. Typical dataflow. Structure and organisation of an offline project. Management and sociological issues. Examples taken from LHC experiments will be presented. Problems and issues of technology transition. Example: the transition from FORTRAN to C++.
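To give a feel for these orders of magnitude, a few lines of C++ suffice; the round numbers below are illustrative figures of the kind commonly quoted for an LHC experiment, not official parameters:

    #include <iostream>

    int main() {
        // Illustrative round numbers, not official LHC parameters.
        double eventRateHz    = 100.0;  // events written to storage per second
        double eventSizeMB    = 1.0;    // size of one stored raw event
        double secondsPerYear = 1e7;    // a typical accelerator year of running

        double mbPerYear = eventRateHz * eventSizeMB * secondsPerYear;
        std::cout << "Raw data volume: " << mbPerYear / 1e9
                  << " PB per year" << std::endl;   // prints 1 PB per year
        return 0;
    }

Even before reconstructed and simulated data are added, a single experiment thus produces of the order of a petabyte of raw data per year.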
DP.1.2/L: Software Development
Planning and organisation of the work in a distributed environment. New trends in software engineering and their application to HEP. Rapid prototyping and architectural design. Software tools.
DP.1.3/L: Offline Frameworks
Need for an offline framework. Definition of an offline framework. Framework components and structure, layers and modularity. Interfaces to external components and basic services. Evolution and maintenance of a framework. Practical examples from LHC.
DP.1.4/L: Practical Use of Grid Technology
Basic problem of transparent access to large data samples. Solutions offered by GRID technology. Use of GRID technology in HEP, from dreams to reality. Current testbeds and first results. Practical examples from LHC.
Distributed data handling, processing and analysis
The problems and issues of handling distributed data in a typical HEP experiment. Access patterns. File catalogue vs. file system. Generic API for data access. Logical, physical and transport file names. File catalogue implementation (AliEn). Distributed data processing and analysis. Introduction to the PROOF system, which provides for the distributed processing of very large collections of data; PROOF uses a parallel architecture to achieve (near) interactive performance. Introduction to the ROOT I/O system. Discussion of the PROOF three-tier parallel architecture. Description of the interface of PROOF to the Grid (especially AliEn, see lecture 5).
Lectures:
DP.2.1/L: Distributed data handling and analysis
DP.2.2/L: Distributed data processing and analysis
Exercises (4 hrs):
DP.2.1/E, DP.2.2/E, DP.2.3/E, DP.2.4/E
An introduction to the AliEn architecture. Using the AliEn API from C++. An introduction to PROOF and its use in the analysis of data created using the AliEn service.
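At its core, a file catalogue of the kind used in these exercises maps a logical file name (LFN) onto the physical file names (PFNs) of its replicas. The toy C++ sketch below is our own illustration of that idea, not the AliEn API; all names and paths are hypothetical:

    #include <cstddef>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    // Illustrative only: a file catalogue maps a logical file name
    // (LFN) onto the physical file names (PFNs) of its replicas.
    class FileCatalogue {
        std::map<std::string, std::vector<std::string> > replicas_;
    public:
        void addReplica(const std::string& lfn, const std::string& pfn) {
            replicas_[lfn].push_back(pfn);
        }
        // Return all known physical locations of a logical file.
        std::vector<std::string> resolve(const std::string& lfn) const {
            std::map<std::string, std::vector<std::string> >::const_iterator
                it = replicas_.find(lfn);
            return it == replicas_.end() ? std::vector<std::string>()
                                         : it->second;
        }
    };

    int main() {
        FileCatalogue cat;
        cat.addReplica("/alice/sim/run001/galice.root",
                       "rfio://castor.cern.ch/data/run001/galice.root");
        cat.addReplica("/alice/sim/run001/galice.root",
                       "file://se01.to.infn.it/data/run001/galice.root");
        std::vector<std::string> pfns =
            cat.resolve("/alice/sim/run001/galice.root");
        for (std::size_t i = 0; i < pfns.size(); ++i)
            std::cout << pfns[i] << std::endl;  // pick a replica, e.g. the closest
        return 0;
    }

A production catalogue adds metadata, access control and replica selection policies on top of this basic LFN-to-PFN mapping.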
Current approaches
The computing systems of several currently running experiments are described, with emphasis on the experience gained in building, commissioning and operating them. Several approaches to the ongoing development of new experiments will be described, and the choices made will be discussed.
DP.3.1/L: Experience with current approaches.
SN.1.1/L: Your Workstation
Threats:
· Destruction
· Modification
· Embarrassment
Responsibilities:
· Backup & virus protection
· Patching and configuration management
· Email security
SN.1.2/L: Cryptography and PKI
Symmetric and asymmetric encryption
Public Key Infrastructure:
· X.509 Certificates
· Certificate Authorities
· Registration Authority
· Obtaining a certificate
· Protecting your private key
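To make the idea of asymmetric encryption concrete, here is a deliberately tiny C++ toy using textbook RSA with absurdly small numbers; real systems use keys hundreds of digits long and proper library implementations:

    #include <iostream>

    // Textbook RSA with toy numbers - for illustration only.
    long long modpow(long long base, long long exp, long long mod) {
        long long result = 1;
        base %= mod;
        while (exp > 0) {
            if (exp & 1) result = result * base % mod;
            base = base * base % mod;
            exp >>= 1;
        }
        return result;
    }

    int main() {
        // Key generation: n = p*q with p = 61, q = 53.
        long long n = 3233;  // public modulus
        long long e = 17;    // public exponent
        long long d = 2753;  // private exponent: d*e = 1 mod (p-1)(q-1)

        long long message = 65;
        long long cipher  = modpow(message, e, n);  // anyone can encrypt
        long long plain   = modpow(cipher,  d, n);  // only the key owner decrypts

        std::cout << "cipher = " << cipher
                  << ", decrypted = " << plain << std::endl;
        return 0;
    }

Signing works the other way round: the private exponent produces a signature that anyone can verify with the public one, which is the mechanism underlying the X.509 certificates used on the Grid.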
SN.1.3/L: Grid Security
Registering your identity
Authentication models
Authorization to use resources
Proxy Certificates and delegation
MyProxy server
Community Access services
Threats
Vulnerabilities
How you can make the Grid more secure
Exercises (2 hrs)
SN.1.1/E: Generate a key pair; perform the steps necessary to send email that is signed and encrypted, using either PGP or X.509 certificates.
SN.1.2/E: Register with a MyProxy server and use a web
Grid portal to submit a job for execution.
High performance Grid Networking
These lectures present the fundamentals of the TCP/IP stack and the limits of these protocols in meeting the network requirements of Grid applications and middleware. The evolution of the network layer and of the transport layer is examined in order to understand the trends in high-performance networking. Emphasis is placed on the practices that permit end-to-end performance measurement and improvement.
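One such limit is easy to quantify: TCP can have at most one window of unacknowledged data in flight per round trip, so the window must cover the bandwidth-delay product of the path. A short illustrative calculation, with round numbers of our own choosing:

    #include <iostream>

    int main() {
        // Illustrative figures for a long-distance Grid link.
        double bandwidthBitsPerSec = 1e9;  // 1 Gbit/s
        double rttSec = 0.1;               // 100 ms round-trip time

        // Sustaining full rate needs window >= bandwidth * RTT.
        double windowBytes = bandwidthBitsPerSec * rttSec / 8.0;
        std::cout << "Required TCP window: " << windowBytes / 1e6
                  << " MB" << std::endl;   // prints 12.5 MB
        return 0;
    }

The result, 12.5 MB, is far beyond the 64 KB limit of the original TCP window field, which is why window scaling and buffer tuning are central to Grid data transfers.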
SN.2.1/L: Grid network requirements. IP protocol. TCP protocol: main features, limits
SN.2.2/L: IP service differentiation - elevated services - non-elevated services: ABE, EDS, QBSS.
SN.2.3/L: High Performance Transport protocol
and TCP optimization
Exercises (2 hrs)
SN.2.1/E: Configure and use tools and services for Grid status and network performance measurement.
SN.2.2/E: Measure and understand end-to-end performance of TCP connections over different types of links.
Software Engineering
An
introduction to the principles of Software Engineering, with emphasis on what
we know about building large software systems for high-energy physics. These lectures cover the principles of
software engineering, design, methodology and testing.
TM.1.1/L: Introduction to Software Engineering
TM.1.2/L: Software Design
TM.1.3/L: Long-term Issues of Software Building
TM.2.1/L: Static code analysis, slicing
Program
slicing is a static analysis technique that extracts from a program the
statements relevant to a particular computation. Informally, a slice provides
the answer to the question "What program statements potentially affect the
computation of variable v at statement s?" Programmers are known to formulate
questions of this kind when performing activities such as program understanding
and debugging. In this lecture, the
basic notions of program dependences will be introduced, so as to allow a formal
definition of the program slicing problem. A program slicing algorithm will then be described. Finally, some variants of slicing and the available tools will be presented.
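As a minimal illustration (our own example, not taken from the lecture), consider slicing the following fragment with respect to the variable prod at the final output statement; every statement that touches only sum falls outside the slice:

    #include <iostream>

    int main() {
        int n = 5;
        int sum  = 0;                      // not in the slice for prod
        int prod = 1;                      // in the slice
        for (int i = 1; i <= n; i++) {     // in the slice (controls prod)
            sum  += i;                     // not in the slice
            prod *= i;                     // in the slice
        }
        std::cout << sum << std::endl;     // not in the slice
        std::cout << prod << std::endl;    // slicing criterion: (this stmt, prod)
        return 0;
    }

A slicer computes this automatically by following data and control dependences backwards from the slicing criterion.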
TM.2.2/L: Reverse Engineering
During software evolution, knowledge about the high-level organization of the system is important: it can help locate the focus of a change and hypothesize ripple effects. Often, the available architectural views do not accurately reflect the existing system, so their automatic extraction is desirable. In this lecture, reverse engineering techniques based on the specification of architectural patterns will be presented. The validation of the extracted model through the reflection method is then described. Finally, dynamic approaches to the identification of functionalities within components will be considered.
TM.2.3/L: Restructuring
Exercises (2 hrs):
TM.2.1/E and TM.2.2/E: Introductory work on analysis and re-engineering
Automation
in code analysis and restructuring is fundamental in making the techniques
studied from a theoretical point of view usable in practice. Among the
available tools, during the exercises the tool TXL (http://www.txl.ca/) will be
used. It supports code transformation and analysis and it comes with grammars
for several widely used programming languages.
Exercises will focus on the implementation of some simple code transformations based on the restructuring techniques presented during the theoretical lectures. Basic refactoring operations for object-oriented systems, such as moving methods, replacing variables and renaming entities, will be considered, as illustrated in the sketch below.
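As a taste of what such an operation looks like, here is a toy "move method" example of our own (not from the exercise material): the discount computation has been moved onto the class that owns the data it uses, without changing what the program computes:

    #include <iostream>

    // Before the refactoring, Invoice inspected Customer's data
    // to compute a discount. After (shown here), the computation
    // lives on Customer, so Invoice no longer depends on how
    // discounts are represented.
    class Customer {
        int loyaltyYears_;
    public:
        explicit Customer(int years) : loyaltyYears_(years) {}
        // Moved method: the discount logic sits with its data.
        double discount() const { return loyaltyYears_ >= 5 ? 0.10 : 0.0; }
    };

    class Invoice {
        double amount_;
    public:
        explicit Invoice(double amount) : amount_(amount) {}
        double total(const Customer& c) const {
            return amount_ * (1.0 - c.discount());  // delegate, don't inspect
        }
    };

    int main() {
        Customer loyal(7);
        Invoice bill(200.0);
        std::cout << bill.total(loyal) << std::endl;  // prints 180
        return 0;
    }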
Tools and Techniques
These lectures present tools and techniques that are valuable when developing software for high energy physics. We discuss how to work more efficiently while still creating a high quality product that your colleagues will be happy with. The exercises provide practice with each of the tools and techniques presented, and culminate in a small project.
Lectures:
TM.3.1/L: Tools
TM.3.2/L: Techniques
Exercises (6 hrs):
TM.3.1/E
TM.3.2/E
TM.3.3/E
TM.3.4/E
TM.3.5/E
TM.3.6/E: Project combining all three parts of the track
M. Delfino | CERN, Geneva | |
R. Edgecock | RAL, Didcot | |
F. Etienne | CPPM, Marseille | |
F. Flückiger | CERN, Geneva | (Deputy School Director) |
J. Franco-Turner | CERN, Geneva | (School Administrator) |
F. Gagliardi | CERN, Geneva | (School Director) |
R. Jacobsen | University of California, Berkeley | |
R. Jones | CERN, Geneva | |
J. Marco de Lucas | IFCA, Santander | |
P. Martucci | CERN, Geneva | (Technical Manager) |
P. McBride | FNAL, Batavia | |
F. Ruggieri | INFN-CNAF, Bologna | (Chairman) |
M. Montanino | IMCB-CNR, Naples | |
Sergio ANDREOZZI
I am currently working at CNAF-INFN (Bologna, Italy) on a Grid computing project named DataTAG, in which I am focusing on advanced resource discovery mechanisms using web services technologies. This topic is a key research area for the improvement of both resource usage efficiency and loosely coupled interworking among different Grids. I am also a PhD student at the Department of Computer Science, University of Bologna; my research activity is closely related to my work at CNAF. My teaching activity is going to be in the field of databases. In the past I have written code in C/C++, PHP and SQL. The operating systems I am most familiar with are Linux and Windows. I am also skilled in networking, especially network QoS (e.g. DiffServ).
Maite BARROSO LOPEZ
I work for the DataGrid project as a member of the Fabric Management Work Package (WP4). I am the WP4 representative in the DataGrid Integration Team, which implies packaging all WP4 software and testing and integrating it in the development testbed before every official delivery; the programming languages used are C, C++, Perl, Java and shell script. I also work on the WP4 Installation and Configuration Management tasks, developing the global configuration schema to express and structure the node and fabric configuration information using the High Level Configuration Description Language (HLDL). HLDL is a language for expressing configurations of systems. The description resides in the Configuration Database (CDB), where it is validated and transformed into a Low Level configuration Description (LLD) or some other form of description that is used to configure the managed systems.
Andrea BELLI
My current work involves the development of a distributed search and categorization engine that will enable just-in-time, flexible allocation of data and computational resources. The idea is to make petabytes of information, distributed over a vast number of geographically distant locations, highly accessible. Such a search and categorization engine will operate without a central database; instead it will index documents locally in each Grid node using whatever computational resources are available in the vicinity of the storage, allowing querying on demand from any other node in the Grid. I have expertise in Java programming, and I am interested in the JXTA community.
Rüdiger BERLICH
ruediger@ep1.ruhr-uni-bochum.de
My current work involves the parallelisation of evolutionary algorithms. The implementation of my library is general enough to allow its deployment beyond the boundaries of particle physics and across systems ranging from SMP machines through clusters of workstations to Grid environments. I have programming experience in C++, C and FORTRAN. My operating system of choice is Linux, which I have used since 1992. Apart from scientific work done at Crystal Barrel and ALEPH (both CERN) as well as BaBar (SLAC), I have gained industry experience through my employment at SuSE Linux AG, where I have held both technical and management positions in the US, the UK and Germany.
Michal BLUJ
My present work covers two areas: offline physics analysis and computing support for high-energy physics. Specifically, I have developed the physics analysis to search for non-standard Higgs boson decays into multi-b-quark final states in data collected by the DELPHI detector at LEP. I am now starting Monte Carlo studies of the capability of finding a non-standard Higgs boson at the future CMS experiment at the LHC. My computing activity is within the EU CrossGrid project, where I am involved in the creation of the Warsaw testbed and in the development of a physics application running in a distributed Grid environment. The application our (Warsaw) group is working on will be an offline application for the LHC Physics TDR. I have written code mainly in Fortran77 and, more rarely, in C and C++. The operating system I am most familiar with is UNIX (Linux).
Tomasz BOLD
I am involved in building a new component for the luminosity measurement of the ZEUS detector at HERA. For this purpose I have developed several systems that cooperate with the detector electronics and the main ZEUS DAQ. In particular, I prepared the system that controls the processes responsible for data taking, and I developed software for the interactive and automated control of the electronics and for the online reconstruction of beam parameters. My main task is integration with the central ZEUS DAQ system. Development is based on knowledge of C/C++/FORTRAN/PAW/DDL/CERNLIB/Qt/ROOT. I use CVS extensively for source management, and Linux, UNIX and LynxOS as development platforms.
Daniele BONACORSI
I am currently involved in the experimental work of the Bologna CMS group. I collaborate in the computing activities concerning the management of the CMS Bologna computing farm, a Linux cluster of dual-processor PIII machines equipped with a disk server in RAID 5. This farm has been continuously improved over time, and so far it has been used intensively for Monte Carlo production of simulated events within the official CMS production framework. I am responsible for the development of the Drift Tubes simulation and reconstruction software (C++) for the Muon Trigger of the CMS experiment, within the ORCA project (Object-oriented Reconstruction for CMS Analysis). I am also involved in EU-DataGRID activities, performing tests of the EDG software on the CMS DataTAG testbed at CNAF (Italy); in particular, I am developing and testing a framework which performs the ORCA production and analysis steps in a Grid environment. I have written code in C++, Perl and scripting languages, Tcl/Tk and Fortran. The operating system I am most familiar with is UNIX (especially Linux).
Diana BOSIO
I have just started working at CERN, in the database group of the IT division. I am currently involved in the Grid, collaborating in both the European DataGrid and the LHC Computing Grid projects. More specifically, I am involved in the WP2 activity, working on data management. I will also provide a contact point for the experiments, giving support for the installation procedure of the modules developed by WP2 and receiving their comments and requirements. The computers I am familiar with are PCs running Linux or Windows, Suns running Solaris and Macs running OS 9. The programming languages I am familiar with include C, FORTRAN, Mathematica and Maple.
Hugo CAÇOTE
My current work involves the development of a scalable and flexible system responsible for the supervision (monitoring and control) of CERN's Computer Centre equipment and services. The project employs a commercial SCADA system in the Management Station, to which the status information collected from the nodes is sent. In the Management Station the data can be subjected to analysis (trending, correlation, ...) and actions can be launched on the monitored nodes. The first prototype of the project monitors 1000 nodes of the Computer Centre (100 parameters on each node). I am specifically involved in the study and implementation of the communication protocol between the nodes and the SCADA system. Future work will involve the development of an SNMP driver, automatic configuration of the nodes through the supervision system and the use of correlation engines on the local nodes. I have experience in writing code in C, C++, Java, CORBA and Perl, and the operating systems I am most familiar with are UNIX and Linux.
Arnaud CAMARD
My work for the time being consists of the study of muons produced in the ATLAS electromagnetic calorimeter testbeam. In particular, I study the uniformity of the final barrel modules. I also study the TTCrx chip that sends the LHC clock and level-1 trigger to all electronics channels; I use LabVIEW and C programming to send commands to and receive data from an oscilloscope (via the GPIB bus). I should now start a physics study of long-lived supersymmetric particles that can only be studied with the EM calorimeter (they are difficult to see in central tracking detectors and decay before reaching the muon systems). To carry out this study, I need to learn more C++ programming in order to use ATHENA, the standard framework of the ATLAS collaboration. In my current work I program mostly in C, with LabVIEW for electronics applications and PAW to finalize physics studies. I also know the basics of shell programming, Perl and C++, and use almost exclusively the Linux operating system.
Sylvain CHAPELAND
My current work is part of the European DataGrid project, in the fabric management work package. For one year I have been developing a monitoring system scalable to several thousands of nodes and able to deliver performance and exception monitoring data. On each node sensors are running, and a collector transmits their output to a central place, where measurements are stored in a database. Correlation of metrics allows automatic actions to be taken on failure. Two prototypes were deployed on CERN Linux nodes, and the results from these tests were used to design the final system now being implemented. The code is written in C and C++, with some Tcl/Tk scripts to generate HTML pages with plots, updated regularly or on CGI queries. The latest prototype is deployed on the data challenge cluster and the DataGrid testbed, and is used as a base for CERN IT batch and public services monitoring (totalling over 1000 nodes). I am familiar with Unix and Windows systems, and program mainly in C, C++, Tcl/Tk and Visual Basic.
Dinker CHARAK
I am involved in online computing development and support for experiments (CDF, D0, CMS, etc.) at Fermilab. This involves real-time OS support (mainly VxWorks 5.4 kernels) and real-time application support for these kernels. I am the primary contact for handling support issues for VxWorks 5.4 on various targets, FISION (used for VME bus access), etc. I am also involved in software development for the new data acquisition system/triggering framework for the D0 experiment at Fermilab, and in porting C++ online software from Windows NT/VC.NET to Linux/g++/KCC using ACE interfaces for maximal platform independence. I have written code in C/C++.
Andrea CHIERICI
I am currently involved in the DataGrid and INFN-GRID projects, as well as in the Italian Tier-1 project. My main activity is farming and testing for DataGrid, following the development of a tool that will be able to install, configure and maintain a workstation, from its connection to the network through to its use as a computation resource. The operating systems I know are Solaris, Linux (I am a Red Hat Certified Engineer) and Win9x/NT/2000. I have written code in C and Perl and can program shell scripts.
Paula CHIN KEE FIGUEIREDO
At present I am working as a Fellow in the CERN IT division, participating in the development of a data management tool. The IT division provides the computing infrastructure for the laboratory. The system that manages and stores data concerning IT computing services and their users' accounts is CCDB (Computer Centre Data Base). This Oracle-based system has been in production since 1987 and includes a data repository and interfaces to manage the stored data; it manages 27 services with a total of more than 89,000 accounts. A re-design and re-implementation of CCDB and its dependent tools is ongoing. The objective is to re-design the system according to the current requirements of IT services, using database features and more modern technologies. I have been participating in a working group to define the requirements for the new system, and I will be one of the developers and maintainers of the new system. Understanding the current functionality and database design is essential both for maintenance and for the design of the next system. A reverse engineering of the database was carried out in order to obtain the database design, and the same exercise is ongoing for the existing Oracle Forms.
Karim CHOUIKH
My current work on the CASTOR project (CERN Advanced Storage Manager) is to extend the CASTOR system monitoring currently being implemented to include history functions: storage of system performance parameters in a database for the production of historical reports via the web. I also assist in developing, running and understanding the results of test suites for the LHC mock data challenges on specific hardware configurations.
Benjamin COUTURIER
I am currently part of the CASTOR team in the IT division at CERN. CASTOR is the Mass Storage System used to store the physics data from the experiments on tape. I am working on a monitoring application for the system, developed in C (for the daemon) and Python (for the client GUI application), that runs on several platforms. I will also be involved in new developments for CASTOR (bug fixes or new features such as the proposed CASTOR File system), as well as in support for users from the physics community. In previous jobs I have also done object-oriented software design (using UML) and development using Java and C++ on Linux, Solaris and Windows.
Stephen DALLISON
I have recently completed research for my Ph.D. on the OPAL experiment and have now taken up an appointment at Manchester University working on the core networking services required for Grid operations. This involves the use of traffic engineering techniques to configure managed bandwidth and quality of service between Grid sites, which will facilitate high-performance data transport mechanisms including advanced TCP and non-TCP applications. I am proficient at writing code in Fortran and have developed a number of programs using C and C++; I am also familiar with HTML and Perl. I am experienced in using the UNIX/Linux and OS9 operating systems as well as Windows-based software.
Péter Pal EMBER
Since 1998 I have been working on my Ph.D. on the topic of Prompt Gamma Activation Analysis. The major part of the thesis consists of the development and testing of a multidetector coincidence measuring method. My current work involves the creation of simple and user-friendly software that is able to extract the necessary information from the large amount of data produced by the detectors. The program is written in C++ and is planned to contain a command-line part for batch processing and a separate user interface with OpenGL graphics to visualize the spectra and the coincidence gates. The command-line part should work under both Windows and Unix, while the graphical part is for Windows only. It is planned to merge the program with the online data collection and spectrum analysis programs we already have. I am also familiar with Java/JSP and Pascal.
Gareth FAIREY
My current work is with
DataGrid, where I am working on network monitoring, particularly in trying to
understand the behavior of Gigabit Ethernet traffic, as well as setting up the
necessary infrastructure (both hardware and software) for such tests. This has
involved running performance tests with various NICs as well as debugging and
further developing Perl wrapper scripts for them. Aside from that, I've worked
on configuring appropriate firewalls on the PCs involved. The operating system
I am most familiar with is Linux where I am working on understanding the
networking code at the moment. I have also begun using NetBSD and learning
Python.
Alessandra FANFANI
I am currently working within the CMS Collaboration. I am involved in the Monte Carlo production of large amounts of simulated events and in the analysis of high-level trigger performance in muon events. The production chain of simulated events in CMS involves both traditional (Fortran, GEANT3-based) and distributed object-oriented (C++, Objectivity/DB) techniques. I am also working within the EU-DataGrid project, performing tests of the EDG middleware in a CMS scenario, focusing on problems related to the production of simulated data and their analysis in a distributed environment. I have written code in C++, Fortran, Perl and sh/csh. The operating system I am most familiar with is Unix (Linux).
Alessandro FARNESI
At present I am involved in the GRACE project, funded by the European Commission within the 5th Framework Programme for Research and Technological Development of the European Union; CERN is one of the partners in this project. The project proposes the development of a distributed search and categorization engine that will enable just-in-time, flexible allocation of data and computational resources, and aims at making terabytes of information distributed over a vast number of geographically distant locations highly accessible, thanks to a decentralized search and categorization engine built on top of Grid technology. Such an engine will operate without a central database; instead it will index documents locally in each Grid node using whatever computational resources are available in the vicinity of the storage. The resulting index will also be stored locally and will allow querying on demand from any other node in the Grid. The programming languages I know are Fortran and Pascal; operating systems: UNIX (Linux), Windows and Mac OS.
Joao FERNANDES
I am an electrical engineer working with the Integration/Control Group of the CMS Tracker. The group's responsibilities include the integration of all the electronic components of the Tracker as well as the implementation of the front-end and slow controls. I am working on the integration of the Tracker's Detector Control Unit (DCU). The rad-hard DCU will monitor 100,000 current channels, the same number of voltage channels and 30,000 temperature channels. I am developing the software that will define the detector's monitored temperature states as a function of the DCU data; the state will be transmitted to the Detector Control System (DCS). In my work I use C++, Java and the XML protocols (SOAP) for message transmission, under Linux platforms.
Enrico FERRO
Currently I am mainly working for the DataGRID project in the Installation Task of the Fabric Management Work Package (WP4) and in the testbed (WP6). The Installation Task is providing tools for the automatic system administration of farms with thousands of hosts, while for the testbed I am working to automate the basic configuration of the different kinds of nodes required by DataGRID. I have some experience with Pascal, C, C++, Java, SQL, Perl and shell script. The operating systems I am most familiar with are Linux and Windows.
Joachim FLAMMER
I am a member of the CERN Trigger/DAQ group of the ATLAS experiment and am working in the online software group. The online software encompasses the software to configure, control and monitor the Trigger/DAQ, but excludes the processing and transportation of physics data. My current work involves code development and system testing for the Integrated Graphical User Interface (IGUI) in Java and for the Run Control package in C++. The purpose of the IGUI is to provide a single GUI for end-users to monitor and control the overall DAQ system. The run control system controls the data-taking activities by coordinating the operation of the DAQ sub-systems, back-end software components and external systems. I have been using C, C++, FORTRAN and Java to develop code, and the operating systems UNIX (Solaris, Linux, SGI) and Windows, being most familiar with Linux.
Giovanni FRANZONI
I am currently a PhD student in high-energy physics at Milan Bicocca University. I am involved in the research programme of the group headed by Prof. Antonino Pullia, presently focused on building the electromagnetic calorimeter (ECAL) of the CMS experiment. I am part of the group taking care of the ECAL simulation for the physics studies needed to design a combined test of the electromagnetic calorimeter and the tracker. At the moment I am studying previously generated MC events before moving on to the new simulation of the test setup. I will participate in the studies for the calibration of the ECAL, and I am also involved in the design of the CMS ECAL cooling. I know the Windows environment and the Linux OS, and since my thesis work I have been familiar with ROOT and C++. In the near future I will be involved in building a small farm of PCs in the Physics Department of Milano Bicocca.
David FRONT
My current assignment in the CERN IT department is to test and implement the replacement of the existing flat-file system of the LCG fabric monitoring system with one (or two) relational database(s). The main challenge is to design a scalable solution for up to 10,000 nodes, with reasonable performance, supporting heavy queries. This assignment involves Oracle, MySQL, SQL, PL/SQL, C, Perl and Linux. Additionally, I have much experience in the design of various management applications for telecommunication systems, both on the manager and on the agent side. Main technologies: ADSL, fibre optics, fixed wireless, routers, SDH, SONET, IP. Main protocols: SNMP, DOCSIS, proprietary protocols. Additional operating systems: Windows (NT), pSOS. Additional languages: Java, Tcl/Tk, assembler, UML, REXX.
Marek GARBACZ
I graduated in 1999; the subject of my master's thesis was "Analysis and Implementation of Chosen Parallel Programming Paradigms". I programmed using the MPI library on an HP Convex Exemplar machine, as well as writing parallel programs using the HP C compiler and thread library. Apart from that, I experimented with the shmem library for Fortran and C on an SGI Origin machine, trying to implement the functionality of this library on top of the newly developed communication library FCI (Fast Communication Interface, developed by SIS Zurich). My current work (within the CrossGrid project) includes implementing software engineering standards: standard operational procedures to be used within the project, the project repository structure and access rules, the requirements-gathering policy, design tools, etc. Apart from that, I am involved in designing the architecture of the software to be developed within the project. The programming languages I am familiar with are C, C++, Perl, Tcl/Tk and Java.
Ruben Domingo GASPAR APARICIO
I am working within the mail service (12,000 users, a cluster of 14 Unix machines, 2,800 distribution lists, a Usenet node). I am co-responsible for the maintenance of the system, with 24-hour call-out; a developer of new features for the web interfaces (servlets and Perl CGI); and responsible for the maintenance of the distribution list infrastructure (monarch, Majordomo, LDAP), routing issues (sendmail, DNS configuration), storage issues (IMAP servers) and News configuration (INN). The tasks are mainly carried out using Perl, Java (servlets), JavaScript and shell scripting (normally ksh). The operating system is mainly Solaris, but we also have an OSF/1 machine. I am taking part in the Exchange project, whose aim is to migrate the current system to the Windows Exchange platform (W2000). For the time being I am implementing an SMTP routing sink using VC++ and the High Energy Physics LDAP directory. These tasks are mainly carried out using VBScript, VC++, VB and new .NET technologies (C#) on W2K.
Elena GIANOLIO
My current work involves all aspects of system administration, including security and networking. I am in charge of both hardware and software maintenance of all our 100 computers, and of help and support to the division members and visitors. I am also involved in the development of programs based on distributed databases, and I have to develop programs that manipulate data from the HR database. I am the first contact for any problem concerning the computers in the TH division, and the webmaster of our web server. I have been involved in the maintenance of several programs in C and FORTRAN and in scripting languages like Perl, c-shell, bash and t-shell. I have knowledge of several operating systems such as Unix, Linux and NICE. I am very interested in the evolution of the Grid project and in the application of new technologies in high-energy experiments; obviously I am also very interested in the security section and in the tools and methods part of the School.
Pietro GOVONI
I am a PhD student at Milano Bicocca University. The group in which I carry out my research activity, headed by Prof. A. Pullia, is involved in the electromagnetic calorimeter of the CMS experiment. Presently I am studying, by Monte Carlo simulation, the combined use of the tracker with the calorimeter; the aim of this study is to assess the feasibility of a combined testbeam to measure, with tracker elements upstream of a calorimeter module, the combined response inside a high magnetic field. Meanwhile, I am also involved in the design of the cooling system, with the aim of keeping the crystal temperature stable at the 0.1 °C level. I have a good knowledge of both the FORTRAN and C++ programming languages and of the Linux/Unix operating system. I am also familiar with the ROOT package for data analysis.
Mario Rosario GUARRACINO
I am currently working on two projects funded by the Italian National Research Council and aimed at the development of Grid computing applications to process, manage and access PET/SPECT images. Such applications will provide medical doctors with a portal from which they can submit jobs, check job status, and view and analyse processed images. The second project, which has just started, is concerned with the integration of existing software for business and science Grid applications. I have experience of shell scripting and of Fortran and C programming in various Unix-like environments, and a basic knowledge of OO programming.
Marcus HARDT
At Forschungszentrum Karlsruhe (FZK) I am a scientific employee in the department for Grid Computing and e-Science, and I am currently involved in two Grid projects. 1) In the LHC Computing Grid (LCG) project, our group is exploring cluster management methods and storage technologies that help in setting up the German Tier-1 centre for the LHC; my task is to configure a SourceForge service used as a collaborative software development platform. 2) CrossGrid focuses on interactive Grid applications such as weather forecasting, medical simulations and flood prediction systems, as well as high-energy physics data-mining applications; the CrossGrid architecture is based on the DataGrid infrastructure. My tasks in this project are the deployment of the local testbed, integration and coordination with DataGrid releases, integration and packaging of CrossGrid releases, and assuring platform independence. My programming skills are concentrated in C, C++, shell script (bash) and Perl, with which I am familiar under Linux, Free-/OpenBSD, OS/2 and DOS (as available).
Nils-Joar HOIMYR
My work at CERN consists of the distribution, support and integration of computing applications for engineering. Over the last few years I have been responsible for the IT part of the Engineering Data Management System (EDMS) service, which has involved the installation, customisation and running of a client-server information system (CADIM) and the EDMS application servers. A major part of the work has been to interface EDMS with other tools, such as the electronic design application Cadence and file conversion utilities. I have also taken part in the development of a web interface to EDMS. For systems integration I mainly use scripting languages (Perl, shell scripts) and APIs (ECI, LogiView, Skill, SQL), but I have also written code in Fortran, LISP and Tcl/Tk. I have a good knowledge of the Unix (OSF1, Irix) and Linux operating systems and of web servers (Apache). I also have some experience with Windows.
Jelena ILIC
The project I am currently
involved in includes the writing, testing and debugging of C++ code for the
online and offline data analysis in the barrel of the CMS ECAL detector. In
particular, I am developing the algorithm for photon-conversion reconstruction (in the
CMS Tracker), as well as an algorithm for Pi0 rejection in the barrel of the CMS ECAL
in the presence of tracker material.
I have written many applications and scripts in Pascal, C, C++, Fortran
and Matlab. The operating systems I am most familiar with are Unix (Linux) and
Windows.
Saima IQBAL
I am working on the evaluation
of Oracle9i as an ORDBMS (Object Relational Database Management System) for
CMS and the LHC. The object types of Oracle9i map closely to the class mechanisms
found in C++ and Java, so, using this facility of Oracle, I worked with my
group to write PL/SQL wrappers for existing C++ classes, already in use
by CMS, to interact with event data in an Oracle database. I am involved in
creating simple objects, embedded objects and objects with REFs, in loading the
object data into the Oracle database, and in measuring the associated overheads.
I have also developed an Oracle9i-based architecture to store petabytes of CMS
data, and published a CMS internal note on it. This architecture proposes the
use of Oracle Data Warehouse technology for the CMS event store, based on a
star schema representing the multidimensional data model. Recently I wrote Java
classes to load and access the event (tag) data from the Oracle database; the first
prototype has been checked successfully, and I am now trying to deploy it in a Grid
environment. This web service is written in JSP (Java Server Pages), which
creates servlets automatically. Before selecting JSP for the web service, I made a
comparative study of web services written with applets, servlets, JavaScript,
Perl CGI and FrontPage, writing code and building simple prototypes with each.
The operating system I use and am most familiar with is Unix (Linux). I also
have experience of programming in C++.
Antonio Jose JIMENO YEPES
My work is related to the
development and maintenance of the LHC Web Site. In addition, I am working on
the lifecycle of the Layout Reference Database for the LHC, providing
services such as tree navigation through the layout of installable items and search
facilities. I am also working on the creation of a catalogue of elements for
the LHC that will be used as the reference for several systems, such as the DMU,
the layout database and the electrical circuits database. I am working as well on the electrical
circuits database (input/output of circuit definitions, …), and I am developing a
knowledge-based search mechanism for the EDMS. I have written code in C, C++, Java,
JavaScript, Visual Basic, VBA script, Lisp, Prolog, FORTRAN, 80x86 and
68000 assembly, SQL, PL/SQL and HTML. I am familiar with several operating systems, including
Windows 9x, NT, 2000 and XP, and UNIX (AIX, SunOS, and the Mandrake, SuSE and Red
Hat Linux distributions).
Emil KNEZO
Just recently I joined the
CASTOR (CERN Advanced Storage Manager) development team at CERN, where I have started
to work on the development of software interfaces between EU DataGrid
middleware and CASTOR. For the last three years
my work was connected with the design of the ATLAS Second Level Trigger based
on Ethernet technology and with the building of the trigger testbed. I designed and
implemented an SMP version of a user-level thread scheduler and messaging
system (SMP-MESH). SMP-MESH delivers 88% of the Gigabit Ethernet
throughput to the application layer with only a 5% communication CPU load; this
was achieved using interrupt-less, zero-copy communication. Further, I wrote a
software package for testing and debugging the FPGA Read-Out Buffer (ROB)
emulators, which will be used to increase the trigger testbed size
up to 10% of the final system. My
favorite operating systems are Linux (x86), Solaris and HP-UX. I have written
code in C, C++, Java, x86 assembler, Fortran, Haskell, Bash and Perl.
Tapio LAMPEN
I work at the Helsinki Institute of
Physics in a team devoted to software and physics issues of the CMS experiment.
Our institute also participates actively in the EU DataGrid and NorduGrid (the
Nordic Testbed for Wide Area Computing) projects, as well as in distributed
computing in the CMS experiment. My
work consists of developing a detector alignment algorithm based on
reconstructed tracks. The algorithm is intended to be used in the ultimate
position and orientation calibration of the CMS Tracker, as part of the ORCA
software. It uses the natural smoothness of reconstructed particle
trajectories as a constraint to find corrections to the positions and orientations of
the sensors. I have several years of
experience of working with the CMS software (written in C, C++ and Fortran) and with
different Unix-like operating systems.
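The principle can be illustrated with a deliberately simplified sketch, not the ORCA implementation: one measured coordinate, straight-line tracks as the "smooth" model, and the mean residual per sensor fed back as an alignment correction. The one-dimensional geometry and all names below are invented for this example.

// Illustrative only: one iteration of track-based sensor alignment.
// A smooth model (here a straight line x = a + b*z) is fitted to each
// track's corrected hits; the mean residual per sensor then becomes
// that sensor's position correction. Iterate until the shifts converge.
#include <cstddef>
#include <vector>

struct Hit { int sensorId; double z; double xMeasured; };
using Track = std::vector<Hit>;

void alignSensors(const std::vector<Track>& tracks,
                  std::vector<double>& xShift)   // correction per sensor
{
    std::vector<double> sum(xShift.size(), 0.0);
    std::vector<int>    n(xShift.size(), 0);

    for (const Track& t : tracks) {
        if (t.size() < 2) continue;             // need two points for a line
        // Least-squares straight-line fit to the corrected hit positions.
        double sz = 0, sx = 0, szz = 0, szx = 0;
        for (const Hit& h : t) {
            const double x = h.xMeasured - xShift[h.sensorId];
            sz += h.z; sx += x; szz += h.z * h.z; szx += h.z * x;
        }
        const double N = static_cast<double>(t.size());
        const double b = (N * szx - sz * sx) / (N * szz - sz * sz);
        const double a = (sx - b * sz) / N;

        // Residuals of each hit with respect to the smooth trajectory.
        for (const Hit& h : t) {
            const double res = (h.xMeasured - xShift[h.sensorId]) - (a + b * h.z);
            sum[h.sensorId] += res;
            ++n[h.sensorId];
        }
    }
    for (std::size_t s = 0; s < xShift.size(); ++s)
        if (n[s] > 0) xShift[s] += sum[s] / n[s];  // update the corrections
}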
Dietrich LIKO
In my current work I have taken
responsibility for the Run Control of the ATLAS Online Software. In the ATLAS
T/DAQ project, the Online Software is
responsible for the control of the Data Acquisition System. The Run Control is mainly concerned with the
synchronisation of the various detector readout components and T/DAQ subsystems.
It uses a state-machine model to abstract the diverse nature of the underlying
systems. The Run Control is also concerned with the supervision of
data taking by an expert system (CLIPS). I am involved in the maintenance of the
current programs, and I am participating in the requirements collection and
design that should lead to the final system. My experience includes offline
(Fortran, C++) and online programming (C, C++, Java), mainly
with the Unix and VMS operating systems.
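Purely as an illustration of the state-machine idea (the states, commands and transitions below are a generic example invented here, not the actual ATLAS Run Control definitions):

// A minimal run-control state machine: every subsystem is driven through
// the same abstract states, so the controller can synchronise components
// without knowing their internals. Illegal transitions are rejected.
#include <map>
#include <stdexcept>
#include <utility>

enum class State   { Initial, Configured, Running };
enum class Command { Configure, Start, Stop, Unconfigure };

class RunControl {
    State state_ = State::Initial;
    // Allowed transitions: (current state, command) -> next state.
    const std::map<std::pair<State, Command>, State> table_ = {
        {{State::Initial,    Command::Configure},   State::Configured},
        {{State::Configured, Command::Start},       State::Running},
        {{State::Running,    Command::Stop},        State::Configured},
        {{State::Configured, Command::Unconfigure}, State::Initial},
    };
public:
    void handle(Command cmd) {
        const auto it = table_.find({state_, cmd});
        if (it == table_.end())
            throw std::runtime_error("command not allowed in current state");
        state_ = it->second;  // a real system would now drive the subsystems
    }
    State state() const { return state_; }
};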
Giuseppe LO RE
I am involved in the Off-line
Group of the ALICE experiment. My work concerns the implementation of the
reconstruction code for the LHC Pb-Pb collisions, in particular for the
reconstruction of primary and secondary vertices. I carry out this activity
using the ALICE Object Oriented framework, AliRoot, which is based on ROOT and written
in C++, except for some pre-existing packages like GEANT and Pythia. Recently, I have begun to collaborate with
the ALICE-EDG Test Group, testing the EDG tools with the ALICE applications and
writing some ALICE JDL jobs. The
operating system and the language I am most familiar with are, respectively, Linux and
C++.
Nina LOKTIONOVA
I got my background and PhD
in computational modelling for High Energy Physics, but now my main field of
interest is computing. I have three years of experience as a Linux expert in the
H1 collaboration at DESY. My responsibilities include both hardware and
software support of about 180 desktop PCs. I also deal with Solaris, IRIX and
Windows NT, and I have been involved in the installation of VME boards. The
programming languages I am familiar with are Fortran, Perl and shell, and I
have a basic knowledge of C/C++.
Ernesto LÓPEZ
My current work involves the
development, testing and debugging of the simulation of silicon drift detector
performance inside the AliRoot package. AliRoot is the ALICE Off-line framework
for simulation, reconstruction and analysis; it uses the ROOT system as a
foundation on which the framework and all applications are built. I am also involved in an INFN project to port
the CALMA software (Computer Assisted Library in MAmmography) from C/Tcl/Tk to
C++/ROOT/PROOF, to be used in a GRID system.
I have written code in FORTRAN, C and C++, with several years of experience
in object-oriented programming. The operating systems I am familiar with are
Windows and Linux.
Anna MASTROBERARDINO
My current work involves data
analysis of electron-proton scattering events
in the ZEUS experiment. I'm responsible for the measurement of the
diffractive structure function of the
proton and of the azimuthal angle between the electron and proton scattering planes in the deep inelastic scattering
regime. More recently I was involved
in the project of the new silicon microvertex
detector (MVD) for the ZEUS experiment. Within this project, I was
responsible for the development of the
slow-control software of the Low Voltage system, used to supply the HELIX chips chosen for the silicon readout
electronics. The Low Voltage (LV)
communication with the complex MVD readout chain (~225,000 channels) is handled by a CAN node equipped
with an on-board CAN-bus controller
(20CN592), itself ruled by the central system.
I wrote all the LV slow-control software in C using the IAR Systems ICC8051
Development Kit for the 8051 microcontroller
family: it consists of an ANSI-C cross-compiler, assembler, linker and librarian, all running on a PC under
MS-DOS. The software, developed according to CANopen, a high-level protocol
supported by CAN in Automation
(CiA), is able to control and monitor up to 8 ADCs (PCF8591 from Philips), each with up to 32 multiplexed analogue
inputs, and provides autonomous
self-monitoring of the system for status changes as well as hardware fault detection and recovery. I'm also responsible for a Monte Carlo
generator (Diffvm) simulating diffractive events in electron-proton scattering, which I updated and implemented in
the ZEUS software package. In charge of the Diffraction and Vector
Meson Data Quality Monitoring (DQM), I recently
developed software performing automatic checks, used for real-time
monitoring of the quality of the
data. I have experience of programming
in C/C++, Fortran and HTML.
The operating systems I'm familiar with are UNIX, Alpha/OSF, VAX (VMS),
and PC with MS-DOS.
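As a rough illustration of the self-monitoring loop described above (this is not the 8051 firmware; readAdc() and reportFault() are hypothetical stand-ins for the real CANopen I/O, and the limit tables are invented):

// Illustrative sketch: poll every multiplexed ADC input and flag any
// channel that has drifted outside its allowed window.
#include <cstdio>

const int kNumAdc    = 8;   // PCF8591-type ADCs on the CAN node
const int kNumInputs = 32;  // multiplexed analogue inputs per ADC

// Hypothetical stand-ins for the firmware's real hardware access.
unsigned char readAdc(int adc, int input) { (void)adc; (void)input; return 128; }
void reportFault(int adc, int input, unsigned char v)
{
    std::printf("fault: ADC %d input %d value %u\n",
                adc, input, static_cast<unsigned>(v));
}

void monitorCycle(const unsigned char low[], const unsigned char high[])
{
    for (int adc = 0; adc < kNumAdc; ++adc)
        for (int in = 0; in < kNumInputs; ++in) {
            const unsigned char v = readAdc(adc, in);
            const int idx = adc * kNumInputs + in;
            // Autonomous self-monitoring: any out-of-window reading
            // triggers fault handling and, eventually, recovery.
            if (v < low[idx] || v > high[idx])
                reportFault(adc, in, v);
        }
}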
Maria Santa MENNEA
For my Ph.D. thesis I am working
in the CMS collaboration. I have worked in the Data Management work package of
the European DataGrid project; in this context I developed the GNU Autotools
functionality for the GDMP package. My current work involves the development of
graphics objects to improve the CMS Tracker visualization. I write my programs
mostly in C++ and Java on Unix platforms.
Gonzalo MERINO
My current work is focused on
two quite different topics. On the one hand, I am working in the European DataGrid
(EDG) project. Inside EDG I am part of the Testbed work package, whose ultimate
responsibility is to deploy the EDG software in a number of sites across Europe so
that various applications can be run and tested on this distributed
environment. Most of my work inside the EDG Testbed up to now has been related
to installing and configuring at my local institute those "Grid
services" that have to be deployed in an EDG Testbed site: mainly
the so-called Storage Element, Computing Element, Worker Node and User
Interface. For the time being, all of these services are deployed on Red Hat
Linux boxes, which are installed and configured in an automated way using a
tool called LCFG. My main activity, therefore, has been devoted to testing all
these services and tools.
On the other hand, I also
recently started to work on the trigger of the ATLAS detector. In particular, our
group is collaborating in the third-level trigger, or "Event Filter",
which will be based on a PC farm of O(1000) nodes running the offline
reconstruction code based on the Athena framework. My work there, which has just
started, will essentially be C++ code development.
I have written code in Fortran,
C, C++ and Tcl/Tk. The operating system I am most familiar with is Linux.
Predrag MILENOVIC
I am currently involved in two
LHC projects. The first is the software and hardware design of the DCS
sub-systems for the CMS ECAL detector. My work includes the design and hardware
implementation of the system, the programming of PLC microprocessors in assembler,
and the building of applications for distributed system control and monitoring in PVSS
II. The second project includes the
development (writing, testing and debugging) of the C++ code for the online and
offline data analysis for the preshower of the CMS ECAL. In particular, I write the
algorithms for beam-position and energy reconstruction, as well as an
algorithm for Pi0 rejection in the ECAL endcap (in the presence of the Tracker)
using neural networks. I also have
experience in building and servicing computer networks and in building
client-server applications. So far, I have written code for applications
and scripts in Pascal, C, C++, assembler, Fortran, Matlab and Mathematica. The
operating systems I am most familiar with are Unix (Linux) and Windows.
Asif Jan MUHAMMAD
My current work involves
deploying the Grid tools and frameworks developed by various sources and testing
their efficiency in a production environment. This work supplements the DataGrid
effort, and our aim is to integrate the technologies
currently used for production into the Grid infrastructure and vice versa.
At the same time I am also involved in developing a monitoring tool/service
for monitoring systems and applications in a Grid-like environment. The
monitoring data from the various entities is used for simulations that help to
determine the workload and usage of
those entities. The tool aims to be very flexible and easily integrated with current technologies
and tools, while at the same time striving for cross-platform compatibility and using
next-generation technologies. I am familiar with Java, SQL, JavaScript, JSP,
Windows NT/2000 and Linux.
Nello NELLARI
Currently my main activities
focus on the evaluation, deployment,
development and enhancement of Grid products and tools. I'm involved
in different European projects, such as
EUROGRID with its HPC-Grid and Meteo-Grid
work packages. In particular, the Meteo-Grid goal is to provide an
on-demand numerical weather forecast
service on top of UNICORE. In this context
I'm responsible for designing and implementing a Java application
that controls the submission and
execution of the weather forecast codes
in UNICORE. Main programming skills: Java (certified programmer), C and C++
in Unix/Linux and MS-Windows
environments. OS experience with Unix (Solaris, IRIX, Super-UX, Linux) and MS-Windows.
Krzysztof NIENARTOWICZ
Krzysztof.Nienartowicz@cern.ch
Nowadays, I am a member of the
Physics Data Management activity within the Database Group at CERN. Our main focus
is to employ ORDBMSs (extended RDBMSs) as storage frameworks in fields
previously dominated by OODBMSs, hybrid systems or flat files. So far, the new OO
features of Oracle have been investigated and some benchmarks conducted. Recently,
I have been responsible for performing a comparative (functionality, speed)
analysis of database-based versus ROOT-based physics tags from the Aleph experiment (ALPHA++). I have
several years of experience in industry, mainly with middleware technologies
such as message-oriented middleware and message brokers (MQSeries, MQIntegrator),
and with global parallel data acquisition and processing using those technologies, along
with the integration of relational/mainframe databases. In addition to relational
DBs (Sybase, MSSQL), my main topics of interest during my studies were
experimental transactional mechanisms (semantic-based, temporal dependencies,
lockless) and OODBMSs. I am fluent in C++/Java/VB/STL/MFC/ATL, COM/SQL (depending
on the platform)/DAO/ADO/RDO/WinAPI/PVM/MPI/PCCTS etc., and used to use Smalltalk,
Prolog and assembler. I have worked on Unixes/NT. I am also responsible for contacts with
IBM as their DB2 "evangelist" at CERN.
Danila OLEYNIK
My current work involves
software development for cosmic tests of the Monitored Drift Tube (MDT) chambers
manufactured at LNP JINR for the ATLAS experiment. The setup includes a reference
module, a test module and a trigger system. The data acquisition system is based on PCs
running Linux, using the DATE and ROOT software. My work consists of data
monitoring, the reconstruction of cosmic muons in the reference module and the
determination of the test module's geometric parameters. I have written code in C (for the monitor)
and C++ (for ROOT representation and analysis). The operating systems I am most
familiar with are Unix (Linux) and Windows.
Gennaro OLIVA
I'm currently working on the
realisation of a prototype application for medical diagnosis based on PET/SPECT
images. It consists of a database containing the archive, a software system for
the reconstruction of PET/SPECT images that runs on a Beowulf cluster, and a web
portal. Medical doctors can access the archive through the portal, submit
PET/SPECT images to the reconstruction engine and store their diagnoses. The
project is written in Java and C and uses the Globus toolkit and its APIs.
In my life I have mostly written code in C and Fortran, and the operating
systems I am most familiar with are Unix, Windows and Mac OS.
Carlos OLIVEIRA
The work I'm developing concerns
the data exchange between the ATLAS offline software framework, called Athena, and
the configuration, calibration, alignment and robustness databases also used in
the TDAQ environment. The Athena common framework is the main structure of an
application into which algorithms can be plugged to provide common functionalities
for event reconstruction; users plug in their applications as specific C++-based
algorithms. Data objects are passed between the different algorithms through a
transient store, a temporary repository of information that reduces the coupling
between the algorithms. Several services with specific capabilities are available
to clients; these services take data objects from the
transient store and process them for the purposes of that service. The goal of
my work is to investigate the ConditionsDB service (ConditionDBSvc) and to transfer
the objects, each with an interval of validity, into the ConditionsDB. I am
investigating the commercial and open-source implementations of the ConditionsDB
made available by our group.
Stephen PAGE
I am working on the development
of a server system in C for hard real-time control of the power converters
within the LHC. The LHC will contain circa 100 instances of the server, controlling
around 1800 converters. The system runs on a PowerPC with the LynxOS real-time
operating system and forms a gateway between the CERN controls network
(Ethernet) and the field-bus to which the power converters are attached
(WorldFIP). The system will handle the routing of both synchronous and asynchronous
commands to the converters, as well as performing a number of system management
activities such as arbitrating the activity on the field-bus and updating the
software on the converters' embedded systems. A CORBA interface into the server
is also foreseen. I am familiar with the following programming languages
(amongst others): C, C++, Java, Perl and Unix shell programming (various). I work
primarily in a Unix (HP-UX / Linux) environment, and I am also fully familiar
with Microsoft Windows.
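The routing of synchronous versus asynchronous commands can be sketched as follows (purely illustrative, in C++ rather than the C of the real server; sendOnFieldBus(), busFree() and the data structures are invented stand-ins for the real WorldFIP access and bus arbitration):

// A gateway loop that forwards commands from the controls network to a
// field-bus: synchronous commands are forwarded at once and the caller
// waits for the reply; asynchronous ones are queued and sent whenever
// the bus arbitration allows.
#include <queue>

struct Command { int converterId; int payload; bool synchronous; };
struct Reply   { int converterId; int status; };

// Stand-ins for the field-bus access and its arbitration (invented here).
Reply sendOnFieldBus(const Command& c) { return Reply{c.converterId, 0}; }
bool  busFree() { return true; }

class Gateway {
    std::queue<Command> pending_;       // asynchronous commands wait here
public:
    Reply submit(const Command& c) {
        if (c.synchronous)
            return sendOnFieldBus(c);   // forward at once, return the reply
        pending_.push(c);               // defer until the bus is available
        return Reply{c.converterId, 0}; // immediate acknowledgement only
    }
    void poll() {                       // called periodically by the main loop
        while (!pending_.empty() && busFree()) {
            sendOnFieldBus(pending_.front());
            pending_.pop();
        }
    }
};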
Juan PELEGRIN
My work environment is defined
by the central Unix services run by the PS group, including system support for the
SUNDEV, PaRC, SLAP and PRINT spool server clusters, the LICMAN license
server cluster, and the AXCAD (Euclid) and Cadence engineering clusters. The
strategic OS for these services is
Solaris, but there are also some Linux and Tru64 systems. In this environment
I'm writing scripts in Perl and shell to improve the Solaris network installation server, and in a few
weeks I will start developing a graphical user interface for client setup. Another aim
is to design and implement an automated client
system follow-up to synchronise the information in the installation
server with the network registration
database. Finally, I may have the opportunity to participate in the DataGRID Solaris port.
Maria Valeria PEREZ REALE
My work as a PhD student at the
University of Bern involves participation in software development for the ATLAS
experiment at CERN, which is currently under construction. In ATLAS, Bern has
an active role in the PESA working group ("Physics Event Selection Algorithms")
and in the Data Collection group, both part of the ATLAS Trigger and Data
Acquisition (TDAQ) subsystem. PESA is responsible for the event selection in
TDAQ, with emphasis on the high level trigger (HLT); its goal is to produce a
software framework which allows trigger algorithms and steering to be run
within the new offline software framework (ATHENA). Within this framework I
will be involved in the development of the Event Filter (EF) algorithms needed for
performance studies of the current prototype. DataCollection is a subsystem of
the ATLAS TDAQ DataFlow system responsible for the movement of event data
inside the HLT, to the Event Filter and also to mass storage. Currently I
am developing a time-stamp library for instrumentation and performance
measurements in ATLAS TDAQ (C++). I
have written code in Fortran, C and C++, and I am very familiar with Unix operating
systems.
Valentino PIETROBON
My PhD course at the Department
of Computer Science of the University of Venice has been funded by INFN,
section of Padua, to follow a research programme in the field of Computational
and Data Grids. In particular, my current work involves the study of the
solutions currently adopted in the European DataGrid project for workload management and
job scheduling, and the proposal of innovative solutions aimed at improving the use
of resources and thus increasing the system throughput. I'm also involved in
testing the current DataGrid system at the INFN section of Padua, Italy. I have
written code in C, C++, Fortran and Prolog. The operating systems I am most
familiar with are Windows and Linux.
Piotr POZNANSKI
Currently I work for the Fabric
Management work package (WP4) of the European DataGrid project (EDG); I am a member of
the Configuration Management and Monitoring tasks.
In the Configuration task I have been involved in the development of all aspects
of the WP4 configuration system: the design of a new language
(called pan) to describe the high-level configuration of the systems; the design and
implementation of the compiler for the pan language; a low-level, XML-based description
language to transport the information to the fabric elements; a daemon,
with its access API, running on each fabric node to serve configuration
information to local applications; and the design of the central repository
storing the configuration information of the systems developed by WP4. For the Monitoring task, I have been involved in
work on the configuration issues of the monitoring system, and I will be
involved in the design and implementation of the main database storing the
data collected by the monitoring system.
I am familiar with OO software development technologies, the C++ and Java
programming languages, XML, and UNIX operating systems (mostly Linux).
Alberto PULVIRENTI
My current work deals with the
problem of track reconstruction in the ALICE ITS detector. The scope of the
work is to increase the reconstruction efficiency by also finding particle
tracks which are not recognisable with the existing TPC+ITS Kalman-filter
method, because of a low pt or a decay within the TPC (e.g. the charged kaons).
I'm developing a neural-network algorithm which implements a
Hopfield neural network model in order to attack this problem. The main idea is
the Denby-Peterson model, where each "neuron" represents a guess for
a segment between two consecutive points on a track, and the neural weights are
defined so as to select chains of such segments that produce good track
candidates. At present I'm studying what improvements are necessary to
adapt this algorithm to the confusion due to the high multiplicity which is
expected in a typical ALICE event.
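The neuron update at the heart of such a network can be sketched as follows (illustrative only, not the ALICE code: the mean-field update and all names are generic, and the weights w[i][j] would be chosen positive for nearly collinear, head-to-tail segment pairs and negative for segments competing for the same hit):

// Denby-Peterson style network: each neuron is a candidate segment between
// two hits; compatible segments excite each other, incompatible ones compete.
// Iterating the update lets the network settle on chains of good segments.
#include <cmath>
#include <cstddef>
#include <vector>

struct Neuron {
    int from, to;       // indices of the two hits the segment connects
    double activation;  // in (0,1); close to 1 means "part of a track"
};

void update(std::vector<Neuron>& neurons,
            const std::vector<std::vector<double>>& w, // pairwise weights
            double temperature)
{
    for (std::size_t i = 0; i < neurons.size(); ++i) {
        double input = 0.0;
        for (std::size_t j = 0; j < neurons.size(); ++j)
            if (j != i) input += w[i][j] * neurons[j].activation;
        // Mean-field (sigmoid) update; repeat until the network converges.
        neurons[i].activation = 0.5 * (1.0 + std::tanh(input / temperature));
    }
}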
Mario REALE
My current activity is focused on Work Package 8 (HEP Applications) of the European DataGrid (EDG) project, for which I am responsible for the testing of all software provided to the applications, and in particular to the high energy physics experiments. I have developed a set of tools to monitor the status of the EDG distributed testbed, and I contribute to the set-up of a set of installation and configuration testing suites. I belong to a small group of Experiment-Independent Persons which contributed to the end-user overview and checking of the middleware provided by EDG, and to the definition of the High Energy Physics Use Cases for the EDG project, in view of the creation of a Common Application Layer for interfacing the GRID and the High Energy Physics applications. I initially joined the EDG integration team during the initial deployment of the first EDG testbed (testbed-1) at CERN, where I contributed to the set-up and testing of the system on the EDG testbed cluster.
Nicola REALE
Currently I'm leading the
evolution and development of a distributed testing and monitoring system
for performance analysis. Several geographically distributed agents run tests
simulating end users and measuring key quantities; all the collected measurements
are transferred to a centralised system which allows, via the Web, several types of
navigation and analysis of the stored data. The product is currently being
used by Telecom Italia. I'm also the main architect of the whole solution, and the
designer and developer of the distributed agents and of part of the
centralised system. I have worked for some years on TINA (Telecommunication
Information Network Architecture) and distributed architectures. I'm proficient
in UML, C/C++, Visual Basic and CORBA, but sometimes I use Java, JavaScript,
JScript and VBScript. I studied Fortran, Lisp and Prolog too. I'm familiar with
Windows and Solaris platforms.
Elisabetta RONCHIERI
elisabetta.ronchieri@cnaf.infn.it
At the moment I am working on the DataGrid project, within the WP1 team. My main concern is packaging, i.e. starting from the structure of the code through to its delivery in rpm format. This kind of activity allows me to improve my knowledge of the autotools and my skills in making rpms. Additionally, I am currently studying GARA and DAGMan, for advance reservation and job dependencies respectively; these are two functions that will be added to the WP1 software by September 2002. I have experience of writing code in C, C++, Tcl/Tk, Fortran and assembler, and the operating systems I am most familiar with are Unix (RH 6.2, 7.1 and 7.2 Linux) and Windows.
Marta RUSPA
I am presently involved in the ZEUS experiment at the HERA collider and in the CMS experiment at the LHC collider. The former has been taking data for almost 10 years, and I am working on data analysis; the latter is under construction, and I participate in the construction of the muon chambers. In a class of events at the ep collider HERA, the proton emerges intact, or excited into a low-mass state, carrying most of the incident proton momentum, and is well separated in phase space from the closest energy deposit in the calorimeter. These events are commonly referred to as diffractive events and contribute almost 10% of the total DIS cross section. The selection of diffractive events in my analysis is done by asking for a proton in the ZEUS Leading Proton Spectrometer, which is placed along the beam line, downstream of the interaction point. The scattered electron (positron) is measured for virtualities of the exchanged photon down to 0.03 GeV², and the diffractive cross section is measured. Inside the CMS Collaboration, the Torino group is responsible for the assembly of part of the drift chambers for muon detection which will surround the central part of the CMS detector. A production line for the assembly of the chambers is being installed, and I am working on the software for its control and handling. I am quite familiar with Fortran and C programming under UNIX, with Visual Basic, with the programming of microprocessors, and with software for the statistical and graphical analysis of large amounts of data.
Scott RUTHERFORD
I am currently working on the
software for the CMS ECAL; specifically, it has been my responsibility to design
and implement an algorithm capable of performing intelligent selection and
readout of the ECAL crystals. The main
bulk of this has been carried out in C++, although I use Perl on a daily basis
as the glue for many applications. I have made a point of finding out as much
as possible about OO paradigms (including attending the Design Patterns course
run by CERN), and I try to implement these in my code. On an everyday basis I
work exclusively on Linux; however, when debugging I regularly turn to Solaris for a more sane and informative
environment. Before joining CMS, I was involved in the ALEPH experiment, so I
also have a good knowledge of FORTRAN, and a limited knowledge of VAX from
being Data Coordinator. Outside of the experiments I have also made it my
business to get a working knowledge of
Java and SQL (PostgreSQL and MySQL), and of course JavaScript / HTML /
CGI.
Pablo SAIZ
I am working on the development
of AliEn (Alice Environment). AliEn is a lightweight GRID framework developed by
the Alice Collaboration to satisfy its
needs for large-scale distributed computing. AliEn provides a virtual
file catalogue that allows transparent access to distributed datasets,
applicable to cases where the handling of a large number of files is required. At
the same time, AliEn is meant to provide an insulation layer for access to
other GRID implementations, and a stable
user and application interface for the community of Alice users during the expected lifetime of the
experiment. This project is written in
PERL. I am also familiar with C++, C, Java and FORTRAN. The operating systems I am most familiar with are UNIX (Linux)
and Windows.
Kilian SCHWARZ
My current work involves the
setup of the ALICE GRID software at GSI Darmstadt and Forschungszentrum Karlsruhe in Germany. This includes the
installation of the Globus software package at various sites, as well as the
installation and maintenance of the ALICE "GRID" software package
AliEn. The basic analysis software packages, like ROOT and AliRoot, are also installed
and serviced by me. During the ALICE productions (PPR and Data
Challenge) I am the primary contact
person for ALICE and GRID software at GSI and Karlsruhe. I contributed to the
development of an LDAP client for ROOT, which enables physicists to address, directly from AliRoot, the LDAP databases which are an integral part of the Globus MDS
service. Additionally, I am a member of the RDCCG-TAB group, which is
responsible for setting up the
technical infrastructure of the
Regional Data and Computing Center Germany, and I am working
together with EDG WP6. I have
written code in C, C++ and Fortran. The operating systems I am most familiar
with are Unix (Linux, AIX, DEC) and Windows.
Tadeusz SZYMOCHA
I finished my studies (in
Computing Physics) at the Jagiellonian University, Cracow, in 2001. My Master's Thesis
concerned studies of proton-induced
strangeness production in the nuclear medium at energies of 1-2 GeV, as well
as the non-mesonic decay of the Lambda hyperon in proton-induced reactions on heavy
targets. I wrote a common analysis programme for both experiments. Now I am
involved in the activities of the Cracow group of the ATLAS experiment (one of the detectors at the LHC, CERN). I am using
dedicated software called Atlfast for fast simulation of the physics of this experiment.
Concurrently I take part in the CROSSGRID project (High Energy Physics
Application task).
Lukas TOMASEK
For the last two years I have
been mainly working on the ATLAS Readout Driver (ROD) Test Stand software, which
is used for debugging and production testing of the ROD boards. I'm also
involved in work on the ROD hardware library for the ATLAS Pixel/SCT DAQ
system. Another of my responsibilities has been the measurement system used
for testing silicon sensors in our laboratory. I'm most familiar with C/C++.
Luca VACCAROSSA
I am currently involved in Work
Package 4 of the DataTAG project (Research and Technological Development for a
Transatlantic Grid), whose main goal is the interoperability, for the HEP
applications, between EU and US Grid services from DataGrid, GriPhyN and PPDG,
in collaboration with iVDGL. My work is
the evaluation, deployment and possible integration of existing Grid services
and tools; in particular I am involved
in the field of packaging and deployment tools (VDT/PACMAN, EDG/LCFG) and of
job scheduling, submission and monitoring services (DAGMan, Condor-G,
EDG/RB). The near-term plan is to
produce a distribution of the EDG (European DataGrid) software via Pacman, and
to produce a collection of requirements regarding the job scheduling,
submission and monitoring services. In
preparation for the interoperability tests between EDG and the US Grids, I have
collaborated in the setting up and deployment of the EDG testbed farm in Milan.
Before this activity I worked
for Oracle supporting services as a technical analyst, becoming experienced in
Oracle database administration and UNIX system management.
As an undergraduate student I
was a collaborator of the BaBar experiment, so I have worked within the BaBar
software framework, using C++ as the programming language. I have written code in C, C++, SQL and PL/SQL. The operating systems I am most familiar
with are UNIX (Solaris, AIX, Digital, HP-UX, Linux) and Windows NT/2000.
Jan VAN ELDIK
I am currently working in the
Computer Center Supervision project of
IT division. Specifically, I am prototyping and field-testing different
solutions for fabric monitoring, combining the EDG WP4 sensor agent and the
PVSS SCADA tool, and providing sensors to measure 100 different quantities on
1000 machines. I am also working on the configuration of the monitoring
infrastructure and on the display of alarms.
As a member of the Offline Computing Group of the DELPHI experiment, I
have been actively developing High Energy Physics analysis programs. These applications run on a multitude of UNIX
platforms (Linux, Digital Unix, HP-UX, AIX) and are mainly written in FORTRAN,
Perl and shell scripts.
Fritz VOLLMER
fritz.vollmer@physik.uni-muenchen.de
My current field of work is the
analysis of the W-boson mass and width with data taken by the OPAL detector at
LEP. Probability density functions are calculated for each event and convoluted
with a function describing the physical process, yielding a likelihood analysis.
To reduce the CPU time needed to calculate the probability density functions,
parallelisation methods are used in a master-slave environment on a local Linux
cluster. The LAM/MPI package is used
in an object-oriented way with the OOMPI package under a ROOT environment. I
have written code in C++, C, Perl and Fortran under a Linux system.
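A minimal sketch of such a per-event parallelisation, written here against the plain MPI C API rather than the OOMPI wrapper used in the actual analysis (the event count, the static round-robin distribution and eventLikelihood() are invented for illustration; a real master would deal out work dynamically to balance the load):

// Each rank computes the likelihood terms for its share of the events;
// rank 0 then collects the sum of all contributions.
#include <cstdio>
#include <mpi.h>

// Stand-in for the expensive per-event probability density calculation.
double eventLikelihood(int event) { return static_cast<double>(event); }

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int nEvents = 1000;
    double local = 0.0;
    for (int ev = rank; ev < nEvents; ev += size)  // round-robin share
        local += eventLikelihood(ev);

    double total = 0.0;  // rank 0 receives the summed log-likelihood
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        std::printf("summed likelihood terms: %f\n", total);

    MPI_Finalize();
    return 0;
}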
Andrew WASHBROOK
Over the last three years I have
been searching for the signature of scalar lepton production using the DELPHI detector
at LEP. No significant deviation from Standard Model predictions was found, and in
the absence of a signal in the data, conservative lower limits were placed on
the masses of the sleptons. The statistical techniques used in my analysis were
written in Fortran77 and ran under a number of UNIX architectures (Linux, OSF,
AIX, HP-UX). Perl scripts were used extensively for file handling and data
manipulation. I was also able to successfully implement large-scale event
simulation using the 300-PC Monte Carlo Array Processor (MAP) at Liverpool.
Other aspects of my PhD allowed me to learn additional programming languages
(C/C++) and to become familiar with the VAX/VMS operating system. As a research associate I intend to work on
ATLAS/LHCb software and take an active role in the ATLAS data challenges.
Katarzyna ZAJAC
Currently I am involved in the
CrossGrid project, which will develop new Grid components for interactive
compute- and data-intensive applications (including distributed data
analysis in HEP). I am a member of the
CrossGrid Architecture Technical Team. The goal of my work is to define the grid
architecture for the project, especially for distributed, near real-time
interactive simulation and visualisation for surgical procedures. I am investigating the Cactus tool and the HLA
standard for that purpose. I am also
familiar with high-performance programming (MPI); recently, we developed
a library, built on top of MPI, for the parallelisation of irregular and out-of-core
applications. I have written code in C,
C++, Java and Perl. I am familiar with Unix (Solaris, Linux) and Windows
(95/98/Millennium/NT/2000) operating systems.
I was a summer student at CERN in 1999, and I also participated in the Summer
Scholarship Programme at the Edinburgh Parallel Computing Centre (University of
Edinburgh) in summer 2000. I am
co-author of 2 papers in Computer Physics Communications and 4 papers in Lecture
Notes in Computer Science.
Alexsander ZAYTSEV
The first part of my current
work involves the design and development of online and offline software for the
CMD-2M detector at the VEPP-2000 electron-positron collider, which is being
commissioned at the Budker Institute of Nuclear Physics. The software design standards
for this project are object-oriented programming techniques, C++ as the main
language, and Linux. I am also participating in the CMD-2 offline-processing
Linux farm support and design group; an essential upgrade of the farm and of the
distributed data-storage management software is required to satisfy the needs of
the new CMD-2M detector, and is scheduled for the near future. The second part of my
work involves the physics analysis of the data collected recently with the CMD-2
detector at the VEPP-2M collider, and support of the data processing (ROOT and JAS),
detector simulation (GEANT) and numerical/symbolic calculation, and of the
FORTRAN-based offline software used for these purposes. I have written code in FORTRAN, C, C++ and
several symbolic-calculation script languages. The operating systems I am most
familiar with are Linux and Windows NT.
Marianna ZUIN
My current work deals with two
different areas of the networking field, namely network testing and wireless
installation. I am currently completing measurements to test the network of the
SM18 pilot installation: a redundant Gigabit Ethernet network
designed to simulate that of the future Large Hadron Collider at CERN. After
participating in the building of the network, I am now testing network behavior
and measuring network latency for different types of traffic, loads
and events. My work on wireless installation has involved studying the 802.11b
protocol and its features, and testing them in different possible environments.
I am currently implementing standard procedures for installing, replacing and
handling wireless base stations in the CERN area. Once these procedures are
fully implemented and tested, they will be ready to become part of the official
network management of CERN.
All trademarks and copyright names and products referred to
in this document are acknowledged as such.
Use of any trademark in this document is not intended in any
way to infringe on the rights of the trademark holder.