CERN School of Computing 2007 20-31 August 2007 - Dubrovnik, Croatia

Forum for Proposals to iCSC2008


Name_of_posting_person: François Fluckiger
Type_of_posting: I am commenting on a previous posting
Date: November 02, 2007
Time: 03:18:47 PM

Message

Many thanks to all those who posted proposals and comments. The proposals are now being analyzed. You will hear from us by the end of November. François.


Name_of_posting_person: Jose M. Dana Perez
CSC_year: CSC2007
Type_of_posting: I have a specific topic to propose
Date: October 31, 2007
Time: 08:01:15 PM

Message

I have been a member of a research group at my university since 2004, and one of our topics is "scalable image and video coding".

Right now, we are designing and developing cutting-edge techniques for video coding using fine grain scalability.

People don't usually work with these algorithms directly. However, developers sometimes need to design a system in which one component is an image or video (de)coder, so I think it could be an interesting topic. Knowing the internals of some coding systems could help developers understand the best way to implement their solutions (streaming systems, pattern recognition systems, etc.) and the best algorithm to choose (JPEG, JPEG2000, MPEG-2, MPEG-4, WMV, etc.).

The agenda could be something like this:

1.- Data compression

1.1.- Plain text coding techniques
1.2.- Lossless vs. lossy compression

2.- Image coding

2.1.- Basic techniques behind image coding
2.2.- Practical example: JPEG algorithm

3.- Video coding

3.1.- Basic techniques behind video coding
3.2.- Practical example: MPEG-4 algorithm

4.- Scalable image and video coding

4.1.- What does scalability mean? Why do we need scalability?
4.2.- Basic techniques behind scalable image and video coding
4.3.- Practical example: JPEG2000 algorithm
4.4.- Practical example: Motion JPEG2000 algorithm
4.5.- Practical example: FSVC algorithm

I think it's an original topic and, unfortunately, we don't often have the opportunity to attend lectures about it.
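To make the lossless vs. lossy distinction of point 1.2 concrete, here is a tiny Python sketch (illustrative only, not taken from our codec work):

```python
import zlib

data = bytes(range(256)) * 4

# Lossless: a zlib round-trip recovers the input bit-for-bit.
packed = zlib.compress(data)
assert zlib.decompress(packed) == data

# Lossy (sketch): quantising to 16 grey levels shrinks the symbol
# alphabet, which helps compression, but no decoder can recover
# the original values afterwards.
quantised = bytes((b // 16) * 16 for b in data)
assert quantised != data
```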


Name_of_posting_person: Vikas Singhal
CSC_year: CSC2007
Type_of_posting: I have a specific topic to propose
Date: October 27, 2007
Time: 09:20:42 AM

Message

Topic: Are we using the Grid as the power grid concept intended?

The Grid concept came from the power grid, but as it evolves it is not living up to that analogy. It has become just distributed computing across the globe, and only for a specific community (not for the layman).

My idea is to use the Grid like a real power grid: we should only plug in our appliance (a monitor) and get a full computer.

The idea is to remove the CPU from the computer, leaving only a monitor with a built-in keyboard and mouse (like a TV). The monitor would contain something like a modem or LAN card that decodes the incoming stream and displays it. All the complexity would reside at a centralized Grid site. This centralized farm would keep track of all the clients, send each one the applications it needs, and then maintain continuous communication with it. It would be a big computing farm that distributes what each client monitor requires: all computing and processing would be done at the central site and the results sent to the client.

I know this sounds simple, but it is not simple at the implementation level. Still, we could start at a small scale and then try to extend it.

Electronics and computer scientists would be needed to build this special kind of monitor and to work out how the centralized site communicates with the clients. Each monitor would have a unique MAC address, and we would have to develop a new communication protocol, analogous to TCP/IP.

If the idea succeeds, I think it would reduce the cost of a full computer to around $100 or perhaps less, with a monthly charge for computing, just as one pays for Internet access today.

In today's scenario everyone buys a computer, whatever their requirement (a child playing games, a farmer wanting the best solution for his field, an architect who needs 3D programs, a business analyst making presentations; the list is endless). In the proposed scenario the client would never become outdated; only the central systems would need updating. It would solve many more problems besides.

This is just an idea, and more work is required in this direction.


Name_of_posting_person: George Serbanut
CSC_year: CSC2007
Type_of_posting: I am commenting on a previous posting
Date: October 26, 2007
Time: 10:34:15 PM

Message

Thank you, François! I am sorry for my lack of formatting, but since it was just a renewal of an older proposal, I didn't think it was necessary. Thank you for understanding. For everybody else, my e-mail address is serbanut@to.infn.it; I am sorry for not revealing it from the beginning. All the best, George. P.S.: If any of you thinks I can help with my knowledge, and if an abstract is required, just let me know. Thanks!


Name_of_posting_person: François Fluckiger
CSC_year: CSC2007
Type_of_posting: I am commenting on a previous posting
Date: October 26, 2007
Time: 08:45:54 PM

Message

Iris, Manfred, Andrzej, Alfio, Vipin, Guillaume, George: some formatting has been done on your texts. Thanks for these postings. Regards. François.


Name_of_posting_person: Andrzej Nowak
CSC_year: CSC2007
Type_of_posting: I have a specific topic to propose
Date: October 26, 2007
Time: 12:58:14 PM

Message

Hi all, I have three topics to propose from my side, on which I could give talks.

The first is multi-threading in general: the push to multi-core, the reasons which motivate us to improve software efficiency, software technologies and software trends related to multi-threaded and parallel programming.
In general, it's similar to what George Serbanut has proposed, but my proposal is to focus more on the platform-related and practical aspects of multi-threading, among them the thermal, financial and logistical issues of running a computing centre.

The second topic is a hardware discussion of the mainstream CPU lines (mostly x86 and Itanium): past and current trends, general-purpose developments in GPU computing, and finally some news from the industry, as openlab has the opportunity to cooperate quite closely with the big players.

The third topic is performance monitoring: performance monitoring basics, monitoring huge applications, a little bit of what the inside of a performance monitoring utility looks like, current industry trends in the field, popular tools (Intel's tool line, perfmon, etc) and finally some tips on how to start looking for bottlenecks in your application.
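As a flavour of the third topic, here is a minimal Python sketch (illustrative only, not one of the tools mentioned) of the very first step in bottleneck hunting: timing code regions to see where the wall-clock time goes before reaching for a real profiler:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label, results):
    """Record the wall-clock time spent inside a region."""
    start = time.perf_counter()
    yield
    results[label] = time.perf_counter() - start

results = {}
with timed("setup", results):
    data = list(range(100_000))
with timed("compute", results):
    total = sum(x * x for x in data)

# The region holding the largest share of the time is the first
# candidate bottleneck to inspect with a real profiling tool.
bottleneck = max(results, key=results.get)
```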

On a side note, I admire the rich and detailed program that Manfred and Iris have proposed and I think that we could work towards finding common grounds within that framework. I would certainly be interested to hear some hardware and platform related talks.


Name_of_posting_person: Manfred Mücke (Muecke@ITI.TUGraz.at), Iris Christadler
CSC_year: CSC2007
Type_of_posting: I have a specific topic to propose
Date: October 25, 2007
Time: 11:00:08 PM

Message

Dear all, Iris and I came up with a suggestion for a series of lectures on high-performance computing architectures and their respective programming models.

We think the topics cover an important area. If anyone thinks they can (and want to) contribute to one of the lectures, let us know. Would you be interested in hearing more? (If not, we had better get a good outing organized :-) ) Hope you are all fine, Iris & Manfred

Towards Reconfigurable High-Performance Computing or Breeding Petaflops

Abstract:
Moore's law still holds and provides us with unprecedented device integration, resulting in abundant logic resources even on commodity computing platforms. However, computing has failed to take advantage of this gift: the increase in computing performance constantly lags behind the increase in logic resources.
In short, semiconductor technology has overtaken chip designers, computer scientists and programmers on the right. This is strongly felt in high-performance computing (HPC). Among the more promising future solutions to this problem is reconfigurable computing (computing on flexible fabrics). In this series of lectures, we want to explore the reasoning behind reconfigurable HPC, its prospects, implications and issues.
We will discuss different hardware architectures and their respective requirements and implications for the computing model applied (or the mismatch thereof). After discussing the different architectures separately, we will try to present a more unified view.

The lectures aim at qualifying students to understand where, and why, reconfigurable computing can be expected to have a considerable impact on tomorrow's high-performance computing landscape, and where not.

Lectures:

Basics - HPC Computing - Petascale Initiative.
- Why is everyone in HPC currently busy looking for alternatives to classic computer architectures?
- Is it HPC- or technology-driven?
- Does this affect the normal PC-user/HEP community and how?
- Importance of computing architectures and programming models.

Platforms I - Multicore Architectures
- Why Moore gave us all these cores.
- Prospects and issues of multicores (AMD, Intel, Cell, ..., scalability).
- How to exploit multicore architectures (MPI, threading, HPF, new Intel Language, ..)

Parallel programming I
- Why is parallel programming the holy grail of HPC? (Scalability, Amdahl's Law)
- Why is it so difficult?
- What theoretical solutions exist (parallel programming models, levels of parallelism, implicit vs. explicit parallelism)?
- What is available today and who uses what (Occam, functional languages, MPI, OpenMP, Java threads, HPF, Fortress, CUDA, hardware description languages, ...).
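The Amdahl's Law point above can be made concrete in a few lines (a hypothetical Python sketch, not part of the lecture material):

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Amdahl's Law: the serial fraction caps the achievable speedup.

    speedup = 1 / ((1 - p) + p / n), where p is the fraction of the
    runtime that can be parallelised over n workers.
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)

# Even a 95%-parallel program gains less than a factor of 20 from
# 1024 workers: the 5% serial part dominates at scale.
speedup = amdahl_speedup(0.95, 1024)
```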

Platforms II - SIMD Extensions
- What are SIMD Extensions, what applications do they serve and how can one exploit them? (SSE, MMX, ...; OpenMP, parallelizing compilers, fine-grained parallelism).

Platforms III - Special-Purpose Accelerators (esp GPUs)
- What solutions exist.
- Why GPUs are so dominant.
- What GPUs are (not so) good at.
- Programming GPUs (CUDA).
- Is there a future for general-purpose programming on GPUs?

Optional: Platforms IV - Just Playing... Using the Cell Processor in HPC
- Cell architecture.
- Porting code to your Playstation (2 examples: pattern recognition algorithm + multigrid solver).
- Building the world's first Petaflop system out of Cells and Opterons: the Roadrunner project (Los Alamos National Lab).

Platforms V - FPGAs
- Introduction to FPGAs.
- State of the art.
- Application Examples.
- Hardware Description Languages.
- How to compare computing performance figures.
- Gap between potential computing performance and programmer productivity.
- Easing the path from programming to hardware design (Simulink, Handel-C, ...).

Reconfigurable HPC I - Introduction
- What is reconfigurable HPC?
- Why is HPC becoming reconfigurable? (energy costs, footprints, cooling, performance, embedded HPC).
- Examples (Roadrunner, Maxwell), issues and prospects.
- Why the programming model is the dominant bottleneck to exploiting reconfigurable platforms for HPC.

Reconfigurable HPC II - HW Design Methodology, Theory & Tools
- Hardware Description Languages (VHDL, Verilog), improving productivity (Simulink, Handel-C, Synplify DSP, Mentor Catapult, ...), higher-level synthesis.

Parallel Programming II
- What do the different computing platforms have in common from a programming-model point of view?
- Generative programming, platform-independent programming, platform adaptation (FFTw, Spiral).
- Grid abstraction layers for HPC applications.
- Cross-compiling HPC applications.

Summary - Hybrid platforms, hybrid programming?
- Mixed architectures are already here.
- CPUs+FPGAs (Stretch, softcores, run-time reconfigurable systems).
- Where are the suitable languages?
- What will be the future programming model?
- Codesign tools (virtual machines, HW-enabled JIT compilers, custom accelerator generators).


Name_of_posting_person: Alfio Lazzaro (alfio.lazzaro@mi.infn.it)
CSC_year: CSC2006
Type_of_posting: I have a specific topic to propose
Date: October 22, 2007
Time: 05:09:36 AM

Message

Folks, I already posted a proposal for the iCSC last year; I repeat that proposal here.

The idea is to present some aspects of data analysis software and analysis techniques:
a) unbinned maximum likelihood fits using RooFit
b) data analysis using ROOT in general
c) advanced concepts for data analysis, like neural networks and boosted decision trees (TMVA package, StatPatternRecognition)

Some of these packages are integrated in ROOT. I would like to point out that once the LHC experiments start operating, data analysis software will play an important role. We can take advantage of the experience of previously running experiments, like BaBar (I'm a member of this collaboration).

The program can be organized in the following important topics:
1) data analysis with ROOT: plot of variables, tool for upper limit calculation, event selection, ...
2) signal/background separation techniques: unbinned maximum likelihood fit using RooFit package, Neural Network, multi-variables cut optimization, ...
3) general case of data analysis: event reduction, event selection, signal extraction and measurement
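As a toy illustration of the fitting part of topic 2, an unbinned maximum likelihood fit can be sketched in plain Python (this is a hypothetical example, not RooFit): the likelihood is evaluated per event rather than per histogram bin, here for an exponential lifetime.

```python
import math
import random

random.seed(1)
true_tau = 2.0
# Toy "events": decay times drawn from an exponential p.d.f.
events = [random.expovariate(1.0 / true_tau) for _ in range(5000)]

def nll(tau):
    """Unbinned negative log-likelihood for p(t|tau) = exp(-t/tau)/tau."""
    return sum(t / tau + math.log(tau) for t in events)

# Crude scan of the NLL; for this p.d.f. the minimum sits at the
# sample mean, which a real fitter (Minuit, RooFit) finds iteratively.
taus = [0.5 + 0.01 * i for i in range(300)]
tau_hat = min(taus, key=nll)
sample_mean = sum(events) / len(events)
```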

I'm also currently working on a parallel implementation of data analysis software using HPC concepts, like shared memory (OpenMP, pthreads) and MPI.

I see that there are other posts concerning this issue (in particular programming on multi-core platforms) and I think it would be very useful to show something.
I'm also currently working on the possibility of using GPUs with CUDA by Nvidia (http://developer.nvidia.com/object/cuda.html). Using GPUs for data analysis seems very promising for CPU-intensive tasks.

I will be very happy to organize something.

Cheers, Alfio


Name_of_posting_person: Vipin Gaur
CSC_year: CSC2007
Type_of_posting: I have a specific topic to propose
Date: October 05, 2007
Time: 04:35:25 AM

Message

Based on the lecture delivered by Rudi on physics computing, where he talked about mechanical and laser alignment, I would like to propose a topic on automated alignment using IR sensors. This is an initial, raw idea; a mail from François Fluckiger encouraged me to make this proposal.

I propose to write software that processes and controls a bidirectional data flow to achieve this task.

The primary objective of this idea is to develop an automated system that aligns objects. The task will be performed through coordination between stepper motors and the feedback of IR sensors (mounted on the objects to be aligned) to the computer. Both the coordination and the feedback will be controlled and processed by the software, which will also drive the bidirectional data flow via the LPT port.

The main technical challenge in this proposal will be to develop an architectural framework (that includes hardware, software and computer interfacing) that will permit a high degree of autonomy for each individual motor, while providing a coordination structure that enables the group to act as a unified team.
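A minimal sketch of the feedback loop described above (in Python; `read_error` and `step_motor` are hypothetical callbacks standing in for the IR-sensor readout and LPT-port motor control):

```python
def align(read_error, step_motor, tolerance=1, max_steps=1000):
    """Drive one motor until the sensor's misalignment error is within
    tolerance; returns True on success, False if it gave up."""
    for _ in range(max_steps):
        error = read_error()
        if abs(error) <= tolerance:
            return True
        step_motor(-1 if error > 0 else +1)  # step towards zero error
    return False

# Simulated hardware: the object starts 25 steps off target.
position = [25]
aligned = align(read_error=lambda: position[0],
                step_motor=lambda d: position.__setitem__(0, position[0] + d))
```

A real system would run one such loop per motor, with the coordination layer arbitrating between them.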

I would appreciate it if other participants joined me and helped form a more polished idea.

Vipin Gaur, Research Scholar, Aligarh Muslim University (India)


Name_of_posting_person: Guillaume Dargaud
CSC_year: CSC2007
Type_of_posting: I have a specific topic to propose
Date: September 19, 2007
Time: 11:02:34 AM

Message

Similar to the multithreading idea suggested by George, I'm currently in the process of building a minimalistic Linux system for embedded acquisition.

I'm no master (yet) and it's far from working, but I think something showing all the steps involved (selecting a toolchain, creating a bootloader, cross-compiling BusyBox and a minimalistic kernel, and putting it all together in as few files as possible on a remote target) might be of interest.

We already saw at CSC (albeit quickly) how to write a minimal kernel module/driver, so it all fits together. It's not groundbreaking but it's useful.


Name_of_posting_person: serbanut
CSC_year: CSC2007
Type_of_posting: I have a specific topic to propose
Date: September 18, 2007
Time: 09:19:37 PM

Message

Dear all, I saw a proposal from last year about a lecture on parallel threads. I would like to renew the idea and join my efforts to developing a well-documented lecture on that subject.

Grid and cluster technologies are some of the solutions, but are you sure you use your computer to its maximum power? I can bet there is always a faster way to execute your application. What's behind that parallel threading technology? Who or what allows us to use more than one thread per CPU core? These are questions I discussed with some other CSC students and had no time to include in my presentation (even though at the beginning I wanted to). And I noticed that the generation of students before us was interested in this topic as well (otherwise they wouldn't have proposed it, would they?).

I hope last year's students haven't lost interest in this subject yet, so that I can help them develop a lecture on parallel threads.
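As a teaser for such a lecture, here is a minimal Python sketch (illustrative only) of the speed-up threads give when tasks spend their time waiting rather than computing:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def io_task(_):
    time.sleep(0.2)  # stands in for disk or network latency

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(io_task, range(4)))
elapsed = time.perf_counter() - start
# The four 0.2 s waits overlap, so wall-clock time stays near 0.2 s
# instead of 0.8 s. CPU-bound work in Python needs processes instead,
# because of the global interpreter lock.
```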

Best regards, George Serbanut


Name_of_posting_person: François Fluckiger
Type_of_posting: Welcome message
Date: September 18, 2007
Time: 06:15:28 PM

Message

Welcome to all CSC2006 and CSC2007 participants. I hope you enjoyed our schools and will be willing to suggest topics through this forum.

However, I know that the coming year will be extremely busy for several of you (LHC completion). Therefore, consider this an opportunity, not any kind of moral obligation.
François, School Director

 

Feedback: Computing (dot) School (at) cern (dot) ch
Last update: Thursday, 14. November 2013 11:49

Copyright CERN