[07 Apr 09]  EuroSys 2009 photos are now online at Google Picasa and Flickr.


[06 Apr 09]  Thanks to all attendees for contributing to a great EuroSys 2009 conference. It was a pleasure having you all in Nuremberg!

Posters online

[02 Apr 09]  Posters are now online. Details...

Tutorial Programmes

For general requests regarding the EuroSys 2009 tutorials, please contact the Workshops Co-Chairs Olaf Spinczyk and Franz Hauck.

For the abstracts of the EuroSys 2009 tutorials, please refer to their individual pages listed below. The tutorial programmes will be included there when they are ready.

Predictive Methods for Managing Dependability and Performance of Systems Summary
Introduction to Host Identity Protocol (HIP) and its applications Summary
Dependability Benchmarking of Computer Systems Summary

Predictive Methods for Managing Dependability and Performance of Systems

Artur Andrzejak, ZIB, Berlin


Predictive techniques based on statistical modeling and machine learning are rightfully regarded as the backbone of proactive system management. Deployed correctly, they can identify upcoming system failures, estimate capacity and demand in resource pools, and help manage availability in large-scale systems. In this tutorial we focus on exploiting these methods to enhance the reliability, availability, and performance of distributed systems and SOA environments.

The first, motivating part discusses a series of use cases. We cover proactive detection of failures in Grid and telecommunication systems, prediction of host availability in volatile resource pools, and estimation of the performance degradation of applications suffering from software aging.

The main part of the tutorial focuses on the essential elements of any prediction or data-mining study, including preprocessing, the fundamentals of classification algorithms, and evaluation. These topics are complemented by measures to improve prediction accuracy and by a discussion of typical traps that can render the results meaningless. In particular, we explain the intricacies of preprocessing and the problem of recognizing and avoiding overfitting.

In the last part we introduce software tools that allow attendees to deploy predictive techniques in their own research domain. In addition to a brief overview of existing software, we illustrate how to set up and conduct small-scale, off-line prediction studies with the help of MATLAB and a pattern recognition library.

The target audience of this tutorial consists of researchers, system architects, and practitioners who are interested in the practical deployment of predictive methods for dependability and performance management.
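To illustrate the evaluation pitfall mentioned above, the following minimal sketch (not part of the tutorial material; the synthetic data, feature values, and nearest-centroid classifier are all illustrative assumptions) trains a failure predictor and then measures accuracy on a held-out test set rather than on the training data, the basic safeguard against mistaking overfitting for predictive power:

```python
import random

random.seed(0)

# Synthetic two-class data: features = (cpu_load, error_rate),
# label 1 means "failure expected soon" (values are invented for illustration).
def sample(label, n):
    base = (0.8, 0.6) if label else (0.3, 0.1)
    return [((base[0] + random.gauss(0, 0.15),
              base[1] + random.gauss(0, 0.15)), label) for _ in range(n)]

data = sample(1, 100) + sample(0, 100)
random.shuffle(data)
train, test = data[:150], data[150:]  # hold out 25% for evaluation

def centroid(points):
    xs = [x for x, _ in points]
    return tuple(sum(c) / len(xs) for c in zip(*xs))

def fit(pairs):
    # Nearest-centroid classifier: one prototype per class.
    return {lbl: centroid([d for d in pairs if d[1] == lbl]) for lbl in (0, 1)}

def predict(model, x):
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(model, key=lambda lbl: dist(model[lbl], x))

def accuracy(model, pairs):
    return sum(predict(model, x) == y for x, y in pairs) / len(pairs)

model = fit(train)
print(f"train accuracy: {accuracy(model, train):.2f}")
print(f"test  accuracy: {accuracy(model, test):.2f}")
```

Reporting only the training accuracy here would hide any overfitting; the held-out test accuracy is the honest estimate of how the predictor would behave on unseen monitoring data.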

Introduction to Host Identity Protocol (HIP) and its applications

Andrei Gurtov


The Host Identity Protocol (HIP) has been developed by the IETF as a new solution for host mobility and multihoming in the Internet (RFCs 5201-5207). HIP uses self-certifying public-private key pairs in combination with IPsec to authenticate hosts and protect user data. HIP is an important component of several distributed systems, including P2PSIP and Host Identity Indirection Infrastructure (Hi3).

The tutorial covers the current problems in the Internet architecture, the identifier/locator split, the base HIP protocol (including the base exchange, the new IPsec mode, DNS and rendezvous extensions, and the infrastructure for resolving host names to locators), micromobility, privacy, and support for legacy applications. We will also review current implementations and HIP testbeds, including a pilot deployment of HIP in a Boeing airplane factory.
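To give a flavor of the base exchange, the sketch below simulates its computational puzzle: the responder sends a random value I in the R1 packet, the initiator must find a J such that a hash over I, the Host Identity Tags, and J has K leading zero bits, and the responder verifies the solution cheaply before committing to expensive cryptography. This is a simplified toy (SHA-256, the field sizes, and K are assumptions; the Diffie-Hellman exchange, signatures, and IPsec setup of the real protocol are omitted):

```python
import hashlib
import os

K = 8  # puzzle difficulty: required number of leading zero bits (toy value)

def leading_bits_zero(digest, k):
    # True if the first k bits of the digest are all zero.
    return int.from_bytes(digest, "big") >> (len(digest) * 8 - k) == 0

# R1: responder sends a fresh random puzzle value I.
puzzle_i = os.urandom(8)
hit_i, hit_r = os.urandom(16), os.urandom(16)  # stand-ins for Host Identity Tags

# I2: initiator brute-forces a solution J (expected cost grows as 2**K).
j = 0
while True:
    sol = j.to_bytes(8, "big")
    if leading_bits_zero(hashlib.sha256(puzzle_i + hit_i + hit_r + sol).digest(), K):
        break
    j += 1

# Responder re-checks the solution with a single hash: cheap for it,
# expensive for the initiator, which deters denial-of-service floods.
verified = leading_bits_zero(
    hashlib.sha256(puzzle_i + hit_i + hit_r + sol).digest(), K)
print("puzzle solved with J =", j, "verified:", verified)
```

The asymmetry is the point: the responder stays stateless and does one hash per verification, while a flooding attacker must pay the 2**K search for every connection attempt.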

The tutorial will cover selected chapters from the speaker's book, Host Identity Protocol (HIP): Towards the Secure Mobile Internet, Wiley & Sons, June 2008, ISBN 978-0-470-99790-1 (hardcover, 332 pp.).

Dependability Benchmarking of Computer Systems

Marco Vieira, Universidade de Coimbra
Henrique Madeira, Universidade de Coimbra


Computer benchmarks are standard tools for evaluating and comparing systems or components according to specific characteristics such as performance, robustness, or dependability. The computer systems industry has a well-established infrastructure for performance evaluation: the benchmarks managed by the TPC (Transaction Processing Performance Council) and by SPEC (Standard Performance Evaluation Corporation) are recognized as two of the most successful benchmarking initiatives in the whole computer industry. Dependability evaluation and comparison, however, have long been absent from benchmarking efforts.

Dependability benchmarking has gained ground in recent years and is currently the subject of intense research. Several dependability benchmarks have been proposed, covering a range of application domains (e.g., general-purpose operating systems, real-time kernel-space applications, engine-control applications for automotive systems, on-line transaction processing systems, and web servers). The objective is to find a useful representation that captures the essential elements of the application domain and provides practical ways to characterize the dependability features of a computer system, helping vendors and integrators evaluate and improve their systems and helping users make informed purchase decisions.
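As a toy illustration of the kind of measure such benchmarks produce, the sketch below (an invented example, not taken from any of the benchmarks above) applies a small faultload of valid and invalid inputs to a component and classifies each outcome, which is the basic structure of a robustness benchmark:

```python
def component(x):
    # System under benchmark: a toy parser that should reject bad input
    # with a clean, reported error rather than misbehaving silently.
    if not isinstance(x, str):
        raise TypeError("input must be a string")
    return int(x)

# Faultload: a mix of valid inputs and injected invalid ones.
faultload = ["42", "abc", None, "", "7"]

outcomes = {"correct": 0, "error-reported": 0, "crash": 0}
for inp in faultload:
    try:
        component(inp)
        outcomes["correct"] += 1
    except (TypeError, ValueError):
        outcomes["error-reported"] += 1   # graceful, documented failure
    except Exception:
        outcomes["crash"] += 1            # undocumented failure mode

print(outcomes)  # → {'correct': 2, 'error-reported': 3, 'crash': 0}
```

A real dependability benchmark adds a representative workload, a statistically meaningful faultload, and agreed-upon measures, so that the resulting numbers are comparable across systems from different vendors.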

The purpose of this tutorial is to present the state of the art in dependability benchmarking of computer systems. We will discuss different approaches to the problem and present the most important work in the field in detail, thereby disseminating possible paths toward benchmarking the dependability of computer systems and fostering the technical discussion needed to create the conditions for the adoption of dependability benchmarks by the computer systems industry.