ICS 2014

28th International Conference on Supercomputing

Keynotes


HPC for the Human Brain Project

Thomas Lippert
Jülich Supercomputing Centre

Wednesday, June 11

Abstract:

The Human Brain Project, one of two European flagship projects, is a collaborative effort to reconstruct the brain, piece by piece, in multi-scale models and supercomputer-based simulations of them, integrating and federating vast amounts of existing information and creating new information and knowledge about the human brain. A fundamental impact is expected on our understanding of the human brain and its diseases, as well as on novel brain-inspired computing technologies.

The HPC Platform will be one of the central elements of the project. Comprising major European supercomputing centres and several universities, its mission is to build, integrate, and operate the hardware, network, and software components of the supercomputing and big-data infrastructure, from the cellular level up to full-scale interactive brain simulations, together with data management, processing, and visualization.

I will discuss the requirements the HBP places on HPC hardware and software technology. These requirements follow the multi-scale approach of the HBP to decode the brain and recreate it virtually. On the cellular level, hardware-software architectures for quantum-mechanical ab-initio molecular dynamics methods and for classical molecular dynamics methods will be included in the platform. On the level of full-scale brain simulation, a development system is foreseen to “build” the brain by integrating all accessible data distributed worldwide and to test and evaluate the brain software, alongside a second system that acts as the central brain simulation facility, eventually allowing interactive simulation and visualization of the entire human brain. Additionally, the brain needs to be equipped with a proper sensory environment, a body, provided by virtual robotics codes developed on a suitable hardware system. It is expected that the Human Brain Project can trigger innovative solutions for future exascale architectures, permitting hierarchical memory structures and interactive operation.

Vita:

Prof. Dr. Dr. Thomas Lippert received his diploma in Theoretical Physics in 1987 from the University of Würzburg. He completed Ph.D. theses in theoretical physics at Wuppertal University on simulations of lattice quantum chromodynamics and at Groningen University in the field of parallel computing with systolic algorithms. He is director of the Jülich Supercomputing Centre at Forschungszentrum Jülich, a member of the board of directors of the John von Neumann Institute for Computing (NIC), and he holds the chair for Computational Theoretical Physics at the University of Wuppertal. His research interests include lattice gauge theories, quantum computing, numerical and parallel algorithms, and cluster computing.

 

21st Century Computer Architecture

Mark D. Hill
University of Wisconsin, Madison

Thursday, June 12

Abstract:

This talk has two parts. The first part will discuss possible directions for computer architecture research, including architecture as infrastructure, energy first, the impact of new technologies, and cross-layer opportunities. This part is based on a 2012 Computing Community Consortium (CCC) whitepaper effort led by Hill that is available online, as well as other recent National Academy and ISAT studies.

The second part of the talk will discuss one or more examples of the cross-layer research advocated in the first part. For example, our analysis shows that many “big-memory” server workloads, such as databases, in-memory caches, and graph analytics, pay a high cost for page-based virtual memory: up to 50% of execution time wasted. Via small changes to the operating system (Linux) and hardware (x86-64 MMU), this work reduces the execution time these workloads waste to less than 0.5%. The key idea is to map part of a process’s linear virtual address space with a new incarnation of segmentation, while providing compatibility by mapping the rest of the virtual address space with paging.
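To make the segmentation-plus-paging idea concrete, the C sketch below (not taken from the talk; all names and constants are illustrative assumptions) shows how address translation might first check a per-process direct segment with a base/limit/offset test and fall back to an ordinary page walk for the rest of the address space.

```c
/* A rough, hypothetical sketch of the idea described above: addresses that
 * fall inside a per-process "direct segment" are translated with a simple
 * base/limit/offset check and bypass the page-table walk entirely; all
 * other addresses remain paged. Names and constants are illustrative. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t base;    /* first virtual address covered by the segment */
    uint64_t limit;   /* one past the last covered virtual address    */
    uint64_t offset;  /* displacement added to the VA to obtain the PA */
} direct_seg;

/* Stand-in for a conventional x86-64 page walk (details elided). */
static uint64_t page_walk(uint64_t va)
{
    return va;  /* identity mapping, purely for demonstration */
}

static uint64_t translate(const direct_seg *seg, uint64_t va)
{
    if (va >= seg->base && va < seg->limit) {
        /* Direct-segment hit: no TLB entry or page walk required. */
        return va + seg->offset;
    }
    /* Everything else stays paged, preserving compatibility. */
    return page_walk(va);
}

int main(void)
{
    /* Hypothetical segment covering 1 GiB of a big-memory heap. */
    direct_seg seg = { .base   = 0x100000000ULL,
                       .limit  = 0x140000000ULL,
                       .offset = 0x200000000ULL };
    printf("%#llx\n", (unsigned long long)translate(&seg, 0x100000400ULL));
    printf("%#llx\n", (unsigned long long)translate(&seg, 0x000000400ULL));
    return 0;
}
```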

Vita:

Mark D. Hill is the Gene M. Amdahl Professor of Computer Sciences and Electrical & Computer Engineering at the University of Wisconsin--Madison, where he also co-leads the Wisconsin Multifacet project. His research interests include parallel computer system design, memory system design, computer simulation, and transactional memory. He earned a Ph.D. from the University of California, Berkeley. He is an ACM Fellow, an IEEE Fellow, a co-inventor on 30+ patents, and a recipient of the ACM SIGARCH Distinguished Service Award. His accomplishments include teaching more than 1000 students, having 40 Ph.D. progeny so far, developing the 3C cache miss taxonomy (compulsory, capacity, and conflict), and co-developing "sequential consistency for data-race free" that serves as a foundation of the C++ and Java memory models.


The Future of Supercomputing

Marc Snir
Argonne National Laboratory
University of Illinois at Urbana-Champaign

Friday, June 13

Abstract:

For over two decades, supercomputing evolved in a relatively straightforward manner: Supercomputers were assembled out of commodity microprocessors and leveraged their exponential increase in performance due to Moore's Law. This simple model has been under stress since clock speed stopped growing a decade ago: Increased performance has required a commensurate increase in the number of concurrent threads.

The evolution of device technology is likely to be even less favorable in the coming decade: The growth in CMOS performance is nearing its end, and no alternative technology is ready to replace CMOS. The continued shrinking of device size requires increasingly expensive technologies and may not improve the cost/performance ratio, at which point it ceases to make sense for commodity technology.

These obstacles need not imply a stagnation in supercomputer performance. In the long run, new computing models will come to the rescue. In the short run, more exotic, non-commodity device technologies can provide improvements of two or more orders of magnitude in performance. Finally, better hardware and software architectures can significantly increase the efficiency of scientific computing platforms. While continued progress is possible, it will require a significant international research effort and major investments in future large-scale "computational instruments".

Vita:

Marc Snir is Director of the Mathematics and Computer Science Division at Argonne National Laboratory and the Michael Faiman and Saburo Muroga Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. He currently pursues research in parallel computing.

He was head of the Computer Science Department from 2001 to 2007. Until 2001 he was a senior manager at the IBM T. J. Watson Research Center, where he led the Scalable Parallel Systems research group that was responsible for major contributions to the IBM SP scalable parallel system and to the IBM Blue Gene system.

Marc Snir received a Ph.D. in Mathematics from the Hebrew University of Jerusalem in 1979, worked at NYU on the NYU Ultracomputer project from 1980 to 1982, and was at the Hebrew University of Jerusalem from 1982 to 1986, before joining IBM. He was a major contributor to the design of the Message Passing Interface. He has published numerous papers and given many presentations on computational complexity, parallel algorithms, parallel architectures, interconnection networks, parallel languages and libraries, and parallel programming environments.

Marc is an Argonne Distinguished Fellow, an AAAS Fellow, an ACM Fellow, and an IEEE Fellow. He has Erdős number 2 and is a mathematical descendant of Jacques Salomon Hadamard. He recently won the IEEE Award for Excellence in Scalable Computing.