Research


Research Interests

My research interests focus on the design of purely functional programming languages and on the development of compilers and compiler infrastructure for them. The areas of particular interest so far are:

Reduction Systems

I became attracted to functional programming when I realized the conceptual beauty of the pure lambda-calculus and its direct correspondence to programming languages. This realization was spurred by the development of several prototype compilers for declarative languages using the reduction system Pi-RED. Pi-RED supports step-wise transformations of high-level programs, which nicely demonstrates how program evaluation relates to beta-reduction. The resulting gain in program clarity, and thus in programming productivity, combined with the freedom of a dynamically typed system, hooked me on functional programming.
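
What such step-wise transformations boil down to can be sketched with a minimal single-step beta-reducer for the pure lambda-calculus, given here in Haskell. This is purely an illustration of the correspondence, not Pi-RED's actual machinery, and the substitution is deliberately capture-naive.

    -- Terms of the pure lambda-calculus.
    data Term = Var String
              | Lam String Term
              | App Term Term
      deriving Show

    -- subst x s t replaces the free occurrences of x in t by s.
    -- (Capture-naive: fine for closed example terms; a real reducer
    -- must rename binders.)
    subst :: String -> Term -> Term -> Term
    subst x s (Var y)   = if x == y then s else Var y
    subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
    subst x s (App f a) = App (subst x s f) (subst x s a)

    -- One leftmost-outermost beta-reduction step, if a redex exists;
    -- iterating 'step' yields exactly a step-wise evaluation trace.
    step :: Term -> Maybe Term
    step (App (Lam x b) a) = Just (subst x a b)
    step (App f a)         = case step f of
                               Just f' -> Just (App f' a)
                               Nothing -> fmap (App f) (step a)
    step (Lam x b)         = fmap (Lam x) (step b)
    step (Var _)           = Nothing

    -- e.g. step (App (Lam "x" (Var "x")) (Var "z"))  ==>  Just (Var "z")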

The experiences with Pi-RED motivated my work on extending that system to exploit the conceptual benefits of the lambda-calculus to a larger extent. The first extension I worked on investigated whether the primary evaluation strategy of Pi-RED, i.e., applicative-order evaluation, could be substituted by lazy evaluation. Since this change of the reduction engine required a radical change of the implementation, the work resulted in a new reduction system called LISA (cf. [P1], [T1]), which I implemented jointly with Carsten Rathsack.
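
The difference between the two strategies is easy to demonstrate with a standard example, given here in Haskell rather than in LISA's notation, purely as an illustration of the concept:

    -- Under lazy evaluation, arguments are evaluated only when demanded,
    -- so a computation may safely refer to a conceptually infinite
    -- structure as long as only a finite part of it is ever needed.
    -- Under applicative order, the argument of 'take' would be evaluated
    -- first, and this program would diverge.
    powersOfTwo :: [Integer]
    powersOfTwo = iterate (* 2) 1       -- the infinite list 1, 2, 4, 8, ...

    main :: IO ()
    main = print (take 5 powersOfTwo)   -- prints [1,2,4,8,16]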

After exploring the implications of the reduction strategy used, the main focus of interest shifted towards the inherent potential for concurrent program evaluation. This led to the development of a concurrent graph reduction system on the distributed-memory system nCUBE/2. The experiences made with this system (for a summary see [R1]) showed that, despite the seemingly large overhead for synchronization, process management, and communication, very good speedups could be achieved. Encouraged by these results, I wanted to investigate whether such improvements could be carried over to systems whose sequential execution times are competitive with state-of-the-art imperative languages such as C or Fortran.

Unfortunately, it turned out that particularly in application areas where high-performance concurrent evaluation is essential, the sequential versions of functional programming languages are not very competitive with their traditional counterparts. Among the various compilation challenges that cause efficiency problems in functional languages, e.g., boxing/unboxing in the context of polymorphism and higher-order functions, space leaks due to lazy evaluation, etc., support for non-side-effecting operations on (state-less) arrays seemed to be the most demanding one. This observation drew my attention to efficient support for high-level array operations in a purely functional setting.
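
One of the mentioned challenges, space leaks due to lazy evaluation, can be illustrated by a textbook Haskell example (a generic sketch, not code from any of the systems discussed here):

    import Data.List (foldl')

    -- The lazy left fold accumulates a chain of unevaluated thunks
    -- (((0+1)+2)+...) whose length is proportional to that of the list,
    -- which may exhaust memory; the strict variant foldl' avoids the
    -- leak by forcing the accumulator at every step.
    leaky :: Integer
    leaky = foldl (+) 0 [1 .. 10000000]     -- builds ~10^7 thunks first

    strict :: Integer
    strict = foldl' (+) 0 [1 .. 10000000]   -- runs in constant space

    main :: IO ()
    main = print strict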

High-Performance Functional Array Processing

Generating a Testbed for Functional High-Level Array Programming

Since competitive sequential performance is crucial for investigating concurrent evaluation, I decided to follow a bottom-up approach rather than a top-down one, i.e., instead of trying to tweak an existing full-fledged language to utmost performance, I created a no-frills functional language aimed at the efficient evaluation of scientific applications (cf. [R3], [T2]). Keeping the language as close as possible to C would make it easier to detect potential sources of overhead (wrt. sequential execution), and it would also make it easier for programmers with an imperative background to adopt the language. These considerations led to the development of Single Assignment C (SaC for short). The current compiler distribution as well as documentation and papers on SaC in general can be found on the SaC-homepage.

The very first compiler release in 1996 showed that -- at least in principle -- the efficiency problems that arise from high-level array support can be overcome. This observation gave rise to several new questions, e.g.,

- Can this efficiency be carried over to existing full-fledged functional languages?
- Can the level of abstraction of array specifications be raised further without sacrificing runtime efficiency?
- Which optimization techniques are required to sustain the efficiency of such more abstract specifications?

An extended visit to the Clean Research Group in Nijmegen showed that this efficiency could not easily be ported to a full-fledged functional language such as Clean. Therefore, I decided to focus my research on the latter two objectives first.

High-level Specifications of Array Operations

Comparisons with standard array processing languages such as APL, NIAL, or J (cf. [R5] and [R7]) made the expressive power of the array support in SaC more evident. Contacts with array language researchers such as Bob Bernecky from Toronto, Mike Jenkins from Kingston, and others opened my eyes to the beauty of specifying array operations in terms of compound array operations, i.e., as compositions of operations that apply to entire arrays rather than as customized loops, as the sketch below illustrates.
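
As a sketch of the two styles, consider the dot product, written here in Haskell purely to illustrate the idea; SaC and the APL family each use their own notation for such compound operations.

    -- Compound style: the computation is phrased as a composition of
    -- operations over entire collections; no index is ever mentioned.
    dot :: [Double] -> [Double] -> Double
    dot xs ys = sum (zipWith (*) xs ys)

    -- Customized-loop style: the same computation with explicit
    -- indexing, obscuring the algebraic structure of the operation.
    dotLoop :: [Double] -> [Double] -> Double
    dotLoop xs ys = go 0 0
      where
        n = min (length xs) (length ys)
        go i acc | i >= n    = acc
                 | otherwise = go (i + 1) (acc + (xs !! i) * (ys !! i))

However, applying this technique extensively in SaC poses quite a few new compiler challenges, which need to be addressed by new optimization techniques. This led to yet another core area of interest of mine: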

Optimizations for Improving Array Operations

Several optimization techniques have evolved in the context of the SaC project. Besides a shape-inferring type system, several techniques for avoiding the creation of arrays at runtime have been developed. The most prominent techniques to this effect are With-Loop-Folding (cf. [R4] and [R6]) and the more recently developed With-Loop-Scalarization (cf. [R10]). Furthermore, several lower-level optimizations have been developed for creating efficiently executable C code (cf. [R8]). Information about other optimizations built into the current SaC compiler release can be found at the SaC-homepage.
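
With-Loop-Folding itself operates on SaC's with-loop constructs; the underlying idea, however, fusing consecutive array operations so that the intermediate array is never materialized, can be sketched in Haskell as a source-level rewrite (a conceptual analogy, not the actual SaC transformation rule):

    -- Before folding: the inner map materializes an intermediate
    -- structure that is consumed exactly once by the outer map.
    before :: [Int] -> [Int]
    before xs = map (* 2) (map (+ 1) xs)

    -- After folding: a single traversal with the composed function;
    -- the intermediate structure has disappeared.
    after :: [Int] -> [Int]
    after xs = map ((* 2) . (+ 1)) xs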

Compilation into Multithreaded Code

Being able to achieve sequential runtime performance competitive with that of imperative counterparts (cf. [J1] or [R9]) made it possible to pursue the initial aim, namely to generate concurrently executable code. Work in this area led to a POSIX-threads-based back-end of the compiler which achieves almost linear speedups on shared-memory multiprocessors. A summary of the results can be found in the PhD thesis of Clemens Grelck (now University of Lübeck).
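
The principle behind this back-end, evaluating a data-parallel array operation chunk-wise on several threads, can be sketched with GHC's parallel strategies. Note that this is merely a conceptual stand-in for the PThread-based C code the SaC compiler actually emits; it assumes the parallel package (compile with ghc -threaded, run with +RTS -N):

    import Control.Parallel.Strategies (parListChunk, rseq, withStrategy)

    -- Apply f to every element and evaluate the result list in chunks
    -- of the given size, each chunk potentially on a different thread.
    parMapChunked :: Int -> (a -> b) -> [a] -> [b]
    parMapChunked chunk f = withStrategy (parListChunk chunk rseq) . map f

    main :: IO ()
    main = print (sum (parMapChunked 1000 (\x -> x * x)
                                     [1 .. 100000 :: Integer]))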

Experiments with SaC in teaching, as well as the growth of the SaC standard library, showed that the initial implementation, whose type inference system was strictly guided by performance considerations, is too restrictive. The extensions required for resolving this shortcoming of the earlier compiler versions turned out to be quite substantial. Several new techniques for coping with the demands of such generic array specifications had to be developed, leading to two further areas of interest:

Type Systems for Inferring Array Shapes

The major challenge for the type system of SaC is the existence of an unbounded hierarchy of array types containing various levels of shape information. Demanding the most specific types, i.e., the exact shapes of all arrays, to be computed statically is undecidable. Biased by the untyped background of Pi-RED, I tried to develop a type system that would not preclude programmers from writing programs for which the static determination of exact shapes is infeasible. These attempts resulted in a new kind of type system which constitutes an amalgamation of a type system and a static analysis based on intersection types (cf. [W1] or [P11]). Current joint research with Alex Shafarenko from Hertfordshire focuses on extending subtyping wrt. array shapes to subtyping wrt. element types.
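
In SaC this hierarchy ranges from types with statically known shapes, such as int[3,3], via types with known rank but unknown extents, such as int[.,.], to the completely generic int[*]. The following Haskell fragment merely sketches these levels of static shape information and the subtype order between them; it is an illustration, not the type-system machinery itself.

    -- Three levels of static shape information, mirroring the SaC type
    -- hierarchy int[3,3] <= int[.,.] <= int[*].
    data ShapeInfo
      = ExactShape [Int]   -- shape fully known, e.g. int[3,3]
      | KnownRank Int      -- rank known, extents unknown, e.g. int[.,.]
      | AnyShape           -- nothing known statically, e.g. int[*]
      deriving Show

    -- More precise shape information is a subtype of less precise one.
    isSubtypeOf :: ShapeInfo -> ShapeInfo -> Bool
    ExactShape s `isSubtypeOf` ExactShape t = s == t
    ExactShape s `isSubtypeOf` KnownRank r  = length s == r
    KnownRank r  `isSubtypeOf` KnownRank r' = r == r'
    _            `isSubtypeOf` AnyShape     = True
    _            `isSubtypeOf` _            = False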

Code Generation for Array Operations with Partial Shape Information

Being less restrictive within the type system requires the compiler to create code for arrays with various levels of static shape information attached to them. In order to achieve as much runtime efficiency as possible, a new concept for code generation has been developed. It is based on the idea of parameterizing the code generation by the different levels of genericity determined by the hierarchy of array types. Most of this work will be found in the upcoming PhD thesis of Dietmar Kreye.
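
The core of the idea can be sketched as a dispatch on how much shape information is statically available: the more is known, the more specialized the code the compiler emits. All names in the following Haskell fragment are hypothetical illustrations of this parameterization, not the actual scheme developed in that thesis.

    -- The statically available shape information (cf. the type
    -- hierarchy sketched above).
    data ShapeInfo = ExactShape [Int] | KnownRank Int | AnyShape

    -- Code variants of decreasing specialization and, typically,
    -- decreasing runtime efficiency.
    data CodeVariant
      = FullySpecialized [Int]  -- loop nest with constant bounds
      | RankSpecialized Int     -- fixed loop nest, bounds read at runtime
      | FullyGeneric            -- rank-generic, interpretive traversal
      deriving Show

    -- Pick the most specialized variant the shape information permits.
    selectVariant :: ShapeInfo -> CodeVariant
    selectVariant (ExactShape s) = FullySpecialized s
    selectVariant (KnownRank r)  = RankSpecialized r
    selectVariant AnyShape       = FullyGeneric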

Sven-Bodo Scholz
Last modified: Jan 24, 2005