Waterloo Programming Languages Seminars

Previous terms: S06 W06

Current term:

Date Speaker Title (abstracts below) Location
Sept. 27, 2006 Nomair Naeem Programmer-Friendly Decompiled Java DC 3314
Oct. 11, 2006 Ghulam Lashari Control Flow Virtualization for General-Purpose Computation on Graphics Hardware DC 3314
Oct. 25, 2006 Mike Chase On the Near-Optimality of List Scheduling Heuristics for Local and Global Instruction Scheduling DC 3314
Nov. 03, 2006 at 1:00-2:00pm Peter Thiemann A Type-Based Program Analysis Framework for Linear and Affine Resources EIT 3142
Nov. 08, 2006 Gordon Cormack MABEL: A Beginner's Programming Language DC 3314
Nov. 22, 2006 Ben Korvemaker Run-time Code Replacement DC 3314
Nov. 29, 2006 Stephen Curial TBA DC 3314

Seminars are Wednesdays at 3:00 pm, unless otherwise indicated.

All are welcome.

Talk announcements are also posted to a public mailing list.

To volunteer to give a talk, contact Ondřej Lhoták. Talks can range from informal to formal.

The purpose is for us all to see what people are interested in and to motivate discussion.

Abstracts

Programmer-Friendly Decompiled Java

Nomair Naeem
Sept. 27, 2006
Java decompilers can produce good quality source code for bytecode produced by the standard javac compiler. However, most Java decompilers fail on bytecode produced from other sources, e.g. AspectJ, optimizers, and obfuscators. We introduce Dava, a decompiler for arbitrary bytecode. Our work focuses on improving the quality of decompiled code for better program comprehension. We show how pattern matching on an AST representation of decompiled Java code can be used to simplify control flow. Further, we present a Structure Flow Analysis framework for Java which can be used to implement compiler data flow analyses. Leveraging flow analysis information, we perform complex AST transformations enabling program deobfuscation and simplification.
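
As a rough illustration of the kind of AST pattern matching involved, the sketch below rewrites one control-flow shape that decompiled bytecode often exhibits: an if with an empty then-branch. The node classes (Stmt, Expr, If, Block, Not) are invented for the example and are not Dava's actual representation.

    import java.util.List;

    // Hypothetical AST node classes, invented for this sketch.
    abstract class Stmt {}
    abstract class Expr {}

    class Not extends Expr {
        final Expr inner;
        Not(Expr inner) { this.inner = inner; }
    }

    class Block extends Stmt {
        final List<Stmt> stmts;
        Block(List<Stmt> stmts) { this.stmts = stmts; }
    }

    class If extends Stmt {
        final Expr cond;
        final Stmt thenBranch, elseBranch; // elseBranch may be null
        If(Expr cond, Stmt thenBranch, Stmt elseBranch) {
            this.cond = cond; this.thenBranch = thenBranch; this.elseBranch = elseBranch;
        }
    }

    class Simplifier {
        // Pattern: "if (c) {} else S"  ==>  "if (!c) S".
        // Bytecode-level control flow frequently leaves such shapes behind.
        static Stmt simplify(Stmt s) {
            if (s instanceof If) {
                If i = (If) s;
                if (isEmpty(i.thenBranch) && i.elseBranch != null) {
                    return new If(new Not(i.cond), i.elseBranch, null);
                }
            }
            return s;
        }

        static boolean isEmpty(Stmt s) {
            return s instanceof Block && ((Block) s).stmts.isEmpty();
        }
    }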

Control Flow Virtualization for General-Purpose Computation on Graphics Hardware

Ghulam Lashari
Oct. 11, 2006
Graphics processing units (GPUs) have become fully programmable in the last few years, and have more computational power and memory bandwidth than traditional microprocessors (CPUs). However, GPUs have hard limits on various resources, making general-purpose computation on GPUs (GPGPU) difficult to implement. We focus on overcoming limits on control flow constructs. Current GPUs have either no control flow, or tight limits on loop nesting depth and iteration count. We present control flow virtualization techniques that allow a program with arbitrary control flow to be executed on a GPU, either by partitioning the program into passes and performing the control flow on the CPU, or by restructuring the program to fit the control flow limits of the GPU. Along with virtualization of other GPU resources, our approach enables efficient execution of a large class of data parallel problems on GPUs.
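
A minimal sketch of the first strategy, performing the control flow on the CPU, appears below. The GpuPass interface and its methods are hypothetical stand-ins for a real GPU API, not the system described in the talk.

    // Hypothetical GPU API for this sketch; real GPGPU code would drive a
    // graphics API and shaders.
    interface GpuPass {
        void run();                 // execute one straight-line kernel over the data
        boolean readBackDoneFlag(); // read back a reduction that decides termination
    }

    class CpuDriver {
        // Executes "do body while (!done)" on hardware with no loop support:
        // the loop lives on the CPU, which keeps issuing GPU passes until a
        // GPU-computed predicate, read back each iteration, says stop.
        static void runVirtualizedLoop(GpuPass body) {
            do {
                body.run();
            } while (!body.readBackDoneFlag());
        }
    }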

On the Near-Optimality of List Scheduling Heuristics for Local and Global Instruction Scheduling

Mike Chase
Oct. 25, 2006
Modern architectures allow multiple instructions to be issued at once and have other complex features. To account for this, compilers perform instruction scheduling after generating the output code. The instruction scheduling problem is to find an optimal schedule given the limitations and capabilities of the architecture. While this can be done optimally, a greedy algorithm known as list scheduling is used in practice in most production compilers.
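
For reference, the sketch below shows the shape of a greedy list scheduler: keep a ready list of instructions whose predecessors have all been scheduled, and each cycle issue the highest-priority ready instructions. It assumes unit latencies and a simple issue-width model; the Instr class and the priority field (which a heuristic such as critical-path height would fill in) are illustrative only.

    import java.util.*;

    // Illustrative instruction node in the dependence DAG.
    class Instr {
        final String name;
        final List<Instr> succs = new ArrayList<>();
        int preds = 0;    // count of not-yet-scheduled predecessors
        int priority = 0; // set by a heuristic, e.g. critical-path height

        Instr(String name) { this.name = name; }

        void dependsOn(Instr pred) { pred.succs.add(this); preds++; }
    }

    class ListScheduler {
        // Greedy list scheduling, assuming unit latencies: each cycle,
        // issue up to `width` of the highest-priority ready instructions.
        static List<Instr> schedule(List<Instr> instrs, int width) {
            PriorityQueue<Instr> ready =
                new PriorityQueue<>((a, b) -> b.priority - a.priority);
            for (Instr i : instrs) if (i.preds == 0) ready.add(i);

            List<Instr> order = new ArrayList<>();
            while (!ready.isEmpty()) {
                List<Instr> issuedThisCycle = new ArrayList<>();
                for (int slot = 0; slot < width && !ready.isEmpty(); slot++)
                    issuedThisCycle.add(ready.poll());
                for (Instr i : issuedThisCycle) {
                    order.add(i);
                    for (Instr s : i.succs)   // release successors; with unit
                        if (--s.preds == 0)   // latency they become ready for
                            ready.add(s);     // the next cycle
                }
            }
            return order;
        }
    }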

List scheduling is generally regarded as being near-optimal in practice, provided a good choice of heuristic is used. However, previous work comparing a list scheduler against an optimal scheduler either makes the assumption that an idealized architectural model is being used or uses too few test cases to strongly prove or disprove the assumed near-optimality of list scheduling. It remains an open question whether or not list scheduling performs well when scheduling for a realistic architectural model.

Using constraint programming, we developed an efficient optimal scheduler capable of scheduling even very large blocks from a popular benchmark suite in a reasonable amount of time. We improved the architectural model and optimal scheduler by allowing for an issue width not equal to the number of functional units, instructions that monopolize the processor for one cycle, and instructions that are not fully pipelined. We then evaluated the performance of list scheduling under this more realistic architectural model.

We found that when scheduling basic blocks under a realistic architectural model, only 6% or fewer of the schedules produced by a list scheduler are non-optimal, but when scheduling superblocks, at least 40% of the schedules produced by a list scheduler are non-optimal. Furthermore, when the list scheduler and the optimal scheduler differed, the optimal scheduler improved schedule cost by at least 5% on average, with maximum improvements of 82%. This suggests that list scheduling is a viable solution in practice only when scheduling basic blocks. When scheduling superblocks, the advantage of a list scheduler is its speed, not the quality of the schedules it produces, and alternatives to list scheduling should be considered.

A Type-Based Program Analysis Framework for Linear and Affine Resources

Peter Thiemann
Nov. 03, 2006
Java(X) is a framework for type-based program analysis for Java. Java(X) augments Java's type language with annotations drawn from an algebra X, polymorphism, and structural subtyping in terms of the annotations.

The main innovation of Java(X) is its concept of activity annotations. An activity annotation enhances a classical value flow annotation with linear propagation, which facilitates strong updates. Thus Java(X) can perform protocol checking as well as refinement typing with high accuracy. Aliasing is addressed with a novel splitting relation on types.
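
To make the protocol-checking use case concrete, the sketch below shows the kind of typestate property an activity-annotation analysis can verify. The bracketed state-transition comments stand in for Java(X) annotations; the talk's actual notation is not shown here.

    // Illustrative file protocol: open before read, no read after close.
    // The [pre -> post] comments play the role of activity annotations;
    // linear tracking of the receiver permits the strong update of its state.
    class ProtocolFile {
        private boolean open = false;

        // [closed -> open]
        void open() { open = true; }

        // [open -> open]
        int read() {
            if (!open) throw new IllegalStateException("read on closed file");
            return 0;
        }

        // [open -> closed]
        void close() { open = false; }
    }

    class Client {
        static void ok(ProtocolFile f)  { f.open(); f.read(); f.close(); }
        // A checker in the style described above would reject this call
        // statically: read() requires state "open", but f is "closed".
        static void bad(ProtocolFile f) { f.read(); }
    }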

MABEL: A Beginner's Programming Language

Gordon Cormack
Nov. 08, 2006
I will give an informal description of MABEL, a programming language designed in 1976, and discuss its design and implementation in terms of the design principles and technologies of then and now.

Audience members wishing to prepare rotten tomatoes and the like may find raw material here: http://doi.acm.org/10.1145/1061719.1061725

Run-time Code Replacement

Ben Korvemaker
Nov. 22, 2006
Patching programs at run time allows software to be updated without restarting the running program. Long-running programs and services that must maintain high availability both benefit from run-time code replacement. This talk will cover the progress of run-time patching over the last 30 years, and how previous work relates to the problem of run-time code replacement in multi-threaded programs.
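
One classic mechanism, sketched below, routes calls through an indirection point so that a new implementation can be installed while the program runs. The Handler interface is invented for the example, and real systems must additionally deal with in-flight activations and state transfer, which this sketch ignores.

    import java.util.concurrent.atomic.AtomicReference;

    // Run-time replacement via indirection: every call looks up the current
    // implementation, so installing a new one takes effect on the next call.
    interface Handler { String handle(String request); }

    class HotSwap {
        private static final AtomicReference<Handler> current =
            new AtomicReference<>(req -> "v1: " + req);

        static String dispatch(String req) { return current.get().handle(req); }

        // Patch point: atomically install the new code. Threads already
        // executing the old version finish inside it; new calls see v2.
        static void replace(Handler next) { current.set(next); }

        public static void main(String[] args) {
            System.out.println(dispatch("ping")); // v1: ping
            replace(req -> "v2: " + req);
            System.out.println(dispatch("ping")); // v2: ping
        }
    }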