
Compiling and Optimizing Node Programs

by admin last modified 2007-12-18 04:55
We have pursued methods for compilation on individual Grid nodes under the aegis of the VGrADS programming tools effort. This work enables VGrADS applications to use source code, compiled on the VG node, as grid program components. In effect, the node compiler becomes a just-in-time (JIT) compiler from the perspective of VGrADS execution. This effort aims to support efficiency in the basic components of Grid applications in two ways.

Heterogeneity in the Virtual Grid (VG) — One goal of the VGrADS project is to create a set of tools that let the programmer target an abstract, or virtual, grid while ignoring the specific physical machines on which the code will execute. This virtualization should include the ability to run the code on a heterogeneous collection of machines, specifically machines with different processor architectures.

To support processor heterogeneity, we need a compiler that can both target multiple architectures and serve as an experimental platform. After evaluating several possibilities, including GCC, we selected the Low Level Virtual Machine (LLVM) as the platform for our experiments. We have since worked to improve the quality of its generated code by implementing new register allocators and instruction schedulers. Our most significant result to date in this vein is a new register allocator tuned to JIT contexts. Because of LLVM's careful attention to retargetability, the allocators work across multiple architectures with almost no change. As a result of this work, we expect LLVM to be a viable compiler for VGrADS experiments, which will, in turn, let us experiment directly with runtime re-optimization.
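The text does not describe the new allocator's algorithm, but the classic low-compile-cost approach for JIT contexts is linear-scan register allocation, which makes a single pass over sorted live intervals instead of building an interference graph. The sketch below (in Python rather than LLVM's C++, with illustrative interval data and register counts) shows the general technique; it is not the VGrADS implementation.

```python
def linear_scan(intervals, num_regs):
    """Minimal linear-scan register allocation sketch.

    intervals: iterable of (name, start, end) live intervals.
    Returns (assignment, spilled): a map from interval name to a
    physical register index, and the set of spilled interval names.
    """
    active = []                     # live intervals as (name, end), sorted by end
    free = list(range(num_regs))    # available physical registers
    assignment, spilled = {}, set()

    for name, start, end in sorted(intervals, key=lambda iv: iv[1]):
        # Expire intervals that ended before this one starts,
        # returning their registers to the free pool.
        still_active = []
        for a_name, a_end in active:
            if a_end <= start:
                free.append(assignment[a_name])
            else:
                still_active.append((a_name, a_end))
        active = still_active

        if free:
            assignment[name] = free.pop()
            active.append((name, end))
            active.sort(key=lambda a: a[1])
        else:
            # No register free: spill whichever interval lives longest.
            victim, victim_end = active[-1]
            if victim_end > end:
                assignment[name] = assignment.pop(victim)
                spilled.add(victim)
                active[-1] = (name, end)
                active.sort(key=lambda a: a[1])
            else:
                spilled.add(name)
    return assignment, spilled
```

With two registers and intervals ("a", 0, 8), ("b", 1, 3), ("c", 2, 10), ("d", 4, 6), the allocator spills "c" (the longest-lived interval competing for a full register file) and reuses "b"'s register for "d" once "b" expires. The single sorted pass, with no interference graph, is what keeps compile-time overhead low enough for JIT use.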

Runtime Re-optimization — Poor single-processor performance is a significant challenge for any Grid-based computation. In practice, underperformance of one or more processors can negate the benefits of scheduling and, in extreme cases, of running the task in parallel at all. One way the runtime system can respond to underperformance is to re-optimize the program in ways that capitalize on knowledge that was not available at compile time.

Runtime re-optimization (and its intellectual cousin, just-in-time compilation) has received a great deal of attention in the literature over the last ten years. These ideas have proven profitable in high-overhead environments, such as a Java virtual machine. In low-overhead scientific environments, it is harder to demonstrate actual improvements from runtime code rewriting. Working inside LLVM, we are developing a strategy for statically planned, dynamically applied optimization: the compiler plans alternative code sequences at compile time, and the runtime rewrites the program to use them in response to poor performance. This approach has the potential to achieve some of the benefits and flexibility of runtime re-optimization while imposing a much lower runtime cost. Anshuman DasGupta presented a Ph.D. thesis proposal on this topic in May 2005.
