July 16, 2008
If anyone knows how to introduce a new programming language, it's Sun Microsystems. The company's highly successful Java language, released in 1995, has become ubiquitous in network-centric and embedded computing. Today, a whole research team at Sun Labs is devoted to programming languages, and its big project in recent years has been the development of the Fortress programming language. The end game is to "do for Fortran what Java did for C."
Unlike Java, though, Fortress is geared toward HPC applications, with programmability as a major design goal. The language maintains a high level of abstraction, allowing the developer to focus on the algorithm rather than the underlying hardware. And even though Fortress specifically targets high-end technical computing, it is also applicable to large-scale parallel applications of almost any type. "We were looking for a language that was good for multicore, for supercomputing, and for everything in between," explains Eric Allen, principal investigator of the programming languages research group at Sun Labs.
The project began in 2003 and was originally funded out of DARPA's High Productivity Computing Systems (HPCS) program. When Sun was dropped from the program in Phase III, Sun Labs took over Fortress R&D completely. But since Sun has made Fortress an open source project, the company has received a lot of outside help from universities and other researchers who have contributed to the design and implementation of the language. The University of Tokyo, the University of Virginia, and the University of Aarhus in Denmark are all developing new Fortress libraries, while Rice University has been working on compiler optimizations.
Although the basic foundation is now fairly stable, the language specification is not written in stone. Version 1.0 of the compiler and runtime was launched in April of this year and represents a prototype for users who would like to kick the tires and offer some feedback. According to Allen, Sun is updating the spec as new features are added or current ones are refined and is incorporating the changes into the language as appropriate. The intention is to release new distributions every few months. Allen says a production version of the compiler is expected in 2010, or thereabouts.
The current prototype runs on top of a standard Java Virtual Machine (JVM), so just about anyone with a computer can give Fortress a whirl. Sun offers the latest distribution free on their Project Fortress site. For performance reasons, Allen expects that at some point more of the runtime will be statically compiled rather than interpreted, but right now the convenience of the JVM is enabling widespread experimentation. He says they've already received a lot of good suggestions, especially from the academic community.
Allen himself teaches a programming course using Fortress at UT Austin. According to him, students there are enthusiastic about writing code in it and are amazed at how concise Fortress programs are compared with other languages they've used.
The language itself supports both task and data parallelism. Most of the constructs assume concurrency unless the programmer explicitly specifies sequential execution, so parallel computation happens under the covers as a result of ordinary source code execution (assuming the underlying platform has more than a single core). For example, basic operations like for-loops are parallelized by default. Even the arguments passed to a function are evaluated in parallel. "In fact, everywhere where we could possibly add parallelism into the language, we added it," says Allen.
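To make the default concrete, here is a minimal sketch of the two loop forms, based on the generator syntax in the published Fortress specification; treat the details as illustrative rather than authoritative:

```fortress
(* Iterations of an ordinary for loop may run in parallel,
   so the numbers can print in any order. *)
for i <- 1:10 do
  println(i)
end

(* Wrapping the generator in seq() forces sequential,
   in-order execution. *)
for i <- seq(1:10) do
  println(i)
end
```

Note that opting *out* of parallelism is the explicit act, a deliberate inversion of the convention in most languages.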
The runtime implicitly farms out computations to the available processor cores using a fine-grained threading model. As cores become idle, the runtime transparently steals work from overloaded parts of the system and moves those computations to the unused cores. The language also provides for explicit threading under the control of the programmer. Atomic operations are executed using a transactional memory scheme instead of old-style locks.
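A rough sketch of the explicit constructs, again following the Fortress specification's syntax as a guide (the helper function is hypothetical): `spawn` creates a thread whose result can be fetched later, and an `atomic` block runs as a transaction rather than under a lock.

```fortress
(* Explicit threading: spawn returns a handle; val() waits
   for and retrieves the result of the spawned expression.
   expensiveComputation is a hypothetical placeholder. *)
t = spawn do
      expensiveComputation()
    end
result = t.val()

(* Atomic update via transactional memory; if two threads
   conflict, one transaction is retried transparently. *)
total: ZZ32 := 0
atomic do
  total := total + 1
end
```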
For clusters, where the locality of the computation becomes an issue, the language has both implicit and explicit methods of distributing data. By default, Fortress arrays are spread across a system with the default arrangement determined by the Fortress libraries. This allows the implementation to use target-specific libraries for machines with similar locality characteristics. Fortress also has the notion of a "distribution," which permits the programmer to explicitly specify both distribution of data and locality information for scheduling.
Probably the most distinguishing feature of Fortress is its support for mathematical notation. The goal here is to make the step from algorithm specification to source code as short as possible. To do this, the language supports 16-bit Unicode characters and specifies ASCII keyboard sequences that are rendered into mathematical notation. The current Fortress distribution includes an extension to the Emacs text editor that will convert these keyboard sequences as they are typed. The language designers' devotion to this type of notation created some challenges for the compiler's parser. For example, the use of whitespace between two operands to indicate multiplication (e.g., x y) requires some natural language smarts to determine the intention of the programmer.
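For a flavor of the notation, here are a few ASCII input forms with (as comments) roughly how they render; the renderings are recalled from Sun's documentation and should be treated as approximate:

```fortress
SUM[i <- 1:n] a_i        (* rendered with a big sigma: Σ[i←1:n] aᵢ *)
y = a x^2 + b x + c      (* juxtaposition means multiplication: a·x² + b·x + c *)
rho := r^T r             (* Greek names render as symbols: ρ := rᵀ r *)
```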
Below is an example of Fortress code using some math notation. It's the NAS (NASA Advanced Supercomputing) Conjugate Gradient Parallel function, a well-known HPC benchmark:
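The original figure was an image; as a stand-in, here is a reconstruction of the conjGrad kernel as it appears in Sun's published Fortress talks, written out in ASCII form. Details may differ slightly from the original figure:

```fortress
conjGrad(A: Matrix[\Elt\], x: Vector[\Elt\]): (Vector[\Elt\], Elt) = do
  cgit_max = 25
  z: Vector[\Elt\] := 0
  r: Vector[\Elt\] := x
  p: Vector[\Elt\] := r
  rho: Elt := r^T r
  (* The outer iteration is explicitly sequential; the vector
     and matrix operations inside are implicitly parallel. *)
  for j <- seq(1:cgit_max) do
    q = A p
    alpha = rho / (p^T q)
    z := z + alpha p
    r := r - alpha q
    rho0 = rho
    rho := r^T r
    beta = rho / rho0
    p := r + beta p
  end
  (z, ||x - A z||)
end
```

In rendered form the Greek letters, superscript transposes, and norm bars appear as they would on a whiteboard, which is precisely the point of the notation support.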
Fortress also allows for the creation of new grammars, so many types of domain-specific formulations are possible. For example, the molecular dynamics community could conceivably create a customized syntax for applications under its domain. The language enables these new grammars to be incorporated via library additions.
But with any new language, even a technically superior one, widespread adoption is elusive. Sun believes that maintaining Fortress as an open source project will go a long way toward attracting a larger audience. Allen says they are taking negative feedback seriously and are committed to letting the outside community help shape the design. The company hopes that giving people a stake in the language's evolution will help drive a sense of ownership.
The notion of allowing the language to evolve is one of the central themes of the Fortress designers. Wherever possible, language features have been implemented in libraries rather than in the compiler proper to allow for alternative implementations and a more flexible upgrade path. "Fortran has been around for about 50 years," says Allen. "I think it's incumbent upon any design team for a new language to have that sort of timescale in mind when thinking about how their design is going to weather with time."