October 21, 2005
The advent of the World Wide Web has brought a new world of data, tools, and online services within the reach of scientists. With this wealth of opportunities, however, also comes a new set of challenges: researchers can be overwhelmed by the sheer volume and complexity of resources. To help overcome this, researchers at the San Diego Supercomputer Center (SDSC) at UC San Diego, the National Center for Ecological Analysis and Synthesis at UC Santa Barbara, and their partners have initiated an interdisciplinary collaboration to develop Kepler, a tool for scientific workflow management. By helping organize and automate scientific tasks, Kepler lets scientists take full advantage of today's complex software and Web services.
"For scientists, a good workflow tool is invaluable," said Kim Baldridge, a computational chemist at SDSC and professor at the University of Zurich. "It relieves researchers of the drudgery of tedious manual steps, but more importantly it can dramatically expand our ability to think bigger and ask new questions that were simply too complex or time-consuming before." Her research group has developed modules known as "actors" to carry out workflows in Kepler, leading to published results in the Resurgence project, which makes use of computational grids and Web services in computational chemistry.
Kepler takes its name from UC Berkeley's Ptolemy software, on which it is built, and the new tool is part of the emerging cyberinfrastructure, or integrated technologies for doing today's science. The open-source Kepler project grew out of the need for an analytical workflow tool in the Science Environment for Ecological Knowledge (SEEK) project.
The researchers from SDSC and UCSB who initiated Kepler worked with partners in the Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) program. Kepler has now grown to include more than half a dozen scientific projects, and the researchers plan to release a beta version in the coming months. In its simplest form, Kepler may be thought of as a sort of "scientific robot" that relieves researchers of repetitive tasks so that they can focus on their science. In addition to increasing the efficiency of scientists' own workflows, Kepler will also give researchers increased capabilities to communicate and work together -- searching for, integrating, and sharing data and workflows in large-scale collaborative environments.
"With Kepler, scientists from many disciplines can automate complex workflows, without having to become expert programmers," said Bertram Ludäscher, one of the initiators of the Kepler project and an SDSC Fellow and associate professor of Computer Science at UC Davis. "Kepler's flexibility and its visual programming interface make it easy for scientists to create both low-level plumbing workflows' to move data around and start jobs on remote computers, as well as high-level data analysis pipelines that chain together standard or custom algorithms from different scientific domains. And beyond automation, being able to document and reproduce workflows is a major objective of scientific workflow systems like Kepler."
An important factor in ensuring that Kepler will be broadly useful across multiple scientific disciplines is its organization as an open source consortium. Participants in an open source project collaborate in building, maintaining, and peer-reviewing a common software tool. The source code is made publicly available without charge, and those who use the software are encouraged to contribute to its development -- finding and fixing errors and adding new features that benefit the entire community. The Linux operating system and tools from the Apache Software Foundation are well-known examples of open source efforts.
"The fact that Kepler is open source encourages researchers to join the collaboration and build their own components, leveraging the infrastructure, and providing the vitality of a community approach to more rapidly extend Kepler's capabilities," said Ilkay Altintas, director of SDSC's Scientific Workflow Automation Technologies lab, which brings together scientific workflow efforts at SDSC under one umbrella. Researchers interested in scientific workflow technologies are invited to contact the lab to learn more.
When scientists search for relevant data sources and then undertake multi-step workflows, they must typically carry out and keep track of these complex steps in manual, ad hoc ways as they export and import data from one step to another across diverse environments. As a first step toward automating these tasks, scientists and computer scientists may collaborate on building a custom workflow tool. But this is an expensive and time-consuming process in which the software must generally be developed and maintained separately for each application.
To overcome these limitations, the Kepler initiative is developing a generic tool and environment that builds on existing technologies and will work in a wide range of applications to capture, automate, and manage researchers' actions as they carry out scientific workflows. The initial effort has brought together computer scientists with domain scientists in the disciplines of ecology, biology, chemistry, oceanography, geosciences, nuclear physics, and astronomy.
"With Kepler, we're giving scientists an intuitive tool that they can use to build their own workflows, which can include emerging Grid-based approaches to distributed computation," said Kepler co-initiator Matt Jones, a co-principal investigator and project manager for the SEEK project. "And in order to build a workflow environment that is effective across multiple domains of science, we're working with a growing range of projects to ensure the widest possible usefulness of the infrastructure."
In addition to UC Berkeley's Ptolemy project, described below, which serves as the framework for Kepler, the collaboration currently includes projects spanning a range of scientific fields.
Kepler is used in a wide variety of ways in these projects. In the Encyclopedia of Life project, the integrated Genome Annotation Pipeline software uses the Application Level Scheduling Parameter Sweep Template (APST) in month-long grid computing jobs that would be far more difficult to manage without a workflow tool. First, Kepler prepares the databases and submits the computing jobs. It then switches to a monitoring mode that checks on the execution and updates the corresponding database. In the event of a failure, the most recent update can be retrieved from the database, greatly simplifying recovery. Kepler also makes it easy for scientists to launch a new task simply by double-clicking on an existing task and changing its parameters. All of these capabilities let scientists accomplish genomic research much more rapidly.
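The submit-then-monitor pattern the pipeline relies on can be reduced to a short, self-contained sketch. Everything in it is an illustrative stand-in: a plain file plays the role of the status database, and SimulatedJob plays the role of a real grid job; none of the names come from the Kepler or APST APIs.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

/** Illustrative stand-ins only: a plain file plays the status database,
 *  and SimulatedJob plays a real grid job. Not Kepler or APST code. */
public class MonitorSketch {
    static class SimulatedJob {
        private int step = 0;
        boolean isFinished() { return step >= 3; }
        String state() { return "step-" + step++; }
    }

    public static void main(String[] args) throws Exception {
        Path checkpoint = Path.of("job-status.txt");
        // Recovery: if a previous run failed, resume from the last record
        // instead of repeating weeks of computation.
        if (Files.exists(checkpoint) && Files.size(checkpoint) > 0) {
            List<String> lines = Files.readAllLines(checkpoint);
            System.out.println("Resuming after: " + lines.get(lines.size() - 1));
        }
        SimulatedJob job = new SimulatedJob(); // stands in for job submission
        while (!job.isFinished()) {
            // Monitoring mode: record progress after each check.
            Files.writeString(checkpoint, job.state() + System.lineSeparator(),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
            Thread.sleep(100); // a real monitor would poll far less often
        }
        System.out.println("Job complete.");
    }
}
```

The checkpoint file is what gives the recovery property described above: a restarted run reads the last recorded state rather than starting the month-long computation over.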
To advance genomic research, biologists in the DOE SciDAC program study co-regulated genes. In their research, they try to identify promoters and develop models of the transcription factor binding sites that play key roles in gene expression. The scientists use Kepler to help execute a series of data analysis and querying steps in which they move the results of each successive step from one Web resource to another. By automating these steps, the researchers save hours or days of time, speeding their results and allowing them to tackle problems on larger scales than previously possible.
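Stripped to its essence, each of those manual steps is a query whose output becomes the input of the next. A minimal Java sketch of that hand-off follows; the endpoints and parameters are placeholders invented for this example, not services the project actually uses.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

/** Toy illustration of chaining two Web resources: the result of the first
 *  query is fed to the second, rather than being copied across by hand.
 *  Both URLs are placeholders, not real service endpoints. */
public class ChainSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Step 1: query the first resource (e.g., a promoter database).
        HttpResponse<String> first = client.send(
                HttpRequest.newBuilder(
                        URI.create("https://example.org/promoters?gene=abc")).build(),
                HttpResponse.BodyHandlers.ofString());

        // Step 2: pass that result to the next resource in the pipeline
        // (e.g., a binding-site model).
        String id = URLEncoder.encode(first.body().trim(), StandardCharsets.UTF_8);
        HttpResponse<String> second = client.send(
                HttpRequest.newBuilder(
                        URI.create("https://example.org/binding-sites?promoter=" + id)).build(),
                HttpResponse.BodyHandlers.ofString());

        System.out.println(second.body());
    }
}
```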
Ecologists in the SEEK project study problems that include invasive diseases such as West Nile virus, which is spread through mosquitoes feeding on migrating birds in a complex dual-vector process. The researchers develop predictions for where and how fast this kind of disease will spread. To do this, ecologists access online data sets about where the mosquitoes and birds are observed to live and migrate. Then they use web-based ecological niche modeling tools to correlate this information with climate data, computing predictions for where the birds and mosquitoes are likely to be found.
Automating these steps with Kepler can make it feasible to produce accurate predictions for the spread of an invasive disease far more quickly than previously possible. Automating workflows can yield similar benefits in a wide range of other scientific fields, and a growing number of projects and individuals are contributing to the Kepler open source project. More information on members and contributors can be found on the Kepler website at http://kepler-project.org/.
To explore whether they could build on existing technologies, the Kepler team surveyed available tools. Ptolemy, a project of the Center for Hybrid and Embedded Software Systems led by UC Berkeley professor Edward Lee, focuses on modeling and simulation as well as design of concurrent, real-time embedded computing systems. The Kepler team realized that although Ptolemy had been developed for a different purpose, it had capabilities that would provide a mature platform for the needs of Kepler in designing and executing scientific workflows. Ptolemy II, published as open source software, is the current base version of the Kepler infrastructure.
Ptolemy provides a set of Java packages that support heterogeneous, concurrent modeling, design, and execution. Among Ptolemy's strengths is support for a number of precisely defined models of computation, such as streaming and a concurrent dataflow paradigm for process networks, that are appropriate for modeling and executing many scientific workflows. Ptolemy's programming approach is activity-based, or "actor-oriented" in Ptolemy terminology, which makes it easier to design the reusable components that scientists need.
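To make the actor-oriented approach concrete, here is a minimal actor written against Ptolemy II's public API (TypedAtomicActor, TypedIOPort). The actor itself, a toy that doubles each value it receives, is invented for illustration.

```java
import ptolemy.actor.TypedAtomicActor;
import ptolemy.actor.TypedIOPort;
import ptolemy.data.DoubleToken;
import ptolemy.data.type.BaseType;
import ptolemy.kernel.CompositeEntity;
import ptolemy.kernel.util.IllegalActionException;
import ptolemy.kernel.util.NameDuplicationException;

/** A toy actor that doubles each value it receives. */
public class Doubler extends TypedAtomicActor {
    public TypedIOPort input;
    public TypedIOPort output;

    public Doubler(CompositeEntity container, String name)
            throws IllegalActionException, NameDuplicationException {
        super(container, name);
        // Declare one input and one output port, both carrying doubles.
        input = new TypedIOPort(this, "input", true, false);
        input.setTypeEquals(BaseType.DOUBLE);
        output = new TypedIOPort(this, "output", false, true);
        output.setTypeEquals(BaseType.DOUBLE);
    }

    /** Called by the workflow's director each time the actor fires. */
    @Override
    public void fire() throws IllegalActionException {
        super.fire();
        if (input.hasToken(0)) {
            double v = ((DoubleToken) input.get(0)).doubleValue();
            output.send(0, new DoubleToken(2.0 * v));
        }
    }
}
```

Because the ports are the actor's only interface, the same component can be dropped unchanged into any workflow whose director supplies it with double-valued tokens, which is precisely the reusability the actor-oriented approach is after.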
Ptolemy also has an intuitive graphical user interface called Vergil that allows users to compose complex workflows simply by stringing together individual actors, linking them according to the flow of data, and nesting them to represent desired levels of abstraction. In addition to Ptolemy's considerable built-in capabilities, which include more than 100 actors (or processing components) and directors (or workflow engines), the Kepler collaborators are continually adding new ones that extend the system, and have already contributed more than 100 additional actors.
To capture the actions that scientists carry out in conducting their research and to automate these steps, Kepler describes the flow of data from one analytical step to another in a formal, computer-readable workflow language, as the sketch below illustrates.
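A short example shows what that looks like in practice. The sketch below, again written against Ptolemy II's public API, assembles a toy two-actor workflow in Java, runs it, and prints the formal XML description (MoML) that the system records for it; the workflow itself is invented for illustration.

```java
import ptolemy.actor.Manager;
import ptolemy.actor.TypedCompositeActor;
import ptolemy.actor.lib.Ramp;
import ptolemy.actor.lib.Recorder;
import ptolemy.domains.sdf.kernel.SDFDirector;

/** Builds a toy two-actor workflow in code, runs it, and prints the
 *  machine-readable MoML description that records the workflow. */
public class TinyWorkflow {
    public static void main(String[] args) throws Exception {
        TypedCompositeActor top = new TypedCompositeActor();
        top.setName("tiny");

        // A dataflow director schedules the actors; run five iterations.
        SDFDirector director = new SDFDirector(top, "director");
        director.iterations.setExpression("5");

        // Wire a token source to a sink that records what it receives.
        Ramp source = new Ramp(top, "source");
        Recorder sink = new Recorder(top, "sink");
        top.connect(source.output, sink.input);

        // Execute the workflow, then print its formal description.
        Manager manager = new Manager(top.workspace(), "manager");
        top.setManager(manager);
        manager.execute();
        System.out.println(top.exportMoML());
    }
}
```

It is this saved description, rather than the running Java objects, that lets a workflow be rerun, shared, or archived independently of the session that created it.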
As they resolve the technical challenges that arise in developing Kepler, the developers must work closely with domain scientists to ensure that the resulting software meets scientists' needs.
Beyond automating the steps of a given project, workflows captured in Kepler are intended to promote communication and collaboration for scientists in diverse domains -- a crucial capability for today's large-scale interdisciplinary collaborations.
"Through its systematic approach to scientific workflows, Kepler can fulfill the important function of publishing analyses, models, data transformation programs, and derived data sets," said Kepler co-initiator Jones. "This gives scientists a way to track the provenance of derived data sets produced through workflow transformations, which is essential to being able to identify appropriate data sets for integration and further research."
In addition to distributing new Kepler actors that automate specific tasks, scientists can publish the results of workflows, storing the formal workflow descriptions of the steps carried out in a web-accessible repository such as one of the metadata catalogs that are part of the SEEK EcoGrid. Kepler developers are working on extensions that will allow scientists to easily publish their workflows and share them with colleagues in flexible ways.
"We're now starting to add semantic capabilities to Kepler," said Shawn Bowers, project scientist at the UC Davis Genome Center, where he works with Ludäscher on workflow and data integration technology for the SEEK project. "These include domain-specific ontologies acting as semantic types' for datasets, which will let scientists use the concepts of their own fields to search for and discover data and services, link to, and integrate data sets in both local and distributed grid environments."
Scientists are also interested in the potential of Kepler and related tools to power comprehensive "science environments," which they envision growing at an accelerating pace as scientists become able, rapidly and seamlessly, to discover and build on the previous work of their own and collaborating groups.
"As the Kepler environment gains momentum and becomes more robust and reliable," said SDSC's Altintas, "the body of resources that scientists can build upon grows larger, and more groups and scientific domains are joining this open collaboration."
Paul Tooby is a senior science writer at SDSC and editor of SDSC EnVision Magazine.
Participants: Ilkay Altintas, Kim Baldridge, Zhijie Guan, Efrat Jaeger-Frank, Nandita Mangal, Steve Mock, SDSC; Shawn Bowers, Bertram Ludäscher, UC Davis; Chad Berkley, Daniel Higgins, Matt Jones, Jing Tao, UCSB; Christopher Brooks, Edward A. Lee, Stephen Neuendorffer, Yang Zhao, UCB; Zhengang Cheng, Mladen Vouk, NCSU; Tobin Fricke, U. of Rochester; Timothy McPhillips, NDDP; A. Town Peterson, Rod Spears, U. Kansas; Kim Baldridge, Wibke Sudholt, U. Zurich; Terence Critchlow, Xiaowen Xin, LLNL.
URL: Kepler Scientific Workflow Project - http://kepler-project.org/