June 26, 2012
Cloud computing offers highly flexible and available compute capacity for a variety of applications. However, provisioning those resources can be complicated, requiring a level of expertise that holds back cloud adoption. Techila was founded in 2005 to simplify cloud use, developing middleware to bridge applications to external compute resources.
HPC in the Cloud spoke with Tuomas Eerola, vice president and partner at Techila, about the company's software and its uses. He mentioned that Techila was not looking to compete with MPI-based HPC applications, but noted that a growing number of HPC users are able to accomplish their workloads through the cloud:
If we look to the markets for high performance computing which are growing fastest at the moment, they are pretty much all in new science areas… That is systems biology, geosciences, oil and gas, bio pharmaceutical, financial engineering and that kind of area. People who are working in these sciences, they are not computer scientists. We developed a solution, which we wanted to make as easy to use as possible for the regular John Doe on the street.
The software handles spinning up cloud instances, configuring .dll files and managing possible service interruptions by migrating tasks to alternate instances. The company also serves the financial and pharmaceutical industries, which typically adopt stringent security standards. To meet their requirements, the software offers end-to-end secured connections along with certificate signing, execution policies and logging functionality.
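Techila's middleware is proprietary, but the failover behavior described here, re-running a task on another instance when its host is interrupted, can be sketched in plain Python. All names below are illustrative, not Techila's API:

```python
def run_with_migration(task, instances, max_attempts=3):
    """Run a task, migrating it to another instance if one is interrupted.

    `task` is a callable taking an instance name; `instances` is a list of
    available instance names. Purely illustrative -- not Techila's API.
    """
    for instance in list(instances)[:max_attempts]:
        try:
            return task(instance)
        except InterruptedError:
            # The instance was preempted or lost: fall through and retry
            # the same task on the next available instance.
            continue
    raise RuntimeError("task failed on all available instances")

def flaky_task(instance):
    # Simulate a service interruption on one particular instance.
    if instance == "worker-1":
        raise InterruptedError
    return f"completed on {instance}"

print(run_with_migration(flaky_task, ["worker-1", "worker-2", "worker-3"]))
# completed on worker-2
```

Real middleware would also checkpoint or re-queue in-flight work, but the core contract is the same: the caller sees a completed task, not the interruption.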
Notable users include the University of Helsinki, Tampere University of Technology and Northern University, Boston.
In the case of Tampere University, its computational neuroscience lab was developing methods and models to understand how neuronal systems function. Researchers wanted to study neuronal systems at the molecular level, the level of individual neurons, and the level of biological neural networks. To achieve this, they devised a set of Monte-Carlo simulations to emulate neural models. They also needed to analyze the data generated from high-throughput measurements. The entire project required roughly seven years' worth of computation. Using Techila's PC-grid software, the scientists tapped cloud resources to complete the simulations in just six days.
While most of the applications Eerola mentioned are familiar territory for cloud services, he stressed that the company's goal is to expand the adoption of high performance resources by making their software available on virtually any device:
I just want to bring fast computing and infinite scalability of compute power to whatever it is we have in our hands. Whether that is with the MacBook on my table or whether it is the fancy phone on my desk, I really don't care.
As a proof of concept, the company recently developed an Android application that brings the same functionality to mobile devices. In a demonstration video, the app provisions 200 cores on Microsoft's Azure infrastructure through the company's platform, yielding a 138x speedup over running a similar job natively on the device.
Given the components found in mobile devices, Techila's platform may find a new set of use cases. For example, smartphone users may want to edit audio and video captured on their phones. Most mobile processors struggle with such compute-intensive applications, so an app developer could offer cloud rendering as a feature to offload the work. For a fee, users could tap cloud resources without moving media from their phones to a separate workstation.
The company has a number of pricing options, which vary depending on how the software is used. For developers, the Techila SDK is available free of charge. Developers who host a service pay on a per-use basis. On the other hand, an end user who hires a systems integrator to build an application on Techila can purchase an end-user license and avoid a "middleman fee." Larger clients, like universities and laboratories, can pay a flat rate for perpetual use.
Currently, users can spin up instances on Windows Azure, Amazon EC2 and IBM SmartCloud Enterprise, or draw on machines in a local network. The software allows multiple resource pools to be used simultaneously, enabling workloads to run in hybrid environments.
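Combining pools this way is what makes a hybrid deployment work: the scheduler sees one queue of tasks and several pools of capacity. A toy dispatcher illustrating the idea (the pool names are taken from the providers mentioned above, but the dispatch logic is invented for illustration):

```python
from itertools import cycle

# Resource pools a hybrid deployment might combine; the round-robin
# policy below is a deliberately simple stand-in for a real scheduler.
pools = ["local-network", "azure", "ec2", "smartcloud"]

def dispatch(tasks, pools):
    """Assign tasks round-robin across resource pools.

    Real middleware would weigh capacity, cost and data locality;
    this toy version just interleaves pools to show simultaneous use.
    """
    return {task: pool for task, pool in zip(tasks, cycle(pools))}

jobs = [f"task-{i}" for i in range(6)]
assignment = dispatch(jobs, pools)
print(assignment)
```

Even this crude policy captures the key property: no pool is privileged, so local machines and multiple clouds can drain the same queue at once.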