June 27, 2010
One of the biggest issues life science CIOs deal with is maintaining their inventory of legacy systems. These companies all have large portfolios of applications built up over the years on a variety of technologies and platforms. Many of these applications are complicated point solutions performing specific functions in the drug or device development process. A good portion of them, because they deal with drug data, formulation, chemistry, patient data or manufacturing, fall under the auspices of 21 CFR Part 11 and have been validated to meet FDA regulatory guidelines.
The crux of the problem is that life science CIOs have to ensure the continuing performance and availability of these older systems and applications. It is not uncommon to walk into the data center of a large pharmaceutical firm and find systems that are 5-10 years old or more still in use as part of the R&D process. This situation causes a whole slew of problems:
- The need to maintain obsolete or no longer supported technologies or ones where the vendor no longer exists
- Expensive data center space tied up with older systems
- Keeping the necessary skilled resources knowledgeable on staff to provide support for these systems
- The associated costs of support, maintenance and licensing when budgets are already being squeezed
- Tying up resources that are maintaining these systems that could better be used elsewhere
- Issues integrating these older systems with newly deployed applications
- Increased power/cooling requirements for older, less efficient systems
- Providing backup/recovery support
Why does this problem exist? There are a number of reasons that life science CIOs are faced with these issues. Many of these systems were deployed after going through the time-consuming and expensive IQ/OQ/PQ Part 11 compliance process, and most companies would rather invest in new systems than spend scarce funds upgrading and re-validating existing ones. Other reasons include:
- FDA requirements that all data on any drug product must be kept for several years after the last sale to the public (think Aspirin here)
- Lack of funds to go through an expensive re-validation process required for any major upgrade of an already validated system
- Multiple company mergers where sufficient funds were never allocated to fully integrate the IT functions
- Scientists who insist on using specific systems they know or are experienced with for their work
- Funding for new projects often does not include money for retiring the system being replaced, so you end up keeping both the old and the new
All of these factors come together so that it is just easier to implement new systems and keep the existing ones alive. The life science CIO ends up watching their data center and personnel resources slowly being chewed up by the need to maintain older validated applications.
What can be done to solve this problem? Prior to the advent of cloud computing, the life science CIO did not have many real alternatives for dealing with these issues. Now they have the ability to move these legacy applications into a cloud-based environment.
So, the question is, how do you go about getting this done? First of all, this is not something to be undertaken lightly. You would need a complete inventory and understanding of your existing legacy application environment to determine which systems are potential candidates for moving to the cloud, how critical those systems are, and what risks are associated with the move. You would also need an existing cloud strategy for your IT organization, along with an assessment of the security risks, how users would still access those legacy applications, and what type of cloud environment would be appropriate and acceptable from a risk and regulatory standpoint. Once you have determined which systems can be migrated, you would need to take the following very high-level steps:
- Create a validated machine image (using the IQ process explained in a prior post) of the complete software environment for the platforms to be migrated; this image can be re-used for any similarly configured systems
- Use database tools or write programs to extract the data from the legacy application
- Create a documented and validated process for the actual transfer of the data that fully ensures there is no data loss or corruption during the move
- Perform the migration itself, keeping a record of all logs, checksums, record totals or whatever other checkpoints are being used to ensure a proper migration
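The verification step above can be sketched in code. This is a minimal, hypothetical illustration, not a validated procedure: it uses in-memory SQLite databases as stand-ins for a legacy source and a migration target, and the table and column names (`batch_records`, `lot`, `result`) are invented for the example. The idea is simply to capture a record count and a deterministic content checksum on both sides of the transfer and confirm they match.

```python
# Hypothetical sketch: verify a data migration by comparing record
# counts and a content checksum between source and target systems.
# Table and column names here are illustrative, not from any real system.
import hashlib
import sqlite3

def table_fingerprint(conn, table, order_by):
    """Return (row_count, sha256 hex digest) over a deterministic dump of a table."""
    digest = hashlib.sha256()
    count = 0
    # A stable ORDER BY makes the checksum reproducible across runs.
    for row in conn.execute(f"SELECT * FROM {table} ORDER BY {order_by}"):
        digest.update(repr(row).encode("utf-8"))
        count += 1
    return count, digest.hexdigest()

# Simulate the legacy source and the migration target with in-memory databases.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
for db in (source, target):
    db.execute("CREATE TABLE batch_records (id INTEGER, lot TEXT, result REAL)")

rows = [(1, "LOT-001", 98.2), (2, "LOT-002", 97.5), (3, "LOT-003", 99.1)]
source.executemany("INSERT INTO batch_records VALUES (?, ?, ?)", rows)

# "Migrate" the data, then fingerprint both sides and compare.
target.executemany("INSERT INTO batch_records VALUES (?, ?, ?)",
                   source.execute("SELECT * FROM batch_records"))

src_count, src_sum = table_fingerprint(source, "batch_records", "id")
dst_count, dst_sum = table_fingerprint(target, "batch_records", "id")
print("counts match:", src_count == dst_count)
print("checksums match:", src_sum == dst_sum)
```

In a real validated migration the counts and digests would be written to the run log as evidence, and any mismatch would halt the process for investigation.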
Migrating legacy applications to the cloud is not something to be done lightly. It takes a real understanding of your existing systems, a disciplined process for the migration itself, and the ability to secure both data and access to these systems once they are migrated.
If you are a life science CIO you would jump at the chance to remove these systems from your on-site application portfolio. Imagine not having to deal with support and hardware issues on obsolete equipment, and reducing your data center footprint, power consumption, and backup requirements. You would also free up critical personnel resources to focus on the goals and objectives of your business, and isn't that what the CIO is supposed to be doing?
Posted by Bruce Maches - June 27, 2010 @ 9:48 AM, Pacific Daylight Time
Former Director of Information Technology for Pfizer's R&D division, current CIO for BRMaches & Associates.