April 20, 2010
I am excited to have this opportunity to blog about the impact and usage of cloud-based HPC technologies in the life sciences industry. I have seen a lot of new technologies come and go in my 32 years in IT, and I feel that cloud computing will have a tremendous positive impact on life science companies. To offer a bit of background, my experience ranges from junior programmer to CIO/Director—I’ve spent more than half my career either working with or consulting for life science companies such as Pfizer, Lilly, Amylin, Abbott Vascular, and Elan.
Since this is the first blog entry, I thought I would provide some background on the industry and its commonly used terms, the drug development R&D process, the FDA regulatory environment, and current industry challenges.
The term “life science” is rather loosely defined but encompasses three basic types of companies:
• Pharmaceuticals – this usually refers to the big multi-product firms such as Pfizer, Lilly, Merck, GlaxoSmithKline, etc.
• Biotech – loosely defined as smaller, research-oriented companies focused on one product or one therapeutic area, many of which have not yet commercialized a product
• Medical device – companies that make products used externally or internally in the body, ranging from tongue depressors to artificial hearts
These types of companies make up the life science industry, and all operate under the regulatory guidelines put forth by the Food & Drug Administration (FDA). The amount of regulatory scrutiny varies greatly depending on the product in question; obviously, a company making bandages or an elbow brace faces a much lower regulatory hurdle than a company making heart valve replacements. The IT organizations in these companies must adhere to the FDA guidelines put forth in the Code of Federal Regulations Title 21 Part 11 – or 21 CFR Part 11 – which defines how systems managing electronic records in life science firms must be validated and verified to ensure that the operation of, and the information in, these systems can be trusted. I will provide more background on 21 CFR Part 11 compliance in another post.
The life science R&D process can be a long and expensive endeavor: current averages for getting a new drug from molecule to market are 10 years and over $900 million per new therapy. It is a terribly expensive process, with many new products or drugs not panning out. Added to this dynamic is the fact that a new drug's patent has a life of only 20 years before generic competition can emerge. Since the patent is filed before the full development process begins, the post-launch exclusivity period usually ranges from 8 to 10 years. This means that every day you can take off the development and approval process can translate into millions of dollars in extra sales for a large blockbuster drug.
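To make the economics concrete, here is a quick back-of-the-envelope calculation. The dollar figure for annual sales is an illustrative assumption (roughly the scale of a blockbuster drug), not industry data; the 20-year patent life and 10-year development timeline come from the averages above.

```python
# Back-of-the-envelope: what is a day of development time worth?
# The $1B/year peak-sales figure is an assumed, illustrative number.

annual_blockbuster_sales = 1_000_000_000   # assume ~$1B/year in peak sales
sales_per_day = annual_blockbuster_sales / 365

patent_life_years = 20                     # patent exclusivity from filing
development_years = 10                     # average time from molecule to market
market_exclusivity_years = patent_life_years - development_years

print(f"Exclusivity remaining after launch: {market_exclusivity_years} years")
print(f"Each day saved in development ~= ${sales_per_day:,.0f} in exclusive sales")
```

Under these assumptions, each day trimmed from the timeline is worth roughly $2.7 million in sales made under patent protection rather than against generic competition, which is why time-to-market pressure dominates so many IT decisions in this industry.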
The product development process varies by type of drug or device but follows these basic steps which I describe at a very high level:
• Phase 1 – following preclinical identification of the molecule and toxicology studies, initial small-scale testing in humans for safety
• Phase 2 – further development, formulation work, and testing in patients for efficacy and dosing
• Phase 3 – full-scale, double-blind clinical trials to confirm efficacy, followed by submission for FDA approval
There are many areas where cloud computing can improve and speed up this process by reducing IT complexity and cost, while allowing R&D organizations to focus on the ‘what’ of the R&D process instead of the ‘how’.
Here are some of the topics related to cloud computing and the life sciences that I will be covering in future blog entries.
• Managing/reducing drug development expenses using cloud based technologies
• Dealing with the increasingly complex R&D process
• How cloud computing can help in reducing time to market for new therapies
• Facilitating FDA regulatory compliance
• How cloud computing can facilitate the increasing use of complex models (e.g., the human genome) in the drug R&D process
• Supporting complex data analytics
• How to better manage the exploding amount of data generated as part of the R&D process
• Migration of legacy systems to the cloud
• Data access, regulatory and security issues
I look forward to exploring these topics in depth and hearing your thoughts and input. If there are other subjects you would like explored, please feel free to let me know.
Posted by Bruce Maches - April 20, 2010 @ 7:48 PM, Pacific Daylight Time
Former Director of Information Technology for Pfizer's R&D division, current CIO for BRMaches & Associates.