November 27, 2012
Watching the competition between public cloud providers is like following a multiplayer ping-pong match – there's a lot of back and forth. On Monday Google returned serve against rival Amazon, revealing upgrades to its infrastructure-as-a-service (IaaS) offering, Google Compute Engine, along with reduced storage pricing and expanded European datacenter support.
When Google Compute Engine debuted in June, it supported just four standard instance types. In the coming weeks, Google will be rolling out 36 additional instance types, and pricing of the four original instances will be cut by 5 percent.
Google Product Management Director Jessie Jiang summarized the new instance categories in the company's announcement.
Google is also decreasing the cost of its standard storage offering by over 20 percent, from $0.12 per GB down to $0.095 per GB (for the first terabyte). And for customers who are willing to trade data availability for a lower price point, Google is announcing Durable Reduced Availability (DRA) storage, at a cost of $0.07 per GB for the first TB.
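As a quick sanity check on those figures, the sketch below (illustrative only; the prices and the "first terabyte" tier are taken from the announcement above) computes the reductions relative to the old standard rate:

```python
# Illustrative check of the storage price cuts described above.

OLD_STANDARD = 0.12   # $/GB for the first TB, standard storage before the cut
NEW_STANDARD = 0.095  # $/GB for the first TB, standard storage after the cut
NEW_DRA = 0.07        # $/GB for the first TB, Durable Reduced Availability (DRA)

def percent_reduction(old: float, new: float) -> float:
    """Return the price reduction as a percentage of the old price."""
    return (old - new) / old * 100

print(f"Standard storage cut: {percent_reduction(OLD_STANDARD, NEW_STANDARD):.1f}%")
# ~20.8% -- consistent with "over 20 percent"
print(f"DRA vs. old standard rate: {percent_reduction(OLD_STANDARD, NEW_DRA):.1f}%")
# ~41.7% below the old standard price, the trade-off being reduced availability
```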
Yet another new service, Object Versioning, is designed to help protect against accidental overwriting or deletion. And Persistent Disk Snapshotting, which lets users create backups that they can transfer around Google datacenters, is also in the works.
Google is actively seeking to expand its European presence. Google App Engine, Google Cloud Storage, and Google Cloud SQL will be accessible from Europe-based datacenters, with Google Compute Engine soon to follow.
Two weeks ago, the search giant announced enhancements to its MySQL-based database service, Google Cloud SQL, including faster performance, larger databases (up to 100GB), and EU availability.
The latest upgrades to Google's cloud portfolio were unveiled the day before Amazon kicked off its first annual user conference, AWS re:Invent, in Las Vegas. Google Compute Engine is still in preview mode and no official launch date has been released by the company.