April 09, 2012
Before EC2, there was S3. On March 14, 2006, Amazon launched its first utility computing service, Simple Storage Service (S3), and within weeks there were 200 million objects packed onto its disk arrays. Now, on the heels of its sixth anniversary, the service is about to hit a major milestone: one trillion objects stored.
In a blog post, Amazon Evangelist Jeff Barr wrote that the storage service contained 905 billion objects at the end of Q1 2012. He also revealed that Amazon S3 routinely handles over 650,000 requests per second, up from 500,000 requests per second just three months earlier.
The service's object count has more than tripled over the last two years. This chart, provided by the company, depicts year-end totals as well as the most recent quarter (which is not shown to scale).
According to Barr:
The S3 object count continued to grow at a rapid clip even after we added object expiration and multi-object deletion at the end of the year. Every day, well over a billion objects are added via the S3 APIs, AWS Import/Export, the AWS Storage Gateway, all sorts of backup tools, and through Direct Connect pipes.
As for what constitutes an object, Amazon's S3 FAQ explains:
The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from 1 byte to 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.
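These limits interact: since an object can reach 5 terabytes but a multipart upload is capped at 10,000 parts (a documented S3 limit), the part size must grow with the object. The sketch below, a hypothetical helper not taken from Amazon's documentation, shows the arithmetic for picking the smallest legal part size:

```python
import math

# Limits from the S3 FAQ: objects up to 5 TB, single PUT up to 5 GB.
# Multipart uploads allow at most 10,000 parts, each 5 MB to 5 GB
# (the 5 MB minimum does not apply to the final part).
MAX_PARTS = 10_000
MIN_PART_SIZE = 5 * 1024**2   # 5 MB
MAX_PART_SIZE = 5 * 1024**3   # 5 GB

def min_part_size(object_size_bytes):
    """Smallest part size that fits the object in at most 10,000 parts."""
    return max(MIN_PART_SIZE, math.ceil(object_size_bytes / MAX_PARTS))

# A full 5 TB object forces parts of roughly 525 MB each:
five_tb = 5 * 1024**4
print(min_part_size(five_tb))  # 549755814 bytes, about 525 MB
```

In practice customers rarely do this bookkeeping by hand; AWS SDKs such as boto3 switch to multipart uploads automatically once a file crosses a configurable size threshold.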
Note to job seekers: with the growth of the storage service, it's only natural that Amazon needs more team members to support the effort. The company lists a number of open positions on both the business and technical sides.