December 14, 2007
SAN JOSE, Calif., Dec. 12 -- BlueArc Corporation, a leader in scalable, high-performance unified network storage, today announced that Brookhaven National Laboratory (Brookhaven), a multi-program laboratory operated for the U.S. Department of Energy, has deployed a BlueArc Titan 2200 cluster with nearly 300 terabytes of storage. The BlueArc Titan system serves as a massively scalable and reliable foundation for the fastest available access to research data, both today and in the future.
"We can't afford to experiment when it comes to storage infrastructure," said Robert Petkus, RHIC/USATLAS Computing Facility, Brookhaven National Laboratory. "As Brookhaven prepares to support some of the world's most important particle physics research next year, we've replaced cutting-edge but inadequate systems with BlueArc Titan 2200 servers that can scale effortlessly and respond consistently to shifts in volume and demand."
Approximately 3,000 scientists, engineers, technicians and support staff, along with another 4,000 or more guest researchers per year, depend on data from the Relativistic Heavy Ion Collider (RHIC) Computing Facility that Brookhaven operates at its U.S. site. Brookhaven also plays a major role in international projects such as the ambitious Large Hadron Collider (LHC) under construction by CERN, the European Organization for Nuclear Research and the world's premier particle physics research laboratory. Data from RHIC experiments is proliferating at an astounding rate, and Petkus anticipates that by 2012, Brookhaven will have more than 4,000 nodes on its storage area network. With so many users and so many ways of accessing data, Petkus sought a unified storage environment and a single vendor to help him retain control over the implementation.
BlueArc offers precisely the combination of record-setting performance and reliability essential to deliver data that maps the speed of change of subatomic matter. Petkus and his team have deployed a two-node BlueArc Titan 2200 cluster with six gigabit connections trunked together and 288 terabytes of Fibre Channel disk capacity. Petkus sees a two-fold advantage in the BlueArc Titan solution's distinctive hardware-based architecture: it supports multiple access protocols without requiring modification to Brookhaven's 2,000-node server farm, both maximizing the value of the laboratory's existing technology investments and supporting future growth.
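For readers gauging the scale of the deployment, a quick back-of-the-envelope sketch may help. It assumes the trunk aggregates six 1 Gb/s Ethernet links -- an interpretation of the figures above, not a BlueArc specification -- and computes the theoretical aggregate throughput:

```python
# Back-of-the-envelope throughput for a link-aggregated trunk.
# Assumption (illustrative only): six 1 Gb/s Ethernet links in one trunk.

GIGABIT = 1_000_000_000  # bits per second per link

def trunk_throughput_mbps(links: int, link_bps: int = GIGABIT) -> float:
    """Theoretical aggregate throughput of a trunk, in megabytes per second."""
    return links * link_bps / 8 / 1_000_000  # bits -> bytes -> MB

print(trunk_throughput_mbps(6))  # six trunked gigabit links -> 750.0 MB/s
```

Real-world throughput would be lower, since link aggregation balances traffic per flow and protocol overhead consumes part of each link's raw capacity.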
"My job is to think ahead as far as I possibly can," said Petkus. "Every node in our storage network is becoming a supercomputer with massive memory and 64-bit architecture. We support huge networks, huge amounts of data and demanding physicists around the world, so I've always got to know what the latest high-performance technologies are and make choices that won't risk our data to unproven systems."
BlueArc is a leading provider of high-performance unified network storage systems to enterprise markets, as well as data-intensive markets such as electronic discovery, entertainment, federal government, higher education, Internet services, oil and gas, and life sciences. BlueArc's products support both network attached storage (NAS) and storage area network (SAN) services on a converged network storage platform. BlueArc enables companies to expand the ways they explore, discover, research, create, process and innovate in data-intensive environments. The company's products replace complex, performance-limited systems with high-performance, scalable and easy-to-use systems capable of handling the most data-intensive applications and environments. Further, the company believes that its energy-efficient design and its products' ability to consolidate legacy storage infrastructures dramatically increase storage utilization rates and reduce its customers' total cost of ownership. Information about BlueArc solutions and services can be found at http://www.bluearc.com.
Source: BlueArc Corp.