Open Flash Platform Storage Initiative Aims to Cut AI Infrastructure Costs by 50%

Today, the Open Flash Platform (OFP) initiative launched with inaugural members Hammerspace, the Linux community, Los Alamos National Laboratory, ScaleFlux, SK hynix, and Xsight Systems. OFP intends to address the requirements of the next wave of data storage for AI.

“The convergence of data creation associated with emerging AI applications coupled with limitations around power availability, hot data centers, and data center space constraints means we need to take a blank-slate approach to building AI data infrastructure,” OFP said in its announcement.

A decade ago, NVMe unleashed flash as the performance tier by disintermediating legacy storage buses and controllers. Now, OFP aims to unlock flash as the capacity tier by disintermediating storage servers and proprietary software stacks. OFP leverages open standards and open source, specifically parallel NFS and standard Linux, to place flash directly on the storage network. Open, standards-based solutions inevitably prevail; by delivering an order of magnitude greater capacity density, substantial power savings, and much lower TCO, OFP accelerates that inevitability.
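To illustrate what "parallel NFS and standard Linux" looks like from a client's perspective, here is a minimal, hypothetical mount command; the hostname, export path, and mount point are placeholders, and the exact options would depend on the deployment:

```shell
# Hypothetical example: mounting an NFS v4.2 export from a flash device
# on the storage network. NFS v4.1+ includes the pNFS (parallel NFS)
# layout extensions, which let clients move data to and from storage
# devices directly rather than through an intermediary storage server.
# "ofp-cartridge.example.com" and the paths are placeholder names.
sudo mkdir -p /mnt/ofp
sudo mount -t nfs4 -o vers=4.2 ofp-cartridge.example.com:/data /mnt/ofp
```

Because pNFS support is part of the stock Linux NFS client, no vendor-specific driver or agent is required on the compute nodes.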

Current solutions are inherently tied to a storage server model that demands excessive resources to drive performance and capability. Designs from today's all-flash vendors are not optimized for maximum flash density, and they tie solutions to the operating life of a processor (typically five years) rather than the operating life of flash (typically eight years). These storage servers also introduce proprietary data structures that fragment data environments into new silos, leading to a proliferation of data copies and adding licensing costs to every node.
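The lifecycle mismatch can be made concrete with a back-of-the-envelope calculation. The five- and eight-year figures come from the text above; the 40-year horizon is chosen for illustration only, as the least common multiple of the two lifetimes:

```python
# Sketch of the refresh-cycle mismatch: capacity that is tied to a
# storage server's processor must be refreshed on the processor's
# schedule, not on the flash media's.

SERVER_LIFE_YEARS = 5   # typical operating life of a storage-server CPU
FLASH_LIFE_YEARS = 8    # typical operating life of flash media

HORIZON_YEARS = 40      # illustrative horizon (LCM of 5 and 8)

# Refresh cycles over the horizon when capacity is tied to the CPU,
# versus when flash is allowed to serve out its full life.
cpu_tied_refreshes = HORIZON_YEARS // SERVER_LIFE_YEARS
flash_life_refreshes = HORIZON_YEARS // FLASH_LIFE_YEARS

print(cpu_tied_refreshes, flash_life_refreshes)  # 8 5
```

Decoupling the two means roughly three fewer forklift refreshes of the flash capacity over the same period, which is one source of the TCO savings OFP claims.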

OFP advocates an open, standards-based approach which includes several elements:

  • Flash devices – conceived around, but not limited to, QLC flash for its density. Flash sourcing should be flexible, enabling customers to purchase NAND from various fabs, whether through controller partners or direct module designs, to avoid single-vendor lock-in.
  • IPUs/DPUs – have matured to the point that they can replace far more resource-intensive processors for serving data. Lower cost and lower power requirements make them a much more efficient component for data services.
  • OFP cartridge – a cartridge contains all of the essential hardware to store and serve data in a form factor optimized for low power consumption and flash density.
  • OFP trays – an OFP tray holds a number of OFP cartridges and supplies power distribution and fitment for various data center rack designs.
  • Linux operating system – OFP utilizes standard Linux running stock NFS to supply data services from each cartridge.
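Since each cartridge runs standard Linux with the stock NFS server, exposing its flash is ordinary NFS administration. A minimal, hypothetical sketch (the export path and subnet are placeholders):

```shell
# Hypothetical /etc/exports entry on an OFP cartridge running the stock
# Linux kernel NFS server (knfsd). Path and client subnet are placeholders.
echo '/srv/flash  10.0.0.0/16(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports

# Re-read the exports table so the running NFS server picks up the change.
sudo exportfs -ra
```

No proprietary software stack is involved; the data services come from tooling that already ships with every mainstream Linux distribution.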

“Our goals are not modest and there is a lot of work in store, but by leveraging open designs and industry standard components as a community, this initiative will result in enormous improvements in data storage efficiency,” OFP said.

“Power efficiency isn’t optional; it’s the only way to scale AI. Period. The Open Flash Platform removes the shackles of legacy storage, making it possible to store exabytes using less than 50 kilowatts, vs yesterday’s megawatts. That’s not incremental, it’s radical,” said Hao Zhong, CEO & Co-Founder of ScaleFlux.

“Agility is everything for AI — and only open, standards-based storage keeps you free to adapt fast, control costs, and lower power use,” said Gary Grider, Director of HPC, Los Alamos National Lab.

“Flash will be the next driving force for the AI era. To unleash its full potential, storage systems must evolve. We believe that open and standards-based architectures like OFP can maximize the full potential of flash-based storage systems by significantly improving power efficiency and removing barriers to scale,” said Hoshik Kim, SVP, Head of Memory Systems Research at SK hynix.

“Open, standards-based solutions inevitably prevail. By delivering 10x greater capacity density and a 50 percent lower TCO, OFP accelerates that inevitability,” said David Flynn, Founder and CEO, Hammerspace.