Today, the Open Flash Platform (OFP) initiative was launched with founding members Hammerspace, the Linux community, Los Alamos National Laboratory, ScaleFlux, SK hynix, and Xsight Labs. OFP aims to address requirements from the next wave of data storage for AI.
“The convergence of data creation associated with emerging AI applications, coupled with limitations around power availability, hot data centers, and data center space constraints, means we need to take a blank-slate approach to building AI data infrastructure,” OFP said in its announcement.
A decade ago, NVMe unleashed flash as the performance tier by disintermediating legacy storage buses and controllers. Now, OFP aims to unlock flash as the capacity tier by disintermediating storage servers and proprietary software stacks. OFP leverages open standards and open source, specifically parallel NFS and standard Linux, to put flash directly on the storage network. Open, standards-based solutions inevitably prevail. By delivering an order of magnitude higher capacity density, substantial power savings, and far lower TCO, OFP accelerates that inevitability.
Current solutions are inherently tied to a storage server model that demands excessive resources to drive performance and capability. Designs from today’s all-flash vendors are not optimized for the best achievable flash density, and they tie solutions to the operating lifetime of a processor (typically five years) rather than the operating lifetime of flash (typically eight years). These storage servers also introduce proprietary data structures that fragment data environments into new silos, resulting in a proliferation of data copies and adding licensing costs to every node.
OFP advocates an open, standards-based approach that includes several elements:
- Flash devices – conceived around, but not limited to, QLC flash for its density. Flash sourcing should be flexible so that customers can purchase NAND from various fabs, potentially via controller partners or direct module designs, avoiding single-vendor lock-in.
- IPUs/DPUs – these have matured to the point where they can replace far more resource-intensive processors for serving data. Lower cost and lower power requirements make them a much more efficient component for data services.
- OFP cartridge – a cartridge contains all of the essential hardware to store and serve data in a form factor optimized for low power consumption and flash density.
- OFP trays – an OFP tray fits a number of OFP cartridges and provides power distribution and fitment for various data center rack designs.
- Linux operating system – OFP uses standard Linux running stock NFS to supply data services from each cartridge (a brief sketch of what that can look like follows this list).
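To make the “stock Linux NFS” point concrete, here is a minimal, hypothetical sketch, not taken from the OFP announcement: it composes the /etc/exports entry a cartridge’s in-kernel NFS server could use for its flash pool and the matching client mount command. The path, subnet, address, and export options are assumptions chosen for illustration.

```python
#!/usr/bin/env python3
"""Illustrative sketch only: a cartridge exporting its flash pool with the
stock Linux NFS server, and a client mounting it over the storage network.
All names, paths, and addresses below are assumptions, not OFP specifics."""

CARTRIDGE_EXPORT_PATH = "/srv/flash"   # assumed mount point of the QLC flash pool
STORAGE_SUBNET = "10.0.0.0/24"         # assumed storage-network subnet


def exports_entry(path: str, subnet: str) -> str:
    """Compose an /etc/exports line for the cartridge's in-kernel NFS server."""
    # rw and no_subtree_check are common, conservative export options.
    return f"{path} {subnet}(rw,no_subtree_check)"


def client_mount_command(cartridge_addr: str, path: str, mountpoint: str) -> str:
    """Compose the client-side mount; NFS v4.2 lets the client negotiate
    pNFS layouts when the server offers them."""
    return f"mount -t nfs -o vers=4.2 {cartridge_addr}:{path} {mountpoint}"


if __name__ == "__main__":
    print(exports_entry(CARTRIDGE_EXPORT_PATH, STORAGE_SUBNET))
    print(client_mount_command("10.0.0.42", CARTRIDGE_EXPORT_PATH, "/mnt/ofp0"))
```

Because the data path is plain NFS, with pNFS available from NFSv4.1 onward, clients need no proprietary agent to reach the flash directly over the storage network, which is the thrust of the list above.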
“Our goals are not modest and there is a lot of work in store, but by leveraging open designs and industry-standard components as a community, this initiative will result in massive improvements in data storage efficiency,” OFP said.
“Power efficiency isn’t optional; it’s the only way to scale AI. Period. The Open Flash Platform removes the shackles of legacy storage, making it possible to store exabytes using less than 50 kilowatts, versus yesterday’s megawatts. That’s not incremental, it’s radical,” said Hao Zhong, CEO and Co-Founder of ScaleFlux.
“Agility is everything for AI, and only open, standards-based storage keeps you free to adapt fast, control costs, and lower power use,” said Gary Grider, Director of HPC, Los Alamos National Laboratory.
“Flash will be the next driving force for the AI era. To unleash its full potential, storage systems must evolve. We believe that open and standards-based architectures like OFP can maximize the full potential of flash-based storage systems by significantly improving power efficiency and removing barriers to scale,” said Hoshik Kim, SVP, Head of Memory Systems Research at SK hynix.
“Open, standards-based solutions inevitably prevail. By delivering 10x higher capacity density and 50 percent lower TCO, OFP accelerates that inevitability,” said David Flynn, Founder and CEO, Hammerspace.