WEKApod Nitro and WEKApod Prime Provide Customers with Flexible, Affordable, Scalable Solutions to Fast-Track AI Innovation

WekaIO (WEKA), the AI-native data platform company, unveiled two new WEKApod™ data platform appliances: the WEKApod Nitro for large-scale enterprise AI deployments and the WEKApod Prime for smaller-scale AI deployments and multi-purpose high-performance data use cases. WEKApod data platform appliances provide turnkey solutions combining WEKA® Data Platform software with best-in-class high-performance hardware to deliver a powerful data foundation for accelerated AI and modern performance-intensive workloads.
The WEKA Data Platform delivers scalable AI-native data infrastructure purpose-built for even the most demanding AI workloads, accelerating GPU utilization and retrieval-augmented generation (RAG) data pipelines efficiently and sustainably while providing fast write performance for AI model checkpointing. Its advanced cloud-native architecture enables ultimate deployment flexibility, seamless data portability, and robust hybrid cloud capability.
WEKApod delivers all the capabilities and benefits of WEKA Data Platform software in an easy-to-deploy appliance ideal for organizations leveraging generative AI and other performance-intensive workloads across a broad spectrum of industries. Key benefits include:
WEKApod Nitro: Delivers exceptional performance density at scale, providing over 18 million IOPS in a single cluster, making it ideal for large-scale enterprise AI deployments and for AI solution providers training, tuning, and inferencing LLM foundation models. WEKApod Nitro is certified for NVIDIA DGX SuperPOD™. Capacity starts at half a petabyte of usable data and is expandable in half-petabyte increments.
WEKApod Prime: Seamlessly handles high-performance data throughput for HPC, AI training, and inference, making it ideal for organizations that want to scale their AI infrastructure while maintaining cost efficiency and balanced price-performance. WEKApod Prime offers flexible configurations that scale up to 320 GB/s read bandwidth, 96 GB/s write bandwidth, and up to 12 million IOPS for customers with less extreme data processing requirements. This enables organizations to customize configurations with optional add-ons, so they pay only for what they need and avoid overprovisioning unnecessary components. Capacity starts at 0.4 PB of usable data, with options extending up to 1.4 PB.
“Accelerated adoption of generative AI applications and multi-modal retrieval-augmented generation has permeated the enterprise faster than anyone could have predicted, driving the need for affordable, highly performant, and flexible data infrastructure solutions that deliver extremely low latency, dramatically reduce the cost per token generated, and can scale to meet the current and future needs of organizations as their AI initiatives evolve,” said Nilesh Patel, chief product officer at WEKA. “WEKApod Nitro and WEKApod Prime offer unparalleled flexibility and choice while delivering exceptional performance, energy efficiency, and value to accelerate customers’ AI projects wherever and everywhere they need them to run.”