Language AI company DeepL announced the deployment of an NVIDIA DGX SuperPOD with DGX GB200 (Grace Blackwell) systems. The company said the system will allow DeepL to translate the entire web – which currently takes 194 days of nonstop processing – in just over 18 days.
This is the first deployment of its kind in Europe, DeepL said, adding that the system is operational at DeepL’s partner EcoDataCenter in Sweden.
“The new cluster will enhance DeepL’s research capabilities, unlocking powerful generative features that will allow the Language AI platform to expand its product offerings significantly,” DeepL said. “With this advanced infrastructure, DeepL will approach model training in a completely new way, paving the path for a more interactive experience for its users.”

NVIDIA DGX SuperPOD with DGX GB200
In the short term, users can expect rapid improvements, including increased quality, speed and nuance in translations, as well as greater interactivity and the introduction of more generative AI features, according to the company. Looking further ahead, multimodal models will become the standard at DeepL. The long-term vision includes further exploration of generative capabilities and an increased focus on personalization options, ensuring that each user’s experience is tailored and distinctive.
This deployment will provide the additional computing power needed to train new models and develop innovative features for DeepL’s Language AI platform. NVIDIA DGX SuperPOD with DGX GB200 systems, with its liquid-cooled, rack-scale design and scalability to tens of thousands of GPUs, will enable DeepL to run the high-performance AI models essential for advanced generative applications.
This marks DeepL’s third deployment of an NVIDIA DGX SuperPOD and will surpass the capabilities of DeepL Mercury, its previous flagship supercomputer.
“At DeepL, we take pride in our unwavering commitment to research and development, which has consistently allowed us to deliver solutions that outshine our competitors. This latest deployment further cements our position as a leader in the Language AI space,” said Jarek Kutylowski, CEO and Founder of DeepL. “By equipping our research infrastructure with the latest technology, we not only enhance our current offering but also explore exciting new products. The pace of innovation in AI is faster than ever, and integrating these advances into our tech stack is essential for our continued growth.”
According to the company, capabilities of the new clusters include:
- Translating the entire internet into another language, which currently takes 194 days of continuous processing, will now be achievable in just 18.5 days.
- The time required to translate the Oxford English Dictionary into another language will drop from 39 seconds to 2 seconds.
- Translating Marcel Proust’s In Search of Lost Time will be reduced from 0.95 seconds to 0.09 seconds.
- Overall, the new clusters will deliver 30 times the text output compared to previous capabilities (a quick check of the implied per-task speedups follows below).
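As a rough sanity check, the per-task figures above can be turned into speedup ratios with a few lines of arithmetic. The short Python sketch below uses only the before/after numbers quoted by DeepL; the ratios are our own calculation, not a published benchmark, and they measure single-job latency rather than the overall 30x text-output (throughput) claim, so the two figures need not match.

```python
# Speedup ratios implied by the (before, after) timings quoted in the article.
# These pairs come straight from DeepL's announcement; the division is ours.
claims = {
    "Entire web (days)": (194, 18.5),
    "Oxford English Dictionary (seconds)": (39, 2),
    "In Search of Lost Time (seconds)": (0.95, 0.09),
}

for task, (before, after) in claims.items():
    print(f"{task}: {before} -> {after}, roughly {before / after:.1f}x faster")
```

Running this gives roughly 10.5x, 19.5x and 10.6x per task, consistent with a "just over 18 days" figure for the full-web translation.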
“Europe needs robust AI deployments to maintain its competitive edge, drive innovation, and tackle complex challenges across industries,” said Charlie Boyle, Vice President of DGX systems at NVIDIA. “By harnessing the performance and efficiency of our latest AI infrastructure, DeepL is poised to accelerate breakthroughs in language AI and deliver transformative new experiences for users across the continent and beyond.”