Nvidia Expands Open Source AI with SchedMD Acquisition and New Models
Nvidia's been making some serious moves lately in the open source AI world, and it's clear they're not slowing down. They're not just building the hardware that powers AI; they're also getting deeply involved in the software side of things. This week, they announced two big steps: acquiring SchedMD and launching a new family of open AI models.
Nvidia Acquires SchedMD
First up, the acquisition. Nvidia is bringing SchedMD, the company behind the popular Slurm workload management system, into the fold. If you're not familiar with Slurm, it's basically the backbone for managing tasks in high-performance computing and AI environments. I think it's a smart move because Nvidia has been working with SchedMD for over a decade, and this acquisition just formalizes the relationship. They're promising to keep Slurm open source and vendor-neutral, which is crucial for maintaining trust within the community. What I really want to know is how much the deal closed for, but the financial terms weren't disclosed.
Think of it this way: Nvidia's not just selling you the ingredients (the GPUs); they're now also providing the recipe (Slurm) to make sure your AI projects cook up perfectly. For example, companies can leverage this technology to manage the distribution of AI training workloads across a cluster of servers.
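To give a flavor of what that looks like in practice, here's a minimal sketch of a Slurm batch script that spreads a training job across multiple GPU nodes. The job name, partition name, and `train.py` script are hypothetical placeholders; the actual values depend entirely on your cluster.

```shell
#!/bin/bash
# Hypothetical Slurm batch script: distribute an AI training job
# across 4 nodes with 8 GPUs each. Partition and script names
# are illustrative placeholders, not real cluster settings.
#SBATCH --job-name=llm-train
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=8
#SBATCH --gpus-per-node=8
#SBATCH --time=12:00:00
#SBATCH --partition=gpu

# srun launches one task per GPU across all allocated nodes.
srun python train.py --epochs 10
```

You'd submit this with `sbatch`, and Slurm handles the queuing, node allocation, and task placement, which is exactly the kind of orchestration that makes it the backbone of HPC and AI clusters.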
New Open AI Models: Nemotron 3
But wait, there's more! Nvidia also unveiled the Nemotron 3 family of open AI models. They're touting these as the most efficient open models for building AI agents. Now, I know "most efficient" can be a loaded term, but Nvidia's breaking them down into a few versions to accomplish different objectives. There's Nemotron 3 Nano for specific tasks, Nemotron 3 Super for multi-agent applications, and Nemotron 3 Ultra for tackling more complex challenges. Each one has its strengths.
Nvidia is betting big on open source, and I think it's a great idea. Their CEO, Jensen Huang, even stated that open innovation is the basis for AI advancement. By open sourcing these models, Nvidia lets other players build on them and improve the technology.
It's worth noting that Nvidia's been on a roll with open source lately. Last week, they revealed Alpamayo-R1, an open reasoning vision language model geared towards self-driving car research. Plus, they've been expanding their Cosmos world models, which are also open source. It looks like they want to be the main supplier of both AI hardware and software.
The big picture here is that Nvidia sees physical AI – the kind that powers robots and self-driving vehicles – as the next big thing for their GPUs. That explains their focus on this area. By offering both the hardware and open source software, they're trying to become the one-stop shop for companies building the brains behind these technologies. I, for one, am curious to see how this strategy plays out.
Source: TechCrunch