The most anticipated, and probably most useful, update from Nvidia at the CES 2025 keynote was the launch of their GeForce RTX 50 Series graphics cards. These cards continue the evolution of the product that made Nvidia a successful company in the first place: graphics cards. They include a bundle of AI bells and whistles in the form of Nvidia's DLSS technology. Previous generations offered DLSS spatial upscaling, and they have now added temporal upscaling (frame generation) too. The graphics card renders at a lower resolution and frame rate, and an AI model adds pixels and frames to simulate a higher resolution and frame rate. The utility of this is arguable, because it is essentially AI interpolation of pixels that were never rendered by a physics-based graphics engine, so the generated frames are guaranteed to contain physics errors. But the smoother motion and sharper-looking image might be enough to make it look better overall. Try it for yourself.
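To picture why generated frames are estimates rather than rendered output, consider the crudest possible version of frame generation: linearly blending two rendered frames to synthesize an intermediate one. DLSS uses a trained model with motion vectors, not a plain average, so this is only a conceptual sketch with made-up pixel data:

```python
# Naive frame-interpolation sketch. Real DLSS Frame Generation uses a
# learned model plus engine-supplied motion vectors; this plain blend
# only illustrates that in-between frames are estimated, not rendered.

def interpolate_frame(frame_a, frame_b, t=0.5):
    """Linearly blend two frames (lists of pixel rows) at time t in [0, 1]."""
    return [
        [(1 - t) * a + t * b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(frame_a, frame_b)
    ]

# Two tiny 2x2 "frames" of grayscale intensities (hypothetical data).
frame_0 = [[0, 10], [20, 30]]
frame_1 = [[10, 20], [30, 40]]

mid = interpolate_frame(frame_0, frame_1)
print(mid)  # [[5.0, 15.0], [25.0, 35.0]]
```

A blend like this looks plausible for smooth motion but has no knowledge of the scene: an object that bounced between the two rendered frames gets smeared along a straight line, which is exactly the kind of physics error the article describes.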
Nvidia has invested heavily in producing AI models for enabling humanoid robot behaviors. 2025 is shaping up to be quite the space race for humanoid robotics companies. I'm not a fan of using AI models to enable behavior, because the long-tail problem sets up an infinitely receding goalpost and creates robots with highly unpredictable behavior, but I'm all for watching the robot race from the sidelines while I work on more dependable robot tech.
Beyond the expected new GPU models, Nvidia's CEO announced video generation models targeted at robotics companies training AI models for their robots. However, the most interesting and somewhat mystifying announcement was their new device, Project DIGITS.
The perplexing part is that Nvidia claims the intended use of this device is training AI models at home. That's all well and good as a concept, but there is one gigantic AI elephant in the room: such models require huge labeled datasets to train on. This is the reason only multi-billion-dollar companies are creating LLMs, or even older CNN inference models. The cost of creating, labeling, and curating a dataset for these models easily runs north of $100,000 and requires constant maintenance. So I guess it is great as a learning tool for hobbyists, but at a $3,000 price point that is a hard value proposition. I foresee a lot of purchasers being disappointed when they discover they need to spend another $100,000 to get the data required for a real inference model or LLM. I wouldn't be surprised if the real play here is that Nvidia is positioning itself as a subscription provider of pre-trained base models and additional datasets that customers can use to fine-tune those base models.
© 2024
Expert-curated, independent technology news