Nvidia to drop CUDA support for Maxwell, Pascal, and Volta GPUs with the next major Toolkit release

GTX 1080 Ti
(Image credit: Nvidia)

The official release notes for Nvidia's CUDA 12.9 Toolkit explicitly state that the next major release will no longer support Maxwell, Pascal, and Volta-based GPUs. Note that the deprecation is limited to the compute side; these GPUs will likely continue receiving regular GeForce drivers for the time being. That said, this is likely the last SDK version that can be used to develop CUDA applications targeting the aforementioned architectures.

While the previous release hinted at this change, Nvidia's stronger wording now serves as a definitive signal for developers to shift to more modern architectures. The CUDA 12.x series (and earlier) will still allow application development for these GPUs; the deprecation targets offline compilation and library support. Essentially, future CUDA compilers (nvcc) will no longer be able to generate machine code for these GPUs. In the same vein, upcoming versions of CUDA-accelerated libraries such as cuBLAS and cuDNN will not support GPUs built on these architectures.
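To illustrate what "offline compilation" means here: with a CUDA 12.x toolkit, developers pass these architectures' compute capabilities to nvcc as build targets, and it is those targets that the next major toolkit is expected to reject. A minimal sketch, assuming a hypothetical `kernel.cu` source file:

```shell
# Build a fat binary covering Maxwell (sm_52), Pascal (sm_61), and Volta (sm_70)
# with a CUDA 12.x toolkit. These are the -gencode targets expected to disappear
# from nvcc in the next major release.
nvcc kernel.cu -o kernel \
    -gencode arch=compute_52,code=sm_52 \
    -gencode arch=compute_61,code=sm_61 \
    -gencode arch=compute_70,code=sm_70
```

Once the targets are removed, an invocation like this should fail with an unsupported-architecture error, which is why applications pinned to these GPUs would need to stay on a 12.x toolkit.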

Nvidia has not specified an exact date for the upcoming major release (likely CUDA 13.x), and it is unclear how many interim releases will follow in the 12.9.x branch. Either way, this is a significant change, as Nvidia is dropping three major architectures in one swing. Volta's consumer counterpart Turing (RTX 20 series) is next in line, but it likely has more life left before it too hits the chopping block.

"Maxwell, Pascal, and Volta architectures are now feature-complete with no further enhancements planned. While CUDA Toolkit 12.x series will continue to support building applications for these architectures, offline compilation and library support will be removed in the next major CUDA Toolkit version release. Users should plan migration to newer architectures, as future toolkits will be unable to target Maxwell, Pascal, and Volta GPUs."

CUDA 12.9 Toolkit release notes

Nvidia's Maxwell architecture was introduced in early 2014 with the GTX 745, GTX 750, and GTX 750 Ti, along with the GTX 800M series on mobile. Maxwell even found its way into the original Nintendo Switch's Tegra SoC. A refresh later in 2014, featuring the GM20x dies, brought several enhancements with the GTX 900 series. Pascal followed in 2016, serving as the basis for the legendary GTX 1080 Ti and powering the Quadro P-series for workstations (mobile and desktop) along with Nvidia's Tesla P4 accelerators.

Most consumers might not be familiar with the name Volta, but this architecture marked the debut of Nvidia's Tensor Cores in 2017. Fun fact: the Volta-based GV100 is Nvidia's second-largest chip at 815 mm², behind only the monstrous GA100 (Ampere) at 826 mm². Volta served as the stepping stone for Nvidia's strides into the AI acceleration market, followed by Turing, Ampere, Hopper, and now Blackwell, which have since grown the company's market capitalization to nearly $2.8 trillion.


Hassam Nasir
Contributing Writer

Hassam Nasir is a die-hard hardware enthusiast with years of experience as a tech editor and writer, focusing on detailed CPU comparisons and general hardware news. When he’s not working, you’ll find him bending tubes for his ever-evolving custom water-loop gaming rig or benchmarking the latest CPUs and GPUs just for fun.

  • A Stoner
    It is hard to comprehend what this means in the grand scheme of things.

    Are these CUDA programs things that us nominal game users need to be concerned with or is this purely about programs that use the GPU for processing other things?

    I mean, it would totally suck to have a nice working 1080P system with an older GPU only to not be able to play the latest games that come out.
    Reply
  • ThatMouse
    A Stoner said:
    I mean, it would totally suck to have a nice working 1080P system with an older GPU only to not be able to play the latest games that come out.
    Ya I'm wondering too, such as MAME, HTPC, and server applications where you might need to do some light transcoding, and buying new hardware is not necessary for the light loads. I've had to ditch old hardware due to lack of driver support, it just wasn't worth the time.
    Reply
  • Mattzun
    This is NOT a big deal in the grand scheme of things.

    CUDA is not used in games.

    Current versions of CUDA apps will continue to function on the older cards.

    Most CUDA apps will eventually transition to the new toolkit; when that happens, they will drop support for older cards in new releases of the program.
    Reply
  • bit_user
    A Stoner said:
    It is hard to comprehend what this means in the grand scheme of things.
    Nvidia typically maintains a couple releases of CUDA. Someone could still download an older release branch of CUDA and build apps that will run on older GPUs (so long as they don't require features only found on newer ones, like Tensor cores or Ray Tracing). Also, old releases of apps will still work, because they're built on an older CUDA release.

    A Stoner said:
    Are these CUDA programs things that us nominal game users need to be concerned with or is this purely about programs that use the GPU for processing other things?

    I mean, it would totally suck to have a nice working 1080P system with an older GPU only to not be able to play the latest games that come out.
    This is mainly about AI and other GPU compute apps. I think CUDA isn't used by most games.

    In general, Linux is really good at supporting legacy hardware. You can play OpenGL and even Direct3D games on some really old GPUs. However, those are standard APIs, while CUDA is something proprietary that Nvidia controls.
    Reply
  • bit_user
    ThatMouse said:
    Ya I'm wondering too, such as MAME, HTPC, and server applications where you might need to do some light transcoding,
    Yeah, there are a few different ways to do GPU-based transcoding, on Linux. Nvidia prefers you use their proprietary APIs, which I think do have CUDA dependencies. However, VDPAU and VAAPI are standard APIs that I expect wouldn't be affected by this change. So, whether or not it'll break your workflow (i.e. once the apps you mention transition to a newer CUDA version) probably depends on the details.
    Reply
  • ihatewindowss
    A Stoner said:
    It is hard to comprehend what this means in the grand scheme of things.

    Are these CUDA programs things that us nominal game users need to be concerned with or is this purely about programs that use the GPU for processing other things?

    I mean, it would totally suck to have a nice working 1080P system with an older GPU only to not be able to play the latest games that come out.
    CUDA is entirely for taking Nvidia graphics 3D power and applying them to a software program. It is never used in games, except for DLSS which is a post process. I think DLSS is plain ugly and a worthless product compared to antialiasing. CUDA is great for rendering in blender or accelerating video editing etc. Or you can write a specific computing problem around it. It has nothing to do with games and is for professionals.
    I have a 1070 ti still because new graphics cards are just ridiculously expensive. I don't care about this article because I can always just use the last available toolkit and most programs that use it won't see many changes even when the toolkit sees updates.
    The joke is that I will be getting an AMD card next because I just want VRAM to run a graphics library on. If I write a powerful computing script I can just get another AMD card for a 1/4th of the price of a Nvidia card that will be bogged down by its hybrid tensor cores, and those tensor cores will be out of date in AI computing in a few years. Meaning I bought a 1k+ card, and half of the graphics chip I want to use is dead weight ewaste that can only be used to generate stupid cats. Or I can just buy an AMD card with a lot of vram and wait 6 months until a good CUDA like library is available.
    The problem with these companies is that they don't understand that if a consumer wants to do AI, they will happily drop $15k on the proper hardware to get an accurate result. If they don't want to do AI, they only want to spend ~$600 on a new graphics card to play games or do graphics library computations like video rendering or Blender. I hope they realize the mistake, but they are obsessed with tensor cores, just like the ray tracing ads when you'd have to buy the most expensive rig to bother with that.
    Reply
  • bit_user
    ihatewindowss said:
    CUDA is entirely for taking Nvidia graphics 3D power and applying them to a software program. It is never used in games, except for DLSS which is a post process.
    I'm pretty sure DLSS doesn't use CUDA. A bit of quick searching seems to support this. If you have evidence to the contrary, please provide.

    ihatewindowss said:
    I don't care about this article because I can always just use the last available toolkit and most programs that use it won't see many changes even when the toolkit sees updates.
    If you use programs that depend on CUDA, then what will happen is that they will start to use features that are only found in newer versions of the CUDA toolkit, and will no longer compile with older versions of CUDA. So, unless you want to be stuck using older versions of these programs, you'll eventually be forced to upgrade your hardware.

    ihatewindowss said:
    The problem with these companies is that they don't understand that if a consumer wants to do AI, they will happily drop $15k on the proper hardware to get an accurate result.
    I don't know anyone who has $15k to spend on a graphics card for home use. I'm sure there are some, and they will be buying RTX Pro 6000 Blackwell cards, but I think people willing & able to spend that much on a graphics card are few and far between.
    Reply
  • renz496
    A Stoner said:
    It is hard to comprehend what this means in the grand scheme of things.

    Are these CUDA programs things that us nominal game users need to be concerned with or is this purely about programs that use the GPU for processing other things?

    I mean, it would totally suck to have a nice working 1080P system with an older GPU only to not be able to play the latest games that come out.
    This is not related to games. Even if you need CUDA, older versions are still available. If you need new features, you will need new hardware anyway, since those features aren't even available on 8-11 year old hardware. As for driver support, maybe they will drop Maxwell, Pascal, and Volta soon.
    Reply