TL;DW:

  • FSR 3 is frame generation, similar to DLSS 3. It can greatly increase FPS, by roughly 2-3x.

  • FSR 3 can run on any GPU, including those in consoles. AMD made a point about how it would be dumb to limit it to only the newest generation of cards.

  • Every DX11 & DX12 game can take advantage of this tech via HYPR-RX, which is AMD’s software for boosting frames and decreasing latency.

  • Games will start using it by early fall; the public launch will come by Q1 2024.

It remains to be seen how good or noticeable FSR 3 will be, but if it actually runs well, I think we can expect tons of games (especially on console) to make use of it.
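
For intuition, here is a minimal sketch of the core idea behind frame generation: synthesizing an in-between frame from two rendered ones. The naive blend below is purely illustrative; real frame generators like FSR 3 and DLSS 3 warp pixels along motion vectors rather than averaging frames.

```python
import numpy as np

def interpolate_frame(frame_a, frame_b, t=0.5):
    """Naive linear blend between two frames (H x W x 3, uint8).
    Real frame generation warps pixels along motion vectors instead,
    which avoids ghosting on moving objects."""
    blended = (1.0 - t) * frame_a.astype(np.float32) + t * frame_b.astype(np.float32)
    return np.clip(blended, 0, 255).astype(np.uint8)

# Two synthetic 4x4 frames; inserting one blended frame between every
# rendered pair turns a 60 FPS stream into a 120 FPS one.
a = np.zeros((4, 4, 3), dtype=np.uint8)
b = np.full((4, 4, 3), 200, dtype=np.uint8)
print(interpolate_frame(a, b)[0, 0])  # [100 100 100]
```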

    • Hypx@kbin.social · 1 year ago

      People made the same claim about DLSS 3. But those generated frames are barely perceptible and certainly less noticeable than frame stutter. As long as FSR 3 works half-decently, it should be fine.

      And the fact that it works on older GPUs, including those from Nvidia, really shows that Nvidia was just blocking the feature in order to sell more 4000-series GPUs.

      • CheeseNoodle@lemmy.world · 1 year ago

        Frame generation is limited to 40-series GPUs because Nvidia’s solution is dependent on their latest hardware. The improvements to DLSS itself and the new ray-tracing stuff work on 20/30-series GPUs. That said, FSR 3 is fantastic news; competition benefits us all, and I’d love to see it compete with DLSS itself on Nvidia GPUs.

        • Hypx@kbin.social · 1 year ago

          If FSR 3 supports frame generation on 20/30-series GPUs, you have to wonder if Nvidia will port its own frame generation to older GPUs anyway.

        • Hypx@kbin.social · 1 year ago (edited)

          You aren’t going to use these features on extremely old GPUs anyways. Most newer GPUs will have spare shader compute capacity that can be used for this purpose.

          Also, all performance is based on compromise. It is often better to render at a lower resolution with all of the rendering features turned on and then use upscaling & frame generation to get back to the target resolution and FPS than it is to render natively at that resolution and FPS. This is often a better use of existing resources, even if you don’t have extra power to spare.
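
          To make the compromise concrete, here is some back-of-the-envelope frame-time math. All of the per-pass costs below are made-up placeholder numbers, not measured figures:

          ```python
          # Hypothetical per-frame GPU costs in milliseconds (illustrative only).
          native_4k_ms = 25.0      # render natively at 4K -> 40 FPS
          render_1440p_ms = 12.0   # render at a lower internal resolution
          upscale_ms = 1.5         # upscale 1440p -> 4K (an FSR-style pass)
          framegen_ms = 2.0        # synthesize one extra frame on shader compute

          rendered_ms = render_1440p_ms + upscale_ms        # 13.5 ms per rendered frame
          per_output_ms = (rendered_ms + framegen_ms) / 2   # one rendered + one generated

          print(f"native 4K:            {1000 / native_4k_ms:.0f} FPS")   # ~40 FPS
          print(f"upscaled + frame gen: {1000 / per_output_ms:.0f} FPS")  # ~129 FPS
          ```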

        • Dudewitbow@lemmy.ml · 1 year ago

          Because I think the post assumes that the GPU is always using all of its resources during computation, when it isn’t. There’s a reason benchmarks can make a GPU hotter than a game can, and not all games pin GPU utilization at 100%. If a GPU is not pinned at 100%, there is a bottleneck in the presentation chain somewhere (which means unused resources on the GPU).
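
          A toy illustration of that point, with placeholder timings: when something other than the GPU is the bottleneck, measured GPU busy time per frame comes in under the frame interval, and the difference is headroom that frame generation could use.

          ```python
          # Illustrative numbers, not measurements.
          frame_interval_ms = 16.7  # game is CPU-bound / capped at ~60 FPS
          gpu_busy_ms = 11.0        # GPU finishes its rendering work early each frame

          headroom_ms = frame_interval_ms - gpu_busy_ms
          utilization = gpu_busy_ms / frame_interval_ms

          print(f"GPU utilization: {utilization:.0%}")            # ~66%, not pinned
          print(f"idle GPU time per frame: {headroom_ms:.1f} ms") # budget for extra work
          ```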

          • kadu@lemmy.world · 1 year ago

            You’re correct, and if AMD is announcing the feature, that does mean there is enough shader compute available for this to work.

            However, it also means the algorithm must be light enough to generate the frame within that very limited resource budget. This is already what we see with FSR: it works well, but it can’t fix some of the issues DLSS can, because DLSS can use far more complex algorithms since it isn’t fighting for resources.

            • Dudewitbow@lemmy.ml · 1 year ago

              I still think it’s a matter of waiting for the results to show up later. AMD’s RDNA3 does have an AI engine on it, and the gains it sees in FSR 3 might be different, in the same way XeSS performs differently depending on which hardware path its branching logic takes. It’s too early to tell, given that all of the test-suite results are on RDNA3 and it doesn’t officially launch until two weeks from now.

              • kadu@lemmy.world · 1 year ago

                Yep, looking forward to the results. If it works well enough, it could be huge for portable devices, for instance.

    • hark@lemmy.world · 1 year ago

      The performance hit will be smaller than the hit of trying to run native 4K.

    • Edgelord_Of_Tomorrow@lemmy.world · 1 year ago

      You’re getting downvoted, but this will be correct. DLSS frame generation looks dubious enough on dedicated hardware; doing this on shader cores means it will be competing with the 3D rendering, so it will need to be extremely lightweight to actually offer any advantage.

      • Dudewitbow@lemmy.ml · 1 year ago

        I wouldn’t say “compete”, since the whole concept of frame generation is that it generates more frames when GPU resources sit idle because another part of the chain is holding the GPU back from rendering more frames. It’s sort of like how I view hyperthreading on a CPU: a hyperthread isn’t a full core, but it gets utilized when a CPU calculation leaves a resource unused (e.g., if a core is using the AVX2 unit to do some math, a hyperthread can use the ALU that might not be in use to do something else, because it’s free).

        It would only compete if the time it takes to generate one additional frame is longer than the time the GPU sits free due to some bottleneck in the chain.
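
        That condition can be written down directly. A toy sketch, again with placeholder timings:

        ```python
        def should_generate_frame(idle_ms: float, framegen_cost_ms: float) -> bool:
            """Frame generation only 'competes' with rendering when its cost
            exceeds the idle time the bottleneck leaves on the GPU each frame."""
            return framegen_cost_ms <= idle_ms

        # CPU-bound at 60 FPS, GPU idle ~5 ms per frame (placeholder numbers):
        print(should_generate_frame(idle_ms=5.0, framegen_cost_ms=2.0))  # True: frames are "free"
        print(should_generate_frame(idle_ms=1.0, framegen_cost_ms=2.0))  # False: steals render time
        ```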

      • echo64@lemmy.world · 1 year ago

        You guys are talking about this as if it’s some new, super-expensive tech. It’s not. The chips they throw inside TVs, which are massively cost-reduced, do a pretty damn good job these days (albeit still laggy), and there is software you can run on your computer that does compute-based motion interpolation and works just fine even on super old GPUs with terrible compute.

        It’s really not that expensive.

          • echo64@lemmy.world · 1 year ago (edited)

            Yeah, it does, and that’s something TV tech has to derive itself: TVs have to figure that information out from the frames alone. It’s actually less complicated, in a fun kind of way. But please do continue to explain how it’s more compute-heavy.

            Also, just to be very clear, TV tech also incorporates motion vectors into the interpolation; that’s the whole point. It just has to compute them by comparing frames. Games have that information encoded into various G-buffers, so it’s already available.
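
            To illustrate the difference, here is a toy version of the kind of block-matching search a TV has to run to derive motion between frames; a game engine skips this step because the renderer already wrote per-pixel motion vectors into a G-buffer. This is a simplified sketch, not any product’s actual algorithm:

            ```python
            import numpy as np

            def block_motion(prev, curr, y, x, block=8, search=4):
                """TV-style motion estimation: brute-force search for where a block
                moved between two frames (sum-of-absolute-differences matching)."""
                ref = prev[y:y + block, x:x + block].astype(np.int32)
                best_cost, best_vec = np.inf, (0, 0)
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        yy, xx = y + dy, x + dx
                        if yy < 0 or xx < 0:
                            continue
                        cand = curr[yy:yy + block, xx:xx + block].astype(np.int32)
                        if cand.shape != ref.shape:
                            continue
                        cost = int(np.abs(cand - ref).sum())
                        if cost < best_cost:
                            best_cost, best_vec = cost, (dy, dx)
                return best_vec

            # A bright square shifted 2 px right between frames is found by the search.
            prev = np.zeros((32, 32), dtype=np.uint8); prev[8:16, 8:16] = 255
            curr = np.zeros((32, 32), dtype=np.uint8); curr[8:16, 10:18] = 255
            print(block_motion(prev, curr, 8, 8))  # (0, 2)
            ```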