Actually there is no rendering at all!
I always wondered how games actually do this. I thought they just rendered the entire play space you're in because the developers were too lazy to optimize their games.
At the very least, frameworks like Unity and Unreal will automatically skip sending a model's vertices to the graphics card if the model lies entirely outside the camera's view frustum. If I remember correctly, a set of bounds is calculated (which can be pretty simple: look at all the 3D model's vertices and take the min and max X, Y and Z) and used to decide whether the model is visible. This test can only err in the pessimistic direction (i.e. treat as visible something which in practice is not), never the optimistic one, and the technique goes all the way back to the engine id Software made and used for Doom.
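The bounds-based culling described above can be sketched in a few lines. This is a toy illustration (all names are made up, not any engine's actual API): the frustum is modelled as 6 inward-facing planes `(nx, ny, nz, d)`, with a point `p` counted as inside a plane when `dot(n, p) + d >= 0`, and the test uses the "positive vertex" trick so it can only be too pessimistic, never too optimistic.

```python
def compute_aabb(vertices):
    """Axis-aligned bounding box: min/max of each coordinate over all vertices."""
    xs, ys, zs = zip(*vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def aabb_maybe_visible(aabb, planes):
    """Conservative test: returns False only when the box is provably outside
    some plane; may return True for boxes that are actually hidden."""
    (minx, miny, minz), (maxx, maxy, maxz) = aabb
    for nx, ny, nz, d in planes:
        # "Positive vertex": the AABB corner farthest along the plane normal.
        px = maxx if nx >= 0 else minx
        py = maxy if ny >= 0 else miny
        pz = maxz if nz >= 0 else minz
        if nx * px + ny * py + nz * pz + d < 0:
            return False  # even the most favourable corner is outside this plane
    return True  # possibly visible -> submit the model to the GPU

# Toy "frustum": the cube [-1, 1]^3 expressed as 6 inward-facing planes.
planes = [( 1, 0, 0, 1), (-1, 0, 0, 1),
          ( 0, 1, 0, 1), ( 0,-1, 0, 1),
          ( 0, 0, 1, 1), ( 0, 0,-1, 1)]

inside_model  = [(0.0, 0.0, 0.0), (0.5, 0.5, 0.5)]
outside_model = [(5.0, 5.0, 5.0), (6.0, 6.0, 6.0)]
print(aabb_maybe_visible(compute_aabb(inside_model), planes))   # True
print(aabb_maybe_visible(compute_aabb(outside_model), planes))  # False
```

A box straddling a frustum edge passes the test even if the model inside it is out of view, which is exactly the pessimistic-only failure mode described above.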
So if you're using a framework there really is no "developer work" involved, as it's done at the level of the framework itself (i.e. the libraries used): in Unity, for example, it's literally checking or unchecking an option.
Further, at the graphics shader level you can discard individual triangles (so, elements of a 3D model rather than the whole model) if they are outside the camera's view frustum.
Last but not least, the GPU itself won't even run the fragment stage (the last part of processing, where the color for each pixel is calculated) for anything outside the camera's rendering plane (which is typically the screen, but virtual "cameras" are also used for other things, like shadow casting).
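A toy model of why off-screen fragments cost nothing: a simple bounding-box rasterizer only walks pixels inside the triangle's bounding box *clamped* to the render target, so fragments outside the screen are never even generated (real hardware works in tiles and is far more sophisticated, but the clamping idea is the same):

```python
def candidate_fragments(tri_2d, width, height):
    """Pixel coordinates a toy rasterizer would even consider for a triangle."""
    xs = [x for x, _ in tri_2d]
    ys = [y for _, y in tri_2d]
    # Clamp the triangle's bounding box to the render target.
    x0, x1 = max(0, min(xs)), min(width - 1, max(xs))
    y0, y1 = max(0, min(ys)), min(height - 1, max(ys))
    if x0 > x1 or y0 > y1:
        return []  # bounding box entirely off-screen: zero fragment work
    return [(x, y) for y in range(y0, y1 + 1) for x in range(x0, x1 + 1)]

# Triangle entirely off a 640x480 target -> no candidate fragments at all.
print(len(candidate_fragments([(700, 10), (750, 10), (700, 60)], 640, 480)))  # 0
# Triangle hanging off the right edge -> only the on-screen part is walked.
print(len(candidate_fragments([(630, 0), (700, 0), (630, 4)], 640, 480)))     # 50
```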
Would like to see this done with a second camera that watches the first one, with the world unloading around it as that camera looks around. Also my brain is fried from reading all of that 💀
This happens even in Mario 64
Something something quantum physics.