how does a computer know how to work with 3d animations without like, someone rendering every single view model from every angle

i’m guessing by “view model” you mean like, the field of view a player/camera would see at location X,Y,Z with A,B,C pitch/roll/yaw

you are asking how a computer is able to render such (changing) views seemingly seamlessly

since the number of possible views (number of discrete positions multiplied by the number of possible attitudes) quickly exceeds the number of atoms in the universe, we can’t just generate them all and save them all. to do this for even something small and simple like HL2/source engine running gmod’s gm_flatgrass would take trillions of universe lifetimes & more data storage capacity than humanity has ever manufactured.
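here’s some napkin math in python to see how fast this blows up. every number below is made up purely for illustration (real maps aren’t on a neat 1-unit grid, and real positions/angles are floats, which makes it astronomically worse):

```python
# back-of-the-envelope: how many distinct views would we have to pre-render?
# all numbers are made-up illustration values, not real source engine specs
map_side_units = 16_384      # a flatgrass-ish square map, in world units
map_height_units = 4_096     # vertical extent
angle_steps = 360            # 1-degree steps for each of pitch/roll/yaw

positions = map_side_units**2 * map_height_units   # every 1-unit spot to stand
attitudes = angle_steps**3                         # every pitch/roll/yaw combo
views = positions * attitudes

print(f"{views:.1e} views")                        # ~5.1e19 views
# at ~1 MB per saved screenshot, that's ~5e13 terabytes of storage.
# with float-precision positions and angles instead of this coarse grid,
# the count explodes far beyond even that
```

and that’s with a laughably coarse grid.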

however, rendering all this shit in realtime is quite a fuckin feat too. but it’s possible:

your GPU does all of this. it has its own RAM and its own “CPUs”. it also handles pushing the finished frames to a monitor you can see with your eyes. the GPU only really interacts with the CPU when the CPU hands it draw commands or when it needs to pull information (models, textures) from a drive or main memory into its own memory

but your GPU is fundamentally different than your CPU. your CPU has (probably) something like 4 cores on it. this means it’s capable of running 4 independent sequences of instructions simultaneously. these suckers run really, really fast. if they are each 2.0 GHz, they are pumping out around 8 billion cycles per second between them (more if each core retires multiple instructions per cycle). that sounds like plenty, since your monitor only needs to update a maximum of about 60 times per second (60 FPS). but 4 cores means only 4 things happening at once, and a frame has millions of pixels to fill.
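for scale, the same napkin math for the CPU:

```python
# idealized CPU throughput: assume one instruction per cycle per core
cores = 4
clock_hz = 2.0e9                     # 2.0 GHz

cycles_per_second = cores * clock_hz
print(f"{cycles_per_second:.1e} cycles/sec")          # 8.0e9: billions, not trillions

# split across a 60 FPS target:
print(f"{cycles_per_second / 60:.1e} cycles/frame")   # ~1.3e8 per frame
```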

your GPU is made up of swathes of smaller, shittier, independent “CPUs”. these can’t do as many things as a normal CPU can, but they can crunch math on vertices and vectors and shit really well. that’s all they do, because GPUs are predicated upon the ideas of vertices & textures:

a vertex (vert) is just a point in 3D space: an X, Y, and Z coordinate. a model is just a group of potentially tens of thousands of these verts. an example:

[wireframe image of a 3D model]

each intersection of the lines you see in the above wireframe image is where a vert lies. your GPU’s job consists of keeping track of all these verts (easy, since each is only 3 numbers), figuring out where each one lands on your screen for the current camera position/angle, and drawing textures across the triangles between them. textures are also easy, as they are just 2D images.
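here’s a stripped-down python sketch of that “where does this vert land on screen” step. it’s the same idea the GPU applies (in parallel, with matrices) to every vert every frame. the function names and the simplified camera model here are made up for illustration, not real engine code:

```python
import math

# a vert is literally just three numbers: (x, y, z)
# 8 verts of a cube -- a toy "model"
cube = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]

def project(v, cam_pos, cam_yaw, fov_deg=90.0):
    """turn one 3D vert into a 2D screen point for the current camera"""
    # 1. shift the world so the camera sits at the origin
    x, y, z = (v[i] - cam_pos[i] for i in range(3))
    # 2. spin the world opposite to the camera's yaw (rotation about y axis)
    c, s = math.cos(-cam_yaw), math.sin(-cam_yaw)
    x, z = x * c + z * s, -x * s + z * c
    if z <= 0:
        return None          # vert is behind the camera; skip it
    # 3. perspective divide: farther away -> squeezed toward screen center
    f = 1.0 / math.tan(math.radians(fov_deg) / 2)
    return (f * x / z, f * y / z)

# the same 8 verts from two camera angles -- nothing is pre-rendered,
# the view is just recomputed from the verts each time
for yaw in (0.0, math.pi / 4):
    print([project(v, (0.0, 0.0, -5.0), yaw) for v in cube])
```

no pre-baked views anywhere: change the camera numbers and the same verts give you a new picture.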

since you have so many shittier, single-purpose “CPUs”, you can render a great number of vertices/models/textures at speeds that make them appear seamless to our stupid ape eyeballs. for example, if you have 100 60MHz “CPUs” on your GPU, you have 6 billion computational cycles available every second. the catch is that all of these operations have to deal with completely independent and unaffiliated things. this works for graphics stuff because you’re usually rendering hundreds of independent, non-clipping models, and transforming one vert never requires the result of another, so no “CPU” ever has to wait on any other “CPU” before it can start working (real CPU cores, by contrast, constantly have to wait on and synchronize with each other).
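to see why “completely independent” matters, here’s the same kind of per-vert transform applied to a big pile of verts at once. this uses numpy on the CPU purely as an illustration; on a GPU, each row would literally be handled by a different little core:

```python
import numpy as np

verts = np.random.rand(100_000, 3)      # 100k verts, one (x, y, z) row each

yaw = np.pi / 4
c, s = np.cos(yaw), np.sin(yaw)
rot = np.array([[  c, 0.0,   s],
                [0.0, 1.0, 0.0],
                [ -s, 0.0,   c]])       # rotation about the y axis

# one operation transforms every vert; no row's result depends on any
# other row's -- that's the property that lets dumb-but-many cores win
rotated = verts @ rot.T
```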

anyway, going back to our 100 “CPUs” @ 60MHz each example. if you are rendering at 60 FPS, you have ~100 million cycles spread across 100 discrete cores available to you for every frame you wish to render. the raw cycle count isn’t the point; the point is the work being split 100 ways instead of 4, and real GPUs have thousands of cores each running far faster than 60MHz. that’s why video games nowadays seem so incredible and realistic
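the per-frame budget from that example, as napkin math:

```python
# toy numbers from the example above, not real GPU specs
gpu_cores = 100
core_hz = 60e6        # 60 MHz per core
fps = 60

total_cycles = gpu_cores * core_hz            # 6e9 cycles every second
per_frame = total_cycles / fps                # 1e8 cycles per frame
per_core_per_frame = per_frame / gpu_cores    # 1e6 cycles per core per frame

print(f"{per_frame:.0e} cycles/frame, {per_core_per_frame:.0e} per core")
```

a real GPU multiplies both numbers by orders of magnitude, but the shape of the math is the same.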