>>107917653
Raytracing. For context, all of this was meant to be done in a "stand-alone" fragment shader, paired with a minimal vertex shader like this:
#version 460
const vec2 positions[4] = vec2[4](vec2(-1.0f,-1.0f), vec2(1.0f,-1.0f), vec2(-1.0f,1.0f), vec2(1.0f,1.0f));
void main() { gl_Position = vec4(positions[gl_VertexIndex], 0.0f, 1.0f); }
I was going to go into explicit detail but realized the whole thing was getting rather time-consuming. If you want to learn more I'd suggest reading about raytracing and how the Fresnel equations are implemented. In raytracing the ray starts from the camera and eventually ends up sampling from the cubemap, i.e. you do normal optics in reverse (which still works mathematically, since the light paths are reversible). This makes sense because you don't care about all the rays leaving the light source, only the ones that actually enter the camera. So you trace from the ray's destination back to its source. Ergo, we pretend light travels backwards and flip all the rays.
You should read Real-Time Rendering, it has material on this stuff. Moving on ...
First consider rays that go from the camera but don't hit the geometry. Those rays just sample the skybox.
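In GLSL the skybox sample is just texture(samplerCube, rayDir) and the hardware picks the face for you. As an illustration of what that lookup does under the hood, here's a hedged C sketch of the major-axis face-selection rule (the enum and function names are my own, not a real API):

```c
#include <math.h>

/* Sketch of cubemap face selection: the face is chosen by whichever
   component of the (unnormalized is fine) direction has the largest
   absolute value, and that component's sign picks +face or -face. */
typedef enum { POS_X, NEG_X, POS_Y, NEG_Y, POS_Z, NEG_Z } cube_face;

static cube_face select_cube_face(float x, float y, float z) {
    float ax = fabsf(x), ay = fabsf(y), az = fabsf(z);
    if (ax >= ay && ax >= az) return x >= 0.0f ? POS_X : NEG_X;
    if (ay >= az)             return y >= 0.0f ? POS_Y : NEG_Y;
    return z >= 0.0f ? POS_Z : NEG_Z;
}
```

So a miss ray pointing mostly along -Y samples the bottom face, and so on; in the shader you never write this yourself.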
Next consider a ray from the camera that does hit the geometry: at the intersection it splits in two. One ray reflects off the surface and you follow it out to the cubemap; the other refracts into the shape. Use the Fresnel equations to determine how much of the final color comes from inside the geometry (refraction) versus the skybox (reflection).