**Part 4: Neural Surface Rendering**
# Sphere Tracing

First, we implement differentiable [sphere tracing](https://www.researchgate.net/publication/2792108_Sphere_Tracing_A_Geometric_Method_for_the_Antialiased_Ray_Tracing_of_Implicit_Surfaces) for rendering an SDF, and use it to render a simple torus.

Sphere tracing (also known as "ray marching" or "sphere marching") is an algorithm for rendering 3D geometries represented implicitly. It works by casting rays from the camera and tracing their paths through the scene until they intersect a surface. In traditional ray tracing, the intersection point is computed by testing every object in the scene against the ray, which can be computationally expensive. Sphere tracing instead exploits the fact that most of a scene is empty space: at each step, imagine a sphere centered at the current point on the ray whose radius is the distance to the nearest surface. The ray can safely advance by that radius without passing through any geometry, and the process repeats from the new point until the ray hits a surface or the maximum number of iterations is reached.

In our implementation, we query the implicit signed distance function to obtain the distance from a point to the surface of the object. That distance is used to offset the current point along the ray in an iterative manner, until the maximum number of iterations is reached or the distance falls below a specified epsilon. A final mask is computed to indicate whether each ray hit the object or not.
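Below is a minimal sketch of such a sphere tracer in PyTorch; the function signature and the constants (`max_iters`, `eps`, `far`) are illustrative assumptions, not our exact implementation:

```python
import torch

def sphere_trace(sdf, origins, directions, max_iters=64, eps=1e-5, far=10.0):
    """Sphere-trace a batch of rays against an implicit SDF (hypothetical helper).

    sdf:        callable mapping (N, 3) points to (N, 1) signed distances
    origins:    (N, 3) ray origins
    directions: (N, 3) unit ray directions
    Returns (N, 3) surface points and an (N, 1) boolean hit mask.
    """
    t = torch.zeros(origins.shape[0], 1, device=origins.device)
    points = origins
    # A fixed iteration count keeps the loop batch-friendly and differentiable.
    for _ in range(max_iters):
        dist = sdf(points)                 # distance from current point to nearest surface
        t = t + dist                       # safe step: advance by that distance
        points = origins + t * directions
    final_dist = sdf(points)
    # A ray counts as a hit if it converged onto the surface within the far bound.
    mask = (final_dist.abs() < eps) & (t < far)
    return points, mask
```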
The following visualization shows the result of rendering a torus with the sphere tracer:

**Optimized**
# Phong Relighting
We implement the [Phong reflection model](https://en.wikipedia.org/wiki/Phong_reflection_model)
in order to render the SDF volume we trained under different lighting conditions.
In principle, the Phong model supports multiple light sources from different directions,
but our implementation assumes a single directional light source.
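As a sketch of what this looks like in code, the snippet below evaluates the standard Phong terms, ambient plus diffuse plus specular, $I = k_a i_a + k_d (L \cdot N)\, i_d + k_s (R \cdot V)^{\alpha} i_s$, for a batch of surface points; the function name and default coefficients are illustrative assumptions rather than our exact implementation:

```python
import torch

def phong_shading(normals, view_dirs, light_dir, albedo,
                  ka=0.1, kd=0.8, ks=0.6, shininess=32.0):
    """Phong reflection for a single directional light (hypothetical helper).

    normals, view_dirs: (N, 3) unit vectors; view_dirs point from surface to camera
    light_dir:          (3,) unit vector pointing from surface toward the light
    albedo:             (N, 3) base color predicted by the trained network
    """
    n_dot_l = (normals * light_dir).sum(-1, keepdim=True).clamp(min=0.0)
    # Reflect the light direction about the normal: R = 2(N.L)N - L
    reflect = 2.0 * n_dot_l * normals - light_dir
    r_dot_v = (reflect * view_dirs).sum(-1, keepdim=True).clamp(min=0.0)
    ambient = ka * albedo
    diffuse = kd * n_dot_l * albedo
    specular = ks * r_dot_v.pow(shininess)   # white specular highlight
    return (ambient + diffuse + specular).clamp(0.0, 1.0)
```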
The rendered volume under different lighting is shown below:
Appearance | Geometry
:---:|:---:
 | 
# Fewer Training Views
In Part 3, we trained on 100 views of a single scene.
A benefit of using surface representations, however, is that the geometry is better regularized
and can in principle be inferred from fewer views.
In this section, we experiment with fewer training views: 25, 50, and 75.
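The subsampling itself is straightforward; a minimal sketch is below, where the helper name and random selection strategy are assumptions (evenly spaced indices would work just as well):

```python
import numpy as np

def subsample_views(num_total=100, num_keep=25, seed=0):
    """Randomly keep `num_keep` of `num_total` training views (illustrative helper)."""
    rng = np.random.default_rng(seed)
    keep = rng.choice(num_total, size=num_keep, replace=False)
    return np.sort(keep)
```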
We observe that the geometry is still reconstructed well with fewer training views,
but the appearance degrades noticeably compared to training with all 100 views.
Num Views | Appearance | Geometry
:---:|:---:|:---:
25 | |
50 | |
75 | |
100 | |