The topic of my master's research was visibility culling in densely occluded environments. Visibility culling is the process of excluding geometry from the rendering pipeline that would otherwise slow down an interactive rendering application (see Hidden Surface Determination on Wikipedia).
The image above is from my master's thesis research project. It shows what a user could see from their point of view if they could see through buildings (the yellow region). The areas outside of the yellow region are not visible and should not be sent to the rendering pipeline.
The first image shows the entire computer graphics model: what a computer would draw if no visibility algorithms were employed. The second image shows the portion of the model that would be rendered if frustum culling were employed. Finally, the third image shows the portion of the model that would be rendered if both frustum and occlusion culling were employed. My research project focused on performing occlusion culling using a distributed algorithm.
Interactive rendering of complex computer graphics models is an important problem in computer science. Despite the increased performance of graphics workstations, model sizes have grown significantly faster than rendering performance. Visibility computation is an efficient method of selecting the subset of the model that is visible from a viewpoint.
There are several methods for computing visibility in polyhedral environments from a dynamic synthetic viewpoint. Algorithms exist for computing the exact visibility from a viewpoint in object space; however, due to their complexity, interactive applications rarely use them. The z-buffer algorithm is an image-space algorithm that resolves visibility at the pixel level by comparing the depth values of scan-converted polygons. While the z-buffer algorithm reliably produces the correct output image, it requires sending invisible polygons down the graphics pipeline for scan conversion, resulting in expensive lighting calculations and matrix transformations on invisible graphics primitives.
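The pixel-level depth comparison at the heart of the z-buffer algorithm can be sketched in a few lines. This is a minimal illustration with names of my own choosing, not the thesis implementation:

```python
# Minimal z-buffer sketch: each pixel keeps the nearest fragment's color.
def zbuffer_rasterize(fragments, width, height, far=float("inf")):
    """fragments: iterable of (x, y, depth, color) tuples."""
    depth_buffer = [[far] * width for _ in range(height)]
    color_buffer = [[None] * width for _ in range(height)]
    for x, y, depth, color in fragments:
        # Keep the fragment only if it is closer than what is stored.
        if depth < depth_buffer[y][x]:
            depth_buffer[y][x] = depth
            color_buffer[y][x] = color
    return color_buffer

# Two fragments land on the same pixel; the nearer one (depth 2.0) wins.
frags = [(0, 0, 5.0, "red"), (0, 0, 2.0, "blue")]
image = zbuffer_rasterize(frags, 1, 1)
```

Note that the "red" fragment was still fully processed before losing the depth test, which is exactly the wasted work described above.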
Current research has focused on techniques that conservatively over-estimate the set of geometry visible from a synthetic viewpoint. Conservative algorithms may classify invisible geometry as visible but never classify visible geometry as invisible. The output of these algorithms is fed to the z-buffer to produce the correct output image. View frustum culling conservatively eliminates portions of the model outside the current view frustum. Occlusion culling conservatively eliminates portions of the model occluded by other geometry.
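A common way to make view frustum culling conservative is to test a bounding sphere against the frustum planes. The sketch below is illustrative (the plane representation and names are my own assumptions, not the thesis code):

```python
# Conservative view-frustum culling with bounding spheres.
# Each plane is (normal, d), normal pointing into the frustum, so a point p
# lies inside the half-space when dot(normal, p) + d >= 0.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sphere_in_frustum(center, radius, planes):
    """Returns False only when the sphere is entirely outside some plane,
    so the test may accept invisible geometry but never rejects visible
    geometry (conservative)."""
    for normal, d in planes:
        if dot(normal, center) + d < -radius:
            return False  # entirely outside this plane: cull
    return True  # possibly visible

# A single plane x >= 0: a sphere at x = -5 (radius 1) is safely culled,
# while one straddling the plane is kept.
planes = [((1.0, 0.0, 0.0), 0.0)]
```

A sphere intersecting a plane is kept even if the object inside it is invisible, which is precisely the over-estimation the text describes.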
In addition to the conservative methods mentioned above, level-of-detail (LOD) management and back-face culling are used to reduce the rendering complexity of scenes. LODs allow the amount of detail rendered for a particular object to vary by maintaining multiple representations of it. Back-face culling eliminates polygons not facing the synthetic viewpoint. When used together, these techniques make interactive rendering of many models possible.
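The back-face test itself is a single dot product. A minimal sketch, with names of my own choosing rather than any code from the thesis:

```python
# Back-face culling: a polygon faces away from the viewpoint when its
# outward normal points away from the eye, i.e. dot(normal, vertex - eye) >= 0.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def is_back_facing(normal, vertex, eye):
    """normal: outward polygon normal; vertex: any point on the polygon."""
    view = tuple(v - e for v, e in zip(vertex, eye))
    return dot(normal, view) >= 0

# Eye at the origin, polygon at z = -5: a normal pointing toward the eye
# (+z) is front-facing; one pointing away (-z) can be culled.
eye = (0.0, 0.0, 0.0)
vertex = (0.0, 0.0, -5.0)
```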
Nevertheless, state-of-the-art computer graphics workstations cannot render large models (i.e., >> 10^7 polygons) at interactive rates even using all the visibility methods above. This limitation is attributed to one of two factors. First, an algorithm that is too conservative will produce a set of geometry that is too large to render interactively. Second, an algorithm that is not conservative enough will spend too much time computing a tight visible set to render it interactively. The focus of this research is to develop new methods for computing visibility that overcome both factors.
Usually, visibility computation and rendering are coupled together on a single computer (see figure 1). In the case of multiprocessor graphics workstations, different processors compute the stages of the pipeline. One of the goals of this research is to augment the traditional single computer visibility pipeline with a Network of Workstations (NOW) visibility processing system (see figure 2).
In the single computer visibility pipeline, geometry flows through the various processing stages in pipeline fashion. At each stage, the output geometry should be a subset of the input geometry. In particular, the amount of geometry flowing from the back-face culling stage should be roughly 50% of the input geometry.
In the NOW visibility processing system, the single computer visibility pipeline is shortened. Remote Distributed Visibility Servers (DVS) compute the occlusion, LOD and back-face culling stages on behalf of the graphics host in real time. The graphics host sends motion parameters indicating the acceleration, maximum velocity, initial position and direction of the moving viewpoint to the DVS. Upon receiving the motion parameters, the DVS dead-reckons the position of the viewpoint. As the viewpoint moves, the DVS computes the visibility of objects in the database. The visibility results are sent to the graphics host via a local-area network (LAN) as visibility certificates.
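Dead-reckoning from the motion parameters listed above can be sketched as follows. This is a simplified kinematic model under my own assumptions (straight-line motion, constant acceleration up to the maximum velocity), not the exact scheme used by the DVS:

```python
# Predict the viewpoint position from motion parameters: initial position,
# direction (unit vector), acceleration, and maximum velocity.
def dead_reckon(initial_pos, direction, accel, max_vel, t):
    """Position after t seconds: accelerate at `accel` along `direction`
    until `max_vel` is reached, then cruise at `max_vel`."""
    t_ramp = max_vel / accel                        # time to reach max velocity
    if t <= t_ramp:
        dist = 0.5 * accel * t * t                  # still accelerating
    else:
        ramp_dist = 0.5 * accel * t_ramp * t_ramp   # distance covered ramping up
        dist = ramp_dist + max_vel * (t - t_ramp)   # plus cruising distance
    return tuple(p + d * dist for p, d in zip(initial_pos, direction))

# Accelerate at 2 u/s^2 to a max of 4 u/s along +x; at t = 3 s the viewpoint
# has traveled 0.5*2*2^2 + 4*1 = 8 units.
pos = dead_reckon((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 2.0, 4.0, 3.0)
```

Because both the host and the DVS evaluate the same closed-form motion, the servers can track the viewpoint without receiving per-frame updates over the LAN.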
Visibility certificates are conservative visibility guarantees. In the case of occlusion culling, a visibility certificate consists of a set of n planes and an objectId. The certificate guarantees that an object (or set of objects) will be invisible while the viewpoint remains in the region bounded by the n planes. The planes are computed based on the methods of Coorg and Teller (1997). Figure 3 shows a 2-D example of a synthetic viewpoint V, an occluder A and an object B. Object B is invisible while the viewpoint is in the positive half-space of the supporting planes formed by the occluder and occludee.
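Checking whether a certificate still holds reduces to testing the viewpoint against the n planes. A minimal sketch, with a plane representation that is my own assumption:

```python
# A visibility certificate holds while the viewpoint stays in the positive
# half-space of every supporting plane; each plane is (normal, d) and the
# certified region is where dot(normal, v) + d >= 0.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def certificate_valid(viewpoint, planes):
    """True while the certified object is guaranteed invisible."""
    return all(dot(n, viewpoint) + d >= 0 for n, d in planes)

# One supporting plane x >= 1: the certificate (objectId's geometry is
# invisible) holds at x = 2 but expires once the viewpoint crosses to x = 0.
planes = [((1.0, 0.0, 0.0), -1.0)]
```

Once a certificate expires, the graphics host must treat the object as potentially visible again, which keeps the scheme conservative.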
Satyan Coorg and Seth Teller. Real-time occlusion culling for models with large occluders. In Proc. of ACM Symposium on Computational Geometry, 1997.