Volume Ray Casting (part 2)

From Math Images

If you have not done so already, please read the first part of this, Volume Ray Casting.


Figure 1: A Ray Cast Through a Volume
Figure 2: The Cube of Points Used for Trilinear Interpolation

A More Mathematical Explanation (continued)

By this point, you should have completed the first three steps, and have a camera base, a virtual screen, and viewing rays. The two remaining steps are as follows:

4. Sample along each ray
5. Combine the samples on each ray to create the final image


Step 4: Sampling Along Each Ray

The next step is to sample the data at regular intervals along each ray as it passes through the volume. Figure 1 shows an example ray passing through a set of points. Because there is space between the data points that make up the volume, a ray is unlikely to pass exactly through any of them. So, we must use a technique known as trilinear interpolation to approximate the value at each sample point on the ray.

Before interpolating, we must find the locations of the sample points on the ray. First, choose the distance between samples; a good choice is a distance close to the spacing between the data points in the volume. Next, step along the ray in intervals equal to the distance you chose. For each step, or sample, find the eight data points that form a cube around the sample point. Figure 2 outlines the cube formed around the sample point indicated by the orange arrow.

Trilinear interpolation is simply linear interpolation performed along each of the three axes. Linear interpolation takes the values of the data points on either side of the sample point and assigns the sample a value weighted by how close it is to each of the two end points. The Wikipedia article on trilinear interpolation gives a more technical explanation, along with helpful illustrations.
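The stepping and interpolation described above can be sketched in a few lines of Python. This is a minimal illustration, not code from the original tutorial: the function names are made up, the cube values `c` are indexed as `c[x][y][z]` with each index 0 or 1, and `(fx, fy, fz)` are the sample point's fractional offsets inside the cube.

```python
def sample_points(origin, direction, step_size, num_samples):
    """Points at regular intervals along a ray (direction assumed unit length)."""
    return [tuple(o + i * step_size * d for o, d in zip(origin, direction))
            for i in range(num_samples)]

def trilinear(c, fx, fy, fz):
    """Trilinear interpolation: linear interpolation along x, then y, then z.

    c[x][y][z] holds the eight corner values of the cube around the sample;
    fx, fy, fz in [0, 1] locate the sample point within that cube.
    """
    # Interpolate along x: the cube's eight corners collapse to four values.
    c00 = c[0][0][0] * (1 - fx) + c[1][0][0] * fx
    c01 = c[0][0][1] * (1 - fx) + c[1][0][1] * fx
    c10 = c[0][1][0] * (1 - fx) + c[1][1][0] * fx
    c11 = c[0][1][1] * (1 - fx) + c[1][1][1] * fx
    # Interpolate along y: four values collapse to two.
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    # Interpolate along z: two values collapse to the final sample value.
    return c0 * (1 - fz) + c1 * fz
```

For example, if the four corners at x = 0 all hold the value 0 and the four at x = 1 all hold 1, a sample a quarter of the way along the x axis interpolates to 0.25, regardless of its y and z offsets.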


Step 5: Combining the Samples on Each Ray to Create the Final Image

Once you have interpolated the values of the samples on each ray, you need to combine them into a single value for each pixel in the final image. This is done by traversing the ray either back-to-front (starting with the background and the sample farthest from the camera) or front-to-back. As each sample is reached, it is combined with the running total for the pixel. To combine two RGBA values from the sample points, the opacity (the alpha value, A) must be accounted for. The opacity of each sample depends partly on the type of material being scanned, and determining it is beyond the scope of this tutorial, as well as my understanding. For the sake of this tutorial, assume you are given the alpha values. The simplest way to combine these color values is to multiply the R, G, and B values by the alpha, and then add the R, G, B, and A values of the two colors together. This produces the effect you have likely seen in various medical images, where the thicker, more solid parts of the bone or tissue being imaged look more opaque than the thinner parts, like the edges of the tissues, or the eye sockets in a skull.

Example

s1 = (255, 0, 0, 0.2)  (red)
s2 = (0, 0, 255, 0.5)  (blue)

Apply the alpha values to the RGB values.

s1 = (51, 0, 0, 0.2)
s2 = (0, 0, 127.5, 0.5)

Now, add the RGBA values together.

total = (51+0, 0+0, 0+127.5, 0.2+0.5)
= (51, 0, 127.5, 0.7)

The result is a bluish purple, since there is more blue than red. The blue sample was more opaque, so it had a greater influence on the resulting color. Also, notice that the result has an opacity of 0.7, so you can imagine that as more sample colors are added, the total color gets more and more opaque.
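The worked example above can be reproduced with a short sketch of this simple compositing scheme. This assumes, as the tutorial does, that the alpha values are already given and that samples are supplied as (R, G, B, A) tuples ordered along the ray; the function name is illustrative.

```python
def composite(samples):
    """Combine RGBA samples along one ray into a single pixel value.

    Multiplies each sample's color channels by its alpha, then sums
    the weighted channels and the alphas, as described in the tutorial.
    """
    r_total = g_total = b_total = a_total = 0.0
    for r, g, b, a in samples:
        r_total += r * a
        g_total += g * a
        b_total += b * a
        a_total += a
    return (r_total, g_total, b_total, a_total)

# The two samples from the example: a faint red and a stronger blue.
s1 = (255, 0, 0, 0.2)
s2 = (0, 0, 255, 0.5)
print(composite([s1, s2]))  # roughly (51, 0, 127.5, 0.7)
```

Note that with this scheme the channel totals and the alpha can grow without bound as more samples are added, so in practice the final values would be clamped to the displayable range before being stored in the pixel.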

Once this total color value is calculated, it is stored as the color of the current pixel. When the colors of all the pixels have been calculated, the data is output into an image file format of some type, so that it can be viewed.
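As a minimal illustration of this last step, the finished pixel grid can be written out in the plain-text PPM format, which needs no image library. The pixel data here is made up, and the function name is illustrative; it assumes each pixel is an (R, G, B) triple of integers already clamped to 0-255.

```python
def write_ppm(path, width, height, pixels):
    """Write rows of (R, G, B) integer triples (0-255) as a plain-text PPM file."""
    with open(path, "w") as f:
        f.write(f"P3\n{width} {height}\n255\n")
        for row in pixels:
            f.write(" ".join(f"{r} {g} {b}" for r, g, b in row) + "\n")

# A tiny 2x2 image: red, green / blue, white.
pixels = [[(255, 0, 0), (0, 255, 0)],
          [(0, 0, 255), (255, 255, 255)]]
write_ppm("out.ppm", 2, 2, pixels)
```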


Figure 3: The crocodile mummy provided by the Phoebe A. Hearst Museum of Anthropology, UC Berkeley. CT data was acquired by Dr. Rebecca Fahrig, Department of Radiology, Stanford University, using a Siemens SOMATOM Definition, Siemens Healthcare. The image was rendered by High Definition Volume Rendering® engine (Fovia, Inc).

Going Further

The methods described here are just the basic, simplified ways to perform volume ray casting. The algorithms used to create Figure 3 are likely far more complicated than anything you have read here. Figure 3 is a good example of the level of detail that can be obtained with volume ray casting. If you are interested in learning about how to improve on what you have already learned, I suggest looking into computing shading for the sample points, as well as better ways to combine the samples on the rays.