The panorama module employs a stitching technique that is highly dependent upon source images captured by rotating a camera about its nodal point. The nodal point of a camera is the exact point within the lens where the image inverts. It is not equivalent to the film plane.
Failure to rotate the camera about its nodal point results in a phenomenon known as parallax. To understand parallax and the effect it can have on panoramas, a simple experiment can be conducted. The photographer should begin by holding the index finger of each hand in front of their face, the right finger about 10 inches away and the left finger at arm's length. Next, they should close one eye and align their hands so the right finger obscures the left. Now they should turn their head from right to left. As they do so, they will see the left finger move from left to right, along with their head. This occurs because they are not rotating their head about the eye's nodal point, but about the neck, which is significantly behind it.
A camera is no different from the photographer's eye. Failing to rotate a camera about its nodal point will result in physical differences between adjacent images, as background objects are obscured by foreground objects in one image but not in another. This is known as parallax, and it is impossible to accurately stitch a series of images that are subject to parallax.
The solution to this problem is to ensure the camera is rotated about its nodal point. Unfortunately, this is not easy to accomplish. Simply mounting a camera to a standard tripod will not work, since the camera will be rotated about its mounting hole, which is likely to be near the camera's center of gravity, not its nodal point. Fortunately, there are special pan heads, from companies such as Kaidan and Peace River Studios, designed to mount a camera to a tripod in such a manner that it is rotated about its nodal point. These pan heads are adjustable, so they can accommodate a wide range of cameras and lenses, each of which will have a different nodal point. The catch is that the operator must first find the camera's nodal point.
The process for finding a camera's nodal point is very similar to the “finger experiment” performed earlier. There are two axes that must be adjusted before a camera's mounting position can be considered nodal. The first axis, which shall be referred to as the X-axis, is easy to determine. While facing the front of the camera, loosen the pan head's adjustment screw so the camera can be slid from left to right. Then slide the camera so that the lens appears centered above the pan head's center of rotation. Once in position, tighten the adjustment screw so the camera cannot slide any more. Most quality pan heads have markings so the X-axis position can be recorded. If so, the current position should be recorded for later reference, in case the photographer ever needs to break down the pan head or mount a different camera.
The next axis to be adjusted, the Y-axis, concerns the front-to-back position of the camera. In this step, the nodal point of the camera must be positioned precisely over the rotation point of the pan head. Unfortunately, since most cameras do not have their nodal point marked, it must be determined through observation. First, the photographer must find two objects to assume the role of the fingers in the experiment performed earlier. Telephone or light poles perform the role nicely, but virtually any pair of near and distant objects will do. Begin by loosening the pan head's adjustment screw so the camera can slide front-to-back. Next, aim the camera at the object pair and position it so the near object obscures the distant object. Now rotate the pan head from left to right and observe the behavior of the distant object. Slide the camera front-to-back until the distant object remains obscured by the near object throughout the rotation. When this occurs, the nodal point has been found! Tighten the pan head's adjustment screw and record its position, as was done for the X-axis.
The above technique only works if the photographer has an SLR camera or a digital camera with an LCD viewfinder. Cameras with separate viewfinders present a problem, since the viewfinder is offset slightly from the lens. Employing the above technique will find the nodal point of the viewfinder, not the lens. In such cases, the photographer will have to take a pair of photographs at several different positions of the camera and evaluate the developed images. Finding the X-axis is exactly the same, since the photographer is visually aligning the lens over the rotation point. When finding the Y-axis position, the photographer will have to establish a range of movement for the camera and test several positions within that range.
Establishing a range of movement is simple, since the nodal point will lie somewhere between the front of the lens and the film plane of the camera. Determine what this range is and record its endpoints on the pan head, so the range is not exceeded. Then position the camera at the front of the range and aim it at the pair of objects. Take a picture, rotate the pan head to its next stop position, and take another picture. Then slide the camera back slightly within its range, and repeat the process by taking another pair of images. For each pair of images, the pan head's position must be recorded for later reference. The photographer may also want to place some indication of the pan head's position (such as a whiteboard with the current position written on it) within the camera's field of view, so the position is physically recorded with the picture. This is a precaution against the unlikely instance that the developed images do not match the order in which they were taken. Repeat this until the entire range of positions has been traversed.
Once the final set of images has been taken, they must be evaluated by analyzing the relative position of the foreground and background objects. This is necessary since it is unlikely that the camera was aligned so perfectly that the near object exactly obscures the distant one. Choose the pair of images in which the relative position of the two objects does not change, and use the recorded position for that pair to adjust the pan head. The nodal point has now been found!
Choosing The Right CODEC
The options available when selecting a codec are a function of the codec, not of The VR Worx. In the codec selector window, The VR Worx simply tells QuickTime to choose the “best depth,” and the codec then displays what it considers to be the best depth. Some codecs do not even offer a choice of depths. Generally, it is best not to change whatever depth the codec considers “best.” Any changes made to the codec type, depth or quality settings are recorded by The VR Worx software and passed on verbatim to QuickTime when it comes time to compress images.
The codec quality setting is also a function of each codec, not of The VR Worx. Some codecs are lossless, and thus offer no choice other than “best.” Lossy codecs typically, though not always, offer a range of quality settings. Generally, quality is a tradeoff between image fidelity and file size, and it is where the most control over a resultant QTVR movie's file size lies. The codecs listed in the codec selector (Figure 13.1) are all the codecs installed on the system. This typically represents just the default set provided by QuickTime, but third-party codecs are possible. Many of the listed codecs are inappropriate; generally, QTVR is limited to four choices.
Cinepak - Low on the quality scale and definitely larger in file size than the other options, this codec decompresses faster than anything else, making it the best choice when slow playback hardware must be accommodated. Also, if playback is performed on a machine equipped with a version of QuickTime older than 3.0, the operator has little choice, as other codecs will yield unpredictable results (movies also have to be exported in QTVR 1.0 format for these older systems).
Video - Sometimes a viable option for object movies, especially when played on slower machines or pre-QuickTime 3.0 machines. Much higher fidelity than Cinepak, but with much larger file sizes to match. Not really an appropriate option for panoramas in any situation.
Photo-JPEG - Good on quality, file size and performance. Playback on slower machines (<120 MHz Pentium or <100 MHz PowerPC) may be a bit jumpy, so reverting to Cinepak may be necessary. Images show significant artifacts and “harsh” edges at lower quality settings. Photo-JPEG at a 25% quality setting yields very good file sizes for the web (panoramas or objects).
Sorenson - Only available on systems equipped with QuickTime 3.0 or later, it provides excellent image quality and small file sizes, at the expense of requiring the most horsepower to play back. This implies using 200-300 MHz systems or better. It will achieve better image quality and dramatically smaller file sizes than Photo-JPEG at higher quality settings. Again, Sorenson at 25% quality is typical for the web, but the operator may be able to drop to an even lower quality and still produce an acceptable image. If the source images come from computer-generated art, the Graphics codec may also be an option; it is only suitable when the image has many pure, solid colors.
Determining FOV of a Lens
Without knowing the field-of-view of a camera's lens, it is extremely difficult to achieve good quality stitches from source images. For this reason, it is very important to know the field-of-view with a reasonable degree of certainty.
If using a 35 mm film camera to take the source pictures, the field-of-view is relatively simple to obtain. The field-of-view of a lens is determined by its focal length, a specification that is widely advertised for 35 mm film cameras. The first and most logical place to look for the focal length of a lens is on the lens itself. It is usually stamped or printed on the front of the lens, expressed as a number in millimeters (mm). Focal lengths in the range of 28 - 50 mm are typical. Only higher-end cameras with interchangeable lenses will offer focal lengths wider than 28 mm (the shorter the focal length, the wider the field-of-view).
If using a digital camera or a non-35 mm film camera, the focal length may be just as easy to obtain. However, verify that the focal length advertised for such cameras is a “35 mm equivalent” focal length. The VR Worx assumes a 35 mm image size when calculating the field-of-view from a lens's focal length. With digital cameras and non-35 mm film cameras, the image size is usually smaller, and thus specifying the true focal length of the camera's lens would lead to erroneous results.
Luckily, most camera manufacturers state the focal length of their lenses as a 35 mm equivalent. For instance, the Kodak® DC220 digital camera has printed on its lens “F.4.0 - 4.7 29 - 58 mm (Equiv).” In this case, 4 mm is the actual (wide angle) focal length, while 29 mm is the (wide angle) focal length equivalent to a 35 mm film camera. It is the equivalent focal length that must be specified in the panorama module's Setup panel. If attempts to find a camera's focal length by inspecting the lens and reviewing the accompanying documentation fail, the following techniques may be employed to determine the actual field-of-view.
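For reference, the relationship between a 35 mm-equivalent focal length and field-of-view follows from the standard pinhole-camera geometry. The sketch below is an illustration only, not The VR Worx's internal calculation; the 36 mm frame width (the long side of a 35 mm frame) is an assumption.

```python
import math

def fov_from_focal_length(focal_mm, frame_width_mm=36.0):
    """Approximate horizontal field-of-view, in degrees, for a
    35 mm-equivalent focal length.

    Uses the standard pinhole relationship FOV = 2*atan(w / 2f).
    The 36 mm frame width is an assumption; The VR Worx's exact
    internal frame dimension is not documented here.
    """
    return math.degrees(2 * math.atan(frame_width_mm / (2 * focal_mm)))

# The Kodak DC220's 29 mm equivalent works out to roughly 64 degrees.
```

This also illustrates why the 35 mm-equivalent value matters: plugging the DC220's true 4 mm focal length into the same formula would yield a wildly different (and wrong) field-of-view.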
Method 1 - Check the Stationery Templates The VR Worx ships with stationery templates for over 40 different models of cameras. Check the list, and see whether the camera in question is among them. To do this, choose Open from the File menu. Click the stationery template icon (Macintosh), or choose “Stationery Template” from the file type list (Windows), and the list of available templates appears. If the camera model is listed, opening its corresponding file displays a template, with the correct settings for the camera preprogrammed.
Method 2 - Compare Two Pictures If a stationery template for the camera in question does not exist, an analysis of images will have to be made. Set the camera and pan head up to shoot a panorama, but take only the first two pictures in the node sweep. The left picture will be referred to as Image 1, and the right picture as Image 2. With the photographs developed (or downloaded, in the case of digital cameras), measure the width of a photograph. The units of measure are unimportant, as long as the measurements are consistent. Once the width of the images is known, locate a prominent feature that is visible in both pictures. Measure the distance from the right edge of Image 1 to the prominent feature; this distance will be called Offset 1. Now measure the distance from the left edge of Image 2 to the same prominent feature, and call this distance Offset 2. Now plug the measurements into the following formula.
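The formula itself does not survive in this copy of the text. A plausible reconstruction, treating the overlap region as the span covered by the two offsets together, is sketched below; this expression is an assumption, not the manual's original formula.

```python
def percent_overlap(width, offset1, offset2):
    """Estimate the % overlap between two adjacent frames.

    Assumption (the original formula is missing from this text): the
    feature sits offset1 from Image 1's right edge and offset2 from
    Image 2's left edge, so the shared region spans offset1 + offset2
    of the frame width.  All three values must use the same units.
    """
    return (offset1 + offset2) / width * 100.0
```

For example, on 100 mm wide prints with the feature 20 mm from Image 1's right edge and 15 mm from Image 2's left edge, the frames would overlap by about 35%.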
Now, have The VR Worx figure out the rest. In the Setup panel, supply all the known information, such as a 360° Node Sweep, the Max Frame count (according to the pan head setting) and image dimensions matching the digitized photos' pixel dimensions. Next, enter 90° in the FOV field. The % overlap is calculated based on all the supplied parameters. If the number displayed is greater than the % overlap calculated from the above formula, reduce the FOV value by 5°; if it is less, increase the FOV by 5°. Repeat these steps until a FOV value is found that matches the calculated overlap.
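The manual iteration above can be sketched as a simple search. The `overlap_for_fov` callback is a hypothetical stand-in for the % overlap figure The VR Worx displays for a given FOV value; the 5° step follows the text's instructions.

```python
def find_fov(calculated_overlap, overlap_for_fov, start_fov=90.0):
    """Sketch of the manual FOV iteration described in the text.

    overlap_for_fov is a hypothetical stand-in for the % overlap that
    The VR Worx would display for a given FOV.  Wider FOV per frame
    means more overlap between fixed frames, so the search steps down
    when the displayed overlap is too high and up when it is too low.
    """
    fov = start_fov
    for _ in range(50):  # safety bound on the number of iterations
        shown = overlap_for_fov(fov)
        if abs(shown - calculated_overlap) < 1.0:  # close enough
            return fov
        fov += -5.0 if shown > calculated_overlap else 5.0
    return fov
```

In practice the operator performs this loop by hand in the Setup panel, reading the displayed % overlap after each FOV change.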
Archiving Panorama Projects
When a project document is saved from the panorama module, all the source images, as well as the rendered panoramic image (the result of stitching and blending), are saved in the document. Depending on the size of the images involved, this can require a large amount of disk space. Frequently, once the panoramic image has been blended and the operator is satisfied with it, there is no need to ever restitch again, and hence the source images stored in the project document are no longer needed. They are simply wasting space.
To overcome this, the source imagery can be eliminated by “archiving” the project. To do this, the operator should perform the following steps:
Blend the panorama fully.
Switch back to the Setup panel and change the Source Format to Single Panorama.
A warning appears indicating that source images will be lost. Click OK.
The source images will then be removed, and the Acquire and Stitch panels are dimmed. The Blend panel is still available, however, and switching to it reveals the blended panorama intact.
The project now behaves as if the panorama had been generated in another application and imported into the panorama module as a “flat” panoramic image. Saving the project document at this point will result in a significantly smaller file, and thus more efficient disk usage.
Increasing Stitch Accuracy
When stitching a series of images in the Stitch panel of the panorama module, The VR Worx evaluates the imagery and sets up several internal values, which are subsequently used to transform and correlate the source images. This is true whether automatic or manual correction is chosen; in the latter case, the analysis is only performed the first time the series of images is stitched. This phase of the stitching process is indicated in the progress dialog by the message Analyzing Images….
During this phase, The VR Worx analyzes only the first few images in the panorama and assumes that the results of this analysis are valid for the rest of the panorama. This is usually, but not always, the case. Panoramas that vary greatly in subject matter as the view is panned, such as a beachfront vista or the corner of a room, can have their stitch quality greatly improved by strategically choosing the first set of images the software will analyze.
For example, consider a panorama taken from the deck of a beachfront property overlooking the ocean. The series of images that look toward the ocean do not have much detail or many contrasting objects that the software can key on during correlation. However, the views that face the structures on the beachfront provide plenty of detail and contrast. In this case, it would be most desirable for the initial images analyzed by the software to be those of the structures, not those looking toward the ocean.
As another example, consider a panorama of an interior room. The tripod is placed toward one end of the room, right next to a wall. The wall itself is plain and featureless, but the rest of the room provides plenty of detail. Here, the initial images should not be of the wall, but of the feature-filled room.
To specify which image should be analyzed first, the operator should select the desired image in the Acquire panel by clicking its thumbnail, then choose the Make Origin command from the Edit menu. The frame cylinder rotates so that the origin frame is displayed at the top of the cylinder. The software will now analyze this image first, the next time a Stitch operation is performed.
Repurposing QTVR Media
When QTVR media is generated, be it a panorama, object or scene, it often needs to be delivered over multiple mediums. For example, a series of panoramas linked together in a scene for a real estate presentation may be distributed on a CD-ROM as well as on a web site. In such cases, the CD-ROM version will typically be fairly high in quality, while the web version will be lower in quality to preserve bandwidth. Aside from the quality, there is otherwise no difference between the two versions of the media. Generating multiple versions of the same media at different quality settings in this way is called “repurposing” the media.
The VR Worx has a powerful capability for repurposing media: the scene module. When composing scenes, the final stage of the process allows for recompression of the source media. The operator has independent control over the recompression of panoramas, objects, stills and linear movies that may be part of the scene. The scene module can also compose single-node “scenes,” which in essence transforms the module into a node filter of sorts, taking as input a single QTVR media node and outputting a transformed version of the original.
When recompression is specified, each image frame in the source media is rendered internally at the highest possible resolution. The internal rendering is then recompressed according to the new settings and stored in the final QTVR movie. To achieve the best possible results, keep the source image quality as high as possible; this prevents the significant image degradation that occurs when recompressing already-compressed imagery.
The basic concept of repurposing media, be it panoramas or objects, is to create the original media with as high a quality setting as possible, add it to a scene, and rely on the scene module's ability to recompress the media to create the final product. For example, a panorama may be created in the panorama module and composed using no compression. The result is the highest possible image fidelity, but an unmanageable file size. This QTVR movie is added to a scene as a node. The scene is then composed with recompression for panoramas set up to use the Sorenson codec at a 90% quality setting. This results in a high quality panorama with a fairly large file size, though significantly smaller than the original uncompressed panorama. The scene can then be recomposed, this time with a Sorenson/25% setting. The resultant QTVR movie is lower in quality, but significantly smaller in size.
The advantage of employing this technique is that it eliminates the need to recompose the panorama in the panorama module for each version of the QTVR media that needs to be produced. The time savings compound when multiple media files are involved, such as in a multi-node scene.
Using the Lens Tuner Utility
1. Launch VR Worx 2.6. You will be prompted with a choice to create a new project file or to open a previously created one. Choose "Panorama".
2. If this is your first time using VR Worx 2.6, you will need to create a camera preset that is specific to your camera/lens combination. Select the Utilities menu and choose Lens Tuner.
3. Enter the number of images shot to create the panorama. In this example, 8 images were captured using a Nikon D70 with a Nikkor 10.5mm full frame fisheye lens. Due to the camera's crop factor, the focal length of the lens is multiplied by 1.5 to give a 35mm equivalent of 16mm. Select two adjacent images, one on the left and one on the right. Enable the filters and choose the Fisheye Rectification filter.
Click and hold the FOV down arrow until the image overlap areas are aligned as closely as possible. The focal length may have to be adjusted slightly up or down to move the images closer together or farther apart so that the alignment is more exact. When this is achieved, select the Add Preset button and name the preset. In this example, the preset was named Nikkor 10.5mm.
If a regular wide-angle lens was used to capture your images, choose the Barrel Distortion filter and adjust the coefficients until the image overlap areas align as closely as possible. The "A" coefficient, when changed, will affect the outer areas of the image. The "B" coefficient is the most common one to change; this value will affect the entire image uniformly. The "C" coefficient, when changed, will affect the inner areas of the image.
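The A/B/C coefficients described above resemble the radial polynomial correction model popularized by Panorama Tools. Assuming that common model (an assumption; the manual does not give the formula), each pixel's normalized distance from the image center is remapped as follows, which is why the higher-order A term acts most strongly on the outer areas of the image:

```python
def corrected_radius(r, a, b, c):
    """Remap a pixel's normalized radius r (0 at center, 1 at the
    edge) using a PanoTools-style a/b/c barrel-distortion polynomial.

    Assumption: this is the common model, not a documented VR Worx
    formula.  d is chosen so the image edge (r = 1) maps to itself,
    i.e. a + b + c + d = 1.
    """
    d = 1.0 - a - b - c
    return (a * r**3 + b * r**2 + c * r + d) * r
```

Because r is normalized, the cubic A term is negligible near the center and dominant near the edge, matching the text's description of which area each coefficient affects most.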
4. Enter the number of images you captured in the Max Frames field of the panorama setup panel. Then choose the camera preset you created.
5. Go to the Acquire panel and select the "Multiple" button. Locate the folder containing your images, select the first image in the sequence, and click the Add 8 button to import your images.
6. Go to the Stitch panel and click the "Build" button. When the overlapped images appear, uncheck the Opaque Frames display option so you can see the overlapping areas of adjacent images. By checking the Relevant Areas display option, you will be able to see the areas that should be aligned. Adjustments can be made, if necessary, by using the Fine Tune offset arrow keys, the keyboard arrow keys, or by selecting an image with the cursor and dragging it into place. The Tilt option is used for correcting keystoning in cases where the camera was not level and was tilted slightly up or down.
7. Go to the Blend panel and click the "Build" button to produce a flat panoramic image. You can export the flat image by choosing Export from the File menu; several image formats are available. In this panel, you also have the ability to resize the flat panoramic image by selecting the Resize button. The Blend Levels and Alpha Ramp Levels can be adjusted by selecting the Options button. Color adjustments, brightness, contrast, sharpness levels and other filters can be accessed by selecting the Filters button. The Edit button will open a built-in image editor, or it can be customized via the Preferences to launch an external image editor of your choice. After you make your adjustments, the flat image will be updated automatically in VR Worx.
8. Add hotspots that point to URLs, or create hotspot placeholders to be accessed later in HTML documents. You can also create hotspots to other QTVR panoramas, QTVR objects, QuickTime movies and still images, but it is recommended that you create these hotspots in the Scene module.
9. Choose a CODEC and compress the movie. Photo-JPEG is the default. If you have a target size in mind for a particular project, enable the Target Size option and enter the number in kilobytes.
10. Preview your movie and set the initial view and zoom levels. Export your movie.
Dialing In a Camera/Lens Combination
The panorama module is designed to deal with a wide variety of camera and lens combinations when processing a series of images. A critical step in image processing involves warping each image in such a way that adjacent images can be aligned, or stitched together. The process is relatively simple, provided the imagery is perfect and conforms to the theoretically optimal characteristics. Unfortunately, it is an imperfect world, and source imagery is rarely optimal. Several factors, not the least of which is lens distortion, can contribute to imperfect imagery, resulting in imperfect panoramic renderings.
The VR Worx is designed to accommodate a certain degree of imperfection when stitching a series of images. It first attempts to find the true vertical field-of-view of the lens being used. It then attempts to identify and correct distortions introduced by the lens. The result is usually quite acceptable; however, it can likely be improved by applying a certain degree of manual correction. Luckily, performing these manual corrections is not difficult, provided the operator understands how each correction setting works and what flaws it is intended to correct.
Once all manual corrections have been performed, the settings may be saved in a stationery template. Since most of the correction settings are intended to correct physical flaws introduced by a lens, and these flaws will exist in exactly the same way in all images taken with that lens, the stationery template can be used for all future panoramas. This eliminates the need to manually correct each panorama generated and speeds up composition, because the automatic correction process is bypassed.
Performing the steps necessary to create a stationery template containing manual correction settings is known as “dialing in” a camera lens. This is highly recommended for all users of The VR Worx, whether casual or professional.
The following steps define the process of dialing in a camera lens.
Step 1 - Ensure “Nodal” Imagery The dial-in process will fail miserably if the imagery is not “nodal,” i.e., if the camera was not rotated about its nodal point when the images were taken. To locate the nodal point, refer to the Finding a Camera’s Nodal Point tip earlier in this chapter.
Step 2 - Setup and Acquire Images Normally This step involves setting up the project, based on the camera and image characteristics. The source images will then be fully acquired, allowing access to the Stitch panel.
Step 3 - Stitch the Images Normally Ensure that Auto correction is enabled, and click the Stitch button. It is important to do this to allow The VR Worx to identify and calculate the settings that will be the basis for manual correction. When stitching is complete, enable the Manual correction mode.
Step 4 - Identify and Correct Vertical Projection Flaws Each image is projected outward vertically at its middle, with the degree of projection increasing toward the top and bottom of the image. If the source imagery is rectilinear, it is unlikely that this adjustment will need to be made. However, if a wide-angle adapter was used instead of a true wide-angle lens, adjustments to vertical projection may be necessary.
There are basically two problems that can be present: too much or too little vertical projection. Too much vertical projection can be identified by “scalloping” along horizontal lines in the imagery. The top or bottom of walls or fences are excellent subjects that will clearly exhibit scalloping if the vertical projection is too great. Figure 007a exhibits scalloping along the ceiling-line of a room; notice how the ceiling-line appears to arc in each frame. In this case, the V-Adj. value needs to be decreased until the ceiling-line appears smooth and contiguous. This example is somewhat extreme, so the value should be reduced by typing negative values into the V-Adj. field, starting with 5° increments and reducing to 1° increments as the desired result nears.
Figure 007a: (left) Example of too much V-Adj.
The second vertical adjustment problem is too little projection. In this case, an “angular corner” appearance will be present along horizontal lines in the imagery. In Figure 007b, notice how the ceiling-line appears flat and seems to form an angular corner, or joint, at the edge of each image. In this case, the V-Adj. value needs to be increased, to increase the vertical projection applied to each image. As in the above example, values should be typed into the V-Adj. field in 5° then 1° increments; in this case, however, positive values should be supplied, until the ceiling-line appears smooth and contiguous. While the vertical projection value is being adjusted, it may be very helpful to periodically restitch the images by simply clicking the Stitch button. Verify that Manual correction is still in effect when this is done, or the manual adjustment settings will be lost.
Figure 007b: (left) An example of not enough V-Adj.
Step 5 - Identify and Correct Spherical Distortion All lenses introduce distortion at their edges, a result of light passing through the curved glass from which the lens is made. Some lenses exhibit more distortion than others. Such distortion is easily identified by observing vertical lines in the imagery, such as the sides of buildings or door frames. These lines will appear to bow outward, the closer they are to the sides of the image.
Such distortion is much more obvious when a wide-angle adapter was used instead of a true wide-angle lens. However, this distortion exists to some degree in all lenses, and thus spherical correction is likely necessary for all imagery. The process of correcting spherical distortion is known as rectification of the imagery.
The VR Worx provides control over the rectification of images through the H-Adj. setting. Increasing this value compensates for a greater degree of bowing. Unlike vertical projection, it is almost never the case that too much horizontal correction is performed; it is usually just right or too little.
Figure 007c shows the result of stitching images that exhibit a fair amount of spherical distortion. Note that the window frame stitches correctly at the middle, but bows outward at the top and bottom.
In this case, applying an H-Adj. value of 8% rectifies the images enough that the correctly stitched image appears in Figure 007d.
Throughout the course of adjusting the H-Adj. value, the width of each image at its middle changes, and thus the result of correlation must also change. The VR Worx will automatically try to compensate for changes in width; however, the only true method of compensation is to restitch the panorama, which can be accomplished by simply clicking the Stitch button. Verify that Manual correction is still in effect when this is done, or the manual adjustment settings will be lost. Frequently restitching while making changes to the H-Adj. setting is highly recommended.
Step 6 - Save the Dialed-In Settings Once the correct degree of V-Adj. and H-Adj. has been determined, it is time to save the settings as a stationery template. This is done by choosing Save As... from the File menu. In the file save dialog that appears, ensure that the stationery template file type is selected. Give the file a meaningful name, and save it in a handy location, preferably the Stationery Templates folder local to the application.
To use the settings in the future, begin the panorama composition process by first opening the stationery template. This can be done by double-clicking its icon or by choosing Open from the File menu.