
Realistic Texture Mapping on 3D Building Models

Fuan Tsai, Hou-Chin Lin
Center for Space and Remote Sensing Research, Dept. Civil Engineering
National Central University, Zhong-Li, Taiwan
Tel: +886-3-4227151 ext. 57619 Fax: +886-3-4364908
E-mail: ftsai@csrsr.ncu.edu.tw, 92322088@cc.ncu.edu.tw


ABSTRACT
Three-dimensional geo-information is a fast-developing topic in remote sensing and geographic information systems (GIS). Using remote sensing technology, 3D building models can be constructed to resemble real-world building layouts, appearances, and other characteristics. Currently, however, most building models do not have sufficient and accurate texture information. The lack of texture not only makes 3D building models less realistic, it may also fail to provide needed information, especially for complex applications such as cyber city implementation. The purpose of this study is to produce accurate texture mapping on building models. The textures are generated from mosaics of handheld digital photographs taken from different angles and distances. Because of the different conditions under which the pictures were taken, individual pictures may differ in brightness, shading, and other properties. All of these need to be addressed before the mosaic images can be mapped onto the building models. This study develops a procedure to integrate digital pictures and correctly map them to the corresponding objects in building models. The procedure first detects shadows and blocked regions in the pictures and excludes them from subsequent processing. Secondly, overlapping regions are identified using tie points to develop mathematical relationships between target objects across pictures. The developed mathematical models are then used to merge the pictures into a smooth and seamless composite image of the target object. Finally, the mosaic image can be mapped to its corresponding building face (wall) using predefined control points. The resultant building models have more accurate textures, improving the realism and practicality of cyber city implementation.

I. INTRODUCTION
The construction and application of three-dimensional geoinformatics are among the fastest growing research topics in the fields of remote sensing and geographic information systems. Advances in computer graphics, visualization, and other information technologies further extend 3D geoinformatics into a more complex and diverse industry. In particular, the implementation and applications of cyber cities, which require comprehensive integration of remote sensing, GIS, and information technologies, have been identified as one of the most appealing challenges in the research and development of geoinformatics (MacEachren & Kraak, 2001; Kraak, 2002). The fundamentals of a cyber city lie in the accurate establishment of 3D building models and realistic texture mapping of model surfaces. Currently, building models are commonly generated from aerial photos, high-resolution satellite images, and LIDAR data in conjunction with digital terrain models (DTM). Algorithms developed for this purpose have been proposed and have achieved a certain degree of success (e.g., Rau & Chen, 2003). However, because of restrictions on sensor looking angles, these data can only provide limited texture information for building roofs, not for surrounding walls or side surfaces. To overcome this disadvantage, this research aims to produce more complete and accurate texture mapping on 3D building models by developing algorithms and a procedure to generate panoramic images of individual building walls, as seamlessly as possible, from mosaics of digital photographs and to map them to the corresponding object surfaces in the models.

Because the digital photographs are usually taken under different conditions, the images differ in perspective, brightness, contrast, shading, and other properties. These variations need to be adjusted before the images can be integrated into a seamless mosaic. The adjustments fall into two categories: the geometric space and the color domain.

If the camera parameters are known, the geometric correction can be done using photogrammetric models of perspective photo mapping (Huang, 2001). Another approach is to (interactively) identify building boundaries in the images to determine the faces of the building and to map corresponding texture blocks to each surface from cropped areas selected from the image pool (Debevec, 1996; Fu, 2002), or to use highly textured points as seed points to obtain the relationship between two overlapping images (Kim et al., 2003).

For realistic texture mapping from mosaics of close-range images, in addition to geometric correction or registration, the variations in the color space of individual images also need to be minimized. The most common approach is to use histogram matching or equalization to force the color distribution of candidate images into the same range as that of a "base image". This method may seriously distort shading and sometimes produces hazy or low-contrast results. Burt and Adelson (1983) developed a multi-resolution spline for image mosaics to address this issue. Their algorithm is capable of generating smooth image mosaics but can only be applied to one image pair at a time and requires intensive computation. Consequently, this method may not be suitable for cyber city texture mapping. For cyber city implementation, texture mapping of 3D building models needs to be performed efficiently and effectively, so fast, lightweight mapping algorithms are preferred. The remainder of this paper describes and discusses such a mapping approach, developed in this research specifically for cyber city applications.

II. TEXTURE MAPPING PROCEDURE AND ALGORITHMS
This study employs a hybrid approach to perform texture mapping on 3D building models. The general procedure consists of the following steps:

2.1. Preprocessing
This step includes geometric correction or registration of the individual images and the generation of boundary polygons (regions of interest) for all identified building objects. The preferred geometric correction method is photogrammetric orthorectification. However, if the correct photogrammetric relationship cannot be established, the images should be registered to the actual 3D building models using tie points. The polygons representing the regions of interest not only describe the outlines of building objects in the images but can also be used to filter out blocked portions and shadows from the original images. In addition, these polygons are used to adjust color and shading distributions, especially in the overlaps among multiple images, as described below.
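
As an illustration of the tie-point registration step, the sketch below estimates a projective transform from manually measured tie points and warps a candidate image into the reference frame. It is a minimal sketch, assuming OpenCV (cv2) is available; the tie-point coordinates, image size, and file names are hypothetical placeholders rather than values from this study.

    import cv2
    import numpy as np

    # Tie points measured interactively: pixel coordinates in the candidate image
    # and their corresponding locations in the reference (base) image.
    src_pts = np.array([[120, 340], [980, 355], [975, 910], [130, 900]], dtype=np.float32)
    dst_pts = np.array([[100, 300], [1000, 300], [1000, 950], [100, 950]], dtype=np.float32)

    # Estimate a projective (homography) transform; RANSAC discards poor tie points.
    H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 3.0)

    # Warp the candidate image into the reference frame so that building features
    # from different photographs line up before mosaicking.
    candidate = cv2.imread("facade_photo_2.jpg")
    registered = cv2.warpPerspective(candidate, H, (1600, 1200))
    cv2.imwrite("facade_photo_2_registered.jpg", registered)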

2.2 Image Mosaic
Texture regions belonging to the same building facade are mosaicked to generate a panoramic image. This research developed a fast, polygon-based algorithm that integrates building texture objects from multiple images and recalculates the color and shading of overlapping areas simultaneously. The algorithm can be expressed as a simple equation:
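
A plausible form of Equation 1, reconstructed from the description below (and therefore an assumption rather than the formula as originally published), is

    Pm = Σi (wi × Pi) / Σi wi ,   with   wi = (de,i)^b / (dc,i)^a        (1)

where Pi is the corresponding pixel value in the i-th source image, dc,i is its distance to that polygon's centroid, and de,i is its minimum distance to that polygon's boundary.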

For a pixel Pm in the mosaicked image, its new gray value in each spectral band is determined by a weighted sum of all corresponding pixels in the image pool. As shown in Equation 1, the weighting consists of two measured values: the distance to the centroid of the polygon, dc, and the minimum distance to the object boundary, de. The power factors, a and b, are constants and can be specified dynamically according to the degree of smoothness required.
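
A minimal Python sketch of this weighted blending, under the assumed form of Equation 1 above (the function and parameter names are illustrative only), is:

    import numpy as np

    def blend_pixel(values, d_centroid, d_edge, a=1.0, b=1.0):
        """Weighted blend of corresponding pixel values from overlapping images.

        values     : pixel values (one per source image) at the same location
        d_centroid : distance from the pixel to each source polygon's centroid (dc)
        d_edge     : minimum distance from the pixel to each polygon boundary (de)
        a, b       : power factors controlling the degree of smoothness
        """
        values = np.asarray(values, dtype=float)
        d_centroid = np.asarray(d_centroid, dtype=float)
        d_edge = np.asarray(d_edge, dtype=float)

        # Assumed weighting: favor pixels that lie well inside their source polygon
        # (large de) and close to the polygon centroid (small dc).
        weights = (d_edge ** b) / (np.maximum(d_centroid, 1e-6) ** a)
        return float(np.sum(weights * values) / np.sum(weights))

    # Example: a pixel covered by two photographs; the first dominates because it
    # lies far from its polygon edge and close to its centroid.
    print(blend_pixel([118.0, 122.0], d_centroid=[50.0, 200.0], d_edge=[80.0, 20.0]))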

2.3 Mapping
As described above, because the images have been geometrically corrected or registered before the merging process, the pixels in the resultant panoramic images inherit correct geometric conditions and relationships from their original images. Therefore, mapping them onto the corresponding 3D building models should be straightforward. A simple approach is to use predefined control points to map pixels onto matching model facades. Detailed operational aspects depend on the visualization or graphics methods used to display the model and will not be discussed in this paper.
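
As a simple illustration of this step, the sketch below converts predefined control points (the facade corners located in the panoramic image) into normalized texture coordinates, which most graphics packages can then attach to the facade vertices. The pixel positions and image size are hypothetical examples only.

    # Map the four corners of a building facade to normalized texture coordinates
    # (u, v) in the panoramic image. The control point pixel positions and the
    # image size below are hypothetical examples.
    image_width, image_height = 2400, 1200

    # Pixel positions of the facade corners in the panoramic image, ordered
    # lower-left, lower-right, upper-right, upper-left.
    control_points = [(60, 1150), (2350, 1140), (2340, 80), (70, 90)]

    # Normalize to [0, 1] texture space; flip v because image rows grow downward.
    tex_coords = [(x / image_width, 1.0 - y / image_height) for x, y in control_points]
    print(tex_coords)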

III. RESULTS AND DISCUSSION
Figure 1 shows an example tested in this study: a face of a fairly complex building captured in three overlapping scenes. The three images were acquired with a consumer digital camera from different viewpoints but with similar looking angles to minimize geometric distortions. The polygons in the three pictures represent the object boundaries, excluding blocked areas of the objects, and were created interactively. After geometric correction and registration, a series of mosaic tests was conducted to evaluate the performance of the developed polygon-based integration algorithm.


Figure 1. Original digital images for mosaic test.

Figures 2 and 3 display the mosaic results from two and three of the images in Figure 1, respectively. The colors and shadings of these mosaic images were adjusted simply according to the mean and standard deviation of the overlapping areas of the base images. Figure 4 is the result of another integration of the three original images. The colors and shadings in the overlapping areas of Figure 4 were calculated from the original images with weighting factors inversely proportional to the distance to the polygon centroids. As can be seen in Figures 2, 3, and 4, the integration is not smooth. Jumps in shading and gray level are evident in regions located at the polygon edges of the overlaps in the original images.
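
For reference, the simple mean/standard-deviation adjustment used for Figures 2 and 3 can be sketched as follows. This is a minimal illustration, assuming the overlap masks are already available; it is not the exact implementation used in this study.

    import numpy as np

    def match_overlap_statistics(candidate, base, overlap_mask):
        """Linearly rescale one band of a candidate image so that its mean and
        standard deviation inside the overlap region match those of the base image."""
        cand_overlap = candidate[overlap_mask]
        base_overlap = base[overlap_mask]
        scale = base_overlap.std() / max(cand_overlap.std(), 1e-6)
        offset = base_overlap.mean() - scale * cand_overlap.mean()
        return scale * candidate + offset

    # Example with random data standing in for one color band of each image.
    rng = np.random.default_rng(0)
    base = rng.normal(120, 20, (100, 100))
    candidate = rng.normal(90, 35, (100, 100))
    mask = np.zeros((100, 100), dtype=bool)
    mask[:, 60:] = True                      # assumed overlap region
    adjusted = match_overlap_statistics(candidate, base, mask)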


Figure 2. Simple mosaic of two images.


Figure 3. Simple mosaic of three images.


Figure 4. Weighted mosaic with distances to centroids.

On the other hand, Figure 5 demonstrates a smooth panoramic image constructed from the same three original images displayed in Figure 1. The shadings and gray values in all three color bands of Figure 5 were recalculated from the identified regions of interest in the original images using the polygon-based algorithm described in the previous section, with both distance weighting parameters taken into account. It is clear that the noticeable jumps previously appearing near the overlapping edges in Figures 2, 3, and 4 have been effectively eliminated in Figure 5, resulting in a smoother (at least visually in the color space) panoramic image covering the complete building face.


Figure 5. Weighted mosaic with distances to centroids and polygon edges.

One thing to note is that the ghost image of the second tree from the left and the aliasing effect on the roof of the semi-cylindrical structure in the panoramic images of Figures 2 to 5 are caused by spatial distortion or deviations in the geometric registration of the original images. The polygon-based merging algorithm might help reduce some of these artifacts. However, a robust geometric correction operation, such as image orthorectification, is more likely to be the ultimate solution to this problem.

Another advantage of the polygon-based algorithm is that it is fast. Compared with the spline-based mosaicking, which splits and merges images with Laplacian image pyramids (Burt & Adelson, 1983), the polygon-based approach is more straightforward and requires little computation. For example, whether or not a pixel is located inside a polygon can be determined by counting the number of intersections between a half-infinite vertical or horizontal line (originating from the pixel) and the polygon edges. The distance from a pixel to a boundary line segment can also be obtained quickly using simple geometry, or even approximated using distances to the vertices. This provides a significant advantage in cyber city applications. Depending on the size and scale of the implementation, a cyber city may consist of hundreds to thousands of building objects. Thus, a fast texture mapping algorithm and operation is a practical necessity for the efficiency and deployment of cyber cities in real-world applications.
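
A minimal sketch of these two geometric tests (the ray-crossing point-in-polygon test and the point-to-segment distance) is shown below; it illustrates the general technique rather than the exact code used in this study.

    def point_in_polygon(x, y, polygon):
        """Ray-crossing test: count intersections of a half-infinite horizontal ray
        from (x, y) with the polygon edges; an odd count means the point is inside."""
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            if (y1 > y) != (y2 > y):
                x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                if x_cross > x:
                    inside = not inside
        return inside

    def distance_to_segment(x, y, p1, p2):
        """Shortest distance from (x, y) to the segment p1-p2 using simple geometry."""
        (x1, y1), (x2, y2) = p1, p2
        dx, dy = x2 - x1, y2 - y1
        if dx == 0 and dy == 0:
            return ((x - x1) ** 2 + (y - y1) ** 2) ** 0.5
        t = max(0.0, min(1.0, ((x - x1) * dx + (y - y1) * dy) / (dx * dx + dy * dy)))
        px, py = x1 + t * dx, y1 + t * dy
        return ((x - px) ** 2 + (y - py) ** 2) ** 0.5

    square = [(0, 0), (10, 0), (10, 10), (0, 10)]
    print(point_in_polygon(4, 5, square))   # True: the point lies inside the square
    print(min(distance_to_segment(4, 5, square[i], square[(i + 1) % 4])
              for i in range(4)))           # 4.0: distance to the nearest edge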

IV. CONCLUSION AND FUTURE WORK
This paper demonstrates a procedure for realistic texture mapping on 3D building models. The procedure uses a polygon-based algorithm for integrating individual digital photographs to produce seamless panoramic images of 3D building facades. In contrast to conventional image mosaicking methods, the algorithm can deal with multiple images at the same time; it is fast and requires little computation. In addition, the polygon-based approach can easily remove shadows and blocked areas from the original images. The resultant panoramic images are smooth and seamless. With its effectiveness and efficiency, the procedure and mosaicking algorithm developed in this study should be a valuable addition to research and development in realistic texture mapping for 3D building models and has great potential to promote cyber city implementation and applications.

Finally, this project is still a work in progress, and improvements are anticipated. The first issue is to increase the degree of automation. For example, the construction of polygons for the regions of interest is currently performed interactively; this process should be automatable using image-based line detection techniques. Secondly, the geometric correction and registration of the original images strongly affect the mosaic results and should therefore be investigated in depth in order to build a more robust procedure. These two issues will be the top priorities for the future development and extension of this research.

REFERENCES
  • Burt, P. J. and E. H. Adelson, 1983, "A Multiresolution Spline With Application to Image Mosaics", ACM Transactions on Graphics, 2 (4), pp. 217-236.
  • Debevec, Paul E., 1996 "Modeling and Rendering Architecture from Photographs", Ph.D. Dissertation, University of California at Berkeley, CA USA.
  • Fu, Bin-Gang, 2002, "Automatic Perspective Texture Mapping on 3D Building Models", MS Thesis, National Cheng Kung University, Tainan, Taiwan.
  • Huang, Wen-Li, 2001, "A Study of Facade Texture Mapping on 3D Building Model by Close-Range Photogrammetry", MS Thesis, National Cheng Kung University, Tainan, Taiwan.
  • Kim, D. H., Y.-I. Yoon, and J.-S. Choi, 2003, "An Efficient Method to Build Panoramic Image Mosaics", Pattern Recognition Letters, 24, pp. 2421-2429.
  • Kraak, Menno-Jan, 2002, "Some Aspects of Geovisualization", GeoInformatics, pp. 26-37.
  • MacEachren, A. M. and M.-J. Kraak, 2001, "Research Challenges in Geovisualization", Cartography and Geographic Information Science, 28, pp. 3-12.
  • Rau, J.-Y. and L.-C. Chen, 2003, "Robust Reconstruction of Building Models from Three-Dimensional Line Segments", PE&RS, 69 (2), pp. 181-188.