
Application specific compression for mini-satellites with limited downlink capacity

Tobias Trenschel, Timo Bretschneider, Graham Leedham
School of Computer Engineering, Nanyang Technological University
Blk. N4 #02a-32, Nanyang Avenue, Singapore 639798
Tel: +65 – 6790 – 6045
Fax: +65 – 6792 – 6559
E-mail: astimo@ntu.edu.sg
Singapore


Abstract:
The effectiveness of a remote sensing mission is restricted by any bottleneck in the entire system, which comprises the actual imaging system, the satellite bus, and the ground receiving stations. One major constraint for many mini-satellite missions is the limited downlink capacity, i.e. more data can be acquired than transmitted. However, if the mission focuses on particular applications that do not require the storage of the raw image data on the ground, then appropriate on-board processing can ease the requirements on the downlink and increase the benefits and value of the satellite’s mission. One example is hazard monitoring such as fire detection, with the emphasis on the location, size, and characteristics of the fire and only secondary attention to the surrounding unaffected areas. Therefore this paper proposes a general model for application specific compression of the imagery. The approach comprises image analysis supported by an on-board database system and a subsequent compression based on the intermediate results. The software is part of a parallel processing system, which will be flown on board X-Sat – Singapore’s first remote sensing satellite.

Introduction
Mini-satellites face a variety of constraints that limit their downlink capacity, e.g. on-board power availability and restrictions on the access to, and operation of, ground receiving stations. Therefore, the effectiveness of a remote sensing mission using a low cost mini-satellite is reduced since generally the actual imaging system can acquire more data than can be downloaded to a ground station. Assuming the satellite provides sufficient data storage and computational resources for a given user-defined application, then the effectiveness of the actual mission can be improved using appropriate on-board processing (Manduchi et al., 2000). The main idea is to move previously ground-based processing steps on-board, and to carry out the data processing prior to transmission. The advantage is provided by the ability to determine the locations of specific interest and thus to reduce the amount of data to be downlinked. For example, if the application under consideration is hot-spot detection to locate fires, only the hot-spot location needs to be transmitted if the processing can be performed on-board. Search tasks like these can monitor a huge area without using any significant downlink bandwidth.

The overall performance of the system can be further improved by an application specific compression scheme (Hou et al., 2000). This paper describes the creation of a compression map, which generalises the concept of the region-of-interest (ROI) mask of JPEG2000 (Christopoulos et al., 2000). The compression map assigns continuously adjustable weights to different regions according to their contribution to the user-defined mission. Prior to transmission, an image transformation, which enables near-lossless and lossy compression, is applied using the computed multi-dimensional weights.

Thereby the developed technique caters for an arbitrary number of specialisation schemes with respect to the actual application. The transformation leads to near-lossless compression for regions of high interest – with respect to the actual application – while areas of low importance are encoded using lossy compression. Consider the previously mentioned hot-spot example: the output for hot-spot detection will be improved by orders of magnitude with respect to the limited transmission bandwidth if only the hot-spots and their surrounding areas are transmitted. Additionally, a special but less significant interest in urban areas and reservoirs enables evacuation and the provision of water to counteract the fires. All this can be accomplished at little additional cost since only a relatively small amount of data has to be transmitted to provide all the required information. To fulfil the required task of detecting ROIs the proposed system utilises unsupervised classification, determines the so-called compression map according to the user’s application, and applies the gained information to compress the raw image. Note that all these processing steps are carried out according to the specifications provided by the user.

For evaluation of the technique, an investigation compares compression assuming homogeneous interest, as is generally supported, with variable content-based compression according to a compression map reflecting the application. An analysis investigates the gain of the proposed technique and introduces an application specific error measure. In summary, the analysis showed that a significant improvement in bandwidth usage is achievable for specialised applications. The model was developed for the small satellite X-Sat, which is designed and built by Nanyang Technological University.

This paper is organised as follows: Section 2 provides an introduction to X-Sat and its computational resources. Section 3 gives an overview of the content-based compression model and describes the different processing steps in detail. The actual results and their discussion are presented in Section 4, while Section 5 summarises the paper.

Overview of X-Sat and its on-board processing facilities
X-Sat is a small satellite with a mass of approximately 120 kg. It carries three payloads, namely the imaging system IRIS, the buoy detection instrument ADAM, and the parallel processing unit (PPU). Henceforth only the imaging system and the PPU will be considered since the emphasis of this paper is on image processing with the aim of reducing the constraining impact of the downlink. The camera is a push-broom scanner with three individual scan lines in the green (520 nm – 600 nm), red (630 nm – 690 nm), and near-infrared (760 nm – 890 nm) wavelength ranges. The spatial resolution is specified to be 10 m for a mean orbit altitude of 685 km. The main bottleneck is the downlink of the imagery since the only available ground station is located in Singapore. In addition to the relatively short visibility of the satellite within the range of the receiving antenna, the mission objective of imaging Singapore and the surrounding areas conflicts with the transmission: due to power restrictions both operations cannot be run simultaneously. Therefore image compression is required to increase the data and information throughput with respect to the given bandwidth. This is enabled by the parallel computer payload. The PPU consists of four fully connected radiation-hardened field programmable gate arrays (FPGAs), each hosting five processing nodes. Only four of the nodes associated with each FPGA block are operational at any moment in time. The fifth node is a spare and will only be employed if one of the other four nodes becomes inoperative, e.g. through radiation. The individual nodes each comprise a StrongARM processor and 64 MByte of local memory. The resulting architecture is a mesh with wrap-around and therefore perfectly suited for image processing tasks. For the storage of the acquired data a 2 GByte RAM disk is attached to the PPU.

Model for content-based compression
The proposed model consists of four major processing steps, namely the image analysis to locate application-specific ROIs, the transformation of the analysis results into an abstract representation for the compression, the actual compression, and the buffering of the data for later transmission.
  • Image Analysis
    In the case of X-Sat the first processing step analyses the scene utilising a modified version of the ISODATA classifier with k uniformly distributed class centres as initialisation. The modification concerns the computation of the class centres C_i and is based on the work of Looney (1999), who introduced the so-called modified weighted fuzzy expected value V_i. The advantage of this measure is its increased insensitivity to noise in contrast to the normally used mean value. The V_i is defined iteratively, i.e. V_i = V_i^{(\infty)}, over a set of multispectral samples \{x_1, \ldots, x_P\}:

        w_p^{(r)} = \frac{\exp\bigl(-\|x_p - V_i^{(r)}\|^2 / 2 s_i^{(r)\,2}\bigr)}{\sum_{q=1}^{P} \exp\bigl(-\|x_q - V_i^{(r)}\|^2 / 2 s_i^{(r)\,2}\bigr)}    (1)

        V_i^{(r+1)} = \sum_{p=1}^{P} w_p^{(r)} x_p    (2)
    The process converges quickly and can be started with the arithmetic mean and standard deviation for V_i^{(0)} and s_i^{(0)}, respectively. Besides the robust behaviour of the fuzzy measure, a high stability and reliability is guaranteed: it precludes convergence to local minima and prevents the premature elimination of small classes in an early stage of the classification. After every fifth iteration the existing classes are analysed with respect to possible merging and splitting operations. By incorporating spatial aspects in the analysis, i.e. the distribution within the image and the shape of the corresponding feature, the elimination of small classes with distinct characteristics is avoided. A more detailed description of the algorithm was presented in (Bretschneider et al., 2002).
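
    As an illustration of the iteration above, the following Python sketch computes the modified weighted fuzzy expected value for one class; the convergence threshold, the spread update, and the initialisation of the spread are assumptions made for this sketch and are not taken from the flight software.

        import numpy as np

        def mwfev(samples, tol=1e-6, max_iter=100):
            """Modified weighted fuzzy expected value of a set of multispectral
            samples (P x B array), started from the arithmetic mean as described
            above. Weights fall off with a Gaussian of the current spread, so
            outliers contribute little to the centre estimate."""
            v = samples.mean(axis=0)                     # V_i^(0): arithmetic mean
            sigma2 = samples.var(axis=0).sum() + 1e-12   # s_i^(0): initial spread (assumption)
            for _ in range(max_iter):
                d2 = ((samples - v) ** 2).sum(axis=1)        # squared distances to the centre
                w = np.exp(-d2 / (2.0 * sigma2))
                w /= w.sum()                                  # normalised fuzzy weights, cf. Eq. (1)
                v_new = (w[:, None] * samples).sum(axis=0)    # weighted centre, cf. Eq. (2)
                sigma2 = (w * d2).sum() + 1e-12               # spread update (assumption)
                if np.linalg.norm(v_new - v) < tol:
                    v = v_new
                    break
                v = v_new
            return v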

  • Synthesis of Compression Map
    The result of the described approach is a classification map, which can be labelled according to the spectral characteristics of the individual classes. For this purpose an additional database provides a reflectance library. Note that the analysis strategy actually employed depends highly on the targeted application. However, the general intention is the identification of relevant image content. Mathematically the process is described by

        C = \varphi(I)    (3)
    where I is the image and C the corresponding content descriptor. The function \varphi is the utilised analysis algorithm. Subsequently the compression map M is computed based on the results C of the previous analysis:

        M = \psi(C)    (4)
    The compression map is the two-dimensional descriptor for the application specific interest. The actual descriptor values are not necessarily scalars but might be multidimensional to reflect different aspects of interest. Two possible objectives were identified:

    • Spectral accuracy: Every pixel is assigned an accuracy value that enables controlled lossy compression. Note that the spectral distortion introduced by lossy compression does not depend on the application-driven interest, but on the subsequent ground-based analysis.
    • Priority: Every pixel is assigned a priority, which determines its importance for the application and therefore the downlink order in case the entire image cannot be transmitted.

    Further objectives are conceivable, whereby the breakdown of the accuracy according to the individual multispectral bands is one of the most probable extensions. A brief illustration of such a mapping follows.
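
    As referenced above, the following Python sketch shows one possible mapping from a classification map to a two-component compression map with an accuracy and a priority channel; the class-to-weight table is purely hypothetical and would in practice be supplied with the uplinked application definition.

        import numpy as np

        # Hypothetical application definition: per-class (accuracy, priority) values.
        APPLICATION_WEIGHTS = {
            0: (1.00, 3),   # e.g. hot-spot/urban class: near lossless, downlinked first
            1: (0.30, 2),   # e.g. reservoir class: moderate accuracy
            2: (0.05, 1),   # background: heavily compressed, downlinked last
        }

        def compression_map(classification, weights=APPLICATION_WEIGHTS):
            """Turn a per-pixel class label image (H x W) into a compression map
            (H x W x 2) holding an accuracy channel and a priority channel."""
            acc = np.zeros(classification.shape, dtype=np.float32)
            pri = np.zeros(classification.shape, dtype=np.float32)
            for label, (a, p) in weights.items():
                mask = classification == label
                acc[mask] = a
                pri[mask] = p
            return np.dstack([acc, pri])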

  • Application-Driven Compression
    In a first investigation the Joint Photographic Experts Group (JPEG) algorithm has been used, which is widely employed in many applications including on-board remote sensing coding (Pelon and Spiwack, 1996), (Hou et al., 2000). The JPEG baseline compression scheme applies a discrete cosine transform (DCT) to 8×8 pixel blocks. For each block the DCT coefficients are quantised according to a quantisation table, i.e. each coefficient is divided by a corresponding value stored in the quantisation table. Subsequently the result is rounded to the nearest integer*. In this paper the baseline JPEG algorithm, which uses a constant quantisation table for the whole image, was modified similarly to the variable quality JPEG proposed by Golner et al. (2001), allowing a space-variant quality – in particular a different quantisation – for each 8×8 DCT block. The actual quality values were derived from the accuracy components of the compression map M according to

        q_{N_{8\times 8}} = \max_{i \in N_{8\times 8}} M_{\mathrm{acc}}(i)    (5)
    whereby N_{8×8} identifies the pixels that belong to the same DCT block and M_{acc} denotes the accuracy component of the compression map. Therefore the application’s interest is enforced without penalising certain pixel regions due to their neighbourhood. Note that the second outlined objective for the compression, i.e. the prioritising of pixels, was not considered, since the investigation focuses on the achievable quality.
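
    The space-variant quantisation can be sketched for a single 8×8 block as follows; the quantisation table and the quality scaling are the standard baseline JPEG ones, whereas the mapping of the block accuracy to a quality factor is an illustrative assumption.

        import numpy as np
        from scipy.fft import dctn, idctn

        # Baseline JPEG luminance quantisation table (ISO/IEC 10918-1, Annex K).
        Q_BASE = np.array([
            [16, 11, 10, 16,  24,  40,  51,  61],
            [12, 12, 14, 19,  26,  58,  60,  55],
            [14, 13, 16, 24,  40,  57,  69,  56],
            [14, 17, 22, 29,  51,  87,  80,  62],
            [18, 22, 37, 56,  68, 109, 103,  77],
            [24, 35, 55, 64,  81, 104, 113,  92],
            [49, 64, 78, 87, 103, 121, 120, 101],
            [72, 92, 95, 98, 112, 100, 103,  99]], dtype=np.float64)

        def scaled_table(quality):
            """Scale the base table with the usual IJG quality factor (1-100)."""
            quality = min(max(quality, 1.0), 100.0)
            scale = 5000.0 / quality if quality < 50 else 200.0 - 2.0 * quality
            return np.clip(np.floor((Q_BASE * scale + 50.0) / 100.0), 1, 255)

        def code_block(block, acc_block):
            """Quantise one 8x8 block with a quality derived from the accuracy
            channel of the compression map; following Eq. (5) the best pixel of
            the block determines the quality, so a high-interest pixel is never
            penalised by its neighbourhood."""
            quality = 1.0 + 99.0 * float(acc_block.max())   # accuracy in [0,1] -> 1..100 (assumption)
            q = scaled_table(quality)
            coeffs = dctn(block.astype(np.float64) - 128.0, norm='ortho')  # level shift + 2-D DCT
            quantised = np.round(coeffs / q)                                # space-variant quantisation
            reconstructed = idctn(quantised * q, norm='ortho') + 128.0
            return quantised, reconstructed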

  • Data Buffer
    The compressed data is accumulated in the data buffer and downloaded to the ground receiving station when possible. The buffer manager schedules the transmission of packages according to their compression maps, transmitting those with a high scientific value first, and therefore enables prioritisation on a higher level among different scenes. This is of particular interest since the downlink capacity is not constant, i.e. it depends on the actual orbit. Nevertheless, this approach downloads as many packages as possible, ordered by priority, and therefore adaptively transmits the maximum scientific value, described in terms of the mission, through the given channel. A flowchart of the entire model is depicted in Figure 1.
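
    A minimal sketch of such a buffer manager in Python is given below; the package structure and the way the per-pass downlink budget is supplied are assumptions for illustration only.

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class Package:
            value: float        # scientific value derived from the compression map
            size_bytes: int     # compressed size
            payload: bytes = b""

        def schedule(buffer: List[Package], downlink_budget_bytes: int) -> List[Package]:
            """Select packages for the next ground-station pass: highest scientific
            value first, taking as many as fit into the orbit-dependent budget."""
            selected, used = [], 0
            for pkg in sorted(buffer, key=lambda p: p.value, reverse=True):
                if used + pkg.size_bytes <= downlink_budget_bytes:
                    selected.append(pkg)
                    used += pkg.size_bytes
            return selected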

  • Mapping of the Model on the Hardware Architecture
    The time during which X-Sat is over Singapore, can capture an image, and can subsequently downlink it is limited. Additionally, transmission within the same orbit is a system requirement; therefore the processing time of the data has to be kept to a minimum. Simulations have demonstrated that the simple round-robin streaming of image strips to individual processors provides the best performance. Each strip covers the entire swath, which results in optimal RAM disk access and limited communication between nodes in the classification stage. The number of lines per strip depends highly on the utilised analysis function \varphi in Equation (3) used to create the compression map M, as well as on the possible latency of the algorithm, since X-Sat requires a certain time for steering from the imaging orientation to the transmission orientation.
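
    The round-robin streaming of full-swath strips to the processing nodes can be expressed in a few lines of Python; the node count and strip height below are illustrative values, not the actual flight parameters.

        def assign_strips(num_lines, lines_per_strip, num_nodes=16):
            """Round-robin assignment of full-swath image strips to processing
            nodes; returns (first_line, last_line, node) tuples."""
            assignment = []
            for i, first in enumerate(range(0, num_lines, lines_per_strip)):
                last = min(first + lines_per_strip, num_lines) - 1
                assignment.append((first, last, i % num_nodes))
            return assignment

        # Example: a 6000-line acquisition streamed in 100-line strips to 16 nodes.
        print(assign_strips(6000, 100)[:4])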


    Figure 1: Model of the content-based compression scheme
Results
The idea of content-based compression using an application-determined compression map requires the employment of an adaptive error measure to qualify the trade-off between accuracy and data volume. The performance of the system cannot be measured by the signal-to-noise ratio (SNR) or the mean square error (MSE), since these distortion measures do not account for the space-variant nature of the proposed model. By definition a larger error in areas of lower interest has to be penalised less than in regions of central significance. To reflect this concept an adaptive RMS error measure is applied to assess the proposed compression scheme. This enables the comparison between the homogeneously compressed baseline JPEG image and the adaptively compressed image, whereby it is required that both image sets have the same final data size. Equation (6) shows the definitions of the RMS and the adaptive RMS – called aRMS – which basically scales, for each pixel p_i, the transmission error by the quality value q_i that is specified in the compression map:

    RMS = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (p_i - \hat{p}_i)^2}, \qquad aRMS = \sqrt{\frac{\sum_{i=1}^{n} q_i (p_i - \hat{p}_i)^2}{\sum_{i=1}^{n} q_i}}    (6)

where \hat{p}_i denotes the pixel value reconstructed after compression and transmission.
Therefore the system performance of content-based compression is superior to homogeneous compression if the ratio between the aRMS with constant q_i and the aRMS using the content-based q_i is greater than unity for the same data volume.
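
A possible implementation of the measures in Equation (6) is sketched below in Python; the formation of the performance ratio follows one plausible reading of the comparison described above and is not taken verbatim from the experiments.

    import numpy as np

    def rms(original, reconstructed):
        """Plain root-mean-square error."""
        err = original.astype(np.float64) - reconstructed.astype(np.float64)
        return np.sqrt(np.mean(err ** 2))

    def arms(original, reconstructed, quality_map):
        """Adaptive RMS: each pixel's squared error is weighted by its quality
        value q_i from the compression map, as in Equation (6)."""
        err = original.astype(np.float64) - reconstructed.astype(np.float64)
        q = quality_map.astype(np.float64)
        return np.sqrt(np.sum(q * err ** 2) / np.sum(q))

    def performance_ratio(original, baseline_recon, adaptive_recon, quality_map):
        """Ratio of the aRMS of the homogeneously compressed image (constant q_i,
        i.e. the plain RMS) to the aRMS of the content-based compressed image
        (application q_i); > 1 favours the content-based scheme at equal data
        volume. This is one plausible reading of the ratio described in the text."""
        constant_q = np.ones_like(quality_map, dtype=np.float64)
        return arms(original, baseline_recon, constant_q) / arms(original, adaptive_recon, quality_map)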

Currently there is no imagery obtained by X-Sat available since the system is still in the implementation phase. However, the spectral characteristic of the utilised camera is similar to that of the high resolution visible (HRV) instruments of SPOT. Thus the experiments used SPOT imagery. As an example for the compression evaluation, a high quality – in terms of RMS – for urban areas and a low quality for all other areas was used. Figure 2 shows the results for two different sub-scenes and three applications, i.e. different weightings of the varying interests. For the exact values refer to Table 1. Note the obvious blocking artefacts for non-urban areas in Figure 2(a)–(d) due to the chosen low contribution of the corresponding regions to the application, whereas urban areas are encoded with a high quality, revealing many details. A summary of the used application details and the system performance, measured by the introduced ratio, is presented in Table 1. In all cases the ratio is greater than unity, documenting the relative benefit of the application specific JPEG compression scheme compared to the baseline JPEG compression. The exact extent of the improvement depends on the application. In particular, Application 1 shows a superior performance with ratios of 10.93 and 7.85 for sub-scenes A and B, respectively.




Figure 2: Content-based compression (Imagery is copyright SPOT Image, 1995): (a), (c), (e) results for sub-scene A according to Applications 1, 2, and 3; (b), (d), (f) results for sub-scene B according to Applications 1, 2, and 3

A major benefit of the model using continuously adjustable weights is that the approximate outline of a scene is preserved even if only a very small interest is placed in the corresponding regions. Therefore a general image understanding can be gained at only a small additional cost with respect to the limited bandwidth. At the same time, the bandwidth saved by spending fewer coefficients on such regions is used to resolve the main ROIs in greater detail. Unlike the approach in (Golner et al., 2001), which employs human perception as the weighting function, a user-specified interest is used for the encoding. Thus the underlying model of the proposed compression map is universal and only requires the specification of the function \psi in Equation (4).

Table 1: Application dependent system performance

Conclusions
A method for content-based compression of remotely sensed imagery has been described, whereby the significance, and thus the representation accuracy used, is defined by the application itself. Therefore the space-variant continuous adjustment enables the optimisation of the usability of a given downlink bandwidth with respect to the particular mission. The approach extends the idea of standard image compression schemes, which use constant-quality compression for a given bandwidth. The concept of compression maps, which describe different aspects of the application’s foci, is introduced, and the region-of-interest idea utilised in JPEG2000 is generalised. The main advantage of the proposed technique is that it allows, within the downlink capacity, arbitrary compression rates for individual components of a given scene. Therefore relevant data is transmitted nearly losslessly while other information is compressed lossily, resulting in an image with space-variant quality.

The key contribution of this paper is the development of a general model that incorporates the specifics of an arbitrary application. Therefore the transition from a-priori hypotheses, made on the ground, about suitable compression rates to an a-posteriori approach with a dynamic adjustment driven by the actual data and the downlink capacity is achieved. The underlying assumption is that compression, in general, is not avoidable due to the mission objectives and the constraints introduced by the satellite system.

An application specific measure for the distortions introduced by lossy compression was presented, and the ratio between homogeneous and application specific compression was calculated. The investigations have shown that, depending on the application, the proposed system improves the usability of a limited transmission facility.

Future work will incorporate the upcoming JPEG2000 standard to guarantee lossless compression. Furthermore an investigation of an improved buffer management method and its impact on the system is intended.

References
  • Bretschneider, T., Cavet, R., Kao, O., 2002. Retrieval of remotely sensed imagery using spectral information content. IEEE Proceedings of the International Geoscience and Remote Sensing Symposium, 4, pp. 2253–2256.
  • Christopoulos, C., Skodras, A., Ebrahimi, T., 2000. The JPEG2000 still image coding system: An overview. IEEE Transactions on Consumer Electronics, 46(4), pp. 1103–1127.
  • Golner, M.A., Mikhael, W.B., Krishnan, V., 2001. Multi-fidelity JPEG based compression of images using variable quantisation. Electronics Letters, 37(7), pp. 423–424.
  • Hou, P., Petrou, M., Underwood, C.I., Hojjatoleslami, A., 2000. Improving JPEG performance in conjunction with cloud editing for remote sensing applications. IEEE Transactions on Geoscience and Remote Sensing, 38(1), pp. 515–524.
  • Looney, C., 1999. A fuzzy clustering and fuzzy merging algorithm. Technical Report, CS-UNR-101-1999.
  • Manduchi, R., Dolinar, S., Pollara, F., Matache, A., 2000. On-board science processing and buffer management for intelligent deep space communications. IEEE Proceedings of the Aerospace Conference, pp. 329–339.
  • Pelon, H., Spiwack, R., 1996. Advanced image compressor for SPOT-5 satellite. Proceedings of the Round Table on Data Compression Techniques for Space Application, pp. 23–27.
* Therefore JPEG can only guarantee almost lossless coding even if no compression is desired. Note that the upcoming JPEG2000 standard (Christopoulos et al., 2000) will fulfil the requirement for lossless compression.