
Neural Approach to Remote Sensing Image Classification

Kozo Okazaki, Hisashi Taketani, Yutaka Fukui
Faculty of Engineering, Tottori University

Hiroshi Mitsumoto
Faculty of Engineering Science, Osaka University

Shinichi Tamura
Faculty of Medicine, Osaka University

Takashi Hoshi
Institute of Information Sciences and Electronics,
University of Tsukuba

Kiyoshi Torji
Faculty of Agriculture, Kyoto University

Masami Iwasaki
Faculty of Agriculture, Tottori University


Abstract
A neural network approach to the classification of remote sensing image data is proposed. Multi-channel image data composed of neighbouring pixels are used as input to a back propagation network. The training is done by the error back propagation algorithm. We use a 32-bit personal computer (NEC PC-9801 RA), a hyper frame board memory and an ImPP board which is used as a neuro-accelerator. Window areas of 10 x 10 pixels with ground truth such as mountain, coast and sea are used as the training data. Each neuron of the output layer corresponds to one category.

Introduction
In recent years, research on feature extraction and classification using neural networks has flourished remarkably. In particular, the back propagation method (BP) is being tried widely because of the simplicity of its training and calculation algorithms and its excellent learning ability [e.g. (1)]. The classification of multi-spectral remote sensing images is usually based on multivariate analysis. This method examines the statistical characteristics of every pixel. The image shows a marked tendency to be classified excessively. When looking for land areas in these cases, we need to recombine the clusters. On the other hand, classification of large flat areas including sea, lake areas, etc. needs larger regions for processing. This paper deals with classification by neural networks based on 10 x 10 pixel square regions.

Error back propagation
Because of the simplicity of its calculation algorithm and its excellent learning ability, BP is widely and actively used in many fields. Fig. 1(a) shows a schematic diagram of a multi-input, multi-output neural network, and Fig. 1(b) shows the input-output relation of a neuron unit.

We explain the signal flow of Fig. 1(a) as follows.


Fig. 1(a) Schematic diagram of neural networks

Fig. 1(b) Input-output function of a neuron unit
Fig.1 Error back propagation


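A minimal sketch of the forward signal flow, written in the notation defined below (the explicit form is an assumption based on the standard back propagation formulation, since only the notation is listed here):

\[
O_{1i} = x_i, \qquad
i_{kj} = \sum_{i} w^{\,k-1,k}_{ij}\, O_{k-1,i}, \qquad
O_{kj} = f(i_{kj}), \quad k = 2, \dots, m
\]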
here,
m: the number of layers,
O_ki: output of the i-th unit of the k-th layer,
i_kj: input sum of the j-th unit of the k-th layer,
x_i: input pattern,
y_i: teaching pattern,
w_(k-1)k,ij: weight between the i-th unit of the (k-1)-th layer and the j-th unit of the k-th layer,
f: input-output function of the unit; f is the same function for all the units,

f(x) = ½ (1 + tanh(x/u0))

The error back propagation method (BP) is based on minimization of the squared error between the output and the teaching signal by changing the weights w.

The algorithm is given by a gradient descent update on this squared error.
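A minimal sketch of such an update, assuming the usual generalized delta rule with learning rate η (an assumption, since the learning rate and momentum settings are not stated here):

\[
E = \tfrac{1}{2} \sum_{j} \left( y_j - O_{mj} \right)^2, \qquad
\Delta w^{\,k-1,k}_{ij} = \eta\, \delta_{kj}\, O_{k-1,i}
\]
\[
\delta_{mj} = \left( y_j - O_{mj} \right) f'(i_{mj}), \qquad
\delta_{kj} = f'(i_{kj}) \sum_{l} \delta_{k+1,l}\, w^{\,k,k+1}_{jl} \quad (k < m)
\]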
System configuration
We used a personal computer based system: a computer (NEC PC-9801 RA), a neural (ImPP) board (NEC), Neuro07 software (NEC Information Technology Co., Ltd.), a Hyper Frame (Digital Arts Co., Ltd.) and a co-processor (80387) (Fig. 2). The classification of the image is carried out at high speed by the ImPP board and the co-processor.


Fig.2 System configuration by personal computer


Extraction of cluster feature
Although which image features should be used as input to the BP network has not been well studied up to now, we use the gray level raw data and their differentials (by the Sobel operator) within the window area.
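As an illustration, a minimal C sketch of how the differential features within one window might be computed with the Sobel operator; the function name, array layout and border handling are assumptions for illustration, not details of the actual program:

#include <math.h>

#define WIN 10  /* window size of one training/classification area (10 x 10) */

/* Sobel gradient magnitude of one channel inside a WIN x WIN window.
   img   : full image, row major, width w (8-bit gray levels)
   x0, y0: top-left corner of the window (at least 1 pixel inside the image)
   out   : WIN*WIN differential features; window border pixels are set to 0 */
void sobel_window(const unsigned char *img, int w,
                  int x0, int y0, double *out)
{
    int x, y;
    for (y = 0; y < WIN; y++)
        for (x = 0; x < WIN; x++) {
            double gx, gy;
            if (x == 0 || y == 0 || x == WIN - 1 || y == WIN - 1) {
                out[y * WIN + x] = 0.0;
                continue;
            }
            /* horizontal Sobel kernel: [-1 0 1; -2 0 2; -1 0 1] */
            gx = -     img[(y0+y-1)*w + x0+x-1] +     img[(y0+y-1)*w + x0+x+1]
                 - 2.0*img[(y0+y  )*w + x0+x-1] + 2.0*img[(y0+y  )*w + x0+x+1]
                 -     img[(y0+y+1)*w + x0+x-1] +     img[(y0+y+1)*w + x0+x+1];
            /* vertical Sobel kernel: [-1 -2 -1; 0 0 0; 1 2 1] */
            gy = -     img[(y0+y-1)*w + x0+x-1] - 2.0*img[(y0+y-1)*w + x0+x]
                 -     img[(y0+y-1)*w + x0+x+1]
                 +     img[(y0+y+1)*w + x0+x-1] + 2.0*img[(y0+y+1)*w + x0+x]
                 +     img[(y0+y+1)*w + x0+x+1];
            out[y * WIN + x] = sqrt(gx * gx + gy * gy);
        }
}

The gray level raw data, by contrast, are fed to the network directly.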

Neural network model
The number of layers of the BP network is three. The weights of the BP network are initialized randomly. The data we use is an image of Ishigaki Island (Japan) obtained from Landsat 3 (4 ch.). Windows for teaching are specified by a mouse on the Hyper Frame plane together with their category (sea, cloud, coast, plain). The configuration is as follows; a rough sketch of the corresponding forward pass is given after the list.

Input layer: (10 x 10) x 3 ch
Hidden layer: 10
Output layer: 4
Display: sea - blue, cloud - red, coast - violet, plain - green
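Given this configuration, a rough C sketch of one forward pass of the three-layer 300-10-4 network follows; the weight arrays, the omission of bias terms and the unit parameter u0 are illustrative assumptions rather than details of the actual program:

#include <math.h>

#define N_IN  (10 * 10 * 3)  /* 10 x 10 window times 3 channels */
#define N_HID 10             /* hidden units                    */
#define N_OUT 4              /* sea, cloud, coast, plain        */

/* unit function f(x) = 1/2 (1 + tanh(x/u0)) */
static double unit_f(double x, double u0)
{
    return 0.5 * (1.0 + tanh(x / u0));
}

/* One forward pass: input -> hidden -> output (biases omitted for brevity). */
void forward(const double in[N_IN],
             const double w1[N_HID][N_IN],   /* input -> hidden weights  */
             const double w2[N_OUT][N_HID],  /* hidden -> output weights */
             double out[N_OUT], double u0)
{
    double hid[N_HID];
    int i, j, c;

    for (j = 0; j < N_HID; j++) {
        double s = 0.0;
        for (i = 0; i < N_IN; i++)
            s += w1[j][i] * in[i];
        hid[j] = unit_f(s, u0);
    }
    for (c = 0; c < N_OUT; c++) {
        double s = 0.0;
        for (j = 0; j < N_HID; j++)
            s += w2[c][j] * hid[j];
        out[c] = unit_f(s, u0);
    }
}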
Images of Ch0, Ch1 and Ch2 and the composite RGB image are shown in Fig. 3 (a)-(d). Fig. 4 is the profile of the learning process by the ImPP board; for example:

(ex.) No. 8  heiy → kaig (0.245), umi (0.190)
      No. 15 kaig → kaig (0.455), umi (0.146)
Here, umi, kaig, heiy and kumo mean sea, coast, plain and cloud respectively. No. 8 is a case where BP answers kaig (0.245) and umi (0.190) at this point although the true category is "heiy", so it is displayed in red characters as a mark of failure. No. 15 is a successful case.


Fig.3 Ishigaki Island [ Landsat 3 (Ch.4) ]


Fig. 5 shows the result of the classification. Most parts of the area are recognized correctly, but some parts are misclassified. Table 1 gives some of the detailed results of the BP output layer.

            Sea     Cloud   Coast   Plain
(ex.) No. 1 0.8937  0.0154  0.2000  0.0196

This area (No. 1) is decided to be "sea", because its activation value of 0.8937 is the maximum. In the cases of No. 5 and No. 6, the difference between the maximum and the second largest value is small. For these cases, we need to classify once more using a smaller window.
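The decision rule can be sketched in C as follows; the margin parameter and function name are illustrative assumptions (the text only notes that a small gap between the top two outputs calls for re-classification with a smaller window):

/* Pick the category with the largest output activation.  If the margin to
   the runner-up is smaller than `margin`, flag the window as uncertain so
   that it can be classified once more with a smaller window.              */
int classify(const double out[4], double margin, int *uncertain)
{
    int c, best = 0, second = 1;

    if (out[second] > out[best]) { best = 1; second = 0; }
    for (c = 2; c < 4; c++) {
        if (out[c] > out[best])        { second = best; best = c; }
        else if (out[c] > out[second]) { second = c; }
    }
    *uncertain = (out[best] - out[second] < margin);
    return best;  /* 0: sea, 1: cloud, 2: coast, 3: plain */
}

For example, area No. 1 gives out = {0.8937, 0.0154, 0.2000, 0.0196}, so best is "sea" with a comfortable gap of 0.6937, while for No. 5 the gap between coast (0.4551) and sea (0.4451) is only 0.0100, so the window would be flagged under any reasonable margin.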


Fig.4 Profile of learning process by ImPP board


Fig.5 Result of classification by BP


Concluding remarks
In this paper, we discussed the classification of multi-spectral remote sensing images by BP using gray level and differential values of the image. The larger the window area, the more easily we can recognize the features of the area, but the more time is spent on learning instead.

We first programmed the BP on the PC-9801 RA and an Excel image processing unit (Avionix Co., Ltd.) using Lattice-C. Next, as the second stage, we built the present system using the Hyper Frame memory card instead of the Excel unit, using MS-C, to reduce cost. We need not necessarily use the ImPP board, but with it we can process the image more than 10 times faster.

Reference
(1) Tamura, S. et al.: Neuro-Voice-Recognition Jointly Using Mouth Shape Image and Voice Features, IEICE, PRU89-19, pp. 1-8, 1989 (in Japanese).
Table 1 Some of the detailed results of the BP output layer

No.   Sea      Cloud    Coast    Plain
1     0.8937   0.0154   0.2000   0.0196
2     0.0058   0.0682   0.0682   0.9444
3     0.0603   0.1972   0.0780   0.1783
4     0.1006   0.1957   0.0745   0.01381
5     0.4451   0.0479   0.4551   0.0262
6     0.0117   0.4306   0.4564   0.0887
7     0.0045   0.2810   0.2420   0.4779
8     0.0385   0.4108   0.1694   0.0604