This paper presents a nonlinear multidimensional scaling model, called kernelized fourth quantification theory, which integrates kernel techniques with the fourth quantification theory. The model can deal with the problem of mineral prediction without defining a training area. In mineral target prediction, predefined statistical cells, such as grid cells, can be implicitly transformed by kernel techniques from the input space to a high-dimensional feature space, where clusters that are nonlinearly separable in the input space are expected to become linearly separable. The transformed cells in the feature space are then mapped by the fourth quantification theory onto a low-dimensional scaling space, where the scaled cells can be visually clustered according to their spatial locations. At the same time, cells that lie far from the cluster center of the majority of the scaled cells are recognized as anomaly cells. Finally, whether the anomaly cells can serve as mineral potential target cells is tested by spatially superimposing the known mineral occurrences onto the anomaly cells. A case study shows that nearly all the known mineral occurrences spatially coincide with the anomaly cells that have nearly the smallest scaled coordinates in the one-dimensional scaling space. In the case study, the mineral target cells delineated by the new model are similar to those predicted by the well-known weights-of-evidence (WofE) model.
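The pipeline described above (kernel transformation, low-dimensional scaling, anomaly-cell detection) can be illustrated with a minimal sketch. The sketch below stands in for the fourth quantification theory with a generic kernel-PCA-style scaling of a centered RBF kernel matrix, and uses a median/MAD rule to flag cells far from the cluster center; the function names, the `gamma` and `k` parameters, and the MAD threshold are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Pairwise RBF kernel matrix: K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def kernel_scaling_1d(X, gamma=1.0):
    """Map cells to one-dimensional scaled coordinates via the leading
    eigenvector of the centered kernel matrix (a kernel-PCA-style stand-in
    for the fourth quantification theory)."""
    n = X.shape[0]
    K = rbf_kernel(X, gamma)
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    Kc = H @ K @ H                           # implicit centering in feature space
    vals, vecs = np.linalg.eigh(Kc)          # eigenvalues in ascending order
    return vecs[:, -1] * np.sqrt(max(vals[-1], 0.0))  # one coordinate per cell

def anomaly_cells(coords, k=2.5):
    """Flag cells whose scaled coordinate lies far from the center of the
    majority of cells (median +/- k median-absolute-deviations)."""
    med = np.median(coords)
    mad = np.median(np.abs(coords - med)) + 1e-12
    return np.abs(coords - med) > k * mad
```

On synthetic data with one dense cluster of cells and a couple of distant cells, the distant cells receive extreme scaled coordinates and are flagged as anomalies, mirroring the visual clustering step described in the abstract.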
An extended self-organizing map (SOM) for supervised classification is proposed in this paper. Unlike traditional SOMs, the model has an input layer, a Kohonen layer, and an output layer. The number of neurons in the input layer depends on the dimensionality of the input patterns, and the number of neurons in the output layer equals the number of desired classes. The number of neurons in the Kohonen layer may range from a few to several thousand, depending on the complexity of the classification problem and the required classification precision. Each training sample is expressed by a pair of vectors: an input vector and a class codebook vector. When a training sample is input into the model, Kohonen's competitive learning rule is applied to select the winning neuron in the Kohonen layer. The weights connecting the neurons in the input layer with the winning neuron and its neighbors in the Kohonen layer are modified to be closer to the input vector, and the weights connecting the neurons around the winning neuron (within a certain diameter in the Kohonen layer) with the neurons in the output layer are adjusted to be closer to the class codebook vector. If the number of training samples is sufficiently large and the learning epochs iterate enough times, the model can serve as a supervised classifier. The model has been tentatively applied to the supervised classification of multispectral remotely sensed data, and its performance was compared with that of a back-propagation network (BPN). The investigation shows that the extended SOM is feasible for supervised classification.
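The three-layer architecture and the two weight-update rules described above can be sketched as follows. This is an illustrative toy implementation under simplifying assumptions (a one-dimensional Kohonen layer, a fixed neighborhood radius, and a linearly decaying learning rate), not the paper's actual model; the class names and parameters are invented for the example.

```python
import numpy as np

class ExtendedSOM:
    """Toy sketch of an SOM extended with an output layer for supervised
    classification: input layer -> Kohonen layer -> output layer."""

    def __init__(self, n_inputs, n_kohonen, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        # weights from the input layer to each Kohonen neuron (prototypes)
        self.W_in = rng.random((n_kohonen, n_inputs))
        # weights from each Kohonen neuron to the output layer
        self.W_out = rng.random((n_kohonen, n_classes))

    def _winner(self, x):
        # Kohonen's competitive rule: neuron whose prototype is closest to x
        return int(np.argmin(np.sum((self.W_in - x) ** 2, axis=1)))

    def fit(self, X, Y, epochs=50, lr=0.5, radius=1):
        n_k = self.W_in.shape[0]
        for ep in range(epochs):
            a = lr * (1.0 - ep / epochs)  # linearly decaying learning rate
            for x, y in zip(X, Y):
                w = self._winner(x)
                lo, hi = max(0, w - radius), min(n_k, w + radius + 1)
                # move the winner and its neighbors toward the input vector
                self.W_in[lo:hi] += a * (x - self.W_in[lo:hi])
                # move their output weights toward the class codebook vector
                self.W_out[lo:hi] += a * (y - self.W_out[lo:hi])
        return self

    def predict(self, X):
        # class of a pattern = argmax of the winning neuron's output weights
        return np.array([np.argmax(self.W_out[self._winner(x)]) for x in X])
```

With one-hot class codebook vectors and two well-separated clusters of input patterns, the Kohonen neurons specialize on one cluster each and their output weights converge toward the corresponding codebook, so the model behaves as a supervised classifier.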