A selective survey of artificial pattern recognition methods

Date

1964

Abstract

Pattern recognition by automata such as digital and/or analog computers essentially consists in recognizing an incoming pattern, subjected to a vector of measurements [m] = (m1, m2, ..., mn), as belonging to one of k admissible pattern classes. The process comprises three phases: (1) a scanning or "encoding" phase, where the raw measurements are performed by mapping the scanned pattern onto a binary (0, 1) matrix representation, a video or TV replica, or an equivalent response curve; (2) a normalization and/or transformation phase, where the raw image is processed to yield a number of normalized, preferably noise-free, and ideally invariant measurements; (3) a categorization phase, where a decision is made as to which pattern class the sample belongs or is most likely to belong. This last phase may or may not be adaptive; in the adaptive case the automaton may be "trained" to recognize certain pattern classes, or even to improvise and select measurements for a particular set of pattern classes. The optimum separation of the measurement space M into k domains for the k admissible pattern classes is a problem in multivariate classification, which reduces, if the measurements are independent, to a Bayes-Laplace optimization procedure. Furthermore, if the measurements are binary, every pattern-class domain may reduce under certain conditions to a convex set bounded by k-1 hyperplanes, in which case an iterative "training" procedure to determine the optimum set of hyperplanes converges to a solution. Several adaptive or trainable pattern recognition devices have been developed to determine the set of optimum hyperplanes by repeated exposure to sample patterns. These devices, known as Bayes nets, neural nets, and perceptron networks, have a very limited ability to generalize, that is, to cope with changes in position, size, slant, and other variations of the sample patterns.
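The iterative "training" procedure described above can be sketched in modern terms as a perceptron-style update rule: repeated exposure to sample patterns shifts a separating hyperplane w·x + b = 0 until the two classes of binary measurement vectors fall on opposite sides. This is a minimal illustrative sketch, not the thesis's own algorithm; the sample vectors, labels, and epoch limit are assumptions for demonstration.

```python
def train_hyperplane(samples, labels, epochs=100):
    """Perceptron-style iterative training: adjust the hyperplane on each
    misclassified sample. Converges in finitely many passes when the two
    classes (labels +1 / -1) are linearly separable."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        errors = 0
        for x, y in zip(samples, labels):
            s = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * s <= 0:                       # wrong side: move the hyperplane
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                errors += 1
        if errors == 0:                          # every sample correctly separated
            break
    return w, b

# Two toy "pattern classes" of binary measurement vectors [m].
samples = [(0, 0, 1), (0, 1, 1), (1, 0, 0), (1, 1, 0)]
labels = [1, 1, -1, -1]
w, b = train_hyperplane(samples, labels)
# After training, each sample lies on the correct side of the hyperplane.
assert all((sum(wi * xi for wi, xi in zip(w, x)) + b) * y > 0
           for x, y in zip(samples, labels))
```

The convergence claim in the abstract corresponds to the perceptron convergence theorem: for linearly separable classes the number of corrective updates is bounded, so the loop above terminates with a separating hyperplane.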
Fortunately, these patterns may be rendered invariant to such transformations if properly normalized, as described in Chapter IV. The normalizing transformation is, however, affected by such variations as the thickening of certain features in the printed or handwritten pattern, and by background noise. These variations can in turn be attenuated or even completely eliminated by adequate data-reduction transformations such as thinning, filling, and piping, described in Chapter III.
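The kind of normalization referred to above can be sketched as mapping the bounding box of a binary (0, 1) matrix pattern onto a fixed canonical grid, so that the same shape drawn at different positions produces the same normalized image. The grid size and nearest-cell resampling here are assumptions for illustration, not the specific transformation of Chapter IV.

```python
def normalize(pattern, size=8):
    """Map the bounding box of the 1-cells of a binary matrix onto a
    size x size grid, removing dependence on position (and, approximately
    after resampling, on scale)."""
    cells = [(r, c) for r, row in enumerate(pattern)
                    for c, v in enumerate(row) if v]
    r0, r1 = min(r for r, _ in cells), max(r for r, _ in cells)
    c0, c1 = min(c for _, c in cells), max(c for _, c in cells)
    h, w = r1 - r0 + 1, c1 - c0 + 1
    out = [[0] * size for _ in range(size)]
    for r, c in cells:                    # nearest-cell resampling into the grid
        out[(r - r0) * size // h][(c - c0) * size // w] = 1
    return out

# The same "L" shape at two different positions normalizes identically.
p1 = [[0, 0, 0],
      [0, 1, 0],
      [0, 1, 1]]
p2 = [[1, 0, 0, 0],
      [1, 1, 0, 0],
      [0, 0, 0, 0]]
assert normalize(p1) == normalize(p2)
```

Because every coordinate is taken relative to the pattern's own bounding box, translation of the input leaves the output unchanged; variations such as stroke thickening or background noise, as the abstract notes, would still require separate data-reduction transformations.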

Keywords

Pattern recognition systems, Artificial intelligence, Machine learning, Optical character recognition