The neural approach to pattern recognition

Ubiquity, Volume 2004 Issue April | BY John Peter Jesan 


Artificial neural networks could surpass the capabilities of conventional computer-based pattern recognition systems.


The word "recognition" plays an important role in our lives. It is a basic property of all human beings; when a person sees an object, he or she first gathers all information about the object and compares its properties and behaviors with the existing knowledge stored in the mind. If we find a proper match, we recognize it.

For example, when we see a dog, we first recognize that it is an animal; when we take a closer look, we recognize that it has four legs and a tail. But most animals have four legs and a tail, so how do we recognize that it is a dog? We look at other properties such as the face, the shape of the ears, the mouth, nose, body structure, teeth, eyes and voice. Having seen many dogs, we have learned what a dog looks like. After analyzing all of these properties, we conclude that the animal is a dog. Going a step further, we can pick out our own dog when it is standing among other dogs. This kind of recognition is simple and familiar to everyone in the real world, but in the world of artificial intelligence, recognizing such objects is an amazing feat. The functionality of the human brain is remarkable and is not matched by any machine or software. Let us go deeper and analyze what recognition is and how it can be done by machines.

Pattern Recognition

The act of recognition can be divided into two broad categories: recognizing concrete items and recognizing abstract items. Recognizing concrete items involves spatial and temporal items. Examples of spatial items are fingerprints, weather maps, pictures and physical objects; examples of temporal items are waveforms and signatures. Recognizing abstract items involves things such as the solution to a problem or an old conversation or argument; in other words, items that do not exist physically.

In this article, I am concerned with the recognition of concrete items. Applications include fingerprint identification, voice recognition, face recognition, character recognition, signature recognition and the classification of objects in scientific and research areas such as astronomy, engineering, statistics, medicine, machine learning and neural networks. Recognition of an item involves three levels of processing: input filtering, feature extraction and classification.

Filtering removes unwanted information or data from the input, and the filtering algorithm or method depends on the application. Consider fingerprint identification, for example. Each time we scan our fingerprints through a (non-ink) fingerprint device, the scanned output may differ; the difference may be due to a change in contrast, brightness or the background of the image, and there may be some distortion. To process the input, we may need only the lines of the fingerprint, not the other parts or the background. A filter mechanism removes the unwanted portion of the image and replaces it with a white background. Once the image has been filtered, we get a clean, standardized fingerprint containing only the lines, which in turn aids feature extraction.
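
As a rough illustration (the article itself contains no code), the Python sketch below binarizes a grayscale scan by thresholding: dark pixels are kept as lines and everything else is replaced with a white background. The fixed threshold and the toy 4x4 "scan" are illustrative assumptions, not part of any particular fingerprint system.

```python
import numpy as np

def binarize_scan(gray, threshold=128):
    """Keep dark fingerprint lines, replace everything else with white.

    `gray` is assumed to be a 2-D uint8 array (0 = black, 255 = white).
    The fixed threshold is an illustrative choice; a real system would
    pick it per image, e.g. from the intensity histogram.
    """
    clean = np.full_like(gray, 255)   # start from an all-white background
    clean[gray < threshold] = 0       # keep only the dark line pixels
    return clean

# Toy 4x4 "scan": light background with two dark lines
scan = np.array([[200,  30, 210, 220],
                 [190,  25, 200,  60],
                 [205,  40, 195,  55],
                 [210,  35, 190,  50]], dtype=np.uint8)
print(binarize_scan(scan))
```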

Feature extraction is the process of studying and deriving useful information from the filtered input patterns. The derived information may consist of general features that are evaluated to ease further processing. For example, in image recognition the extracted features contain information about the gray shades, texture, shape or context of the image; this is the main information used in image processing. Both the extraction methods and the features extracted are application dependent.
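
As a hedged example, the following sketch computes a handful of such features from a gray image; the specific features (mean intensity, contrast, fraction of dark pixels, aspect ratio) are illustrative stand-ins for the gray-shade, texture and shape information mentioned above, not a prescribed feature set.

```python
import numpy as np

def extract_features(gray):
    """Derive a small feature vector from a filtered gray image.

    The features chosen here are illustrative stand-ins for the
    gray-shade, texture and shape information mentioned in the text.
    """
    height, width = gray.shape
    return np.array([
        gray.mean(),            # overall gray shade
        gray.std(),             # crude texture / contrast measure
        (gray < 128).mean(),    # fraction of dark ("ink") pixels
        width / height,         # simple shape descriptor
    ])
```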

Classification is the final stage of pattern recognition. This is the stage at which an automated system declares that the input object belongs to a particular category. There are many classification methods, and their designs are based on the following concepts.

Member-roster concept: Under this template-matching concept, a set of patterns belonging to the same class is stored in the classification system. When an unknown pattern is given as input, it is compared with the stored patterns and placed in the class of the pattern it matches.
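
A minimal sketch of the member-roster idea, assuming binary patterns stored as NumPy arrays of identical shape; the overlap score (fraction of matching pixels) is just one simple similarity measure among many.

```python
import numpy as np

def template_match(unknown, templates):
    """Assign the unknown pattern to the class of its best-matching template.

    `templates` maps a class label to a list of stored binary patterns,
    all the same shape as `unknown`.
    """
    best_label, best_score = None, -1.0
    for label, patterns in templates.items():
        for pattern in patterns:
            score = float(np.mean(pattern == unknown))  # fraction of agreeing pixels
            if score > best_score:
                best_label, best_score = label, score
    return best_label, best_score
```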

Common property concept: Here, the common properties of the patterns in each class are stored in the classification system. When an unknown pattern arrives, the system checks its extracted properties against the common properties of the existing classes and places the pattern in the class with the most similar properties.
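
A small sketch of the common-property idea, assuming each class is summarized by a set of property names; the property names and classes below are purely hypothetical.

```python
def classify_by_properties(unknown_props, class_props):
    """Place the unknown under the class sharing the most properties with it.

    `unknown_props` is the set of properties extracted from the unknown
    pattern; `class_props` maps each class to its stored property set.
    """
    return max(class_props, key=lambda c: len(class_props[c] & unknown_props))

classes = {"dog": {"four_legs", "tail", "barks"},
           "bird": {"two_legs", "wings", "sings"}}
print(classify_by_properties({"four_legs", "tail", "wet_nose"}, classes))  # "dog"
```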

Clustering concept: Here, the patterns of the target classes are represented as vectors whose components are real numbers, and their clustering properties are used to classify an unknown pattern. If the target vectors are far apart in the geometric arrangement, classifying unknown patterns is easy; if they are close together or the clusters overlap, more complex algorithms are needed. One simple algorithm based on the clustering concept is minimum distance classification: it computes the distance between the unknown pattern and each of the known patterns and assigns the unknown to the known pattern to which it is closest. This algorithm works well when the target patterns are far apart.
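
A minimal sketch of minimum distance classification, assuming one prototype vector per known class; the two-dimensional prototypes in the example are made up for illustration.

```python
import numpy as np

def minimum_distance_classify(unknown, prototypes):
    """Minimum distance classification over real-valued pattern vectors.

    `prototypes` maps each known class to a prototype vector; the unknown
    pattern is assigned to the class whose prototype is closest in
    Euclidean distance.
    """
    distances = {label: np.linalg.norm(unknown - proto)
                 for label, proto in prototypes.items()}
    return min(distances, key=distances.get)

prototypes = {"class_a": np.array([0.0, 0.0]),
              "class_b": np.array([5.0, 5.0])}
print(minimum_distance_classify(np.array([0.4, 0.6]), prototypes))  # class_a
```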

Neural Approach

The pattern recognition approaches discussed so far are based on direct computation by machines, using mathematical techniques. Next, I will discuss bionics-related concepts for recognizing patterns. Bionics means the application of biological concepts to electronic machines; the neural approach applies biological concepts to machines in order to recognize patterns. The outcome of this effort is the artificial neural network.

A neural network is an information processing system. It consists of a large number of simple processing units with a high degree of interconnection between them. The units work cooperatively, achieving massively parallel distributed processing. The design and function of neural networks simulate some of the functionality of biological brains and nervous systems. Their advantages are adaptive learning, self-organization and fault tolerance, and it is because of these capabilities that neural networks are used for pattern recognition. Some of the best-known neural models are back-propagation, higher-order networks, time-delay neural networks and recurrent networks.
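
To make the idea concrete, here is a hedged sketch of a forward pass through a small fully connected feed-forward network; the sigmoid activation, the 4-3-2 layer sizes and the random weights are illustrative assumptions, not a model prescribed by the article.

```python
import numpy as np

def forward(x, weights, biases):
    """One forward pass through a small fully connected network.

    Each layer computes sigmoid(activation @ W + b).
    """
    activation = x
    for w, b in zip(weights, biases):
        activation = 1.0 / (1.0 + np.exp(-(activation @ w + b)))
    return activation

rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(3, 2))]  # 4 -> 3 -> 2 units
biases = [np.zeros(3), np.zeros(2)]
print(forward(rng.normal(size=4), weights, biases))
```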

Normally, only feed-forward networks are used for pattern recognition; feed-forward means that there is no feedback to the input. Yet just as human beings learn from mistakes, neural networks can also learn from theirs by feeding information back to the input patterns. This kind of feedback can be used to reconstruct the input patterns and free them from error, thereby increasing the performance of the network. Of course, such networks are complex to construct. They are called autoassociative neural networks and, as the name implies, they use back-propagation algorithms. One of the main problems associated with back-propagation is local minima. In addition, neural networks have issues with learning speed, architecture selection, feature representation, modularity and scaling. Despite these problems and difficulties, the potential advantages of neural networks are vast.
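
The sketch below trains a tiny autoassociative (auto-encoder style) network with plain back-propagation on reconstruction error, so that a noisy pattern can be pushed back toward a stored clean pattern. The single hidden layer, learning rate, epoch count and toy patterns are arbitrary illustrative choices under these assumptions, not the article's method.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoassociative(patterns, hidden=3, lr=0.5, epochs=2000, seed=0):
    """Train a one-hidden-layer auto-encoder to reproduce its own inputs.

    Gradients of the squared reconstruction error are back-propagated
    through both layers.
    """
    rng = np.random.default_rng(seed)
    n = patterns.shape[1]
    w1 = rng.normal(scale=0.5, size=(n, hidden))
    w2 = rng.normal(scale=0.5, size=(hidden, n))
    for _ in range(epochs):
        h = sigmoid(patterns @ w1)            # hidden layer activations
        out = sigmoid(h @ w2)                 # reconstruction of the inputs
        err = out - patterns                  # reconstruction error
        d_out = err * out * (1 - out)         # gradient at the output layer
        d_h = (d_out @ w2.T) * h * (1 - h)    # gradient back-propagated to hidden layer
        w2 -= lr * h.T @ d_out
        w1 -= lr * patterns.T @ d_h
    return w1, w2

# Train on two clean binary patterns, then pass a noisy version through the net;
# the reconstruction should move back toward the nearest stored pattern.
patterns = np.array([[1, 0, 1, 0, 1],
                     [0, 1, 0, 1, 0]], dtype=float)
w1, w2 = train_autoassociative(patterns)
noisy = np.array([1.0, 0.0, 1.0, 0.0, 0.6])
print(np.round(sigmoid(sigmoid(noisy @ w1) @ w2), 2))
```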

Conclusion

Pattern recognition can be done both by conventional computers and by neural networks. Conventional computers use arithmetic algorithms to detect whether a given pattern matches an existing one. It is a straightforward method: it says either yes or no, and it does not tolerate noisy patterns. Neural networks, on the other hand, can tolerate noise and, if trained properly, will respond correctly to unknown patterns. Neural networks may not perform miracles, but if built with the proper architecture and trained correctly on good data, they give impressive results, not only in pattern recognition but also in other scientific and commercial applications.

About the Author

John Peter Jesan is a software engineer at CitiStreet, a joint venture of State Street and Citigroup. He has a master's degree in computer information systems and is actively doing research on neural networks. He is a professional member of ACM and ACM's SIGART and an associate member of Sigma Xi (an American research society). He welcomes your feedback on this article.
