
Representing Spatial and Temporal Patterns in Neural Networks

Introduction: Representing space and time is an important issue in knowledge engineering. Space can be represented in a neural network by:
1. Using neurons that take spatial coordinates as input or output values.
2. Using fuzzy terms for representing location, such as "above," "near," and "in the middle."
3. Using topological neural networks, which have a distance defined between the neurons and can represent spatial patterns by their activations. One such network is the SOM: a vector quantizer that preserves the topology of the input patterns by representing each pattern as one neuron in the topological output map.
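As a rough illustration of how a SOM quantizes inputs while preserving topology, the following is a minimal one-dimensional map in numpy; the map size, decay schedules, and toy two-cluster data are assumptions for the sketch, not part of the original description:

```python
import numpy as np

def train_som(data, n_neurons=10, epochs=100, lr0=0.5, radius0=3.0, seed=0):
    """Train a 1-D self-organizing map: each neuron holds a codebook
    vector, and neighbouring neurons end up representing similar input
    patterns, which preserves the topology of the inputs."""
    rng = np.random.default_rng(seed)
    weights = rng.random((n_neurons, data.shape[1]))
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                    # decaying learning rate
        radius = max(radius0 * (1 - epoch / epochs), 0.5)  # shrinking neighbourhood
        for x in data:
            # best-matching unit: the neuron closest to the input pattern
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Gaussian neighbourhood over the 1-D neuron index
            dist = np.abs(np.arange(n_neurons) - bmu)
            h = np.exp(-(dist ** 2) / (2 * radius ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

# Toy data: two well-separated clusters of 2-D points.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.2, 0.02, (50, 2)),
                  rng.normal(0.8, 0.02, (50, 2))])
w = train_som(data)
# Each input pattern is represented by its best-matching neuron on the map.
bmus = [int(np.argmin(np.linalg.norm(w - x, axis=1))) for x in data]
```

After training, inputs from the same cluster map to the same or neighbouring neurons, which is the vector-quantization behaviour described above.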

Representing time in a neural network can be achieved by:
1. Transforming temporal patterns into spatial patterns.
2. Using a "hidden" (inner) concept in the training examples.
3. Using an explicit concept, that is, a separate neuron or group of neurons in the neural network that takes time moments as values.
The different connectionist models for representing "time," and the way they encode it, are explained below:
1. Feedforward networks may encode consecutive moments of time as input-output pairs.
2. Multilag prediction networks encode time in the input vector as well, as in (1).
3. Recurrent networks, in addition to (1) and (2), also encode time in the feedback connections.
4. Time-delay networks encode time in a similar way to (2), but some lags of input values from the past, as well as from the future, are used to calculate the output value.
An interesting type of neuron that can be used successfully for representing temporal patterns is the leaky integrator. The neuron has a binary input, a real-valued output, and a feedback connection. The output (activation) function is expressed as:
y(t + 1) = x(t), if x is present, or
y(t + 1) = f(y(t)), if x is not present,
where the feedback function f is usually an exponential decay. So, when an input impulse is present (x = 1), the output repeats it (y = 1). When the input signal is not present, the output value decreases over time, but a "track" of the last input signal is kept for at least some time intervals. The neuron "remembers" an event x for some time but "forgets" about it in the distant future. A collection of leaky integrators can be used, each representing one event happening over time, with all of them together able to represent complex temporal patterns of time correlations between events. If another neural network, for example ART, is linked to the set of leaky integrators, it can learn dynamic categories, which are defined by the time correlations between the events.
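A minimal sketch of a leaky-integrator neuron, assuming a simple geometric decay for the feedback function f (the decay factor 0.8 and the spike train are illustrative choices, not values from the text):

```python
def leaky_integrator(inputs, decay=0.8):
    """Leaky-integrator neuron: the output repeats a present input spike
    (x = 1) and otherwise decays, keeping a fading "track" of the last
    event.  Implements y(t+1) = x(t) if x is present, else f(y(t)),
    with the assumed feedback function f(y) = decay * y."""
    y, outputs = 0.0, []
    for x in inputs:
        y = 1.0 if x == 1 else decay * y
        outputs.append(y)
    return outputs

# Spikes at t = 0 and t = 4; between them the output decays toward 0.
spikes = [1, 0, 0, 0, 1, 0, 0]
trace = leaky_integrator(spikes)
# After each spike the output resets to 1.0, then shrinks by 0.8 per step.
```

A bank of such neurons, one per event, gives the set of decaying traces from which a network such as ART could learn time-correlated categories.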

Questions
Q1: How can space be represented in a neural network?
Q2: Discuss the various connectionist models for representing "time."

Pattern Recognition and Classification


Introduction: Both supervised and unsupervised learning in neural networks have been used for pattern recognition and
classification. The steps to follow when creating a connectionist model for solving the problem are to define: (1) the set of features to be used as input variables, and (2) the neural network structure and the learning and validation methods. A crucial point in using neural networks for pattern recognition is choosing the set of features X = {x1, x2, ..., xn}, which should represent unambiguously all the patterns from a given set P = {p1, p2, ..., pm}. In a simple case, the task of character recognition, some methods use the values of the pixels from a grid onto which the pattern is projected. Other methods use other features: lines, curves, the angle the drawing hand makes with the horizontal axis picked up at some points, and so on. Some preprocessing operations may be needed to make the recognition scale-invariant, translation-invariant, and rotation-invariant. Preprocessing is a crucial task in pattern recognition. For the task of recognizing ambiguous, noisy, and ill-defined patterns, it is not recommended that primary signal elements be used, such as temporal samples of speech waveforms or the pixels of an image. Instead of using pixels as features, the patterns can be represented by fewer features. One such set consists of (Nc + Nr) features, where Nc is the number of columns in the grid and Nr is the number of rows. The value of the feature Ni for a given pattern is the number of cells in the ith row (column) crossed by the pattern. Figure 5.22 shows an MLP trained with the backpropagation algorithm for recognizing 10 digits in this way. A system that recognizes ZIP codes was developed with the use of a five-layered MLP and the backpropagation algorithm, implemented on an electronic chip. The learning set consisted of 7291 handwritten digits and 2549 printed digits in 35 different fonts.
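The (Nc + Nr) crossing-count features described above can be sketched as follows; the 5x5 grid, the example digit, and the function name are assumptions made for the illustration:

```python
import numpy as np

def crossing_features(grid):
    """Compute the (Nr + Nc) crossing-count features: for each row and
    each column of the grid, count the cells crossed by the pattern
    (non-zero pixels).  This replaces the Nr * Nc pixel features with a
    much shorter feature vector."""
    grid = np.asarray(grid)
    rows = (grid != 0).sum(axis=1)   # cells crossed in each row
    cols = (grid != 0).sum(axis=0)   # cells crossed in each column
    return np.concatenate([rows, cols])

# A 5x5 grid with a crude digit "1" drawn as a vertical stroke.
one = [[0, 0, 1, 0, 0],
       [0, 0, 1, 0, 0],
       [0, 0, 1, 0, 0],
       [0, 0, 1, 0, 0],
       [0, 0, 1, 0, 0]]
print(crossing_features(one))   # → [1 1 1 1 1 0 0 5 0 0]
```

The resulting 10-dimensional vector (instead of 25 pixels) is the kind of compact input an MLP such as the one in Figure 5.22 would be trained on.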
Other application areas for connectionist handwritten character recognition are banking, fraud detection, automated cartography, automatic data entry, and so forth. SOM and LVQ algorithms have been successfully applied to pattern recognition tasks. The training algorithm places similar input patterns into topologically close neurons on the output map.

When compared with statistical methods, connectionist methods for pattern recognition and classification have several advantages when:
1. A large number of attributes (features) describe the input data and the interdependencies between them are not known.
2. Irrelevant, contradictory, or ambiguous instances are present in the data set.
3. Noise is present in the input data.
4. The underlying distributions are unknown.

Questions
Q1: What are the steps in creating a connectionist model for solving a pattern recognition problem?
Q2: What are the advantages of connectionist methods over statistical methods?

Image Processing
Description: Three main tasks in image processing where connectionist methods can be successfully applied are: (1) image compression and restoration, (2) feature extraction, and (3) image classification. Bart Kosko (1992) shows that competitive learning techniques lead to similar, even slightly better, image compression when compared with mathematical transformations. A satisfactory restoration was achieved when a 256 x 256 black-and-white image was compressed and represented by 0.5 bit/pixel. A modified SOM can also be used for image compression. Other connectionist methods for image compression use an MLP with the backpropagation algorithm. The ability of the hidden layer to capture a unique representation of the input vectors is exploited here: the hidden layer does the compression, in this case performing a principal component analysis. An MLP with n inputs and n outputs is trained with the same patterns for inputs and outputs, where n is the dimension of the input vectors. Restoration of an image is done after transmitting the activation values of the neurons in the hidden layer. A small simplified network might have 256 inputs, 16 hidden nodes, and 256 outputs; the compression here is 0.5 bit/pixel, when one byte is assumed to represent the activation level of a hidden neuron. Better quality of the restored images can be achieved with the use of larger networks and, possibly, structured multinetwork systems, where one neural network is used for compression of only a portion of the original image. Another well-explored problem in image processing is feature extraction. Features such as contours, lines, curves, corners, junctions, roofs, ramps, and so on can be extracted from an original image. For many image-processing systems these features are enough to classify the image or to apply other processing methods successfully.
Connectionist models can be used for different types of feature extraction: region-based, where areas of the image with homogeneous properties are found in terms of boundaries; edge-based, where local discontinuities are detected first and then curves are formed; and pixel-based, which classifies pixels based on their gray levels.
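The bottleneck compression scheme described above, where an MLP is trained to reproduce its input and the hidden layer holds the compressed code, can be sketched in numpy; the network size (16-4-16 rather than 256-16-256), the toy patterns, the learning rate, and the plain gradient-descent loop are assumptions for the sketch, not the cited implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 16-pixel patterns that actually lie on a 4-D manifold,
# so a 16-4-16 bottleneck network can compress each one to 4 values.
basis = rng.normal(size=(4, 16))
codes = rng.normal(size=(200, 4))
X = np.tanh(codes @ basis)

W1 = rng.normal(0, 0.1, (16, 4))    # encoder weights (input -> hidden)
W2 = rng.normal(0, 0.1, (4, 16))    # decoder weights (hidden -> output)
lr = 0.05

def forward(X):
    H = np.tanh(X @ W1)             # hidden layer = compressed code
    return H, H @ W2                # reconstruction from the code

_, out0 = forward(X)
err0 = np.mean((out0 - X) ** 2)     # error before training

for _ in range(500):                # plain backpropagation steps
    H, out = forward(X)
    d_out = 2 * (out - X) / len(X)
    d_hid = (d_out @ W2.T) * (1 - H ** 2)   # tanh derivative
    W2 -= lr * H.T @ d_out
    W1 -= lr * X.T @ d_hid

_, out1 = forward(X)
err1 = np.mean((out1 - X) ** 2)     # error after training
```

Transmitting only the 4 hidden activations per pattern, and running the decoder at the receiving end, is the restoration step described above; the reconstruction error err1 drops well below the untrained error err0.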

Questions
Q1: What are the three main tasks in image processing to which connectionist methods can be applied?
Q2: Explain the restoration of a compressed image.
