
A NEW EFFICIENT ALGORITHM FOR REMOVAL OF HIGH DENSITY SALT AND PEPPER NOISE THROUGH A MODIFIED DBUT MEDIAN FILTER FOR VIDEO RESTORATION

ABSTRACT
It is important to remove or minimize degradations and noise in valuable ancient blurred colour images. Traditional filtering methodologies are applicable only for fixed window dimensions and do not extend to images of varying scale. In this project we propose a new technique for digital image restoration in which noise-free and noisy pixels are first classified based on empirical multiple threshold values, and median filtering is then applied, so that noise-free pixels are preserved and only noisy pixels are restored. Specifically, a novel decision-based filter, called the multiple thresholds switching (MTS) filter, is proposed to restore images corrupted by salt-pepper impulse noise. The filter is based on a detection-estimation strategy: the impulse detection algorithm is applied before the filtering process, and therefore only the noise-corrupted pixels are replaced with the estimated central noise-free ordered mean value in the current filter window. The new impulse detector, which uses multiple thresholds together with multiple-neighbourhood information of the signal in the filter window, is very precise while avoiding an undue increase in computational complexity. Extensive experimental results demonstrate that, for impulse noise suppression without smearing fine details and edges in the image, our scheme performs significantly better than many existing, well-accepted decision-based methods. The performance of the proposed algorithm is analyzed in terms of PSNR and MSE values.

LITERATURE SURVEY
Median filters based on fuzzy rules and its application to image restoration
A novel median-type filter controlled by fuzzy rules is proposed in order to remove impulsive noise from signals such as images. The median filter is well known for removing impulsive noise, but it also distorts the fine structure of signals. The filter proposed here is obtained as a weighted sum of the input signal and the output of the median filter, and the weight is set based on fuzzy rules concerning the states of the input signal sequence. Moreover, this weight is obtained optimally by a learning method, so that the mean square error of the filter output for some training signal data is minimized. Image processing results show the high performance of this filter, and the influence of the training signal on the filter performance is also examined.

Selective Removal of Impulse Noise Based on Homogeneity Level Information
We propose a decision-based, signal-adaptive median filtering algorithm for the removal of impulse noise. Our algorithm achieves accurate noise detection and high SNR measures without smearing the fine details and edges in the image. The notion of homogeneity level is defined for pixel values based on their global and local statistical properties. The co-occurrence matrix technique is used to represent the correlations between a pixel and its neighbors, and to derive the upper and lower bounds of the homogeneity level. Noise detection is performed in two stages: noise candidates are first selected using the homogeneity level, and a refining process then follows to eliminate false detections. The noise detection scheme does not use a quantitative decision measure, but uses qualitative structural information, and it is not subject to burdensome computations for optimization of the threshold values. Empirical results indicate that our scheme performs significantly better than other median filters, in terms of noise suppression and detail preservation.
A new efficient approach for the removal of impulse noise from highly corrupted images
In this paper, a novel adaptive filter, called the adaptive two-pass median (ATM) filter based on support vector machines (SVMs), is proposed to preserve more image details while effectively suppressing impulse noise for image restoration. The proposed filter is composed of a noise decision-maker and two-pass median filters. Our new approach basically uses an SVM impulse detector to judge whether the input pixel is noise or not. If a pixel is detected as corrupted, the noise-free reduction median filter is triggered to replace it; otherwise, it is kept unchanged. Then, to improve the quality of the restored image, a decision impulse filter is put to work in the second-pass filtering procedure. In terms of suppressing both fixed-valued and random-valued impulses without degrading the quality of the fine details, the results of our extensive experiments demonstrate that the proposed filter outperforms earlier median-based filters in the literature. In addition, our new filter also provides excellent robustness at various percentages of impulse noise.

Application of partition-based median type filters for suppressing noise in images
An adaptive median-based filter is proposed for removing noise from images. Specifically, the observed sample vector at each pixel location is classified into one of M mutually exclusive partitions, each of which has a particular filtering operation. The observation signal space is partitioned based on the differences defined between the current pixel value and the outputs of CWM (center weighted median) filters with variable center weights. The estimate at each location is formed as a linear combination of the outputs of those CWM filters and the current pixel value. To control the dynamic range of filter outputs, a location-invariance constraint is imposed upon each weighting vector. The weights are optimized using the constrained LMS (least mean square) algorithm. Recursive implementation of the new filter is then addressed. The new technique consistently outperforms other median-based filters in suppressing both random-valued and fixed-valued impulses, and it also works satisfactorily in reducing Gaussian noise as well as mixed Gaussian and impulse noise.
A noise-filtering method using a local information measure
A nonlinear noise-filtering method for image processing, based on the entropy concept, is developed and compared to the well-known median filter and to the center weighted median (CWM) filter. The performance of the proposed method is evaluated through subjective and objective criteria. It is shown that this method performs better than the classical median filter for different types of noise and can perform better than the CWM filter in some cases.

Progressive Switching Median Filter for the Removal of Impulse Noise from Highly Corrupted Images

A new median-based filter, the progressive switching median (PSM) filter, is proposed to restore images corrupted by salt-pepper impulse noise. The algorithm is developed around the following two main points: 1) switching scheme: an impulse detection algorithm is used before filtering, so only a proportion of all the pixels will be filtered; and 2) progressive method: both the impulse detection and the noise filtering procedures are progressively applied through several iterations. Simulation results demonstrate that the proposed algorithm is better than traditional median-based filters and is particularly effective in cases where the images are very highly corrupted.

2. INTRODUCTION
In image processing it is usually necessary to perform a high degree of noise reduction in an image before performing higher-level processing steps, such as edge detection. The median filter is a non-linear digital filtering technique, often used to remove noise from images or other signals. The idea is to examine a sample of the input and decide if it is representative of the signal. This is performed using a window consisting of an odd number of samples. The values in the window are sorted into numerical order; the median value, the sample in the center of the window, is selected as the output. The oldest sample is discarded, a new sample acquired, and the calculation repeats. Median filtering is a common step in image processing. It is particularly useful for reducing speckle noise and salt and pepper noise, and its edge-preserving nature makes it useful in cases where edge blurring is undesirable.

Image synthesis is the process of creating new images from some form of image description. The kinds of images that are typically synthesized include: test patterns (scenes with simple two-dimensional geometric shapes); image noise (images containing random pixel values, usually generated from specific parameterized distributions); and computer graphics (scenes or images based on geometric shape descriptions, often three-dimensional but sometimes two-dimensional). Synthetic images are often used to verify the correctness of operators by applying them to known images. They are also often used for teaching purposes, as the operator output on such images is generally `clean', whereas noise and uncontrollable pixel distributions in real images make it harder to demonstrate unambiguous results. The images could be binary, grey level or colour.

2.1 NOISE

In common use the word noise means unwanted sound or noise pollution.
In electronics, noise can refer to the electronic signal corresponding to acoustic noise (in an audio system) or to the electronic signal corresponding to the (visual) noise commonly seen as 'snow' on a degraded television or video image. In signal processing or computing it can be considered data without meaning; that is, data that is not being used to transmit a signal, but is simply produced as an unwanted by-product of other activities. In information theory, however, noise is still considered to be information. In a broader sense, film grain or even advertisements in web pages can be considered noise. Noise can block, distort, or change the meaning of a message in both human and electronic communication. In many of these areas, the special case of thermal noise arises, which sets a fundamental lower limit to what can be measured or signaled, and is related to basic physical processes at the molecular level described by well-known simple formulae.

2.2 NOISE GENERATION

2.2.1 BRIEF DESCRIPTION

Noises are random background events which have to be dealt with in every system processing real signals. They are not part of the ideal signal and may be caused by a wide range of sources, e.g. variations in the detector sensitivity, environmental variations, the discrete nature of radiation, transmission or quantization errors, etc. It is also possible to treat irrelevant scene details as if they were image noise (e.g. surface reflectance textures). The characteristics of noise depend on their source, as does the operator which best reduces their effects. Many image processing packages contain operators to artificially add noise to an image. Deliberately corrupting an image with noise allows us to test the resistance of an image processing operator to noise and assess the performance of various noise filters.

HOW IT WORKS

Noise can generally be grouped into two classes:

o noise which is independent of the image data, and
o noise which is dependent on the image data.

Image-independent noise can often be described by an additive noise model, where the recorded image f(i,j) is the sum of the true image s(i,j) and the noise n(i,j):

f(i,j) = s(i,j) + n(i,j)    (eq 2.1)

The noise n(i,j) is often zero-mean and described by its variance σ_n². The impact of the noise on the image is often described by the signal-to-noise ratio (SNR), which is given by

SNR = σ_s / σ_n = sqrt( σ_f² / σ_n² − 1 )    (eq 2.2)

where σ_s² and σ_f² are the variances of the true image and the recorded image, respectively.
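Under the additive model above, and assuming the noise is independent of the true image (so that σ_f² ≈ σ_s² + σ_n²), the SNR of eq 2.2 can be estimated numerically. The following pure-Python sketch, with a synthetic 1-D signal, is an illustration only and not part of the proposed method:

```python
import random

def variance(pixels):
    """Population variance of a flat list of pixel values."""
    m = sum(pixels) / len(pixels)
    return sum((p - m) ** 2 for p in pixels) / len(pixels)

def snr(true_pixels, recorded_pixels):
    """Estimate SNR = sigma_s / sigma_n via eq 2.2, from the recorded
    image's variance and the noise variance (noise = recorded - true)."""
    var_f = variance(recorded_pixels)
    var_n = variance([r - s for r, s in zip(recorded_pixels, true_pixels)])
    return (var_f / var_n - 1) ** 0.5

# synthetic 1-D "image": a repeating pattern with sigma_s = 80
random.seed(0)
s = [100 + 40 * ((i % 7) - 3) for i in range(10000)]
f = [v + random.gauss(0, 5) for v in s]   # zero-mean noise, sigma_n = 5
ratio = snr(s, f)                          # should be close to 80 / 5 = 16
```

Because the signal and noise variances are estimated from samples, the computed ratio is only approximately 16; the approximation improves with the number of pixels.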

In many cases, additive noise is evenly distributed over the frequency domain (i.e. white noise), whereas an image contains mostly low-frequency information. Hence, the noise is dominant for high frequencies and its effects can be reduced using some kind of lowpass filter. This can be done either with a frequency filter or with a spatial filter. (Often a spatial filter is preferable, as it is computationally less expensive than a frequency filter.) In the second case of data-dependent noise (e.g. arising when monochromatic radiation is scattered from a surface whose roughness is of the order of a wavelength, causing wave interference which results in image speckle), it can be possible to model the noise with a multiplicative, or non-linear, model. These models are mathematically more complicated; hence, if possible, the noise is assumed to be data independent.

DETECTOR NOISE

One kind of noise which occurs in all recorded images to a certain extent is detector noise. This kind of noise is due to the discrete nature of radiation, i.e. the fact that each imaging system is recording an image by counting photons. Allowing some assumptions (which are valid for many applications), this noise can be modeled with an independent, additive model where the noise n(i,j) has a zero-mean Gaussian distribution described by its standard deviation σ, or variance σ². (The 1-D Gaussian distribution has the form shown in Figure 1.) This means that each pixel in the noisy image is the sum of the true pixel value and a random, Gaussian distributed noise value.
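The additive Gaussian detector-noise model just described can be sketched in pure Python. The 8x8 uniform grey patch and the clipping to an 8-bit range are assumptions of this sketch:

```python
import random

def add_gaussian_noise(image, sigma, seed=0):
    """Add zero-mean Gaussian noise of standard deviation sigma to each
    pixel, clipping the result to the 8-bit range [0, 255]."""
    rng = random.Random(seed)
    return [[min(255, max(0, int(round(p + rng.gauss(0, sigma)))))
             for p in row] for row in image]

patch = [[128] * 8 for _ in range(8)]   # a uniform grey test patch
noisy = add_gaussian_noise(patch, sigma=8)
```

Each noisy pixel is the true value 128 plus an independent Gaussian sample, exactly as in the model above.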

Figure 1: 1-D Gaussian distribution with mean 0 and standard deviation 1

SALT AND PEPPER NOISE

Another common form of noise is data drop-out noise (commonly referred to as intensity spikes, speckle or salt and pepper noise). Here, the noise is caused by errors in the data transmission. The corrupted pixels are either set to the maximum value (which looks like snow in the image) or have single bits flipped over. In some cases, single pixels are set alternately to zero or to the maximum value, giving the image a `salt and pepper' like appearance. Unaffected pixels always remain unchanged. The noise is usually quantified by the percentage of pixels which are corrupted.

GUIDELINES FOR USE

In this section we will show some examples of images corrupted with different kinds of noise and give a short overview of which noise reduction operators are most appropriate. A fuller discussion of the effects of the operators is given in the corresponding worksheets.

GAUSSIAN NOISE

We will begin by considering additive noise with a Gaussian distribution. Adding Gaussian noise with σ = 8 produces a visibly degraded image; increasing σ to 13 and then to 20 yields progressively noisier results.
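The drop-out corruption just described can be sketched as follows. This is a minimal illustration: the 50/50 split between pepper (0) and salt (255), the patch contents, and the corruption probability are all assumptions of the sketch:

```python
import random

def add_salt_pepper(image, p, seed=1):
    """Corrupt each pixel independently with probability p: roughly half
    of the corrupted pixels become 0 (pepper) and half become 255 (salt).
    Unaffected pixels remain unchanged."""
    rng = random.Random(seed)
    out = []
    for row in image:
        new_row = []
        for v in row:
            if rng.random() < p:
                new_row.append(0 if rng.random() < 0.5 else 255)
            else:
                new_row.append(v)
        out.append(new_row)
    return out

patch = [[100] * 20 for _ in range(20)]
noisy = add_salt_pepper(patch, p=0.10)
corrupted = sum(v != 100 for row in noisy for v in row)   # roughly 10% of 400
```

The fraction of changed pixels fluctuates around p, which matches the convention above of quantifying the noise by the percentage of corrupted pixels.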


Gaussian noise can be reduced using a spatial filter. However, it must be kept in mind that when smoothing an image we not only reduce the noise, but also the fine-scaled image details, because they also correspond to the blocked high frequencies. The most effective basic spatial filtering techniques for noise removal include mean filtering, median filtering and Gaussian smoothing. The Crimmins Speckle Removal filter can also produce good noise removal. More sophisticated algorithms which utilize statistical properties of the image and/or noise fields also exist for noise removal. For example, adaptive smoothing algorithms may be defined which adjust the filter response according to local variations in the statistical properties of the data.

SALT AND PEPPER NOISE

In the following examples, images have been corrupted with various kinds and amounts of drop-out noise. In one example, pixels have been set to 0 or 255 with probability p = 1%; in another, pixel bits were flipped with p = 3%; and in a third, 5% of the pixels (whose locations are chosen at random) are set to the maximum value, producing a snowy appearance. For this kind of noise, conventional lowpass filtering, e.g. mean filtering or Gaussian smoothing, is relatively unsuccessful because the corrupted pixel value can vary significantly from the original, and therefore the mean can be significantly different from the true value. The median filter removes drop-out noise more efficiently and at the same time better preserves the edges and small details in the image. Conservative smoothing can be used to obtain a result which preserves a great deal of high-frequency detail, but it is only effective at reducing low levels of noise.

MEDIAN FILTERING

What is noise? Noise is any undesirable signal. Noise is everywhere and thus we have to learn to live with it. Noise gets introduced into the data via any electrical system used for storage, transmission, and/or processing.
In addition, nature always plays a "noisy" trick or two with the data under observation. When encountering an image corrupted with noise you will want to improve its appearance for a specific application. The techniques applied are application-oriented, and the different procedures are related to the types of noise introduced to the image. Some examples of noise are: Gaussian or white, Rayleigh, shot or impulse, periodic, sinusoidal or coherent, uncorrelated, and granular.

When performing median filtering, each pixel is determined by the median value of all pixels in a selected neighborhood (mask, template, window). The median value m of a population (the set of pixels in a neighborhood) is that value for which half of the population has smaller values than m, and the other half has larger values than m. Median filters belong to the class of edge-preserving smoothing filters, which are non-linear filters: they smooth the data while keeping the small and sharp details. Median filtering is a simple and very effective noise removal process. Its performance is particularly good for removing shot noise, which consists of strong, spike-like isolated values. A typical demonstration shows the original image and the same image after it has been corrupted by shot noise at 10% (meaning that 10% of its pixels were replaced by full white pixels), together with the median filtering results using 3x3 and 5x5 windows, three iterations of a 3x3 median filter applied to the noisy image, and, for comparison, the result of applying a 5x5 mean filter to the noisy image.

MEDIAN FILTER

Common Names: Median filtering, Rank filtering

Brief Description
The median filter is normally used to reduce noise in an image, somewhat like the mean filter. However, it often does a better job than the mean filter of preserving useful detail in the image.

How It Works
Like the mean filter, the median filter considers each pixel in the image in turn and looks at its nearby neighbours to decide whether or not it is representative of its surroundings. Instead of simply replacing the pixel value with the mean of neighbouring pixel values, it replaces it with the median of those values. The median is calculated by first sorting all the pixel values from the surrounding neighbourhood into numerical order and then replacing the pixel being considered with the middle pixel value.
(If the neighbourhood under consideration contains an even number of pixels, the average of the two middle pixel values is used.) Figure 1 illustrates an example calculation.
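The median computation just described can be sketched in pure Python. The eight neighbour values below are assumed for illustration; only the behaviour of the central 150 outlier (replaced by the median 124) mirrors the example in Figure 1:

```python
def neighbourhood_median(values):
    """Sort the neighbourhood and take the middle value; for an even
    number of values, average the two middle values."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

# a 3x3 neighbourhood whose centre (150) is an outlier
window = [124, 126, 127, 120, 150, 125, 115, 119, 123]
print(neighbourhood_median(window))   # 124
```

Sorting the nine values and picking the fifth discards the outlier entirely, which is why the median is so robust to shot noise.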

Figure 1: Calculating the median value of a pixel neighbourhood. As can be seen, the central pixel value of 150 is rather unrepresentative of the surrounding pixels and is replaced with the median value: 124. A 3x3 square neighbourhood is used here; larger neighbourhoods will produce more severe smoothing.

Mean Filter

Common Names: Mean filtering, Smoothing, Averaging, Box filtering

Brief Description
Mean filtering is a simple, intuitive and easy-to-implement method of smoothing images, i.e. reducing the amount of intensity variation between one pixel and the next. It is often used to reduce noise in images.

How It Works
The idea of mean filtering is simply to replace each pixel value in an image with the mean (`average') value of its neighbours, including itself. This has the effect of eliminating pixel values which are unrepresentative of their surroundings. Mean filtering is usually thought of as a convolution filter. Like other convolutions it is based around a kernel, which represents the shape and size of the neighbourhood to be sampled when calculating the mean. Often a 3x3 square kernel is used, as shown in Figure 1, although larger kernels (e.g. 5x5 squares) can be used for more severe smoothing. (Note that a small kernel can be applied more than once in order to produce a similar - but not identical - effect as a single pass with a large kernel.)

Figure 1: 3x3 averaging kernel often used in mean filtering
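A minimal mean-filter sketch in pure Python follows. Leaving the border pixels unprocessed is an implementation choice of this sketch, not part of the definition:

```python
def mean_filter(image, k=3):
    """k x k mean filter: each interior pixel becomes the average of the
    k x k neighbourhood centred on it (borders are left unchanged)."""
    h, w = len(image), len(image[0])
    e = k // 2
    out = [row[:] for row in image]
    for y in range(e, h - e):
        for x in range(e, w - e):
            s = sum(image[y + dy][x + dx]
                    for dy in range(-e, e + 1) for dx in range(-e, e + 1))
            out[y][x] = s / (k * k)
    return out

img = [[10, 10, 10],
       [10, 100, 10],
       [10, 10, 10]]
print(mean_filter(img)[1][1])   # (8*10 + 100) / 9 = 20.0
```

Note how the single unrepresentative value 100 is pulled toward its surroundings but still raises the local average, in contrast to the median filter, which would discard it completely.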

Computing the straightforward convolution of an image with this kernel carries out the mean filtering process.

Crimmins Speckle Removal

Common Names: Crimmins Speckle Removal

Brief Description
Crimmins Speckle Removal reduces speckle from an image using the Crimmins complementary hulling algorithm. The algorithm has been specifically designed to reduce the intensity of salt and pepper noise in an image. Increased iterations of the algorithm yield increased levels of noise removal, but also introduce a significant amount of blurring of high-frequency details.

How It Works
Crimmins Speckle Removal works by passing an image through a speckle-removing filter which uses the complementary hulling technique to reduce the speckle index of that image. The algorithm uses a non-linear noise reduction technique which compares the intensity of each pixel in an image with those of its 8 nearest neighbours and, based upon the relative values, increments or decrements the value of the pixel in question such that it becomes more representative of its surroundings. The noisy pixel alteration (and detection) procedure used by Crimmins is more complicated than the ranking procedure used by the non-linear median filter. It involves a series of pairwise operations in which the value of the `middle' pixel within each neighbourhood window is compared, in turn, with each set of neighbours (N-S, E-W, NW-SE, NE-SW) in a search for intensity spikes. The operation of the algorithm is illustrated in Figure 1 and described in more detail below.

Figure 1 Crimmins Speckle Removal Algorithm.

For each iteration and for each pair of pixel neighbours, the entire image is sent to a Pepper Filter and a Salt Filter, as shown above. In the example case, the Pepper Filter is first called to determine whether each image pixel is darker_than its northern neighbour, i.e. darker by more than 2 intensity levels. Where this condition proves true, the intensity value of the pixel under examination is incremented twice (lightened); otherwise no change is effected. Once these changes have been recorded, the entire image is passed through the Pepper Filter again and the same series of comparisons is made between the current pixel and its southern neighbour. This sequence is repeated by the Salt Filter, where the conditions lighter_than and darken are, again, instantiated using 2 intensity levels.

Note that, over several iterations, the effects of smoothing in this way propagate out from the intensity spike to infect neighbouring pixels. In other words, the algorithm smoothes by reducing the magnitude of a locally inconsistent pixel, as well as increasing the magnitude of pixels in the neighbourhood surrounding the spike. It is important to notice that a spike is defined here as a pixel whose value is more than 2 intensity levels different from its surroundings. This means that after 2 iterations of the algorithm, the immediate neighbours of such a spike may themselves become spikes with respect to pixels lying in a wider neighbourhood.

Images are often corrupted by impulse noise when they are recorded by noisy sensors or sent over noisy transmission channels. Many impulse noise removal techniques have been developed to suppress impulse noise while preserving image details [1-7]. The median filter, the most popular kind of nonlinear filter, has been extensively used for the removal of impulse noise due to its simplicity. However, the median filter tends to blur fine details and lines in many cases. To avoid damage to good pixels, decision-based median filters realized by thresholding operations have been introduced in some recently published works [8-16]. In general, the decision-based filtering procedure consists of the following two steps: an impulse detector that classifies the input pixels as either noise-corrupted or noise-free, and a noise reduction filter that modifies only those pixels that are classified as noise-corrupted. The main issue in the design of a decision-based median filter is how to extract features from the local information and establish the decision rule, in such a way as to distinguish noise-free pixels from contaminated ones as precisely as possible. In addition, to achieve high noise reduction with fine detail preservation, it is also crucial to apply the optimal threshold value to the local signal statistics. Usually a trade-off exists between noise reduction and detail preservation. In this paper, we propose a novel decision-based filter, named the multiple thresholds switching (MTS) filter, to overcome the drawbacks of the above methods. Basically, the proposed filter takes a new impulse detection strategy to build the decision rule and implement the threshold function. The new impulse detection approach, based on multiple thresholds, considers multiple-neighborhood information within the filter window to judge whether impulse noise exists. The new impulse detector is very precise, while avoiding an undue increase in computational complexity.
The impulse detection algorithm is used before the filtering process starts, and therefore only the noise-corrupted pixels are replaced with the estimated central noise-free ordered mean value in the current filter window. Extensive experimental results demonstrate that the new filter is capable of preserving more details while effectively suppressing impulse noise in corrupted images.
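The two-step detect-then-filter strategy described above can be sketched as follows. This is a deliberately simplified stand-in, not the MTS filter itself: the detector here merely flags extreme intensities, and the `low`/`high` thresholds are illustrative single values, not the multiple MTS thresholds:

```python
def decision_based_filter(image, low=10, high=245):
    """Two-step decision-based scheme: (1) flag pixels near the extremes
    of the intensity range as salt/pepper candidates; (2) replace only
    flagged pixels with the median of the noise-free pixels in their
    3x3 window.  Noise-free pixels pass through untouched."""
    h, w = len(image), len(image[0])
    is_noise = lambda v: v <= low or v >= high
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if not is_noise(image[y][x]):
                continue
            clean = [image[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if 0 <= y + dy < h and 0 <= x + dx < w
                     and not is_noise(image[y + dy][x + dx])]
            if clean:
                out[y][x] = sorted(clean)[len(clean) // 2]
    return out

img = [[100, 102, 101],
       [ 99, 255, 103],   # 255: positive (salt) impulse
       [101, 100,   0]]   # 0: negative (pepper) impulse
restored = decision_based_filter(img)
```

Because noise-free pixels are never modified, this scheme avoids the detail blurring of the plain median filter; the quality of the result then rests entirely on the precision of the detector, which is what the MTS multiple-threshold design targets.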

One of the most intriguing questions in image processing is the problem of recovering the desired or perfect image from a degraded version. In many instances one has the feeling that the degradations in the image are such that relevant information is close to being recognizable, if only the image could be sharpened just a little.

Blurring is a form of bandwidth reduction of the image due to an imperfect image formation process. It can be caused by relative motion between the camera and the original scene, or by an optical system which is out of focus.

1.1 What is Image Restoration?

Image restoration - suppressing image degradation using knowledge about its nature. Most image restoration methods are based on convolution applied globally to the whole image.

Degradation causes:
o defects of optical lenses,
o nonlinearity of the electro-optical sensor,
o graininess of the film material,
o relative motion between an object and the camera,
o wrong focus,
o atmospheric turbulence in remote sensing or astronomy, etc.

The objective of image restoration is to reconstruct the original image from its degraded version

1.2 Literature Survey

Image restoration techniques fall into two groups:

Deterministic methods - applicable to images with little noise and a known degradation function.
o The original image is obtained from the degraded one by a transformation inverse to the degradation.

Stochastic techniques - the best restoration is sought according to some stochastic criterion, e.g., a least squares method.
o In some cases the degradation transformation must be estimated first.

It is advantageous to know the degradation function explicitly. The better this knowledge is, the better are the results of the restoration

There are three typical degradations with a simple function:
o relative constant-speed movement of the object with respect to the camera,
o wrong lens focus, and
o atmospheric turbulence.

Image restoration deals with methods to improve the quality of blurred images. It especially deals with the recovery of information that was lost to the human eye during some degradation process.

For a better understanding of the underlying processes we make use of a degradation model:

G(u,v) = H(u,v) · F(u,v) + N(u,v) .............(eqn.1.1)

Fig.(1.1)

where G and F are the Fourier transforms of the degraded image g and the input image f, respectively. H is called the degradation function, and N is a noise term modeled as an additive value. 1.3 Requirements for Restoration

The successful restoration of blurred image requires accurate estimation of PSF parameters. In our project, we deal with images, which are blurred by the relative motion between the imaging system and the original scene. Thus, given a motion blurred and noisy image, the task is to identify the point spread function parameters and apply the restoration filter to get an approximation to the original scene.

Parameter estimation is based on the observation that image characteristics along the direction of motion are different than the characteristics in other directions. The PSF of motion blur is characterized by two parameters namely, blur direction and blur length.
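For the simplest case of the two PSF parameters just named (horizontal motion, i.e. blur angle 0), the PSF can be sketched as a uniform 1-D kernel of the given blur length. This is an illustrative sketch, not the project's estimation procedure:

```python
def motion_blur_psf(length):
    """1-D horizontal motion-blur PSF of the given blur length: the
    energy of a point source is spread uniformly over `length` pixels
    along the motion direction (blur angle 0 assumed)."""
    return [1.0 / length] * length

def blur_row(row, psf):
    """Convolve one image row with the PSF (zero padding at the border)."""
    n, k = len(row), len(psf)
    return [sum(psf[j] * row[i - j] for j in range(k) if 0 <= i - j < n)
            for i in range(n)]

psf = motion_blur_psf(3)
blurred = blur_row([0, 0, 90, 0, 0], psf)   # the point spreads over 3 pixels
```

A non-zero blur angle would rotate this kernel into a 2-D line of the same length; the total intensity is preserved either way, since the PSF sums to 1.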

1.4 Blur Parameters

Point Spread Function (PSF): When the intensity of the observed point image is spread over several pixels, this is known as the Point Spread Function (PSF).

Fig.(1.2): Original vs. Degraded (effect of the PSF)

Length: Blur length is the number of pixels by which the image is degraded. It is the number of pixel positions by which a pixel is shifted from its original position.

(Figure: Original vs. Degraded, illustrating blur length.)

Angle: Blur Angle is the angle at which the image is degraded.

(Figure: Original vs. Degraded, illustrating blur angle.)

1.5 Types of Noise

Salt & Pepper: As the name suggests, this noise looks like salt and pepper. It gives the effect of "On and Off" pixels.

(Figure: Original vs. Degraded, salt & pepper noise.)

Gaussian: This is Gaussian White Noise. It requires mean and variance as the additional inputs.

(Figure: Original vs. Degraded, Gaussian noise.)

Poisson: Poisson noise is not an artificial noise. Rather than being added artificially, it arises from the statistics of the data itself.

(Figure: Original vs. Degraded, Poisson noise.)

Speckle: It is a type of multiplicative noise. It is added to the image using the equation J=I+n*I, where n is uniformly distributed random noise with mean 0 and variance V.
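The multiplicative model J = I + n*I above can be sketched directly. Drawing n uniformly from [-sqrt(3V), +sqrt(3V)] is an assumption of this sketch chosen so that n has mean 0 and variance V:

```python
import random

def add_speckle(image, variance, seed=2):
    """Multiplicative speckle noise J = I + n*I, where n is uniformly
    distributed with mean 0 and variance V.  A uniform variable on
    [-a, a] has variance a^2 / 3, so a = sqrt(3V) gives variance V."""
    a = (3 * variance) ** 0.5
    rng = random.Random(seed)
    return [[v + rng.uniform(-a, a) * v for v in row] for row in image]

patch = [[100.0] * 10 for _ in range(10)]
speckled = add_speckle(patch, variance=0.04)
```

Because the noise term is multiplied by the pixel value, brighter regions receive proportionally larger disturbances, which is the defining property of multiplicative noise.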

(Figure: Original vs. Degraded, speckle noise.)

MTS Filter

Before introducing the proposed MTS filter, some notation must be defined. Let the filter window w(k) (a sliding window) of size 2n + 1 cover the image X from left to right, top to bottom, in raster-scan fashion:

w(k) = (x-n(k), ..., x-1(k), x0(k), x1(k), ..., xn(k)), ...........(eqn.2.1)

where x0(k) (or x(k)) is the original central vector-valued pixel at location k. In this work, w(k) is centered around x0(k) with n = 4:

w(k) = (x-4(k), ..., x-1(k), x0(k), x1(k), ..., x4(k)). ...........(eqn.2.2)

Impulse noise can appear because of a random bit error on a communication channel. In this work, the source images are corrupted only by salt-pepper impulse noise, which means a noisy pixel has a high value due to a positive impulse, or a low value due to a negative impulse.
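A sketch of this notation: the 2n + 1 = 9 element window w(k) can be obtained by flattening the 3x3 neighbourhood of the central pixel in raster order. Border handling by pixel replication and the fixed-valued 0/255 salt-pepper test are assumptions of this sketch:

```python
def filter_window(image, y, x):
    """Build the 9-element window w(k) of eqn. 2.2 by flattening the 3x3
    neighbourhood of the central pixel x0(k) in raster order; the image
    border is handled by pixel replication (an assumption here)."""
    h, w = len(image), len(image[0])
    vals = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            yy = min(max(y + dy, 0), h - 1)
            xx = min(max(x + dx, 0), w - 1)
            vals.append(image[yy][xx])
    return vals   # vals[4] is the central pixel x0(k)

def is_salt_pepper(x0, lo=0, hi=255):
    """Fixed-valued salt-pepper impulses take only the extreme values."""
    return x0 == lo or x0 == hi

img = [[12, 14, 255],
       [11, 0, 13],
       [12, 15, 14]]
win = filter_window(img, 1, 1)   # win[4] == 0, a negative (pepper) impulse
```

The salt-pepper assumption (noisy pixels sit at the extremes of the range) is what makes the detection stage tractable: candidate impulses stand out sharply against the surrounding noise-free values of the window.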

2.1 Software Requirements

Software: Matlab 6.5 or above (simulation tool)
Operating System: Windows 2000/XP
Digital Images: JPEG/BMP format images

EXISTING METHODS:

o Standard Median Filter
o Adaptive Median Filter
o Tolerance based switched adaptive median filter
o Decision Based Algorithm

Standard Median Filter:The main idea of the median filter is to run through the signal entry by entry, replacing each entry with the median of neighboring entries. The pattern of neighbors is called the "window", which slides, entry by entry, over the entire signal. For 1D signals, the most obvious window is just the first few preceding and following entries, whereas for 2D (or higher-dimensional) signals such as images, more complex window patterns are possible (such as "box" or "cross" patterns). Note that if the window has an odd number of entries, then the median is simple to define: it is just the middle value after all the entries in the window are sorted numerically. For an even number of entries, there is more than one possible median, see median for more details. Worked 1D example To demonstrate, using a window size of three with one entry immediately preceding and following each entry, a median filter will be applied to the following simple 1D signal: x = [2 80 6 3] So, y[1] y[2] y[3] = = the = Median[2 Median[80 median filtered Median[2 80 6 6] 3] = = output 2 signal 80] Median[2 Median[3 6 6 y will = 80] 80] = = be: 2 6 6

y[4] = Median[6 3 3] = Median[3 3 6] = 3

i.e. y = [2 6 6 3].

Boundary issues
Note that, in the example above, because there is no entry preceding the first value, the first value is repeated (as is the last value) to obtain enough entries to fill the window. This is one way of handling missing window entries at the boundaries of the signal, but there are other schemes with different properties that might be preferred in particular circumstances:

Avoid processing the boundaries, with or without cropping the signal or image boundary afterwards,

Fetching entries from other places in the signal. With images for example, entries from the far horizontal or vertical boundary might be selected,

Shrinking the window near the boundaries, so that every window is full.
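The worked 1D example and the repeated-boundary scheme above can be reproduced with a short sketch. This Python fragment is an illustration (the project's MATLAB environment would normally be used instead); it assumes a fixed window of three:

```python
def median3(a, b, c):
    """Median of three values: the middle element after sorting."""
    return sorted((a, b, c))[1]

def median_filter_1d(x):
    """1-D median filter, window of 3; boundaries handled by
    repeating the first and last values, as in the worked example."""
    padded = [x[0]] + list(x) + [x[-1]]
    return [median3(padded[i], padded[i + 1], padded[i + 2])
            for i in range(len(x))]

print(median_filter_1d([2, 80, 6, 3]))  # -> [2, 6, 6, 3]
```

The output matches the worked example: the isolated spike 80 is removed while the overall shape of the signal is kept.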

2D median filter pseudo code
Code for a simple 2D median filter algorithm might look like this:

allocate outputPixelValue[image width][image height]
edgex := (window width / 2) rounded down
edgey := (window height / 2) rounded down
for x from edgex to image width - edgex
    for y from edgey to image height - edgey
        allocate colorArray[window width][window height]
        for fx from 0 to window width
            for fy from 0 to window height
                colorArray[fx][fy] := inputPixelValue[x + fx - edgex][y + fy - edgey]
        sort all entries in colorArray[][]
        outputPixelValue[x][y] := colorArray[window width / 2][window height / 2]

Note that this algorithm:

Processes one color channel only, Takes the "not processing boundaries" approach (see above discussion about boundary issues).
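The pseudocode above translates almost line for line into Python. This sketch is illustrative only: it processes a single channel, leaves boundary pixels unprocessed (copied through unchanged), and assumes a list-of-lists image representation:

```python
def median_filter_2d(img, win=3):
    """Direct translation of the 2D median-filter pseudocode:
    one channel, boundaries not processed (copied unchanged)."""
    h, w = len(img), len(img[0])
    edge = win // 2
    out = [row[:] for row in img]          # keep original boundary pixels
    for y in range(edge, h - edge):
        for x in range(edge, w - edge):
            # gather the win x win neighborhood around (x, y)
            window = [img[y + fy - edge][x + fx - edge]
                      for fy in range(win) for fx in range(win)]
            window.sort()
            out[y][x] = window[len(window) // 2]   # middle of sorted window
    return out

img = [[10, 10, 10],
       [10, 255, 10],
       [10, 10, 10]]
print(median_filter_2d(img)[1][1])  # -> 10 (the salt impulse is removed)
```

As the surrounding text notes, sorting the full window at every pixel is the "vanilla" approach; selection algorithms or running histograms are faster for large images.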

Use of a median filter to improve an image severely corrupted by defective pixels.

Algorithm implementation issues

Typically, by far the majority of the computational effort and time is spent on calculating the median of each window. Because the filter must process every entry in the signal, for large signals such as images, the efficiency of this median calculation is a critical factor in determining how fast the algorithm can run. The "vanilla" implementation described above sorts every entry in the window to find the median; however, since only the middle value in a list of numbers is required, selection algorithms can be much more efficient. Furthermore, some types of signals (very often the case for images) use whole number representations: in these cases, histogram medians can be far more efficient because it is simple to update the histogram from window to window, and finding the median of a histogram is not particularly onerous.
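The histogram-based median mentioned above can be illustrated for a 1-D, 8-bit signal. This is a sketch of the idea rather than an optimized implementation; the point is the incremental update (one insertion and one removal per step) instead of re-sorting the whole window:

```python
def histogram_median_filter(x, win=3):
    """Sliding-window median for 8-bit samples using a 256-bin histogram.
    Boundary samples are left unprocessed for simplicity."""
    n = len(x)
    edge = win // 2
    hist = [0] * 256
    out = list(x)
    for v in x[:win]:                  # histogram of the first full window
        hist[v] += 1
    mid = win // 2 + 1                 # 1-based rank of the median

    def median_from_hist():
        # walk the histogram until the cumulative count reaches the rank
        count = 0
        for value in range(256):
            count += hist[value]
            if count >= mid:
                return value

    for i in range(edge, n - edge):
        out[i] = median_from_hist()
        if i + edge + 1 < n:           # slide the window one sample right
            hist[x[i - edge]] -= 1     # drop the leftmost sample
            hist[x[i + edge + 1]] += 1 # add the incoming sample
    return out

print(histogram_median_filter([2, 80, 6, 3]))  # -> [2, 6, 6, 3]
```

For a 2-D filter the same idea applies per row, which is why histogram medians are attractive for whole-number image data.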

Edge preservation properties
Median filtering is one kind of smoothing technique, as is linear Gaussian filtering. All smoothing techniques are effective at removing noise in smooth patches or smooth regions of a signal, but adversely affect edges. Often, though, at the same time as reducing the noise in a signal, it is important to preserve the edges. Edges are of critical importance to the visual appearance of images, for example. For small to moderate levels of (Gaussian) noise, the median filter is demonstrably better than Gaussian blur at removing noise whilst preserving edges for a given, fixed window size.[1] However, its performance is not that much better than Gaussian blur for high levels of noise, whereas, for speckle noise and salt and pepper noise (impulsive noise), it is particularly effective.[2] Because of this, median filtering is very widely used in digital image processing.

What is a median filter and what does it do?
Median filtering follows this basic prescription. The median filter is normally used to reduce noise in an image, somewhat like the mean filter. However, it often does a better job than the mean filter of preserving useful detail in the image. It belongs to the class of edge-preserving smoothing filters, which are non-linear filters. This means that, for two images A(x) and B(x), median[A(x) + B(x)] is in general not equal to median[A(x)] + median[B(x)].

These filters smooth the data while keeping the small and sharp details. The median is just the middle value of all the values of the pixels in the neighborhood. Note that this is not the same as the average (or mean); instead, the median has half the values in the neighborhood larger and half smaller. The median is a stronger "central indicator" than the average. In particular, the median is hardly affected by a small number of discrepant values among the pixels in the neighborhood. Consequently, median filtering is very effective at removing various kinds of noise. Figure 1 illustrates an example of median filtering.

Figure 1

Like the mean filter, the median filter considers each pixel in the image in turn and looks at its nearby neighbors to decide whether or not it is representative of its surroundings. Instead of simply replacing the pixel value with the mean of neighboring pixel values, it replaces it with the median of those values. The median is calculated by first sorting all the pixel values from the surrounding neighborhood into numerical order and then replacing the pixel being considered with the middle pixel value. (If the neighborhood under consideration contains an even number of pixels, the average of the two middle pixel values is used.) Figure 2 illustrates an example calculation.

Figure 2 Calculating the median value of a pixel neighborhood. As can be seen, the central pixel value of 150 is rather unrepresentative of the surrounding pixels and is replaced with the median value: 124. A 3x3 square neighborhood is used here; larger neighborhoods will produce more severe smoothing.

What is noise? Noise is any undesirable signal. Noise is everywhere, and thus we have to learn to live with it. Noise gets introduced into the data via any electrical system used for storage, transmission, and/or processing. In addition, nature always plays a "noisy" trick or two with the data under observation. When encountering an image corrupted with noise, you will want to improve its appearance for a specific application. The techniques applied are application-oriented. Also, the different procedures are related to the types of noise introduced to the image. Some examples of noise are: Gaussian or white, Rayleigh, shot or impulse, periodic, sinusoidal or coherent, uncorrelated, and granular.

Noise Models
Noise can be characterized by its:
Probability density function (pdf): Gaussian, uniform, Poisson, etc.
Spatial properties: correlation
Frequency properties: white noise vs pink noise

Figure 3 Original Image

Figure 4 Images and histograms resulting from adding Gaussian, Rayleigh and Gamma noise to the original image.

Figure 4 (continued) Images and histograms resulting from adding Exponential, Uniform and Salt & Pepper noise to the original image.

Comparison between the median filter and the average filter
Sometimes the median filter and the average filter are confused, so let's do some comparison between them. The median filter is a non-linear tool, while the average filter is a linear one. In smooth, uniform areas of the image, the median and the average will differ very little. The median filter removes noise, while the average filter just spreads it around evenly. The performance of the median filter is particularly better than that of the average filter for removing impulse noise.

Shown below in Figure 5 are the original image and the same image after it has been corrupted by impulse noise at 10%. This means that 10% of its pixels were replaced by full white pixels. Also shown are the median filtering results using 3x3 and 5x5 windows; three (3) iterations of a 3x3 median filter applied to the noisy image; and finally, for comparison, the result of applying a 5x5 mean filter to the noisy image.

Figure 5 Comparison of the non-linear Median filter and the linear Mean filter:
a) Original image; b) Impulse noise added at 10%;
c) 3x3 Median Filtered; d) 5x5 Median Filtered;
e) 3x3 Median Filter applied 3 times; f) 5x5 Average Filter.

The disadvantage of the median filter
Although the median filter is a useful non-linear image smoothing and enhancement technique, it also has some disadvantages. The median filter removes both the noise and the fine detail, since it cannot tell the difference between the two. Anything relatively small in size compared to the size of the neighborhood will have minimal effect on the value of the median and will be filtered out. In other words, the median filter cannot distinguish fine detail from noise.

Adaptive Median Filtering
Adaptive median filtering has therefore been applied widely as an advanced method compared with standard median filtering. The Adaptive Median Filter performs spatial processing to determine which pixels in an image have been affected by impulse noise. It classifies pixels as noise by comparing each pixel in the image to its surrounding neighbor pixels. The size of the neighborhood is adjustable, as is the threshold for the comparison. A pixel that differs from a majority of its neighbors, and is not structurally aligned with those pixels to which it is similar, is labeled as impulse noise. These noise pixels are then replaced by the median pixel value of the pixels in the neighborhood that have passed the noise-labeling test.

Purpose
1) Remove impulse noise
2) Smooth other noise
3) Reduce distortion, such as excessive thinning or thickening of object boundaries

How it works
The adaptive median filter changes the size of Sxy (the neighborhood) during operation.

Notation
Zmin = minimum gray level value in Sxy
Zmax = maximum gray level value in Sxy
Zmed = median of gray levels in Sxy
Zxy = gray level at coordinates (x, y)
Smax = maximum allowed size of Sxy

Algorithm

Level A:
    A1 = Zmed - Zmin
    A2 = Zmed - Zmax
    if A1 > 0 AND A2 < 0, go to Level B
    else increase the window size
    if window size < Smax, repeat Level A
    else output Zxy

Level B:
    B1 = Zxy - Zmin
    B2 = Zxy - Zmax
    if B1 > 0 AND B2 < 0, output Zxy
    else output Zmed

Explanation
Level A: IF Zmin < Zmed < Zmax, then Zmed is not an impulse:
(1) go to Level B to test whether Zxy is an impulse.
ELSE Zmed is an impulse:
(1) the size of the window is increased, and
(2) Level A is repeated until
(a) Zmed is not an impulse (go to Level B), or
(b) Smax is reached: output is Zxy.
Level B: IF Zmin < Zxy < Zmax, then Zxy is not an impulse:
(1) output is Zxy (distortion reduced).

ELSE either Zxy = Zmin or Zxy = Zmax:
(2) output is Zmed (standard median filter; Zmed is not an impulse, from Level A).

Advantages
a. The standard median filter does not perform well when the impulse noise probability is greater than 0.2, while the adaptive median filter can better handle such noise.
b. The adaptive median filter preserves detail and smooths non-impulsive noise, while the standard median filter does not. See examples a) to d) in Figure 6.
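The Level A / Level B logic above condenses into a short sketch. This Python version is an illustration, not the project's MATLAB code; get_window is a hypothetical callback that supplies the neighborhood values at a requested size:

```python
def adaptive_median(get_window, z_xy, s_init=3, s_max=9):
    """Level A / Level B logic of the adaptive median filter.

    get_window(size) must return the list of gray levels in the
    neighborhood Sxy of the given size; z_xy is the center pixel.
    """
    size = s_init
    while True:
        s = get_window(size)
        z_min, z_max = min(s), max(s)
        z_med = sorted(s)[len(s) // 2]
        # Level A: is the median itself an impulse?
        if z_min < z_med < z_max:
            # Level B: is the center pixel an impulse?
            return z_xy if z_min < z_xy < z_max else z_med
        size += 2                     # grow the window and retry Level A
        if size > s_max:
            return z_xy               # Smax reached: output the pixel as-is

def get_window(size):
    # hypothetical neighborhood for the demo: center is a salt impulse
    return [0, 0, 255, 10, 12, 11, 13, 9, 255]

print(adaptive_median(get_window, 255))  # -> 11
```

Here the median 11 passes Level A (0 < 11 < 255), the center 255 fails Level B, so the impulse is replaced by the median.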

a) Image corrupted by impulse noise with a probability of 0.1;

b) Result of arithmetic mean filtering;

DESIGN OF THE PROPOSED MULTI THRESHOLD FILTER

3.1 A New Approach to Judge Whether an Impulse Noise Exists
Basically, the proposed MTS filter uses multiple thresholds to classify the signal as either noise-free or noise-corrupted, so that only noisy signals are filtered while good signals are preserved. Fig. 3.1 shows the structure of the new MTS filter. In the detection process, if x(k) is judged to be the maximum or minimum in the filter window, then the decision rules (threshold functions) are applied to the neighboring pixels of x(k) to decide whether it is a noise-corrupted pixel. To identify corrupted pixels, input pixels can be separated into two classes, A and B. Pixels in class A are supposedly much more likely to be impulses than those in class B. To make sure that this happens, first, we check x(k) to see whether it is a maximum or minimum in the filter window. If x(k) is a maximum or minimum, it is classified into class A; otherwise it is classified into class B. When x(k) is classified into class A, the pixels of the 3 x 3 filter window (excluding x(k)) are sorted in ascending order. The sorted vector can be defined as

s(k) = (s1(k), s2(k), ..., s8(k)),    (eqn. 3.1)

where s1(k), s2(k), ..., s8(k) are the elements of w(k) arranged in ascending order.

The differences between the input pixel x(k) and each of the elements of s(k) provide an efficient measurement to identify noisy pixels.

3.2 Schematic Diagram of the Proposed System

Fig. (3.1): Block Diagram of the MTS filter.

Definition 1
d(k) = (d1(k), d2(k), ..., d8(k)),    (eqn. 3.2)
where di(k) is the difference between the input pixel x(k) and the sorted element si(k).

The grade-ordered difference di(k) ∈ [0, 255] provides information about the likelihood of corruption of x(k). Furthermore, the full set of differences from d1(k) through d8(k) in the current 3 x 3 filter window can reveal more information about pixels on a line or at an edge, even when highly corrupted impulses are present in the current filter window. Since si(k) has been arranged in ascending order, Eq. (4) shows that di(k) ∈ [0, 255] is also in ascending order.

Thus, if any of d1(k), d2(k), ..., d8(k) is greater than the corresponding threshold value, then x(k) is detected as corrupted by the impulse noise detector. The input pixel is detected as a noisy pixel if any of the following decision rules is true:

di(k) > Ti, i = 1, 2, ..., 8,    (5)

where T1, T2, ..., T8 are threshold values, T1 < T2 < ... < T8. Then, the noise flag map B(i, j) can record the location of the impulse noise in the noisy image.
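The class A/B test and the threshold rules di(k) > Ti can be sketched as follows. Two hedges: the exact definition of di(k) is not fully reproduced in this section, so this sketch assumes di(k) = |si(k) - x(k)|, and the threshold values are illustrative, not the empirically tuned T1, ..., T8 of the text:

```python
def mts_is_noisy(window9, thresholds):
    """Sketch of the MTS detection step for one 3x3 window (row-major,
    center at index 4). d_i is taken here as |s_i - x|; the thresholds
    are illustrative placeholders."""
    x = window9[4]
    neighbors = window9[:4] + window9[5:]
    # Class B: x is not an extreme of the window -> treated as noise-free
    if min(window9) < x < max(window9):
        return False
    # Class A: compare grade-ordered differences against T1 < ... < T8
    s = sorted(neighbors)                   # s_1 <= ... <= s_8
    d = [abs(si - x) for si in s]           # grade-ordered differences
    return any(di > ti for di, ti in zip(d, thresholds))

T = [10, 20, 30, 40, 50, 60, 70, 80]       # illustrative T1 < ... < T8
salt = [12, 15, 14, 13, 255, 12, 16, 14, 15]   # center is a salt impulse
print(mts_is_noisy(salt, T))    # -> True
flat = [12, 15, 14, 13, 16, 12, 16, 14, 15]    # center fits its neighbors
print(mts_is_noisy(flat, T))    # -> False
```

Only pixels flagged here would be passed to the filtering stage; everything else is copied through unchanged, which is what preserves fine detail.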

Based on this heuristic approach, proper threshold values can be easily obtained through extensive empirical tests involving a large variety of test images. The first threshold T1 is found experimentally by applying the MTS filter, and then the other thresholds can be obtained one by one. For example, when the threshold value T1 is tested, x(k) can be indicated to be either corrupted or not according to the value of T1. After that, along with threshold value T1, the second threshold value T2 (T2 > T1) can be tested. Likewise, each of the other threshold values T3, T4, ..., T8 can be obtained based on the former threshold values. After testing a large variety of images, we can pick out the most suitable threshold values to obtain the best results and highest PSNR values. Note that the more grade-ordered differences di are considered, the more information about the noise can be obtained. That is, the more decision thresholds are considered, the more accurately the impulse noise can be detected. The effect of the number of thresholds and the threshold values Ti on the scheme is discussed in the next section.

3.3 Noise Filtering
If the input pixel is classified as an impulse according to the binary noise flag map B(i, j), the pixel value is replaced by the estimated central noise-free ordered mean value. Otherwise, its original intensity is the output. However, the estimated mean value here is computed from only the noise-free pixels within the filter window w(k). The noise-free pixels can be sorted in ascending order, which defines the vector as

f(k) = (f1(k), f2(k), ..., fC(k)),

where f1(k) <= f2(k) <= ... <= fC(k) are elements of w(k) that are good pixels according to their corresponding binary noise flag map B(i, j), and C denotes the number of pixels with B(i, j) = 0 in the filter window w(k),

where CMEAN refers to the central mean operation and y(k) is the filtering result of the left neighboring pixel of x(k). The new nonlinear MTS filter is defined as

where B(i, j) is the noise flag map at x(k). If B(i, j) is 1, the output y(k) of the noise filtering process is CMEAN f(k). Otherwise, if B(i, j) is 0, y(k) is the identity x(k). The result is that the new MTS filter can suppress impulse noise without degrading the quality of the fine details.

Add the following types of noise to an image to generate 4 noisy images:
1. Gaussian noise
2. Poisson noise
3. Salt & pepper noise
4. Speckle noise

Apply the following spatial filters to the noisy images:
1. Arithmetic mean
2. Geometric mean

3. Harmonic mean
4. Contra-harmonic mean
5. Median filter
6. Min
7. Max
8. Mid-point
9. Alpha trimmed mean filter

Submit the following with your code:
1. Print a. the original image, b. the noisy images, and c. the results of all the filters on each noisy image.
2. Determine which type of filtering worked well for each type of noise.

References:
1. Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, III edition, Prentice Hall, pages 322-325, 2008.
2. Gonzalez, Woods and Eddins, Digital Image Processing with MATLAB, I edition, Prentice Hall, pages 160-164, 2009.

Types of Noise
1. Gaussian noise - Gaussian noise is statistical noise that has a probability density function of the normal distribution (also known as the Gaussian distribution). In other words, the values that the noise can take on are Gaussian-distributed. It is most commonly used as additive white noise to yield additive white Gaussian noise (AWGN).
2. Poisson noise - Poisson noise has a probability density function of a Poisson distribution.
3. Salt & pepper noise - It presents itself as randomly occurring white and black pixels. An effective noise reduction method for this type of noise is the median filter. Salt and pepper noise creeps into images in situations where quick transients, such as faulty switching, take place.
4. Speckle noise - Speckle noise is a granular noise that inherently exists in and degrades the quality of images. Speckle noise is multiplicative noise, i.e., it is in direct proportion to the local grey level in any area. The signal and the noise are statistically independent of each other.

Adding noise to images
MATLAB provides a function imnoise to conveniently add the desired type of noise to an image. For example:
m = 0; v = 0.01; J1 = imnoise(I,'gaussian',m,v);

Adds Gaussian noise with mean = 0 and variance = 0.01 to the image. See MATLAB's documentation for imnoise. Use MATLAB's default parameters when none have been specified here.

Filter descriptions

[Diagram: corrupted image g(x, y) -> filter -> filtered image f(x, y)]

Sxy = set of coordinates in a rectangular subimage window (neighborhood) of size (m x n) centered at point (x, y); for example, a (3 x 3) window.

1. Arithmetic mean filter: This filter computes the average of the pixel intensity values in the sub-image (of size m x n).
2. Geometric mean filter: It is similar to an arithmetic mean filter, but it tends to lose less detail in the process.
3. Harmonic mean filter: This filter computes the harmonic mean of the pixel intensity values.
4. Contraharmonic mean filter: This filter computes the contraharmonic mean of the pixel intensity values. Note that the contraharmonic filter reduces to the mean filter for Q = 0, and to the harmonic mean filter for Q = -1.
5. Median filter: Replaces the value of the pixel by the median of the pixels in the sub-image.
6. Max filter: For d = 1, replaces the value of the pixel by the maximum of the pixel intensity values in the sub-image. For d > 1, uses the mean of the top d values.
7. Min filter: For d = 1, replaces the value of the pixel by the minimum of the pixel intensity values in the sub-image. For d > 1, uses the mean of the lowest d values.
8. Mid-point filter: Replaces the value of the pixel by the mid-point of the pixels in the sub-image.
9. Alpha trimmed mean filter: Replaces the value of the pixel by the mean of the remaining pixel intensity values after discarding the top d/2 and lowest d/2 intensity values.

Reference: Pages 322-325 of textbook.
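The contraharmonic relationship noted in item 4 (Q = 0 gives the arithmetic mean, Q = -1 the harmonic mean) is easy to verify with a sketch. This Python fragment is illustrative and assumes strictly positive pixel values, since v to a negative power is undefined at 0:

```python
def contraharmonic(values, Q):
    """Contraharmonic mean of a window: sum(v^(Q+1)) / sum(v^Q).
    Q = 0 reduces to the arithmetic mean; Q = -1 to the harmonic mean.
    Negative Q suppresses salt noise; positive Q suppresses pepper noise."""
    num = sum(v ** (Q + 1) for v in values)
    den = sum(v ** Q for v in values)
    return num / den

pixels = [100, 110, 120, 255]        # one salt impulse in the window
print(contraharmonic(pixels, 0))     # -> 146.25 (arithmetic mean)
print(contraharmonic(pixels, -1.5))  # negative Q pulls the result
                                     # back toward the uncorrupted values
```

Choosing the sign of Q therefore requires knowing whether the impulses are salt or pepper, which is one reason decision-based filters are preferred for mixed salt-and-pepper noise.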

Taken from: Gonzalez, Woods and Eddins, Digital Image Processing with MATLAB, I edition, Prentice Hall, 2009, page 160.

Filtering the image Filtering the image is a neighborhood operation. The 3x3 neighborhood of a pixel (x,y) is shown:

(source: http://www.comp.dit.ie/bmacnamee/materials/dip/lectures/ImageProcessing5-SpatialFiltering1.ppt)

Linear filters can be implemented by multiplying the pixel neighborhood with the filter kernel. This is done conveniently using the imfilter function. But in this project, with the exception of the averaging filter, all other filters are non-linear filtering operations and cannot be implemented using imfilter. There are several other ways to implement non-linear filtering in MATLAB:
1. nlfilter - general sliding neighborhood operations.
2. colfilt - column-wise neighborhood operations. This is a bit more complex to implement than nlfilter, but is optimized for speed.
3. ordfilt2 - performs order-statistic filtering. It is only applicable to the median, mid-point, min, and max filters.
Please see the MATLAB documentation for these functions. They have several examples to illustrate their usage.

UNSYMMETRICAL TRIMMED FILTER
The crux behind the above filter is to eliminate the outliers inside the current window. Certain types of non-linear filters, such as the Alpha Trimmed Mean Filter (ATMF) and the Alpha Trimmed Midpoint filter (ATMP), work on this principle. These filters use a parameter called alpha, which decides the number of pixels to be eliminated. It was found that the filter fared well when alpha was increased; however, for high noise densities it does not preserve the image information, due to the elimination of outlier values. So, to overcome this drawback and achieve both fine-detail preservation and removal of impulse noise under heavy noise conditions, the Decision Based Midpoint (DBMP) filter is proposed.

Over the years, a sorting algorithm has been the basic operation behind all median filters. All the existing sorting algorithms require more comparators, as shown in Table 1. In this paper, a new snake-like improved shear sorting algorithm is proposed for ordering the entire array of processed pixels, as shown in Figure 1. Let D be an m x n matrix which is mapped to a linear integer sequence W. Sorting the sequence W is then equivalent to sorting the elements of D in some predetermined indexing scheme. The proposed snake-like modified algorithm consists of three basic operations: row sorting, column sorting, and semi-diagonal sorting. The algorithm of the proposed snake-like improved shear sorting is as follows.
Step 1: Take the 2D processing window, as shown in Figure 1.a.
Step 2: Sort the 1st and 3rd rows of the 2D array in ascending order and the 2nd row in descending order, independently. The sorted sequence is fed to Step 3, as shown in Figure 1.b.
Step 3: Sort the three columns of the 2D array in ascending order. The sorted sequence is fed to Step 4, as shown in Figure 1.c.
Step 4: Repeat Steps 2 and 3 once again, as shown in Figures 1.d and 1.e.
Step 5: Now sort the upper semi-diagonal of the semi-sorted 2D array in ascending order, as shown in Figure 1.e.
Step 6: Sort the lower semi-diagonal of the array in ascending order, as shown in the figure.
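The steps above can be sketched for a 3 x 3 window. This Python version is an illustration of the described passes, not the paper's comparator-level implementation; when the passes fully snake-sort the window, the median of the nine values lands at the center:

```python
def shear_sort_3x3(m):
    """Snake-like shear sort of a 3x3 matrix: alternate row sorting
    (rows 1 and 3 ascending, row 2 descending) with column sorting,
    repeated twice, then sort the two semi-diagonals."""
    def row_pass():                       # Step 2
        m[0].sort()
        m[1].sort(reverse=True)
        m[2].sort()

    def col_pass():                       # Step 3
        for c in range(3):
            col = sorted(m[r][c] for r in range(3))
            for r in range(3):
                m[r][c] = col[r]

    for _ in range(2):                    # Step 4: two row/column rounds
        row_pass()
        col_pass()
    # Steps 5-6: sort the upper and lower semi-diagonals
    upper = sorted([m[0][1], m[0][2], m[1][2]])
    m[0][1], m[0][2], m[1][2] = upper
    lower = sorted([m[1][0], m[2][0], m[2][1]])
    m[1][0], m[2][0], m[2][1] = lower
    return m

result = shear_sort_3x3([[9, 8, 7], [6, 5, 4], [3, 2, 1]])
print(result)        # -> [[1, 2, 3], [6, 5, 4], [7, 8, 9]] (snake order)
print(result[1][1])  # -> 5, the median, at the center of the window
```

Reading the result along the snake (row 1 left-to-right, row 2 right-to-left, row 3 left-to-right) gives the fully sorted sequence 1 through 9.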

The Decision Based Modified Rank Ordered Mean Filter (DBMROMF) initially detects an impulse and corrects it subsequently. All the pixels of an image lie within the dynamic range [0, 255]. If the processed pixel holds the minimum (0) or maximum (255) value, the pixel is considered noisy and is processed by the filter; otherwise it is considered noise-free and left unaltered. A brief illustration of the algorithm is as follows.

Illustration of the proposed sorting methodology
Step 1: Choose a 2-D window of size 3x3. The processed pixel in the current window is assumed to be pxy.
Step 2: Check the condition 0 < pxy < 255; if the condition is true, the pixel is considered not noisy and left unaltered.
Step 3: If the processed pixel pxy holds 0 or 255 (i.e., pxy = 0 or pxy = 255), then pxy is considered a corrupted pixel. Convert the 2D array into a 1D array and sort it; the sorted array is denoted Sxy.

Step 4: Initialize two counters, a forward counter (F) and a reverse counter (L), with 1 and 9 respectively. Whenever a 0 or a 255 is encountered inside the window, F is incremented by 1 or L is decremented by 1, respectively; the outlier values are thereby eliminated on either side. When the pixel is noisy, there are two possible cases.
Case I: The processing pixel is noisy and the current processed window contains a few 0s and 255s. Check for 0 or 255 in the sorted array Sxy; the counters propagate along the Sxy array, eliminating outliers and retaining only the pixels that hold values other than 0 and 255. After checking all the pixels, F and L hold particular values indicating the number of outliers, and the noisy pixel is replaced by the midpoint of the trimmed sorted array.
Case II: Every pixel that resides inside the kernel is a combination of 0s and 255s. Even this condition is addressed by the Case I operation, thereby keeping the algorithm simple. When all the pixel elements hold 0 or 255, the values are retained, assuming they are the texture of the image.
Step 5: Steps 1 to 4 are repeated until all pixels of the entire image are processed.

The salt and pepper noise is initially detected by comparing the processed pixel with 0 or 255. This process is done on all pixels in the image. The bigger matrix refers to the image, and the values enclosed inside a rectangle are considered the current processing window. The encircled element refers to the processed pixel. Step 2 is illustrated in Case (1). Steps 3 and 4 are visualized along with Case I in Case (2). Case II is briefed in Case (3).
Case (1): In the illustration given below, check the processed pixel for 0 < pxy < 255. In this case the processed pixel is 106. Hence the processed pixel is not 0 or 255, so the pixel is considered noise-free and is unaltered.
Case (2): In the selected window the processed pixel holds 0 (or 255). So the processed pixel is considered noisy. Initialize the forward counter F = 1 and the reverse counter L = 9. Convert the 2D array into a 1D array and sort the converted array. The F and L counters move in the forward and reverse directions respectively.
Unsorted array: 94 0 0 0 0 122 255 127 255
Sorted array Sxy: 0 0 0 0 94 122 127 255 255

Now check for the presence of 0 or 255 in the sorted array. Every time a 0 is detected, F is incremented by 1, and every time a 255 is detected, L is decremented by 1. In the above example there are four 0s and two 255s. Hence F is incremented four times and L is decremented twice. Now F holds 5 and L holds 7. The corrupted pixel is replaced with the midpoint of the trimmed array, i.e., the corrupted pixel is replaced by (S(5) + S(7))/2 = (94 + 127)/2 ≈ 110.

Case (3): This sub-case applies when every pixel inside the current window is either pepper (0) or salt (255). Initialize F = 1 and L = 9, convert the elements of the 2D window into a 1D array, and sort it.

Unsorted 1D array: 0 255 0 0 0 255 255 255 255
Sorted 1D array Sxy: 0 0 0 0 255 255 255 255 255
Now the F counter propagates forward and the L counter in the reverse direction. Finally F and L hold 5 and 4 respectively. Hence the noisy pixel is replaced by (S(5) + S(4))/2 = (255 + 0)/2 ≈ 127.
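The counter-based trimming in Cases (2) and (3) condenses into a short sketch. This Python fragment is illustrative (not the paper's code), and it assumes integer division for the midpoint, consistent with the worked value of 110:

```python
def trimmed_midpoint(window9):
    """Unsymmetrical trimmed midpoint for one 3x3 window flattened to
    nine values. The counters F and L skip past the 0s and 255s on
    either side of the sorted array, and the noisy pixel is replaced
    by the midpoint of the surviving (trimmed) values."""
    s = sorted(window9)
    F, L = 1, 9                     # 1-based forward / reverse counters
    for v in s:
        if v == 0:
            F += 1                  # a pepper outlier trimmed from the left
        elif v == 255:
            L -= 1                  # a salt outlier trimmed from the right
    return (s[F - 1] + s[L - 1]) // 2

# Case (2): mixed window -> 110, matching the worked example
print(trimmed_midpoint([94, 0, 0, 0, 0, 122, 255, 127, 255]))  # -> 110
# Case (3): window of only 0s and 255s -> midpoint of the extremes
print(trimmed_midpoint([0, 255, 0, 0, 0, 255, 255, 255, 255]))  # -> 127
```

Note that in Case (3) the counters cross (F = 5, L = 4), so the "trimmed" midpoint degenerates to a value between the two extremes.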

BLOCK DIAGRAM:

Input video:

Number of frames per second
Frame rate, the number of still pictures per unit of time of video, ranges from six or eight frames per second (frame/s) for old mechanical cameras to 120 or more frames per second for new professional cameras. PAL (Europe, Asia, Australia, etc.) and SECAM (France, Russia, parts of Africa, etc.) standards specify 25 frame/s, while NTSC (USA, Canada, Japan, etc.) specifies 29.97 frame/s. Film is shot at the slower frame rate of 24 frames/s, which slightly complicates the process of transferring a cinematic motion picture to video. The minimum frame rate to achieve the illusion of a moving image is about twelve to fifteen frames per second.

Interlaced vs progressive
Video can be interlaced or progressive. Interlacing was invented as a way to reduce flicker in early mechanical and CRT video displays without increasing the number of complete frames per second, which would have required sacrificing image detail in order to remain within the limitations of a narrow bandwidth. The horizontal scan lines of each complete frame are treated as if numbered consecutively and captured as two fields: an odd field (upper field) consisting of the odd-numbered lines and an even field (lower field) consisting of the even-numbered lines. Analog display devices reproduce each frame in the same way, effectively doubling the frame rate as far as perceptible overall flicker is concerned. When the image capture device acquires the fields one at a time, rather than dividing up a complete frame after it is captured, the frame rate for motion is effectively doubled as well, resulting in smoother, more lifelike reproduction (although with halved detail) of rapidly moving parts of the image when viewed on an interlaced CRT display; the display of such a signal on a progressive-scan device, however, is problematic. NTSC, PAL and SECAM are interlaced formats. Abbreviated video resolution specifications often include an i to indicate interlacing.
For example, PAL video format is often specified as 576i50, where 576 indicates the total number of horizontal scan lines, i indicates interlacing, and 50 indicates 50 fields (half-frames) per second.

In progressive scan systems, each refresh period updates all of the scan lines of each frame in sequence. When displaying a natively progressive broadcast or recorded signal, the result is optimum spatial resolution of both the stationary and moving parts of the image. When displaying a natively interlaced signal, however, overall spatial resolution will be degraded by simple line doubling, and artifacts such as flickering or "comb" effects in moving parts of the image will be seen unless special signal processing is applied to eliminate them. A procedure known as deinterlacing can be used to optimize the display of an interlaced video signal from an analog, DVD or satellite source on a progressive-scan device such as an LCD television, digital video projector or plasma panel. Deinterlacing cannot, however, produce video quality that is equivalent to true progressive-scan source material.

Aspect ratio

Comparison of common cinematography and traditional television (green) aspect ratios

Aspect ratio describes the dimensions of video screens and video picture elements. All popular video formats are rectilinear, and so can be described by a ratio between width and height. The screen aspect ratio of a traditional television screen is 4:3, or about 1.33:1. High-definition televisions use an aspect ratio of 16:9, or about 1.78:1. The aspect ratio of a full 35 mm film frame with soundtrack (also known as the Academy ratio) is 1.375:1. Ratios where the height is taller than the width are uncommon in general everyday use, but do have application in computer systems where the screen may be better suited for a vertical layout. The most common tall aspect ratio of 3:4 is referred to as portrait mode and is created by physically rotating the display device 90 degrees from the normal position. Other tall aspect ratios such as 9:16 are technically possible but rarely used. (For a more detailed discussion of this topic please refer to the page orientation article.) Pixels on computer monitors are usually square, but pixels used in digital video often have non-square aspect ratios, such as those used in the PAL and NTSC variants of the CCIR 601 digital video standard and the corresponding anamorphic widescreen formats. Therefore,

an NTSC DV image which is 720 pixels by 480 pixels is displayed with the aspect ratio of 4:3 (which is the traditional television standard) if the pixels are thin and displayed with the aspect ratio of 16:9 (which is the anamorphic widescreen format) if the pixels are fat. Colour space and bits per pixel

Example of U-V color plane, Y value = 0.5

The color model name describes the video color representation. YIQ was used in NTSC television. It corresponds closely to the YUV scheme used in NTSC and PAL television and the YDbDr scheme used by SECAM television. The number of distinct colors that can be represented by a pixel depends on the number of bits per pixel (bpp). A common way to reduce the number of bits per pixel in digital video is chroma subsampling (e.g. 4:4:4, 4:2:2, 4:2:0/4:1:1).

Video quality
Video quality can be measured with formal metrics like PSNR or with subjective video quality assessment using expert observation. The subjective video quality of a video processing system may be evaluated as follows:

Choose the video sequences (the SRC) to use for testing. Choose the settings of the system to evaluate (the HRC). Choose a test method for how to present video sequences to experts and to collect their ratings.

Invite a sufficient number of experts, preferably not fewer than 15. Carry out testing.

Calculate the average marks for each HRC based on the experts' ratings.
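The averaging in the final step above can be sketched briefly. The ratings and HRC names below are hypothetical, purely to illustrate the procedure of collecting at least 15 expert scores per HRC and averaging them:

```python
from statistics import mean

# Hypothetical 1-5 ratings from 15 experts, keyed by the HRC under test.
ratings = {
    "HRC-A": [4, 5, 4, 4, 3, 5, 4, 4, 5, 4, 4, 3, 4, 5, 4],
    "HRC-B": [2, 3, 2, 1, 2, 3, 2, 2, 1, 2, 3, 2, 2, 2, 3],
}

for hrc, scores in ratings.items():
    assert len(scores) >= 15, "prefer not fewer than 15 experts"
    print(hrc, round(mean(scores), 2))  # average mark per HRC
```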

Many subjective video quality methods are described in the ITU-R recommendation BT.500. One of the standardized methods is the Double Stimulus Impairment Scale (DSIS). In DSIS, each expert views an unimpaired reference video followed by an impaired version of the same video. The expert then rates the impaired video using a scale ranging from "impairments are imperceptible" to "impairments are very annoying".

Video quality is a characteristic of a video passed through a video transmission/processing system, a formal or informal measure of perceived video degradation (typically, compared to the original video). Video processing systems may introduce some amounts of distortion or artifacts in the video signal, so video quality evaluation is an important problem.

Since the time when the world's first video sequence was recorded, many video processing systems have been designed. In the age of analog video systems, it was possible to evaluate the quality of a video processing system by calculating the system's frequency response using a traditional test signal (for example, a collection of color bars and circles). Nowadays, digital video systems are replacing analog ones, and evaluation methods have changed. The performance of a digital video processing system can vary significantly and depends on dynamic characteristics of the input video signal (e.g. amount of motion or spatial detail), which is why digital video quality should be evaluated on diverse video sequences, often drawn from the user's own database. Objective video evaluation techniques are mathematical models that approximate the results of subjective quality assessment, but are based on criteria and metrics that can be measured objectively and automatically evaluated by a computer program. Objective methods are classified based on the availability of the original video signal, which is considered to be of high quality (generally not compressed): Full Reference methods (FR), Reduced Reference methods (RR) and No-Reference methods (NR). FR metrics compute the quality difference by comparing every pixel in each image of the distorted video to its corresponding pixel in the original video. RR metrics extract some features of both videos and compare them to give a quality score; they are used when the full original video is not available, e.g. in a transmission with limited bandwidth. NR metrics try to assess the quality of a distorted video without any reference to the original video; these metrics are usually used when the video coding method is known.

The most traditional ways of evaluating the quality of a digital video processing system (e.g. a video codec like DivX or Xvid) are calculation of the signal-to-noise ratio (SNR) and peak signal-to-noise ratio (PSNR) between the original video signal and the signal passed through the system. PSNR is the most widely used objective video quality metric. However, PSNR values do not perfectly correlate with perceived visual quality due to the non-linear behavior of the human visual system. Recently a number of more complicated and precise metrics have been developed, for example UQI, VQM, PEVQ, SSIM, VQuad-HD and CZD. Based on a benchmark by the Video Quality Experts Group (VQEG) in the course of the Multimedia Test Phase 2007-2008, some metrics were standardized as ITU-T Rec. J.246 (RR) and J.247 (FR) in 2008 and J.341 (FR HD) in 2011. The performance of an objective video quality metric is evaluated by computing the correlation between the objective scores and the subjective test results, the latter being called the mean opinion score (MOS). The most frequently used measures are: the linear correlation coefficient, Spearman's rank correlation coefficient, kurtosis, the kappa coefficient and the outliers ratio. When estimating the quality of a video codec, all the mentioned objective methods may require repeated post-encoding tests to determine the encoding parameters that satisfy a required level of visual quality, making them time-consuming, complex and impractical for real commercial applications. For this reason, much research has been focused on developing novel objective evaluation methods which enable prediction of the perceived quality level of the encoded video before the actual encoding is performed.
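PSNR follows directly from the mean square error between the reference and the processed frame. A minimal sketch (function names are illustrative, assuming 8-bit pixels with peak value 255):

```python
import numpy as np

def mse(original, processed):
    """Mean Square Error between two images of equal size."""
    diff = original.astype(float) - processed.astype(float)
    return float(np.mean(diff ** 2))

def psnr(original, processed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; 'peak' is the maximum pixel value."""
    m = mse(original, processed)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

Identical images give an infinite PSNR; larger errors give lower values, which is why PSNR is reported in dB against the 255 peak for 8-bit video.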

In telecommunications and computing, bit rate (sometimes written bitrate, or as the variable R[1]) is the number of bits that are conveyed or processed per unit of time. The bit rate is quantified using the bits per second (bit/s or bps) unit, often in conjunction with an SI prefix such as kilo- (kbit/s or kbps), mega- (Mbit/s or Mbps), giga- (Gbit/s or Gbps) or tera- (Tbit/s or Tbps). Note that, unlike many other computer-related units, 1 kbit/s is traditionally defined as 1,000 bit/s, not 1,024 bit/s, a convention that predates the 1999 standard IEC 60027-2, which introduced prefixes for units of information. Uppercase K, as in Kbit/s or Kbps, should never be used.
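The decimal-prefix convention can be made concrete with a small sketch (the helper name is made up for illustration):

```python
# Decimal SI factors: 1 kbit/s = 1,000 bit/s (never 1,024).
SI = {"": 1, "k": 1_000, "M": 1_000_000, "G": 1_000_000_000, "T": 1_000_000_000_000}

def to_bit_per_s(value, prefix=""):
    """Convert a prefixed bit rate (e.g. 2.5, 'M') to plain bit/s."""
    return value * SI[prefix]

print(to_bit_per_s(1, "k"))      # 1000
print(to_bit_per_s(1, "M"))      # 1000000
print(to_bit_per_s(1, "k") / 8)  # 125.0 bytes per second, since 1 B/s = 8 bit/s
```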

The formal abbreviation for "bits per second" is "bit/s" (not "bits/s", see writing style for SI units). In less formal contexts the abbreviations "b/s" or "bps" are sometimes used, though this risks confusion with "bytes per second" ("B/s", "Bps"), and the use of the abbreviation "ps" is also inconsistent with the SI symbol for picosecond. One byte per second (1 B/s) corresponds to 8 bit/s.

The display resolution of a digital television, computer monitor or display device is the number of distinct pixels in each dimension that can be displayed. It can be an ambiguous term, especially as the displayed resolution is controlled by different factors in cathode ray tube (CRT) displays, flat-panel displays (which include liquid-crystal displays), and projection displays using fixed picture-element (pixel) arrays. It is usually quoted as width × height, with the units in pixels: for example, "1024 × 768" means the width is 1024 pixels and the height is 768 pixels. This example would normally be spoken as "ten twenty-four by seven sixty-eight" or "ten twenty-four by seven six eight". One use of the term display resolution applies to fixed-pixel-array displays such as plasma display panels (PDPs), liquid crystal displays (LCDs), digital light processing (DLP)

projectors, or similar technologies, and is simply the physical number of columns and rows of pixels creating the display (e.g., 1920 × 1080). A consequence of having a fixed-grid display is that, for multi-format video inputs, all displays need a "scaling engine" (a digital video processor that includes a memory array) to match the incoming picture format to the display. Note that for broadcast television standards the use of the word resolution here is a misnomer, though common. The term display resolution is usually used to mean pixel dimensions, the number of pixels in each dimension (e.g., 1920 × 1080), which does not tell anything about the pixel density of the display on which the image is actually formed: broadcast television resolution properly refers to the pixel density, the number of pixels per unit distance or area, not the total number of pixels. In digital measurement, the display resolution would be given in pixels per inch. In analog measurement, if the screen is 10 inches high, then the horizontal resolution is measured across a square 10 inches wide. This is typically stated as "lines horizontal resolution, per picture height";[1] for example, analog NTSC TVs can typically display about 340 lines of "per picture height" horizontal resolution from over-the-air sources, which is equivalent to about 440 total lines of actual picture information from left edge to right edge.
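The distinction between pixel dimensions and pixel density can be illustrated numerically: density in pixels per inch is the diagonal pixel count divided by the diagonal size. The 23-inch monitor below is a hypothetical example, not a figure from the text:

```python
import math

def pixels_per_inch(width_px, height_px, diagonal_in):
    """Pixel density = diagonal length in pixels / diagonal length in inches."""
    return math.hypot(width_px, height_px) / diagonal_in

# Hypothetical 23-inch 1920x1080 monitor:
print(round(pixels_per_inch(1920, 1080, 23), 1))  # 95.8
```

The same 1920 × 1080 pixel dimensions on a larger screen give a proportionally lower density, which is exactly why pixel count alone says nothing about density.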

PROPOSED METHOD: Decision Based Un-symmetric Trimmed Median Filter

Digital images are contaminated by impulse noise during image acquisition or transmission due to malfunctioning pixels in camera sensors, faulty memory locations in hardware, or transmission over a noisy channel. Salt and pepper noise is a type of impulse noise in which the noisy pixels take only the maximum and minimum gray values of the dynamic range. Linear filters such as the mean filter are not effective in removing impulse noise. Non-linear filtering techniques such as the Standard Median Filter (SMF) and Adaptive Median Filter (AMF) are widely used to remove salt and pepper noise because of their good de-noising power and computational efficiency [1]. SMF is effective only at low noise densities. Several methods have been proposed for removal of impulse noise at higher noise densities [2-5]. The window size used in these methods is small, which results in minimum computational complexity; however, a small window leads to insufficient noise reduction. Switching-based median filtering has been proposed as an effective alternative for reducing computational complexity [6]. Recent methods such as the Decision Based Algorithm (DBA) and the Modified Decision Based Algorithm

(MDBA) are among the fastest and most efficient algorithms, capable of impulse noise removal at noise densities as high as 80% [7-8]. A major drawback of these algorithms is a streaking effect at higher noise densities. To overcome this drawback, the Modified Decision Based

Unsymmetric Trimmed Median Filter (MDBUTMF) is used to remove salt and pepper noise at very high densities of 80-90% [9]. In this algorithm, at high noise density, the processing pixel is replaced by the mean value of the elements within the window, which leads to blurring of fine details in the image. To avoid this problem, we introduce fuzzy thresholding in this paper to preserve edges and fine details. The existing filters remove salt and pepper noise only at medium noise variance (50-60%). Hence, we propose a new algorithm that combines fuzzy logic with the unsymmetric trimmed median filter; this algorithm gives better performance than the existing algorithms. The algorithm starts with the detection of impulse noise: if the processing pixel lies strictly between the maximum and minimum gray-level values, it is a noise-free pixel and is left unchanged. If the processing pixel takes the maximum or minimum gray level, it is a noisy pixel and is processed by the proposed algorithm.

The steps followed in the proposed algorithm are given below:
Step 1: Select a 2-D window of size 3 × 3. The processing pixel is denoted as Pij.
Step 2: If 0 < Pij < 255, then Pij is a noise-free pixel and its value is unaltered.
Step 3: If Pij = 0 or Pij = 255, then Pij is a noisy pixel; apply the proposed algorithm to the processing pixel.
Step 3a: If not all the elements in the selected window (3 × 3) are 0s and 255s, then replace Pij with the trimmed median value [8].
Step 3b: If the selected window contains only 0s and 255s, then four possible combinations are defined based on the impulse noise density using fuzzy rules: Very High, Very Low, Low and High. Here Very High refers to frequent occurrence of gray level 255 and Very Low to frequent occurrence of gray level 0. Replace the processing pixel with the fuzzy membership function output value, as given in the flow chart shown in the figure.
Step 4: Repeat Steps 1 to 3 until all the pixels in the entire image are processed.
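The steps above can be sketched in code. This is an illustrative Python sketch, not the paper's MATLAB implementation: the function name is made up, and Step 3b is approximated by the window mean (as in MDBUTMF), because the fuzzy membership functions are specified only in the flow chart:

```python
import numpy as np

def trimmed_median_filter(img):
    """Sketch of Steps 1-4, assuming gray values 0/255 mark salt-and-pepper noise.

    Step 3b is simplified here: when the 3x3 window holds only 0s and 255s,
    the pixel is replaced by the window mean instead of the fuzzy output."""
    padded = np.pad(img, 1, mode="edge").astype(float)
    out = img.astype(float)                      # astype returns a copy
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            p = img[i, j]
            if 0 < p < 255:                      # Step 2: noise-free, unaltered
                continue
            window = padded[i:i + 3, j:j + 3].ravel()   # Step 1: 3x3 window
            good = window[(window > 0) & (window < 255)]
            if good.size:                        # Step 3a: trimmed median
                out[i, j] = np.median(good)
            else:                                # Step 3b fallback: window mean
                out[i, j] = window.mean()
    return out.astype(np.uint8)
```

The trimmed median is taken over only the non-extreme neighbors, so noise-free pixels are preserved and only corrupted pixels are restored.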

The performance of the proposed algorithm is tested with different gray-scale and colour images. The noise variance is varied from 50% to 95%. For implementing our algorithm, we used MATLAB 7 on a 2.80 GHz Pentium R processor with 1 GB of RAM. The performance of the proposed algorithm is quantitatively measured by the Peak Signal to Noise Ratio (PSNR) and the Image Enhancement Factor (IEF), as defined in (3) and (5) respectively.

where MSE stands for Mean Square Error, M × N is the size of the image, Y represents the original image and Ŷ denotes the de-noised image. The PSNR values of the proposed algorithm are compared against the existing algorithms by varying the noise variance from 50% to 95%, and are given in Table 1 and Table 2. From Table 1, it is evident that the PSNR value of the proposed algorithm is better than that of the existing algorithms at noise densities above 85% for the Lena gray-scale image. The PSNR values for the Bird colour image are tabulated in Table 2. From Table 2, it can be observed that the performance of the proposed algorithm is better than the existing algorithms at high noise densities. At medium noise density, not all the elements in a selected 3 × 3 window are 255s or 0s; hence, the proposed algorithm gives almost the same PSNR value as MDBUTMF at medium noise density. A plot of PSNR against noise density for the Bird image is shown in Figure 3. The figure shows that the performance of the proposed algorithm is better than existing algorithms such as SMF, AMF, PSMF, DBA, and MDBA at all noise densities, while it is on par with MDBUTMF at high noise densities above 85%. The proposed algorithm is also quantitatively measured with the image enhancement factor (IEF), and the results are given in Tables 3 and 4. Table 3 indicates that the result of the proposed algorithm is better than the existing algorithms for the Bird image at all noise densities. Table 4 shows the IEF values of different noise-removal filters for the Lena gray-scale image against noise variance. From the table, it can be concluded that the proposed algorithm outperforms the existing algorithms. A plot of IEF against noise variance for the Lena (colour) image is shown in Figure 4. The figure shows that the performance of the proposed algorithm is better than the existing algorithms. The results for the 256 × 256 Lena (gray) image at 90% salt and pepper noise are shown in Figure 5. From this figure, the result of the proposed algorithm is better than the existing algorithms.
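Equation (5) is not reproduced in this excerpt; the sketch below follows the standard definition of the image enhancement factor commonly used with MDBUTMF-style filters (noise error energy divided by residual error energy, higher is better):

```python
import numpy as np

def ief(original, noisy, restored):
    """Image Enhancement Factor: sum of squared noise errors over
    sum of squared residual errors after filtering."""
    o = original.astype(float)
    num = np.sum((noisy.astype(float) - o) ** 2)
    den = np.sum((restored.astype(float) - o) ** 2)
    return float(num / den)
```

An IEF of 1 means the filter did not improve the image at all; values well above 1 indicate effective noise suppression.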

HD content
High-definition image sources include terrestrial broadcast, direct-broadcast satellite, digital cable, high-definition optical discs (BD), digital cameras, internet downloads and the latest generation of video game consoles.

Most computers are capable of HD or higher resolutions over VGA, DVI, and/or HDMI. The optical disc standard Blu-ray Disc provides enough digital storage to store hours of HD video content. Digital Versatile Discs or DVDs (which hold 4.7 GB* for a single layer or 8.5 GB* for a double layer) look best on screens smaller than 36 inches (91 cm), so they are not always up to the challenge of today's high-definition (HD) sets. Storing and playing HD movies requires a disc that holds more information, such as a Blu-ray Disc (which holds 25 GB* in single-layer form and 50 GB* in double-layer form) or the High Definition Digital Versatile Disc (HD DVD), which holds 15 GB* or 30 GB* in single- and double-layer form. * = Gigabyte = 1 billion bytes
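The capacity figures translate into playback time once a stream bit rate is fixed. The 20 Mbit/s HD rate below is a hypothetical assumption, not a figure from the text:

```python
def playback_hours(capacity_gb, bitrate_mbps):
    """Hours of video a disc can hold at a constant bit rate.

    Decimal units as in the text: 1 GB = 1e9 bytes, 1 Mbit/s = 1e6 bit/s."""
    seconds = capacity_gb * 1e9 * 8 / (bitrate_mbps * 1e6)
    return seconds / 3600

# Assuming a hypothetical 20 Mbit/s HD stream:
print(round(playback_hours(25, 20), 2))   # 2.78 h on single-layer Blu-ray
print(round(playback_hours(4.7, 20), 2))  # 0.52 h on single-layer DVD
```

The comparison makes the text's point concrete: at HD bit rates a single-layer DVD holds roughly half an hour, which is why HD movies need the larger formats.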

Blu-ray Discs were jointly developed by 9 initial partners including Sony, Philips (which developed CDs), and Pioneer (which had previously developed its own LaserDisc with some success), among others. HD DVD discs were primarily developed by Toshiba and NEC with some backing from Microsoft, Warner Bros., Hewlett-Packard, and others. On February 19, 2008, Toshiba announced it was abandoning the format and would discontinue development, marketing and manufacturing of HD DVD players and drives.
Types of recorded media
The high-resolution photographic film used for cinema projection is exposed at the rate of 24 frames per second but usually projected at 48, each frame being projected twice to help minimise flicker. One exception to this was the 1986 National Film Board of Canada short

film Momentum, which briefly experimented with both filming and projecting at 48 frame/s, in a process known as IMAX HD. Depending upon available bandwidth and the amount of detail and movement in the image, the optimum format for video transfer is either 720p24 or 1080p24. When shown on television in PAL-system countries, film must be projected at the rate of 25 frames per second by accelerating it by 4.1 percent. In NTSC-standard countries, the projection rate is 30 frames per second, using a technique called 3:2 pull-down. One film frame is held for three video fields (1/20 of a second), and the next is held for two video fields (1/30 of a second), and then the process is repeated, thus achieving the correct film projection rate with two film frames shown in 1/12 of a second. Older (pre-HDTV) recordings on video tape such as Betacam SP are often either in the form 480i60 or 576i50. These may be upconverted to a higher resolution format (720i), but removing the interlace to match the common 720p format may distort the picture or require filtering which actually reduces the resolution of the final output. Non-cinematic HDTV video recordings are recorded in either the 720p or the 1080i format. The format used is set by the broadcaster (if for television broadcast). In general, 720p is more accurate with fast action, because it progressively scans frames, instead of 1080i, which uses interlaced fields and thus might degrade the resolution of fast images. 720p is used more for Internet distribution of high-definition video, because computer monitors progressively scan; 720p video has lower storage and decoding requirements than either 1080i or 1080p. 720p is also the medium for high-definition broadcasts around the world, and 1080p is used for Blu-ray movies.
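The 3:2 pull-down cadence described above can be sketched directly: alternate film frames are held for 3 and 2 fields, so every two film frames (1/12 s at 24 fps) fill five video fields (1/12 s at 60 fields/s). The function name is illustrative:

```python
def pulldown_32(film_frames):
    """3:2 pull-down: film frames are alternately held for 3 and 2
    interlaced video fields."""
    fields = []
    for n, frame in enumerate(film_frames):
        fields += [frame] * (3 if n % 2 == 0 else 2)
    return fields

fields = pulldown_32(["A", "B", "C", "D"])
print(fields)       # ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']
print(len(fields))  # 10 fields for 4 frames, so 24 frames fill 60 fields
```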
HD in filmmaking
Film as a medium has inherent limitations, such as the difficulty of viewing footage while recording, and suffers other problems caused by poor film development/processing or poor monitoring systems. Given that there is increasing use of computer-generated or computer-altered imagery in movies, and that editing picture sequences is often done digitally, some directors have shot their movies using the HD format via high-end digital video cameras. While the quality of HD video is very high compared to SD video, and offers improved signal-to-noise ratios against comparable-sensitivity film, film remains able to resolve more image detail than current HD video formats. In addition, some films have a wider dynamic range (ability to resolve extremes of dark and light areas in a scene) than even the best HD

cameras. Thus the most persuasive arguments for the use of HD are currently cost savings on film stock and the ease of transfer to editing systems for special effects. Depending on the year and format in which a movie was filmed, the exposed image can vary greatly in size. Sizes range from as big as 24 mm × 36 mm for VistaVision/Technirama 8-perforation cameras (the same as 35 mm still photo film), down through 18 mm × 24 mm for silent films or Full Frame 4-perforation cameras, to as small as 9 mm × 21 mm in Academy Sound Aperture cameras modified for the Techniscope 2-perforation format. Movies are also produced using other film gauges, including 70 mm films (22 mm × 48 mm) or the rarely used 55 mm and CINERAMA. The four major film formats provide pixel resolutions (calculated from pixels per millimeter) roughly as follows:

Academy Sound (sound movies before 1955): 15 mm × 21 mm (1.375) = 2,160 × 2,970
Academy camera US Widescreen: 11 mm × 21 mm (1.85) = 1,605 × 2,970
Current Anamorphic Panavision ("Scope"): 17.5 mm × 21 mm (2.39) = 2,485 × 2,970
Super-35 for Anamorphic prints: 10 mm × 24 mm (2.39) = 1,420 × 3,390

In the process of making prints for exhibition, this negative is copied onto other film (negative → interpositive → internegative → print), causing the resolution to be reduced with each emulsion copying step and when the image passes through a lens (for example, on a projector). In many cases, the resolution can be reduced to 1/6 of the original negative's resolution (or worse). Note that resolution values for 70 mm film are higher than those listed above.

MODIFIED METHOD: Modified Decision Based Un-symmetric Trimmed Median Filter

In the proposed method, the noisy image is first read, and salt and pepper noise detection then takes place based on a decision rule. At the end of the detection stage the noisy and noise-free pixels are separated. Each noise-free pixel is left unchanged, while each noisy pixel is given to the Modified Decision Based Unsymmetric Trimmed Median Filter (MDBUTMF). The MDBUTMF produces as its output a partially noise-removed image, which is further processed by the Fuzzy Noise Reduction Method (FNRM). Finally the FNRM provides a restored image that is fully free from noise. The proposed method provides the final output image with a higher PSNR value.

The data flow of the proposed method is shown in Fig. 2. First the noisy image is given to a noisy-image reader, followed by salt and pepper noise detection. After this, based on the state of the window elements, the corrupted pixel is processed by either Type-I or Type-II. As a result, a partially noise-removed image is produced. It is further processed by the Fuzzy First Sub-Filter and the Fuzzy Second Sub-Filter. Finally a restored image without salt and pepper noise is obtained as the output.

Type-I): If the selected window contains all the elements as 0s and 255s, then replace the processing pixel by the mean value of the elements present in that window.
Type-II): If the selected window does not contain all elements as 0s and 255s, then eliminate the 0s and 255s and find the median value of the remaining elements. Replace the processing pixel with this median value.
The clear explanation of Type-I and Type-II with examples is given in Section III. The output images produced by the combination of the Modified Decision Based Unsymmetric Trimmed Median Filter (MDBUTMF) and the Fuzzy Noise Reduction Method (FNRM) have a higher Peak Signal-to-Noise Ratio (PSNR) than the existing methods.
A higher Peak Signal-to-Noise Ratio (PSNR) normally indicates that the reconstruction is of very high quality. The Peak Signal-to-Noise Ratio (PSNR) is defined as the ratio of the maximum possible power of a signal to the power of the corrupting noise. The expression for the PSNR, together with a clear explanation of the Modified Decision Based Unsymmetric Trimmed Median Filter (MDBUTMF), is given below.

Flow chart:-

The Modified Decision Based Unsymmetric Trimmed Median Filter (MDBUTMF) has several stages of operation, given below with examples.
Stage 1: The MDBUTM filter selects a 2-D window of size 3 × 3. The center pixel in the selected window is the processing pixel, denoted as Pij (see Fig. 3). The neighboring pixels of the processing pixel Pij lie in the directions NW, N, NE, W, E, SW, S, and SE, at positions (i-1,j-1), (i-1,j), (i-1,j+1), (i,j-1), (i,j+1), (i+1,j-1), (i+1,j) and (i+1,j+1) respectively. The directions are clearly marked in Fig. 3. The X-axis is considered for i and the Y-axis for j. Stage 1 is followed by Stage 2 and Stage 3; Stage 3 consists of two types, Type I and Type II.
Stage 2: If the processing pixel (Pij) lies strictly between 0 and 255 (0 < Pij < 255), then it is considered a noise-free pixel and is left unchanged. For example, if the processing pixel value is 70 (0 < 70 < 255), it is considered noise-free and left unchanged.
Stage 3: If Pij = 0 or Pij = 255, then the processing pixel is considered corrupted and is processed by one of the following types. The selection between the types is based on the state of the elements present in the selected window. The types are given below with examples.
Type-I): The processing pixel (Pij) in the selected window takes the value 0 or 255 (pepper or salt noise), and the neighborhood pixels also take the values 0 or 255 or both. Then the mean value of the selected window is found and the processing pixel is replaced by the mean value. This is illustrated by the example in Fig. 5: the selected window contains the processing pixel (Pij) with value 0 (pepper noise), and the neighborhood pixels also take the values 0 and 255.
In this case, if we took the median value of the selected window it would be either 0 or 255, which is again noisy. To avoid this problem the mean value of the selected window is found, and the processing pixel Pij is replaced by the mean value. Here the mean value of the selected window is 170, so the processing pixel is replaced by 170. The mean value is found by adding all the pixel values present in the selected window and dividing by the total number of elements in that window.
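The Type-I computation can be checked numerically. The exact window values from Fig. 5 are not reproduced in this excerpt, so the window below is a hypothetical one whose six 255s and three 0s reproduce the stated mean of 170:

```python
from statistics import mean

# Hypothetical 3x3 window holding only salt (255) and pepper (0) values.
window = [255, 0, 255,
          255, 255, 0,
          0, 255, 255]
assert all(p in (0, 255) for p in window)  # Type-I condition: median would be noisy
print(mean(window))  # (6*255 + 3*0) / 9 = 170
```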

Type-II): If the selected window contains some elements with values 0 or 255 and the remaining elements with values between 0 and 255, then we take the trimmed median value and replace the processing pixel with it. The noisy pixel values (0s and 255s) are first removed from the selected window, and the median is then found from the remaining pixel values. In the example, the selected window contains the processing pixel with value 255, i.e. Pij = 255. Some of the neighborhood pixels are noisy, with values 0 and 255, while the others are noise-free. To find the trimmed median value, we first form the 1-D array of the selected window elements: [68 94 0 120 255 0 97 255 83]. After eliminating the 0s and 255s, the remaining pixels are [68 94 120 97 83]. Sorting these gives [68 83 94 97 120], whose median is 94. Hence the processing pixel Pij is replaced by 94.
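The Type-II arithmetic can be checked directly; note that the median is taken over the sorted trimmed values:

```python
import statistics

window = [68, 94, 0, 120, 255, 0, 97, 255, 83]  # 1-D array of the 3x3 window
trimmed = [p for p in window if 0 < p < 255]    # drop salt (255) and pepper (0)
print(trimmed)                      # [68, 94, 120, 97, 83]
print(statistics.median(trimmed))   # 94, the middle of [68, 83, 94, 97, 120]
```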

Application: Photoshop application
Photoshop files have the default file extension .PSD, which stands for "Photoshop Document." A PSD file stores an image with support for most imaging options available in Photoshop. These include layers with masks, transparency, text, alpha channels and spot colors, clipping paths, and duotone settings. This is in contrast to many other file formats (e.g. .JPG or .GIF) that restrict content to provide streamlined, predictable functionality. A PSD file has a maximum height and width of 30,000 pixels and a file-size limit of 2 gigabytes. Photoshop files sometimes have the file extension .PSB, which stands for "Photoshop Big" (also known as "large document format"). A PSB file extends the PSD file format, increasing the maximum height and width to 300,000 pixels and the file-size limit to around 4 exabytes. The dimension limit was apparently chosen arbitrarily by Adobe, not based on computer arithmetic constraints (it is not close to a power of two, as is 30,000) but for ease of software testing. The PSD and PSB formats are documented.[11] Because of Photoshop's popularity, PSD files are widely used and supported to some extent by most competing software. The .PSD file format can be exported to and from Adobe's other apps such as Adobe Illustrator, Adobe Premiere Pro, and After Effects, to make professional-standard DVDs and provide non-linear editing and special effects services, such as backgrounds, textures, and so on, for television, film, and the web. Photoshop's primary strength is as a pixel-based image editor, unlike vector-based image editors. Photoshop also enables vector graphics editing through its Paths, Pen tools, Shape tools, Shape Layers, Type tools, Import command, and Smart Object functions. These tools and commands make it convenient to combine pixel-based and vector-based images in one Photoshop document, because it may not be necessary to use more than one program.
To create very complex vector graphics with numerous shapes and colors, it may be easier to use software that was created primarily for that purpose, such as Adobe Illustrator or CorelDRAW. Photoshop's non-destructive Smart Objects can also import complex vector shapes.

Satellite application:In the context of spaceflight, a satellite is an object which has been placed into orbit by human endeavor. Such objects are sometimes called artificial satellites to distinguish them from natural satellites such as the Moon. The world's first artificial satellite, the Sputnik 1, was launched by the Soviet Union in 1957. Since then, thousands of satellites have been launched into orbit around the Earth. Some satellites, notably space stations, have been launched in parts and assembled in orbit.

Artificial satellites originate from more than 50 countries and have used the satellite-launching capabilities of ten nations. A few hundred satellites are currently operational, whereas thousands of unused satellites and satellite fragments orbit the Earth as space debris. A few space probes have been placed into orbit around other bodies and become artificial satellites of the Moon, Mercury, Venus, Mars, Jupiter, Saturn, and the Sun. Satellites are used for a large number of purposes. Common types include military and civilian Earth observation satellites, communications satellites, navigation satellites, weather satellites, and research satellites. Space stations and human spacecraft in orbit are also satellites. Satellite orbits vary greatly, depending on the purpose of the satellite, and are classified in a number of ways. Well-known (overlapping) classes include low Earth orbit, polar orbit, and geostationary orbit. Satellites are usually semi-independent computer-controlled systems. Satellite subsystems attend to many tasks, such as power generation, thermal control, telemetry, attitude control and orbit control.

Medical line process:A prevention or preventive measure is a way to avoid an injury, sickness, or disease in the first place, and generally it will not help someone who is already ill (though there are exceptions). For instance, many babies in developed countries are given a polio vaccination soon after they are born, which prevents them from contracting polio. But the vaccination does not work on patients who already have polio. A treatment or cure is applied after a medical problem has already started. A treatment treats a problem, and may lead to its cure, but treatments often ameliorate a problem only for as long as the treatment is continued, especially in chronic diseases. For example, there is no cure for AIDS, but treatments are available to slow down the harm done by HIV and delay the fatality of the disease. Treatments don't always work. For example, chemotherapy is a treatment for some types of cancer. In some cases, chemotherapy may cause a cure, but not in all cases for all cancers. When nothing can be done to stop or improve a medical condition, beyond efforts to make the patient more comfortable, the condition is said to be untreatable. Some untreatable conditions naturally resolve on their own; others do not.Cures are a subset of treatments that reverse illnesses completely or end medical problems permanently. Many diseases that cannot be cured are still treatable.

CONCLUSIONS: The developed algorithms are tested using 512 × 512, 8-bits/pixel images: Lena (gray), Parrot (colour) and Barbara (colour). The performance of the proposed algorithm is tested for various levels of noise corruption and compared with standard filters, namely the standard median filter (SMF), adaptive median filter (AMF) and decision-based algorithm (DBA). Each time, the test image is corrupted by salt and pepper noise of different densities ranging from 10% to 90% in increments of 10% and applied to the various filters. In addition to the visual quality, the performance of the developed algorithm and the other standard algorithms is quantitatively measured by the following parameters: peak signal-to-noise ratio (PSNR), mean square error (MSE) and image enhancement factor (IEF). All the filters are implemented in MATLAB 7.5 on a PC equipped with a 2.4 GHz CPU and 1 GB of RAM for the evaluation of the computation time of all algorithms.

RESULT ANALYSIS:-

function varargout = guimain(varargin)
% GUIMAIN M-file for guimain.fig
%   GUIMAIN, by itself, creates a new GUIMAIN or raises the existing singleton.
%   H = GUIMAIN returns the handle to a new GUIMAIN or to the existing singleton.
%   GUIMAIN('CALLBACK',hObject,eventData,handles,...) calls the local
%   function named CALLBACK in GUIMAIN.M with the given input arguments.
%   See also: GUIDE, GUIDATA, GUIHANDLES
% Last Modified by GUIDE v2.5 27-May-2013 11:11:49

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @guimain_OpeningFcn, ...
                   'gui_OutputFcn',  @guimain_OutputFcn, ...
                   'gui_LayoutFcn',  [], ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT

% --- Executes just before guimain is made visible.
function guimain_OpeningFcn(hObject, eventdata, handles, varargin)
% Choose default command line output for guimain
handles.output = hObject;
% Fill all six display axes with a blank placeholder image
a = ones(256,256);
axes(handles.axes1); imshow(a);
axes(handles.axes2); imshow(a);
axes(handles.axes3); imshow(a);
axes(handles.axes4); imshow(a);
axes(handles.axes5); imshow(a);
axes(handles.axes6); imshow(a);
% Update handles structure
guidata(hObject, handles);

% --- Outputs from this function are returned to the command line.
function varargout = guimain_OutputFcn(hObject, eventdata, handles)
% Get default command line output from handles structure
varargout{1} = handles.output;

% --- Executes on button press in browse.
function browse_Callback(hObject, eventdata, handles)
% Browse for the host image
[file,path] = uigetfile('*.png;*.bmp;*.jpg','Pick an Image File');
if isequal(file,0) || isequal(path,0)
    warndlg('User Pressed Cancel');
    return;                        % nothing selected: keep the old state
end
k = imread(file);
a = imresize(k,[256 256]);
axes(handles.axes1); imshow(a);
handles.a = a;
handles.k = k;
guidata(hObject, handles);

% --- Executes on button press in MDBUT.
function MDBUT_Callback(hObject, eventdata, handles)
a = handles.a;
b = handles.b;
Padd_Im = padarray(b,[1 1],'both');
[r c p] = size(Padd_Im);
restored = mdbutmf(Padd_Im(:,:,1),r,c);
if p==3
    restored(:,:,2) = mdbutmf(Padd_Im(:,:,2),r,c);
    restored(:,:,3) = mdbutmf(Padd_Im(:,:,3),r,c);
end
axes(handles.axes6); imshow(restored,[]);
[PSNR MSE] = psnrmse(a,restored);
set(handles.psnridba,'string',PSNR);
set(handles.mseidba,'string',MSE);
handles.restoredidba = restored;
guidata(hObject, handles);

% --- Executes on selection change in addnoise.
function addnoise_Callback(hObject, eventdata, handles)
a = handles.a;
contents = get(hObject,'Value');
switch contents
    case 1
        b = imnoise(a,'salt & pepper',0.01);
        axes(handles.axes1); imshow(b,[]);
    case 2
        b = imnoise(a,'salt & pepper',0.03);
        axes(handles.axes1); imshow(b,[]);
    case 3
        b = imnoise(a,'salt & pepper',0.05);
        axes(handles.axes1); imshow(b,[]);
    case 4
        b = imnoise(a,'salt & pepper',0.07);
        axes(handles.axes1); imshow(b,[]);

    otherwise
        b = imnoise(a,'salt & pepper',0.09);
        axes(handles.axes1); imshow(b,[]);
end
handles.b = b;
guidata(hObject, handles);

% --- Executes during object creation, after setting all properties.
function addnoise_CreateFcn(hObject, eventdata, handles)
% Popupmenu controls usually have a white background on Windows.
if ispc && isequal(get(hObject,'BackgroundColor'), get(0,'defaultUicontrolBackgroundColor'))
    set(hObject,'BackgroundColor','white');
end

% --- Executes on button press in Standard_med.
function Standard_med_Callback(hObject, eventdata, handles)
a = handles.a;
b = handles.b;
[r c p] = size(b);
smf = medfilt2(b(:,:,1),[3 3]);
if p==3
    smf(:,:,2) = medfilt2(b(:,:,2),[3 3]);
    smf(:,:,3) = medfilt2(b(:,:,3),[3 3]);
end
axes(handles.axes2); imshow(smf,[]);
[PSNR MSE] = psnrmse(a,smf);
set(handles.psnrsmf,'string',PSNR);
set(handles.msesmf,'string',MSE);
handles.restoredsmf = smf;
guidata(hObject, handles);

% --- Executes on button press in DBA.
function DBA_Callback(hObject, eventdata, handles)
a = handles.a;
b = handles.b;
Padd_Im = padarray(b,[1 1],'both');
[r c p] = size(Padd_Im);
restored = dbafilt(Padd_Im(:,:,1),r,c);
if p==3
    restored(:,:,2) = dbafilt(Padd_Im(:,:,2),r,c);
    restored(:,:,3) = dbafilt(Padd_Im(:,:,3),r,c);
end
axes(handles.axes5); imshow(restored,[]);
[PSNR MSE] = psnrmse(a,restored);
set(handles.psnrdba,'string',PSNR);
set(handles.msedba,'string',MSE);
handles.restoreddba = restored;
guidata(hObject, handles);

% --- Executes on button press in adaptivemedian.
function adaptivemedian_Callback(hObject, eventdata, handles)
a = handles.a;
b = handles.b;
Padd_Im = padarray(b,[2 2],'both');
[r c p] = size(Padd_Im);
restored = amffilt(Padd_Im(:,:,1),r,c);
if p==3
    restored(:,:,2) = amffilt(Padd_Im(:,:,2),r,c);
    restored(:,:,3) = amffilt(Padd_Im(:,:,3),r,c);
end
axes(handles.axes3); imshow(uint8(restored));
[PSNR MSE] = psnrmse(a,restored);
set(handles.psnramf,'string',PSNR);
set(handles.mseamf,'string',MSE);
handles.restoredamf = restored;
guidata(hObject, handles);

% --- Executes on button press in tsamft.
function tsamft_Callback(hObject, eventdata, handles)
a = handles.a;
b = double(handles.b);
Padd_Im = padarray(b,[1 1],'both');
[r c p] = size(Padd_Im);
restored = tsamftfilt(Padd_Im(:,:,1),r,c);
if p==3
    restored(:,:,2) = tsamftfilt(Padd_Im(:,:,2),r,c);
    restored(:,:,3) = tsamftfilt(Padd_Im(:,:,3),r,c);
end
axes(handles.axes4); imshow(uint8(restored));
[PSNR MSE] = psnrmse(a,restored);
set(handles.psnrtsamft,'string',PSNR);
set(handles.msetsamft,'string',MSE);
handles.restoredtsamft = restored;
guidata(hObject, handles);

% --- Executes on button press in clear.
function clear_Callback(hObject, eventdata, handles)
% Reset all six axes to the blank placeholder and clear the metric fields
a = ones(256,256);
axes(handles.axes1); imshow(a);
axes(handles.axes2); imshow(a);
axes(handles.axes3); imshow(a);
axes(handles.axes4); imshow(a);
axes(handles.axes5); imshow(a);
axes(handles.axes6); imshow(a);
set(handles.psnrtsamft,'string',' ');
set(handles.msetsamft,'string',' ');
set(handles.psnramf,'string',' ');
set(handles.mseamf,'string',' ');
set(handles.psnrdba,'string',' ');
set(handles.msedba,'string',' ');
set(handles.psnrsmf,'string',' ');
set(handles.msesmf,'string',' ');
set(handles.psnridba,'string',' ');
set(handles.mseidba,'string',' ');

% --- Executes on button press in exit.
function exit_Callback(hObject, eventdata, handles)
exit;

% --- Executes on button press in fr.
function fr_Callback(hObject, eventdata, handles)
[filename, pathname] = uigetfile('*.avi', 'Pick a Video File');
if isequal(filename,0) || isequal(pathname,0)
    warndlg('No video file is selected');
else
    a = aviread(filename);
    figure(22)
    movie(a);
end
handles.filename = filename;
handles.inputimage = a;
guidata(hObject, handles);

% --- Executes on button press in pushbutton11.
function pushbutton11_Callback(hObject, eventdata, handles)

filename = handles.filename;
str2 = '.bmp';
file = aviinfo(filename);        % information about the video file
frm_cnt = file.NumFrames;        % number of frames in the video file
h = waitbar(0,'Please wait...');
for i = 1:frm_cnt
    frm(i) = aviread(filename,i);        % read frame i of the video
    frm_name = frame2im(frm(i));         % convert the frame to an image
    filename1 = strcat(num2str(i),str2);
    imwrite(frm_name,filename1);         % write the image file
    waitbar(i/frm_cnt,h)
end
close(h)
helpdlg('Frame separation is completed');

% --- Executes on button press in pushbutton12.
function pushbutton12_Callback(hObject, eventdata, handles)
frm_cnt = 30;
number_of_frames = frm_cnt;
filetype = '.bmp';
display_time_of_frame = 1;
cd 'output';
mov = avifile('new2.avi');
count = 0;
for i = 1:number_of_frames
    name1 = strcat(num2str(i),filetype);
    a = imread(name1);
    while count<display_time_of_frame
        count = count+1;
        axes(handles.axes1); imshow(a);
        F = getframe(gca);
        mov = addframe(mov,F);
    end
    count = 0;
end
mov = close(mov);
cd ..

% --- Executes on button press in pushbutton13.
function pushbutton13_Callback(hObject, eventdata, handles)
[filename, pathname] = uigetfile('*.avi', 'Pick a Video File');
if isequal(filename,0) || isequal(pathname,0)
    warndlg('No video file is selected');
else
    a = aviread(filename);
    figure(22)
    movie(a);
end
handles.filename = filename;
handles.inputimage = a;
guidata(hObject, handles);

% --- Executes on button press in pushbutton14.
function pushbutton14_Callback(hObject, eventdata, handles)
filename = handles.filename;
str2 = '.bmp';
file = aviinfo(filename);        % information about the video file
frm_cnt = file.NumFrames;        % number of frames in the video file
h = waitbar(0,'Please wait...');
for i = 1:frm_cnt
    frm(i) = aviread(filename,i);        % read frame i of the video
    frm_name = frame2im(frm(i));         % convert the frame to an image
    filename1 = strcat(num2str(i),str2);
    imwrite(frm_name,filename1);         % write the image file
    waitbar(i/frm_cnt,h)
end
close(h)
helpdlg('Frame separation is completed');

% --- Executes on button press in pushbutton15.
function pushbutton15_Callback(hObject, eventdata, handles)
frm_cnt = 30;
number_of_frames = frm_cnt;
filetype = '.bmp';
display_time_of_frame = 1;
cd 'output';
mov = avifile('project.avi');
count = 0;
for i = 1:number_of_frames
    name1 = strcat(num2str(i),filetype);
    a = imread(name1);
    while count<display_time_of_frame
        count = count+1;
        axes(handles.axes1); imshow(a);
        F = getframe(gca);
        mov = addframe(mov,F);
    end
    count = 0;
end
mov = close(mov);
cd ..

IMAGE SELECTED:-

BROWSE VIDEO:-

FRAME SEPARATION:-

FRAME SELECTION:-

GENERATING THE NOISE LEVEL:-

FIRST LEVEL:-

SECOND LEVEL:-

THIRD LEVEL:-

FOURTH LEVEL:-

FIFTH LEVEL:-

STANDARD MEDIAN FILTER:-

ADAPTIVE MEDIAN FILTER:-

TOLERANCE SWITCHED ADAPTIVE MEDIAN FILTER:-

DECISION BASED ALGORITHM:-

MODIFIED DECISION BASED UNSYMMETRIC TRIMMED MEDIAN FILTER:-

AMF PROGRAM:-

function restored = amffilt(Padd_Im,r,c)
for i = 3:r-2
    for j = 3:c-2
        Sxy  = Padd_Im(i-1:i+1,j-1:j+1);
        Zxy  = Padd_Im(i,j);
        Zmin = min(Sxy(:));
        Zmax = max(Sxy(:));
        Zmed = median(Sxy(:));
        [sr sc] = size(Sxy);
        if (Zmin<Zmed) && (Zmed<Zmax)
            % Level B: the window median is not an impulse
            if (Zmin<Zxy) && (Zxy<Zmax)
                restored(i-2,j-2) = Zxy;
            else
                restored(i-2,j-2) = Zmed;
            end
        else
            % Level A: enlarge the window and test again
            m = 2;
            if ((i-m) && (j-m))
                Sxy = Padd_Im(i-m:i+m,j-m:j+m);
                [sr sc] = size(Sxy);
            else
                sr = 6;
                restored(i-2,j-2) = Zxy;
            end
            while sr<=5
                Zxy  = Padd_Im(i,j);
                Zmin = min(Sxy(:));
                Zmax = max(Sxy(:));
                Zmed = median(Sxy(:));
                if (Zmin<Zmed) && (Zmed<Zmax)
                    if (Zmin<Zxy) && (Zxy<Zmax)
                        restored(i-2,j-2) = Zxy;
                    else
                        restored(i-2,j-2) = Zmed;
                    end
                    sr = 6;
                else
                    m = m+1;
                    if (((i-m) && (j-m)) && (((i+m)<=r) && ((j+m)<=c)))
                        Sxy = Padd_Im(i-m:i+m,j-m:j+m);
                        [sr sc] = size(Sxy);
                    else
                        sr = 6;
                    end
                    restored(i-2,j-2) = Zxy;
                end
            end
        end
    end
end

TSAMF PROGRAM:-

function restored = tsamftfilt(Padd_Im,r,c)
for i = 2:r-1
    for j = 2:c-1
        in = Padd_Im(i-1:i+1,j-1:j+1);
        Noise_ind = find(~(in==0 | in==255));      % noise-free positions
        Len = length(Noise_ind);
        if Len>=3
            arith_mean = round(sum(in(Noise_ind))/Len);
        else
            arith_mean = round(sum(in(:))/9);
        end
        Diff = abs(arith_mean - in(5));
        if Diff>=35
            restored(i-1,j-1) = arith_mean;        % replace corrupted centre
        else
            restored(i-1,j-1) = in(5);             % keep the centre pixel
        end
    end
end
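For readers without MATLAB, the per-pixel decision of tsamftfilt can be restated compactly in Python/NumPy. The function name tsamft_pixel is introduced here for illustration only; the tolerance of 35 and the "at least 3 noise-free neighbours" rule follow the MATLAB code above.

```python
import numpy as np

def tsamft_pixel(window, tol=35):
    """Tolerance-switched decision for one 3x3 window (9 values,
    row-major, centre at index 4). Average only the non-extreme
    (neither 0 nor 255) values when at least 3 exist, else all 9;
    replace the centre only when it deviates by at least `tol`."""
    w = np.asarray(window, dtype=float).ravel()
    good = w[(w != 0) & (w != 255)]
    mean = round(good.mean()) if good.size >= 3 else round(w.mean())
    centre = w[4]
    return mean if abs(mean - centre) >= tol else centre
```

A corrupted centre surrounded by uniform neighbours is replaced by their mean; a centre close to the local mean is left untouched.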

DBA PROGRAM:-

function restored = dbafilt(Padd_Im,r,c)
for i = 2:r-1
    for j = 2:c-1
        in = Padd_Im(i-1:i+1,j-1:j+1);
        % Sort the rows of the window
        Sort_Res1(1,:) = sort(in(1,:));
        Sort_Res1(2,:) = sort(in(2,:));
        Sort_Res1(3,:) = sort(in(3,:));
        % Sort each column
        Sort_Res = sort(Sort_Res1);
        % Sort the right diagonal
        Sort_Diag = sort([Sort_Res(3) Sort_Res(5) Sort_Res(7)]);
        Sort_Res(7) = Sort_Diag(1);
        Sort_Res(5) = Sort_Diag(2);
        Sort_Res(3) = Sort_Diag(3);
        pmin = Sort_Res(1);
        pmax = Sort_Res(9);
        pmed = Sort_Res(5);
        if ((Padd_Im(i,j)>pmin) && (Padd_Im(i,j)<pmax)) && ((pmin>0) && (pmax<255))
            % Uncorrupted pixel: keep it
            restored(i-1,j-1) = Padd_Im(i,j);
        elseif ((pmed>pmin) && (pmed<pmax)) && ((pmed>0) && (pmed<255))
            restored(i-1,j-1) = pmed;
        else
            % Fall back to the already-processed left neighbour
            restored(i-1,j-1) = Padd_Im(i,j-1);
        end
    end
end
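The decision logic of the DBA program can be illustrated in Python. This sketch (the name dba_pixel is ours) uses an ordinary full sort of the window instead of the row/column/diagonal partial sort above, so its "median" is the exact order statistic rather than the shear-sort approximation; the three-way decision itself is the same.

```python
def dba_pixel(window, left_neighbour):
    """Decision-based algorithm for one 3x3 window (list of 9 values,
    row-major, centre at index 4): keep an uncorrupted centre, else
    use the window median, else fall back to the already-processed
    left neighbour."""
    s = sorted(window)
    pmin, pmed, pmax = s[0], s[4], s[8]
    centre = window[4]
    if pmin < centre < pmax and pmin > 0 and pmax < 255:
        return centre            # uncorrupted pixel
    if pmin < pmed < pmax and 0 < pmed < 255:
        return pmed              # median is noise-free
    return left_neighbour        # whole window corrupted
```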

MDBUTMF PROGRAM:-

function restored = mdbutmf(Padd_Im,r,c)
for i = 2:r-1
    for j = 2:c-1
        in = Padd_Im(i-1:i+1,j-1:j+1);
        Sort_Res = sort(in(:));
        pmin = 0;
        pmax = 255;
        pmed = Sort_Res(5);
        if (Padd_Im(i,j)>pmin) && (Padd_Im(i,j)<pmax)
            % Uncorrupted pixel: keep it
            restored(i-1,j-1) = Padd_Im(i,j);
        elseif (pmed>pmin) && (pmed<pmax)
            restored(i-1,j-1) = pmed;
        else
            % Whole window is noisy: mean of the processed neighbours
            restored(i-1,j-1) = round(mean([Padd_Im(i-1,j-1) Padd_Im(i-1,j) ...
                Padd_Im(i-1,j+1) Padd_Im(i,j-1)]));
        end
    end
end
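The same three-way decision of the MDBUTMF program can be restated in plain Python. The function name mdbutmf_pixel and the prior argument (the four already-processed neighbours used in the all-corrupted case) are introduced here for illustration only.

```python
def mdbutmf_pixel(window, prior):
    """Modified decision-based unsymmetric trimmed median decision for
    one 3x3 window (list of 9 values, row-major, centre at index 4).
    `prior` holds the four already-processed neighbouring pixels."""
    centre = window[4]
    if 0 < centre < 255:
        return centre                       # uncorrupted pixel: keep it
    pmed = sorted(window)[4]
    if 0 < pmed < 255:
        return pmed                         # window median is noise-free
    return round(sum(prior) / len(prior))   # all-extreme window: mean of
                                            # the processed neighbours
```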

MSE and PSNR CALCULATION:-

function [PSNR MSE] = psnrmse(Image1,Image2)
x = double(Image1);
y = double(Image2);
[r c p] = size(x);
mse = (x-y).^2;
% Mean square error over all pixels and colour channels
MSE = sum(mse(:)) / (r*c*p);
% Peak signal-to-noise ratio for 8-bit images (peak value 255)
PSNR = 10 * log10(255^2 / MSE);
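The two quality metrics can also be computed in Python/NumPy. This sketch (the name psnr_mse is ours) mirrors psnrmse.m above, using the conventional 8-bit peak value of 255 and guarding against identical images.

```python
import numpy as np

def psnr_mse(img1, img2, peak=255.0):
    """Return (PSNR in dB, MSE) between two images of any matching
    shape, averaging the squared error over all pixels and channels."""
    x = np.asarray(img1, dtype=float)
    y = np.asarray(img2, dtype=float)
    mse = float(np.mean((x - y) ** 2))
    # Identical images have zero error and thus infinite PSNR
    psnr = 10 * np.log10(peak ** 2 / mse) if mse > 0 else float("inf")
    return psnr, mse
```

Higher PSNR (lower MSE) against the original, uncorrupted image indicates better restoration.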
