
Explain the significance of digital image processing in Gamma-ray Imaging and Imaging in the Visible and Infrared Bands.

Ans: Significance of digital image processing in Gamma-ray Imaging: Major uses of imaging based on gamma rays include nuclear medicine and astronomical observations. In nuclear medicine, the approach is to inject a patient with a radioactive isotope that emits gamma rays as it decays. Images are produced from the emissions collected by gamma-ray detectors. Images of this sort are used to locate sites of bone pathology, such as infections or tumors. Another major modality of nuclear imaging is positron emission tomography (PET). The principle is the same as with X-ray tomography; however, instead of using an external source of X-ray energy, the patient is given a radioactive isotope that emits positrons as it decays. When a positron meets an electron, both are annihilated and two gamma rays are given off. These are detected, and a tomographic image is created using the basic principles of tomography. The image shown in Fig. 1.5(b) is one sample of a sequence that constitutes a 3-D rendition of the patient. This image shows a tumor in the brain and one in the lung, easily visible as small white masses.

Significance of digital image processing in Imaging in the Visible and Infrared Bands: Considering that the visual band of the electromagnetic spectrum is the most familiar in all our activities, it is not surprising that imaging in this band outweighs by far all the others in terms of scope of application. The infrared band is often used in conjunction with visual imaging, so the visible and infrared bands are grouped together here for the purpose of illustration. Fig. 1.8 shows several examples of images obtained with a light microscope, ranging from pharmaceuticals and microinspection to materials characterization. Even in microscopy alone, the application areas are too numerous to detail here. It is not difficult to conceptualize the types of processes one might apply to these images, ranging from enhancement to measurements.

Explain the properties and uses of the electromagnetic spectrum.

Ans: The electromagnetic (EM) spectrum is the range of all types of EM radiation. Radiation is energy that travels and spreads out as it goes; the visible light that comes from a lamp in your house and the radio waves that come from a radio station are two types of electromagnetic radiation. The other types of EM radiation that make up the electromagnetic spectrum are microwaves, infrared light, ultraviolet light, X-rays and gamma rays.

White light can be split up using a prism to form a spectrum. A prism is a block of glass with a triangular cross-section. The light waves are refracted as they enter and leave the prism. The shorter the wavelength of the light, the more it is refracted. As a result, red light is refracted the least and violet light is refracted the most, causing the coloured light to spread out to form a spectrum.

Visible light is just one type of electromagnetic radiation. There are various types of electromagnetic radiation, some with longer wavelengths than visible light and some with shorter wavelengths than visible light.

Types of electromagnetic radiation and their uses:

Radio waves: broadcasting, communications, satellite transmissions
Microwaves: cooking, communications, satellite transmissions
Infrared: cooking, thermal imaging, short-range communications, optical fibres, television remote controls, security systems
Visible light: vision, photography, illumination
Ultraviolet: security marking, fluorescent lamps, detecting forged bank notes, disinfecting water
X-rays: observing the internal structure of objects, airport security scanners, medical X-rays
Gamma rays: sterilising food and medical equipment, detection of cancer and its treatment
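As a quick numerical illustration of how these bands relate to one another, the short Python sketch below (an added example, not part of the original answer) classifies a wavelength into its approximate EM band and computes the corresponding photon energy E = hc/λ; the band boundaries used are common approximate values rather than exact standards.

```python
# Illustrative sketch: classify a wavelength into an approximate EM band
# and compute its photon energy E = h*c/lambda. Band boundaries below are
# common approximate values, not exact standards.

H = 6.626e-34   # Planck's constant (J*s)
C = 2.998e8     # speed of light (m/s)

# (upper wavelength bound in metres, band name), ordered short to long
BANDS = [
    (1e-11, "gamma rays"),
    (1e-8,  "X-rays"),
    (4e-7,  "ultraviolet"),
    (7e-7,  "visible light"),
    (1e-3,  "infrared"),
    (1e-1,  "microwaves"),
    (float("inf"), "radio waves"),
]

def classify(wavelength_m: float) -> str:
    """Return the approximate EM band for a wavelength given in metres."""
    for upper, name in BANDS:
        if wavelength_m < upper:
            return name
    return "radio waves"

def photon_energy_j(wavelength_m: float) -> float:
    """Photon energy in joules for a given wavelength."""
    return H * C / wavelength_m

if __name__ == "__main__":
    for wl in (550e-9, 1e-10, 0.03):   # green light, an X-ray, a microwave
        print(f"{wl:.2e} m -> {classify(wl)}, E = {photon_energy_j(wl):.2e} J")
```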

Differentiate between Monochromatic photography and Color photography.

Ans: Monochromatic photography: The most common material for photographic image recording is silver halide emulsion, depicted in Fig. 5.3. In this material, silver halide grains are suspended in a transparent layer of gelatin that is deposited on a glass, acetate or paper backing. If the backing is transparent, a transparency can be produced, and if the backing is a white paper, a reflection print can be obtained. When light strikes a grain, an electrochemical conversion process occurs, and part of the grain is converted to metallic silver. A development center is then said to exist in the grain. In the development process, a chemical developing agent causes grains with partial silver content to be converted entirely to metallic silver. Next, the film is fixed by chemically removing unexposed grains.

The photographic process described above is called a nonreversal process. It produces a negative image in the sense that the silver density is inversely proportional to the exposing light. A positive reflection print of an image can be obtained in a two-stage process with nonreversal materials. First, a negative transparency is produced, and then the negative transparency is illuminated to expose negative reflection print paper. The resulting silver density on the developed paper is then proportional to the light intensity that exposed the negative transparency.

A positive transparency of an image can be obtained with a reversal type of film. This film is exposed and undergoes a first development similar to that of a nonreversal film. At this stage in the photographic process, all grains that have been exposed to light are converted completely to metallic silver. In the next step, the metallic silver grains are chemically removed. The film is then uniformly exposed to light, or alternatively, a chemical process is performed to expose the remaining silver halide grains. Then the exposed grains are developed and fixed to produce a positive transparency whose density is proportional to the original light exposure.

Color photography: Modern color photography systems utilize an integral tripack film, as illustrated in Fig. 5.4, to produce positive or negative transparencies. In a cross section of this film, the first layer is a silver halide emulsion sensitive to blue light. A yellow filter following the blue emulsion prevents blue light from passing through to the green and red silver emulsions that follow in consecutive layers and are naturally sensitive to blue light. A transparent base supports the emulsion layers. Upon development, the blue emulsion layer is converted into a yellow dye transparency whose dye concentration is proportional to the blue exposure for a negative transparency and inversely proportional for a positive transparency. Similarly, the green and red emulsion layers become magenta and cyan dye layers, respectively. Color prints can be obtained by a variety of processes. The most common technique is to produce a positive print from a color negative transparency onto nonreversal color paper.

In the establishment of a mathematical model of the color photographic process, each emulsion layer can be considered to react to light as does an emulsion layer of a monochrome photographic material. To a first approximation, this assumption is correct; however, there are often significant interactions between the emulsion and dye layers. Each emulsion layer possesses a characteristic sensitivity, as shown by the typical curves of Fig. 5.5. The integrated exposures of the layers are given by expressions of the form sketched after this paragraph, where dR, dG, dB are proportionality constants whose values are adjusted so that the exposures are equal for a reference white illumination and so that the film is not saturated. In the chemical development process of the film, a positive transparency is produced with three absorptive dye layers of cyan, magenta and yellow dyes.
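The integrated exposure expressions themselves appear to have been lost from the text. A plausible reconstruction, following the standard tripack exposure model, is given below; here C(λ) denotes the spectral energy distribution of the exposing light and L_R(λ), L_G(λ), L_B(λ) the spectral sensitivities of the red-, green- and blue-sensitive layers (these symbols are assumed, since the original notation is not available).

```latex
X_R = d_R \int C(\lambda)\, L_R(\lambda)\, d\lambda , \qquad
X_G = d_G \int C(\lambda)\, L_G(\lambda)\, d\lambda , \qquad
X_B = d_B \int C(\lambda)\, L_B(\lambda)\, d\lambda
```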

Define and explain the Dilation and Erosion concepts.

Ans: Dilation

With dilation, an object grows uniformly in spatial extent. Generalized dilation is expressed symbolically as

G(j, k) = F(j, k) ⊕ H(j, k)

where F(j, k), for 1 ≤ j, k ≤ N, is a binary-valued image and H(j, k), for 1 ≤ j, k ≤ L, where L is an odd integer, is a binary-valued array called a structuring element. For notational simplicity, F(j, k) and H(j, k) are assumed to be square arrays. Generalized dilation can be defined mathematically and implemented in several ways. The Minkowski addition definition is

G(j, k) = ⋃ F_{r,c}(j, k)

where the union is taken over all (r, c) for which H(r, c) = 1, and F_{r,c} denotes F translated by r rows and c columns.

It states that G(j,k) is formed by the union of all translates of F(j,k) with respect to itself, in which the translation distance is the row and column index of the pixels of H(j,k) that are logical 1s. Fig. 6.3 illustrates the concept.
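The Python sketch below is an added illustration of dilation written directly as this union of translates. The zero-padded shift helper, the function names, and the convention of measuring offsets from the centre of the odd-sized structuring element are assumptions made here for the sketch; in practice a library routine such as scipy.ndimage.binary_dilation would normally be used.

```python
import numpy as np

def shift(F, dr, dc):
    """Translate binary image F by dr rows and dc columns, zero-filling
    the pixels that are vacated (no wrap-around at the borders)."""
    G = np.zeros_like(F)
    rows, cols = F.shape
    r0, r1 = max(dr, 0), min(rows + dr, rows)
    c0, c1 = max(dc, 0), min(cols + dc, cols)
    G[r0:r1, c0:c1] = F[r0 - dr:r1 - dr, c0 - dc:c1 - dc]
    return G

def binary_dilate(F, H):
    """Minkowski-addition dilation: the union (logical OR) of translates
    of F taken at every offset where the structuring element H is 1.
    Offsets are measured from the centre of the odd-sized H (an assumed
    convention) so that the result stays aligned with the input."""
    F = F.astype(bool)
    c = H.shape[0] // 2                      # centre index of H
    G = np.zeros_like(F)
    for r in range(H.shape[0]):
        for s in range(H.shape[1]):
            if H[r, s]:
                G |= shift(F, r - c, s - c)
    return G
```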

Erosion

With erosion, an object shrinks uniformly in spatial extent. Generalized erosion is expressed symbolically as

G(j, k) = F(j, k) ⊖ H(j, k)

where H(j, k) is an odd-size L × L structuring element. Generalized erosion is defined to be

G(j, k) = ⋂ F_{r,c}(j, k)

where the intersection is taken over all (r, c) for which H(r, c) = 1, and F_{r,c} again denotes F translated by r rows and c columns.

The meaning of this relation is that erosion of F(j,k) by H(j,k) is the intersection of all translates of F(j,k) in which the translation distance is the row and column index of pixels of H(j,k) that are in the logical one state. Fig. 6.4 illustrates this. Fig. 6.5 illustrates generalized dilation and erosion.
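Continuing the sketch above (same assumed shift helper and centred-offset convention), erosion can be written as the intersection of the corresponding translates. The usage example at the end, with a square blob and a 3x3 box structuring element, is likewise an added illustration.

```python
import numpy as np  # continues the dilation sketch above; shift() and
                    # binary_dilate() are assumed to be defined there

def binary_erode(F, H):
    """Erosion as the intersection (logical AND) of translates of F taken
    at every offset where the structuring element H is 1, using the same
    zero-padded shift() helper and centred-offset convention as above.
    For the usual symmetric structuring elements this coincides with the
    Minkowski-subtraction definition."""
    F = F.astype(bool)
    c = H.shape[0] // 2
    G = np.ones_like(F)
    for r in range(H.shape[0]):
        for s in range(H.shape[1]):
            if H[r, s]:
                G &= shift(F, r - c, s - c)
    return G

# Small usage example: a 3x3 square blob eroded and dilated by a 3x3 box.
if __name__ == "__main__":
    F = np.zeros((7, 7), dtype=bool)
    F[2:5, 2:5] = True
    H = np.ones((3, 3), dtype=int)
    print(binary_erode(F, H).astype(int))   # only the blob's centre survives
    print(binary_dilate(F, H).astype(int))  # the blob grows by one pixel all round
```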

What is meant by Image Feature Evaluation? Which two quantitative approaches are used for the evaluation of image features?

Ans:

An image feature is a distinguishing primitive characteristic or attribute of an image. Some features are natural, while others are artificial. Natural features include the luminance of a region of pixels and gray scale textural regions. Image amplitude histograms and spatial frequency spectra are examples of artificial features.

There are two quantitative approaches to the evaluation of image features: prototype performance and figure of merit. In the prototype performance approach for image classification, a prototype image with regions (segments) that have been independently categorized is classified by a classification procedure using the various image features to be evaluated. The classification error is then measured for each feature set. The best set of features is, of course, the one that results in the least classification error. The prototype performance approach for image segmentation is similar in nature. A prototype image with independently identified regions is segmented by a segmentation procedure using a test set of features. Then the detected segments are compared to the known segments, and the segmentation error is evaluated. The problems associated with the prototype performance methods of feature evaluation are the integrity of the prototype data and the fact that the performance indication depends not only on the quality of the features but also on the classification or segmentation ability of the classifier or segmenter.

The figure-of-merit approach to feature evaluation involves the establishment of some functional distance measurement between sets of image features such that a large distance implies a low classification error, and vice versa. Faugeras and Pratt have utilized the Bhattacharyya distance figure-of-merit for texture feature evaluation, and the method should be extensible to other features as well.
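As an indication of what such a figure-of-merit distance looks like, the block below (an added illustration, not part of the original answer) gives the standard closed form of the Bhattacharyya distance when the two feature classes are modelled as Gaussian distributions; the symbols μ1, μ2 and Σ1, Σ2 for the class mean vectors and covariance matrices are assumed notation.

```latex
B(1,2) = \frac{1}{8}\,(\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)^{T}
         \left[\frac{\boldsymbol{\Sigma}_1 + \boldsymbol{\Sigma}_2}{2}\right]^{-1}
         (\boldsymbol{\mu}_1 - \boldsymbol{\mu}_2)
       + \frac{1}{2}\,\ln\!\left(
         \frac{\bigl|\tfrac{1}{2}(\boldsymbol{\Sigma}_1 + \boldsymbol{\Sigma}_2)\bigr|}
              {\sqrt{\bigl|\boldsymbol{\Sigma}_1\bigr|\,\bigl|\boldsymbol{\Sigma}_2\bigr|}}
         \right)
```

A larger B(1,2) indicates better-separated feature distributions, which, per the figure-of-merit argument above, corresponds to a lower expected classification error.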

Explain Region Splitting and Merging with an example.

Ans: Let R represent the entire image and select a predicate P. One approach for segmenting R is to subdivide it successively into smaller and smaller quadrant regions so that, for any region Ri, P(Ri) = TRUE. We start with the entire region. If P(R) = FALSE, then the image is divided into quadrants. If P is FALSE for any quadrant, we subdivide that quadrant into sub-quadrants, and so on. This particular splitting technique has a convenient representation in the form of a so-called quadtree (that is, a tree in which each node has exactly four descendants), as shown in Fig. 10.4. The root of the tree corresponds to the entire image, and each node corresponds to a subdivision. In this case, only R4 was subdivided further.

If only splitting were used, the final partition would likely contain adjacent regions with identical properties. This drawback may be remedied by allowing merging as well as splitting. Satisfying the constraints of Section 10.3.1 requires merging only adjacent regions whose combined pixels satisfy the predicate P. That is, two adjacent regions Rj and Rk are merged only if P(Rj ∪ Rk) = TRUE.

In summary, the procedure is:

1. Split into four disjoint quadrants any region Ri for which P(Ri) = FALSE.
2. Merge any adjacent regions Rj and Rk for which P(Rj ∪ Rk) = TRUE.
3. Stop when no further merging or splitting is possible.
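A compact Python sketch of this split-and-merge idea follows (an added illustration, not taken from the text). It assumes a grayscale image held in a NumPy array, uses an intensity-standard-deviation test as a hypothetical predicate P, and implements only the quadtree splitting pass plus a single greedy merging pass over the resulting leaves, so it is a simplification of the full procedure.

```python
import numpy as np

def predicate(region, sigma_max=10.0):
    """Hypothetical predicate P: TRUE if the region's intensity standard
    deviation is small enough to call the region homogeneous."""
    return region.std() <= sigma_max

def split(img, r0, c0, h, w, min_size=2):
    """Quadtree splitting: return (r0, c0, h, w) leaf regions such that P
    is TRUE for each leaf (or the leaf has reached the minimum size)."""
    region = img[r0:r0 + h, c0:c0 + w]
    if predicate(region) or h <= min_size or w <= min_size:
        return [(r0, c0, h, w)]
    h2, w2 = h // 2, w // 2
    leaves = []
    for dr, dc, hh, ww in [(0, 0, h2, w2), (0, w2, h2, w - w2),
                           (h2, 0, h - h2, w2), (h2, w2, h - h2, w - w2)]:
        leaves += split(img, r0 + dr, c0 + dc, hh, ww, min_size)
    return leaves

def adjacent(a, b):
    """TRUE if two axis-aligned rectangular regions share an edge."""
    ar0, ac0, ah, aw = a
    br0, bc0, bh, bw = b
    vert = (ar0 + ah == br0 or br0 + bh == ar0) and not (ac0 + aw <= bc0 or bc0 + bw <= ac0)
    horz = (ac0 + aw == bc0 or bc0 + bw == ac0) and not (ar0 + ah <= br0 or br0 + bh <= ar0)
    return vert or horz

def split_and_merge(img):
    """One splitting pass, then one greedy merging pass: adjacent leaves
    whose combined pixels still satisfy P receive the same label."""
    leaves = split(img, 0, 0, img.shape[0], img.shape[1])
    labels = list(range(len(leaves)))
    for i in range(len(leaves)):
        for j in range(i + 1, len(leaves)):
            if labels[j] == j and adjacent(leaves[i], leaves[j]):
                ri = img[leaves[i][0]:leaves[i][0] + leaves[i][2],
                         leaves[i][1]:leaves[i][1] + leaves[i][3]].ravel()
                rj = img[leaves[j][0]:leaves[j][0] + leaves[j][2],
                         leaves[j][1]:leaves[j][1] + leaves[j][3]].ravel()
                if predicate(np.concatenate([ri, rj])):   # P(Ri U Rj) = TRUE
                    labels[j] = labels[i]
    return leaves, labels

if __name__ == "__main__":
    img = np.zeros((64, 64)); img[16:48, 16:48] = 200.0   # bright square on dark background
    leaves, labels = split_and_merge(img)
    print(len(leaves), "leaf regions,", len(set(labels)), "merged segments")
```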
