
JOURNAL OF TELECOMMUNICATIONS, VOLUME 20, ISSUE 2, JUNE 2013

Video Compression Using Multiwavelet Critically Sampling Transformation


Laith Ali Abdul-Rahaim and Rau'm Saadon Mohammad
Abstract - Video compression is the process of reducing the size of a video file without degrading the video quality. This is achieved by reducing the redundancy between consecutive frames and within a single frame. Motion estimation reduces the temporal redundancy between frames by dividing each frame into blocks and searching for the best match according to a cost function. The proposed compression system starts from the conventional red, green and blue color space representation and applies a two-dimensional discrete multiwavelet critically-sampled transform (DMWTCS) to the error residual signal before estimating the motion between frames. A threshold value is assigned to the first-level detail subband coefficients; a second, smaller threshold value is assigned to the second-level detail subband coefficients. The resulting coefficients are quantized using a non-uniform quantizer and encoded using the EZW encoding algorithm. The system is tested with two kinds of video (simple and complex), and its performance is evaluated by three main parameters: compression ratio (CR), peak signal-to-noise ratio (PSNR) and processing time (PT). All graphics and codes are executed using MATLAB 2008a.

Index Terms: Video Compression, Peak signal to noise ratio (PSNR), CR, DMWTCS


1. INTRODUCTION
A video is a sequence of frames, so when dealing with any video we are in fact dealing with frames. Nowadays, video quality is a very important factor in the entertainment world; a movie can be appealing simply because of its quality and its graphics [1, 2]. To send and store high-quality videos, we have to compress them, rejecting part of the unneeded data to reduce the video size [1, 3, 4]. Many compression techniques have been created to choose which part of the data to reject. Among these techniques are pel-recursive techniques, which calculate the displacement vector for each pixel of the frame in a recursive manner by moving pixel by pixel in both spatial and temporal order [5, 6, 8]. There are also gradient techniques, based on the invariance of the image luminance during motion, and frequency-domain techniques that use the Fourier transform and the fast Fourier transform with filters [5, 6, 7]. The most commonly used techniques in video compression are motion vector (MV) estimation techniques. One MV technique is full search (FS) motion estimation, in which each block in the current frame is compared with every block in the reference frame in order to find the best matching block. The FS technique is, however, hard to implement, as it requires a large amount of hardware [10, 11]. Another type of MV estimation is the binary motion estimation (BME) technique, which reduces the number of candidate blocks compared to the FS algorithm.

The binary motion technique assigns each block to a class, in which each pixel is given a value of "0" or "1" depending on the pixel value. When a reference block has the same class as the current block, the reference block is considered a candidate block; otherwise it is rejected [7, 8]. The idea of the proposed algorithm is taken from the BME technique, but each pixel is given a value of "0", "1", "2", or "3" depending on the pixel value after the frequency transformation. Then, after summing the 64 pixels of the block, the block is classified into one of 8 classes. None of the previous algorithms has so far been implemented on the DMWTCS, but the proposed algorithm is. A matching pursuits technique was applied to code the motion prediction error signal in [1]: the prediction error frame is transformed using the DCT, and the matching pursuits coder divides each motion residual error into blocks and measures the energy of each block; the center of the block with the largest energy value is adopted as an initial estimate for the inner-product search. The algorithm proposed in [2] converts motion-compensated compressed video into a sequence of DCT-domain blocks corresponding to the spatial-domain blocks of the current frame alone, without prediction based on other frames, i.e. it removes the inter-frame element of compression/decompression; it receives as input the DCT blocks of the motion-compensated compressed video and provides the DCT blocks of the corresponding spatial-domain blocks of the current frame without reference to past or future frames. Dual-wavelet-based video compression was investigated in [3]; initial results show that dual-tree complex wavelet transforms yield an improved peak signal-to-noise ratio compared to the discrete wavelet and discrete cosine transforms. A motion estimation technique in the wavelet domain was used to reduce temporal redundancy in [4]. The technique uses the low-band-shift method in the wavelet domain to overcome the shift-variance property.

L. A. Abdul-Rahaim is with the Electrical Engineering Department, University of Babylon, Babylon, Iraq. Rau'm Saadon Mohammad is with the Department of Computer Science, University of Babylon, Babylon, Iraq.


An adaptive codec scheme based on the H.263 standard, which depends on transform coding (DCT) and a quantization process for compression, has also been proposed; in addition, motion estimation search algorithms were used to estimate the predictive frames [5]. In this work a new fast method for color digital video compression is presented. The method converts each frame of the video sequence from the RGB color space to YCrCb and then applies a two-dimensional L-level discrete wavelet transform; a proposed limiter is used to limit the coefficient values for the elimination of spatial and band redundancy, and adaptive rood pattern search is used for motion estimation to eliminate the temporal redundancy. The work is concerned with the processing time required for execution, the peak signal-to-noise ratio, and the compression ratio. The proposed method keeps the system free of the shift variance that results from motion estimation and compensation based on the discrete wavelet transform in spatial compression. The method proposed in [3], using the dual-tree complex wavelet transform, requires more processing time than the conventional DWT. The features of the proposed method are a lower processing time, a higher compression ratio and an increased peak signal-to-noise ratio.

displayed over time; therefore, digital video can be considered a 3-D matrix signal, and its size is very large. The original video signal contains a lot of redundancy. Video compression comprises two main operations for removing redundancy from a video signal: 1. spatial and band domain compression; 2. temporal domain compression.

3.1 Spatial and Band Domain Compression

Spatial compression is the process of removing redundancy within a single video frame; this is achieved by decorrelating the image pixels. Many techniques have been used to remove spatial redundancy. The discrete cosine transform (DCT) has been used to decorrelate image pixels, but it has some disadvantages, such as blocking artifacts. The discrete wavelet transform (DWT) has become a vital technique for decorrelating image pixels because it analyzes the image in spatial frequency bands and detects redundancy in two domains, time and frequency; in addition, the DWT reduces computation time and is easy to implement. The DWT carries out a decomposition of single video frames or motion-compensated residuals into a multi-resolution subband representation.
A single video frame is passed through filters with different cut-off frequencies at different scales. The 2-D DMWT decomposition is applied by using the 1-D DMWT in the horizontal and vertical directions. The output of this operation is four subbands: LL, LH, HL and HH. The LL band, called the "approximation", is a smaller, low-resolution version of the original image. The other subband images, LH, HL and HH, are high-pass samples and represent smaller residual versions of the original image; they are called the "details" [8]. The decomposition continues on the LL band until level L [9].
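As an illustration of the subband structure described above, the following sketch performs a one-level 2-D decomposition and its inverse using the simple Haar wavelet. This is a stand-in for the GHM multiwavelet filters of the actual system, and the function names are ours:

```python
import numpy as np

def haar_decompose_2d(frame):
    """One-level 2-D Haar decomposition into LL, LH, HL, HH subbands.
    `frame` must have even dimensions."""
    # 1-D analysis along rows: average (low-pass) and difference (high-pass)
    lo = (frame[:, 0::2] + frame[:, 1::2]) / np.sqrt(2)
    hi = (frame[:, 0::2] - frame[:, 1::2]) / np.sqrt(2)
    # 1-D analysis along the columns of each half
    LL = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    LH = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    HL = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    HH = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return LL, LH, HL, HH

def haar_reconstruct_2d(LL, LH, HL, HH):
    """Inverse of haar_decompose_2d (perfect reconstruction)."""
    n, m = LL.shape
    lo = np.empty((2 * n, m)); hi = np.empty((2 * n, m))
    lo[0::2], lo[1::2] = (LL + LH) / np.sqrt(2), (LL - LH) / np.sqrt(2)
    hi[0::2], hi[1::2] = (HL + HH) / np.sqrt(2), (HL - HH) / np.sqrt(2)
    frame = np.empty((2 * n, 2 * m))
    frame[:, 0::2], frame[:, 1::2] = (lo + hi) / np.sqrt(2), (lo - hi) / np.sqrt(2)
    return frame
```

Applying `haar_decompose_2d` recursively to the LL output yields the L-level decomposition described in the text.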

2. COLOR SPACE YCrCb


The actual information stored in a digital image is the brightness information in each band (i.e. red, green, and blue). When the image is displayed, the corresponding brightness information is reproduced on screen by picture elements that emit light energy corresponding to each particular color. The RGB color space is not an efficient representation for compression because there is significant correlation between the color components of typical RGB images. In some compression algorithms, a luminance-chrominance representation such as YCrCb, YIQ or YUV is considered superior to the RGB representation. In this work YCrCb is used. The YCrCb color space scales and shifts the chrominance values into the range 0 to 1, and there is no correlation among its components. The features of YCrCb are: the low inter-component correlation; the eye is less sensitive to color, so the Cr and Cb components can be subsampled; and the YCrCb color space can achieve a higher compression ratio than the RGB representation. The linear transform from RGB to YCrCb generates one luminance component Y and two chrominance components (Cr and Cb) [6]:

Y = 0.299R + 0.587G + 0.114B   (1)
Cr = ((B − Y)/2) + 0.5   (2)
Cb = ((R − Y)/1.6) + 0.5   (3)

The small-bandwidth chrominance signals Cr and Cb are usually subsampled before actual compression.
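Equations (1)-(3) and the chroma subsampling step can be sketched as follows. This is a NumPy sketch with our own function names, and it implements the formulas exactly as printed above; note that they differ from the usual ITU-R BT.601 definitions, where Cr is derived from R and Cb from B:

```python
import numpy as np

def rgb_to_ycrcb(rgb):
    """Convert an RGB image (floats in [0, 1], last axis = R, G, B)
    to YCrCb using Eqs. (1)-(3) as printed in the text."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    Y = 0.299 * R + 0.587 * G + 0.114 * B   # Eq. (1)
    Cr = (B - Y) / 2.0 + 0.5                # Eq. (2), as printed
    Cb = (R - Y) / 1.6 + 0.5                # Eq. (3), as printed
    return np.stack([Y, Cr, Cb], axis=-1)

def subsample_chroma(channel):
    """Chroma subsampling: keep every second sample in each direction."""
    return channel[::2, ::2]
```

For a gray pixel (R = G = B), the luminance weights sum to 1, so Y equals the gray level and both chrominance components sit at the neutral value 0.5.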

Fig. 1, 1-level DWT and 2-level DMWT decomposition

3. DIGITAL COLOR VIDEO COMPRESSION


Digital video coding has become essential since MPEG-1 first appeared, and it has had a great impact on video delivery and storage. Compared with analog video, digital video coding achieves higher compression rates without significant loss of subjective picture quality, which eliminates the need for the high bandwidth required by analog video delivery. A digital color video consists of 3-D frames.

The DMWT decomposition itself is a lossless operation: the decompressed image can be reconstructed by applying the inverse DMWT (IDMWT), which recombines the subband images so that the original image is reconstructed [8].

3.2 Temporal Domain Compression


Video frames are similar to their immediate neighboring frames; therefore, temporal redundancy exists between video frames. There are many procedures for eliminating temporal redundancy, and motion estimation is one of the most popular in temporal compression. Motion


estimation forms a model of the current frame from data available in one or more previously encoded reference frames, with acceptable computational complexity. The model takes advantage of the strong frame-to-frame correlation along the temporal dimension. Block matching algorithms (BMA) are simple and suitable. The simplest block matching algorithm is known as full search, but its computational cost in software implementations exceeds that of all the remaining components of the encoder. It divides the frame into blocks, and each block is matched against the reference frame. The matching between macroblocks depends on the output of a criterion (cost) function, according to which the best match between the current and the reference block is found. The estimated motion is represented by two-dimensional vectors known as motion vectors. If the current macroblock is found at the same place in the previous picture, the result is a zero motion vector together with a null difference [5, 6]. A matching criterion is used to obtain the motion vector between one macroblock and another [8]. There are many types of cost function; one of the most popular, with low computational complexity, is the mean squared error (MSE) given by Eq. (4) [10]. Another type is the mean absolute difference (MAD) given by Eq. (5) [10].

predicted motion vector, which is determined by applying the matching criterion to the block directly to the left of the current block; the rood search in this stage is performed only once. The point with the least cost function value is found and made the center of the search in the next stage. In this work the cost function is the mean absolute difference (MAD). In the second search stage, a rood pattern with unit size is applied repeatedly until the point with minimum error lies at the center of the search pattern. ARPS saves computation by directly placing the search in an area where there is a high probability of finding a good matching block, which gives it an advantage over the diamond search [13].
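The cost functions of Eqs. (4) and (5) and an exhaustive full-search matcher can be sketched as follows. This is a NumPy sketch with our own function names, not the paper's MATLAB code; ARPS replaces the exhaustive window scan below with the rood-shaped pattern seeded by the left neighbour's motion vector, but the cost evaluation is identical:

```python
import numpy as np

def mse(current, reference):
    """Mean squared error between two blocks, Eq. (4)."""
    return np.mean((current - reference) ** 2)

def mad(current, reference):
    """Mean absolute difference between two blocks, Eq. (5)."""
    return np.mean(np.abs(current - reference))

def full_search(cur_block, ref_frame, top, left, p=4, cost=mad):
    """Exhaustive search in a +/- p window around (top, left) in the
    reference frame; returns the motion vector (dy, dx) minimizing the
    chosen cost function, together with the minimum cost."""
    n, m = cur_block.shape
    best, best_mv = np.inf, (0, 0)
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            y, x = top + dy, left + dx
            # skip candidate blocks that fall outside the reference frame
            if y < 0 or x < 0 or y + n > ref_frame.shape[0] or x + m > ref_frame.shape[1]:
                continue
            c = cost(cur_block, ref_frame[y:y + n, x:x + m])
            if c < best:
                best, best_mv = c, (dy, dx)
    return best_mv, best
```

A block copied from the reference frame at a known offset is recovered with zero cost, which is the "zero motion vector together with a null difference" case mentioned above when the offset is (0, 0).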

4. PROPOSED SYSTEM FOR FAST VIDEO COMPRESSION AND DECOMPRESSION


The proposed compression system compresses each video frame using an L-level DMWTCS transform. The color video frame is first separated into its three RGB layers, which are transformed to the YCrCb color space; each layer is then processed individually. The three 2-D layers are decomposed using L levels of the 2-D DMWTCS. Each level of the DMWTCS produces four output matrices: one matrix holds the approximation coefficients and the others hold the detail coefficients. Most of the information about the image exists in the approximation coefficients, while the other coefficients contain almost all of the spatial and band-domain redundancy. The decomposition is continued on the approximation coefficients at each level until level L. In each decomposition level a number of zeros is produced; the compression depends on increasing the number of zeros in the decomposed matrices, which is achieved by the thresholding and limiting process. In this work a hard threshold, Eq. (7) of [14], is applied to the detail coefficients of each layer component while the approximation coefficients are kept unchanged; a large number of zeros is therefore produced without affecting the quality of the reconstructed image. The motion estimation between frames takes place after the spatial compression process is complete: it compares the previous and current frames to estimate the motion vectors using the adaptive rood pattern search (ARPS) and stores the delta information between the current frame and the reference frame. On the decompression side, motion compensation between the previous and current decompressed frames is performed to reconstruct all the frames of the video scene. The block diagram of the proposed system is shown in Fig. 2.
An important disadvantage of using motion estimation in the DMWT domain is that it is shift-variant, so distortion appears in some reconstructed compressed video frames. The dual-tree complex wavelet transform was suggested as a partial solution to this problem in [3], but it increases the processing time because its computational load is higher. In this work, both problems, shift variance and processing time, are addressed by the proposed system shown in Fig. 2.
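The hard-thresholding step applied to the detail subbands can be sketched as follows (our function names; per the abstract, the second-level details receive a smaller threshold than the first-level ones, and the approximation subband is left untouched):

```python
import numpy as np

def hard_threshold(coeffs, t):
    """Hard threshold: zero every coefficient whose magnitude is below t;
    surviving coefficients are kept unchanged."""
    out = np.asarray(coeffs, dtype=float).copy()
    out[np.abs(out) < t] = 0.0
    return out

def threshold_details(details_level1, details_level2, t1, t2):
    """Apply the larger threshold t1 to the first-level detail subbands
    and the smaller threshold t2 (t2 < t1) to the second-level ones."""
    assert t2 < t1
    return ([hard_threshold(d, t1) for d in details_level1],
            [hard_threshold(d, t2) for d in details_level2])
```

Zeroing small detail coefficients is what creates the long runs of zeros that the subsequent quantizer and EZW encoder exploit.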

MSE = (1/(N·M)) Σ_{i=0}^{N−1} Σ_{j=0}^{M−1} (C_ij − R_ij)²   (4)

MAD = (1/(N·M)) Σ_{i=0}^{N−1} Σ_{j=0}^{M−1} |C_ij − R_ij|   (5)

where N and M are the sides of the macroblock, and C_ij and R_ij are the pixels being compared in the current and reference macroblocks respectively. Other algorithms, notably the four step search [11] and the diamond search [12], reduce the number of search points, but they can get trapped at local minima of the matching error. To overcome this problem, the adaptive rood pattern search (ARPS) was presented [13]; ARPS has been used in this work. ARPS adopts a rood-shaped search pattern, and the size of the rood arms can be adaptively adjusted during the search procedure. This algorithm is a type of block matching algorithm. Its main principle relies on the similarity between the motion of the current macroblock and the motion of the macroblocks around it (belonging to the same moving object); as a result, the current macroblock's motion vector can be predicted [13]. The rood pattern is formed of four vertexes and is symmetrical; the step size (SS), the distance between a vertex and the center, is given by Eq. (6) [13]:

SS = H + V   (6)

where H is the horizontal coordinate and V is the vertical coordinate.

The search pattern consists of two sequential search stages. In the first stage, the size of the rood is decided according to the


[Figure: block diagram. Encoder side: original video sequence → multiwavelet critically-sampled decomposition for luma and for chroma → round to nearest integer. Decoder side: decode → inverse multiwavelet critically-sampled transform for luma and for chroma → reconstruction.]

Fig. 2, the proposed video compression and decompression system

4.1 Discrete Multiwavelet Transform Computation Algorithms


Multiwavelet filter banks require a vector-valued input signal. This is an issue that must be addressed when multiwavelets are used in the transform process: a scalar-valued input signal must somehow be converted into a suitable vector-valued signal. This conversion is called preprocessing [13-16]. There are a number of ways to produce such a signal from 2-D signal data. In data compression applications one seeks to remove redundancy, not to increase it as repeated-row preprocessing does. Hence, in this work, approximation-based preprocessing algorithms have been studied and verified as critically sampled representations of the signal; these minimize redundancy for data compression applications.

4.1.1 A General Procedure for Computing 1D-DMWT Using a Critically-Sampled Scheme of Preprocessing

A general procedure can be given for computing a single-level discrete multiwavelet transform using the GHM four multifilters and a critically-sampled (approximation-based) scheme of preprocessing, as described in [16]. With a critically-sampled preprocessing scheme, the DMWT matrix has the same dimensions as the input, which should be a square N×N matrix where N must be a power of 2; the transformation matrix dimensions, which must equal the 2-D signal matrix dimensions after preprocessing, are therefore also N×N. There are two orders of approximation for critically-sampled preprocessing [16]: 1st order and 2nd order. For the equations

v1,n(0) = [φ2(1)·f[2n+1] − φ2(1/2)·f[2n+2] − φ2(3/2)·f[2n]] / (φ2(1)·φ1(1/2))
v2,n(0) = f[2n+2] / φ2(1)   (7)

and using the GHM scaling function graph (Fig. 3), the values of φ1(1/2), φ2(1/2), φ2(1) and φ2(3/2) must be found for the first-order approximation. For any N×N 2-D signal matrix, using Eq. (7), 1st-order approximation-based preprocessing can be summarized as follows, where every two rows generate two new rows:

a- For any odd row,


new odd-row = [φ2(1)·[same odd-row] − φ2(1/2)·[next even-row] − φ2(3/2)·[previous even-row]] / (φ2(1)·φ1(1/2))   (8)

Fig. (3): GHM pair of (a) scaling functions, (b) multiwavelets

b- For any even row,

new even-row = [same even-row] / φ2(1)   (9)

It is obvious now why the matrix resulting from approximation-based preprocessing has the same dimensions as the matrix before preprocessing. It can be seen from Fig. (3) that the values of φ1(t) and φ2(t) are non-zero for t in [0, 2]. Since these functions are generated from 256 samples:

1. φ1(1/2) = the 64th value in the iterated vector of φ1,
2. φ2(1/2) = the 64th value in the iterated vector of φ2, and φ2(1) = the 128th value in the iterated vector of φ2,
3. φ2(3/2) = the 192nd value in the iterated vector of φ2.

Substituting the values of φ1(1/2), φ2(1/2), φ2(1) and φ2(3/2) into Eqs. (8) and (9) for the 1st-order approximation results in

new odd-row = (0.373615)·[same odd-row] + (0.11086198)·[next even-row] + (0.11086198)·[previous even-row]   (10)

new even-row = (√2 − 1)·[same even-row]   (11)

For the 2nd-order approximation, Eqs. (10) and (11) become [16]:

new odd-row = (10/(8√2))·[same odd-row] + (3/(8√2))·[next even-row] + (3/(8√2))·[previous even-row]   (12)

new even-row = [same even-row]   (13)

It should be noted that when computing the first odd row, the previous even-row in Eqs. (10) and (12) is taken as zero; in the same manner, when computing the last odd row, the next even-row is taken as zero.

The following procedure for computing the DMWT using approximation-based preprocessing is valid for both the 1st and the 2nd order of approximation, the only difference being the use of Eqs. (10) and (11) in the preprocessing step for the 1st order and of Eqs. (12) and (13) for the 2nd order:

1- Checking input dimensions: the input should be an N×N matrix, where N must be a power of two.

2- Constructing a transformation matrix: an N×N transformation matrix W is constructed from the GHM low- and high-pass filter matrices given in Eq. (15):

W = [ H0  H1  H2  H3  0   0   …  0
      0   0   H0  H1  H2  H3  …  0
      ⋮
      H2  H3  0   0   …   H0  H1
      G0  G1  G2  G3  0   0   …  0
      0   0   G0  G1  G2  G3  …  0
      ⋮
      G2  G3  0   0   …   G0  G1 ]   (14)
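The row-preprocessing step can be sketched as follows (a NumPy sketch under our reading of the reconstructed equations; the function name is ours, and 0-based indexing is used, so the "odd" rows of the text are indices 0, 2, …). One routine covers both orders: Eqs. (10)-(11) correspond to a = 0.373615, b = 0.11086198, e = √2 − 1, and Eqs. (12)-(13) to a = 10/(8√2), b = 3/(8√2), e = 1:

```python
import numpy as np

def preprocess_rows(X, a, b, e):
    """Approximation-based row preprocessing: each odd row (1st, 3rd, ...,
    i.e. 0-based indices 0, 2, ...) is replaced by
        a*[same odd row] + b*[next even row] + b*[previous even row],
    with missing boundary even-rows taken as zero; each even row is
    scaled by e.  The output has the same dimensions as the input."""
    X = np.asarray(X, dtype=float)
    N = X.shape[0]
    Y = X.copy()
    for i in range(0, N, 2):
        prev_even = X[i - 1] if i >= 1 else 0.0
        next_even = X[i + 1] if i + 1 < N else 0.0
        Y[i] = a * X[i] + b * next_even + b * prev_even
    Y[1::2] *= e            # even rows are only scaled
    return Y
```

For the 2nd order (e = 1), the even rows pass through unchanged, exactly as Eq. (13) states.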


H0 = [ 3/(5√2)    4/5       ]     H1 = [ 3/(5√2)    0    ]
     [ −1/20      −3/(10√2) ]          [ 9/20       1/√2 ]

H2 = [ 0          0         ]     H3 = [ 0          0 ]
     [ 9/20       −3/(10√2) ]          [ −1/20      0 ]

G0 = [ −1/20      −3/(10√2) ]     G1 = [ 9/20       −1/√2 ]
     [ 1/(10√2)   3/10      ]          [ −9/(10√2)  0     ]

G2 = [ 9/20       −3/(10√2) ]     G3 = [ −1/20      0 ]
     [ 9/(10√2)   −3/10     ]          [ −1/(10√2)  0 ]   (15)

Fig. (4): The original file (AVI video) after video segmentation into frames/images

After substituting the GHM filter matrix coefficient values given by Eq. (15), an N×N transformation matrix results, with the same dimensions as the input 2-D signal matrix after preprocessing [16].

3- Preprocessing rows: approximation-based row preprocessing is computed by applying Eqs. (10) and (11) to the odd and even rows of the input N×N matrix respectively for the 1st-order approximation. For the 2nd-order approximation, Eqs. (12) and (13) are applied instead. The matrix dimensions after row preprocessing remain N×N.

4- Transformation of the input matrix: i- Multiply the N×N constructed transformation matrix by the N×N row-preprocessed input matrix. ii- Permute the rows of the resulting N×N matrix by arranging the row pairs 1,2 and 5,6, …, N−3, N−2 one after another in the upper half of the matrix, and the row pairs 3,4 and 7,8, …, N−1, N below them in the lower half.
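Under the reconstruction of Eqs. (14)-(15) above, the transformation matrix can be built and checked numerically. This is our NumPy sketch, not the paper's program, and it omits the row-permutation step that groups the subbands; the orthogonality check at the end confirms that the single-level inverse transform is simply multiplication by the transpose of W:

```python
import numpy as np

s2 = np.sqrt(2)
# GHM four-multifilter matrices, Eq. (15)
H = [np.array([[3 / (5 * s2), 4 / 5], [-1 / 20, -3 / (10 * s2)]]),
     np.array([[3 / (5 * s2), 0], [9 / 20, 1 / s2]]),
     np.array([[0, 0], [9 / 20, -3 / (10 * s2)]]),
     np.array([[0, 0], [-1 / 20, 0]])]
G = [np.array([[-1 / 20, -3 / (10 * s2)], [1 / (10 * s2), 3 / 10]]),
     np.array([[9 / 20, -1 / s2], [-9 / (10 * s2), 0]]),
     np.array([[9 / 20, -3 / (10 * s2)], [9 / (10 * s2), -3 / 10]]),
     np.array([[-1 / 20, 0], [-1 / (10 * s2), 0]])]

def ghm_transform_matrix(N):
    """Build the N x N block-circulant DMWT matrix W of Eq. (14).
    The upper half holds the low-pass blocks H0..H3 and the lower half
    the high-pass blocks G0..G3; each block-row is shifted by two block
    columns relative to the previous one, with circular wrap-around."""
    assert N % 4 == 0
    nb = N // 2                      # number of 2x2 block columns
    W = np.zeros((N, N))
    for half, F in ((0, H), (N // 2, G)):
        for r in range(N // 4):      # block-rows in this half
            for k in range(4):       # place F[k] with wrap-around
                c = (2 * r + k) % nb
                W[half + 2 * r:half + 2 * r + 2, 2 * c:2 * c + 2] = F[k]
    return W
```

Because W W^T = I, applying `W.T` to the transformed signal recovers the preprocessed input exactly, which is the matrix form of the IDMWT step.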

Fig. (5): After video segmentation into frames/images

5. COMPUTER TEST
A computer program for computing a single-level DMWT/IDMWT using a critically-sampled scheme of preprocessing was written in Visual Studio 2010 for a general signal (the image sequence of a video). As an example test, a video is segmented into frames/images; the program then computes the discrete multiwavelet transform of the proposed method using the critically-sampled scheme of preprocessing, and afterwards computes the inverse discrete multiwavelet transform using the same scheme of postprocessing. The results are shown below:

Fig. (6): After video decompression


The processed video frames are collected and saved in "new", which represents the compressed file, as shown in Fig. (5).

6. CONCLUSIONS
In this paper, a proposed video compression system was presented. The proposed system cures the shift-variance problem produced by motion estimation and compensation in the DMWT domain. Since the eye is less sensitive to the color components of the YCrCb color space, higher thresholds are applied to the coefficients of these components. The results presented show that the proposed system yields better visual results and a high compression ratio. The processing time is also lowered, because the motion estimation compares


two redundancy-reduced frames (the reference and the current frame). The results also show an improved PSNR, owing to the efficient performance of the proposed quantization scheme.

REFERENCES
[1] C. Snow, L. Lampe, and R. Schober, "Design and Implementation of a Video Compression Technique for High Definition Videos Implemented on a FPGA," IEEE 21st International Conference on Systems Engineering, 2011.
[2] Hadeel Nasrat Al-Taai, "Optical Flow Estimation Using DSP Techniques," University of Technology, 2005.
[3] R. F. Rezai and W. Kinsner, "Modified Gabor Wavelets for Image Decomposition and Perfect Reconstruction," 2011.
[4] W. A. Mahmoud, Z. J. M. Saleh, and N. K. Wafi, "A Simple and Easy to Verify Algorithm for Computing GHM Multiwavelets Transform and Inverse Transform Using an Over-Sampled Scheme of Preprocessing and Postprocessing Respectively," Journal of Engineering, College of Engineering, University of Baghdad, EEN/68, 2004.
[5] G. C. Donovan, J. S. Geronimo, and D. P. Hardin, "Orthogonal Polynomials and the Construction of Piecewise Polynomial Smooth Wavelets," SIAM Journal on Mathematical Analysis, vol. 30, no. 5, pp. 1029-1056, 1998.
[6] V. Strela and A. T. Walden, "Orthogonal and Biorthogonal Multiwavelets for Signal Denoising and Image Compression," Proc. SPIE, 3391:96-107, 1998.
[7] Iain E. G. Richardson, Video Codec Design, John Wiley & Sons, Ltd, 2002.
[8] Lai-Man Po and Wing-Chung Ma, "A Novel Four-Step Search Algorithm for Fast Block Motion Estimation," IEEE Trans. Circuits and Systems for Video Technology, vol. 6, no. 3, pp. 313-317, June 1996.
[9] N. Venkateswaran and Y. V. Ramana Rao, "K-Means Clustering Based Image Compression in Wavelet Domain," Information Technology Journal, 6(1):148-153, 2007.
[10] Kamrul Hasan Talukder and Koichi Harada, "Haar Wavelet Based Approach for Image Compression and Quality Assessment of Compressed Image," IAENG International Journal of Applied Mathematics, 36:1, IJAM_36_1_9, February 2007.
[11] Ajnadeen Khalil, "Block Matching Binary Motion Estimation Algorithm Based on Discrete Wavelet Transform of Video Sequences," June 2008.
[12] Shilpa P. Metkar and Sanjay N. Talbar, "Fast Motion Estimation Using Modified Orthogonal Search Algorithm for Video Compression," Signal, Image and Video Processing, Springer London, DOI 10.1007/s11760-009-0104-9, January 2009.
[13] O. K. Al-Shayk, E. Milo, T. Nomura, R. Neff, and A. Zahhor, "Video Compression Using Matching Pursuits," IEEE Transactions on Circuits and Systems for Video Technology, February 1999.
[14] N. Merhav and V. Bhaskaran, "A Fast Algorithm for DCT-Domain Inverse Motion Compensation," Computer Systems Laboratory and Hewlett-Packard Laboratories, Israel Science Center, Technion City, Haifa, 2000.
[15] Unan Yusmaniar Oktiawati and Vooi Voon Yap, "Video Compression Using Dual Tree Complex Wavelet Transform," IEEE International Conference on Intelligent and Advanced Systems (ICIAS 2007), 25-28 Nov. 2007, pp. 775-778.
[16] Ala'a Basil Hussain, "Motion Estimation for Video Compression and Decompression Techniques," Master Thesis, Nahrain University, 2009.
[17] Joseph Yeh, Martin Vetterli, and Masoud Khansari, "Motion Compensation of Motion Vectors," 1995, Infoscience, http://infoscience.epfl.ch/record/34152/filles.
[18] N. Venkateswaran and Y. V. Ramana Rao, "K-Means Clustering Based Image Compression in Wavelet Domain," Information Technology Journal, 6(1):148-153, 2007.
[19] S. Radhakrishnan, G. Subbarayan, and Karthick Vikram, "Wavelet Based Video Encoder Using KCDS," International Arab Journal of Information Technology, vol. 6, no. 3, pp. 245-249, July 2009.
[20] Shilpa P. Metkar and Sanjay N. Talbar, "Fast Motion Estimation Using Modified Orthogonal Search Algorithm for Video Compression," Signal, Image and Video Processing, Springer London, DOI 10.1007/s11760-009-0104-9, January 2009.
[21] T. Borer and T. Davies, "Dirac Video Compression Using Open Standards," BBC R&D White Paper, WHP 117, Sept. 2005.

Laith Ali Abdul-Rahaim (Member, IEEE) was born in Babylon, Iraq, in 1972. He received the B.Sc. degree from the Electronics and Communications Department, University of Baghdad, Iraq, in 1995, and the M.Sc. and Ph.D. degrees in Electronics and Communication Engineering from the University of Technology, Iraq, in 2001 and 2007 respectively. Since 2003 he has been with the University of Babylon, Iraq, where he is now head of the Electrical Engineering Department. His research interests include MC-CDMA, OFDM, MIMO-OFDM, CDMA, space-time coding, modulation techniques and image processing.

Rau'm Saadon Mohammad was born in Babylon, Iraq, in 1985. She received the B.Sc. degree from the Computer Science Department, University of Babylon, Iraq, in 2008, and is working toward the M.Sc. degree in the Computer Science Department, College of Science, University of Babylon.
