
FEATURE

New Standard

AVS2: A New Video Coding Standard

• requires even less of the original video information
• re-creates parts of the original video by artificial intelligence
• saves on the amount of data required to transmit
• makes use of software developed for gaming

HD video

How to artificially re-create an HD video

Jacek Pawlowski

AVS2 is a third generation coding standard under development by the Audio Video Coding Standard Workgroup of China. It will be the successor of AVS. The AVS standard is comparable in performance with H.264/MPEG-4 AVC, which is commonly used all over the world for coding HD video. Tests showed that both standards achieve almost identical performance for HD signals, where performance is understood as the PSNR (Peak Signal-to-Noise Ratio). Only for smaller resolutions (SD) did MPEG-4 prove to be a little better. An important feature of the AVS standard is the lower complexity of its encoder and decoder, which makes it more practical.
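Since the performance comparison above is expressed in PSNR, a minimal sketch of how that metric is computed for an 8-bit frame may help. The function name and the test frames below are our own illustration, not part of any codec or standard:

```python
import numpy as np

def psnr(original, decoded, max_value=255.0):
    """Peak Signal-to-Noise Ratio between an original frame and its
    decoded copy, both given as arrays of pixel values (0..max_value)."""
    mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical frames
    return 10.0 * np.log10(max_value ** 2 / mse)

# Example call with two made-up 8-bit "frames"
orig = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
dec = orig.copy()
dec[::2, ::2] ^= 1                   # simulate small coding errors
print(round(psnr(orig, dec), 2), "dB")
```

The higher the PSNR, the closer the decoded picture is to the original at a given bit rate.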
Another reason for China developing the AVS standard is saving money by not paying royalties for MPEG-4. Most of the patents in the AVS standard belong to Chinese companies and organizations, which charge much less for them than Western companies do for MPEG-4. In this way, a Chinese consumer can save perhaps 5-10% when buying an AVS receiver without an MPEG-4 decoder.
But China did not stop at AVS and developed the standard further. AVS2 will have a better compression ratio and, thanks to that, will be more suitable for ultra high definition TV. In fact, AVS2 can be seen as the Chinese answer to the new HEVC/H.265 standard recently published by ISO/IEC and ITU.
In what ways will AVS2 ensure better performance? Well, a truly accurate explanation is extremely complex, requires a good background in mathematics, and only a narrow group of experts can really understand it fully. But what we can understand are at least the basic concepts underlying the methods used in AVS2.
Let's start with texture analysis and synthesis. Readers familiar with computer games will certainly know that game software synthesizes various textures on the different objects required in a game; there is no need to store every pixel of the surface. The program can create a complex texture knowing only a small pattern of a bigger area. The new compression algorithms in AVS2 can also do that. Instead of transferring information about the many pixels of a wavy sea or a distant flowerbed, the AVS2 encoder will analyze what texture is needed for this part of the picture and will send to the receiver only this information (only a small picture). The decoder in your receiver will then fill in the holes in the image by synthesizing the non-repetitive parts of the image, as in inpainting.
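The real texture synthesis and inpainting algorithms are far more sophisticated, but the basic idea of rebuilding a large area from a small transmitted pattern can be sketched like this (a toy example with invented names, not AVS2 code):

```python
import numpy as np

def fill_with_texture(frame, region, patch):
    """Very simplified decoder-side texture synthesis: fill the
    rectangular 'region' (y, x, height, width) of 'frame' by tiling
    the small exemplar 'patch' that the encoder transmitted."""
    y, x, h, w = region
    ph, pw = patch.shape[:2]
    tiled = np.tile(patch, ((h + ph - 1) // ph, (w + pw - 1) // pw, 1))
    frame[y:y + h, x:x + w] = tiled[:h, :w]
    return frame

# A 16x16 "sea" patch sent by the encoder fills a 256x512 area at the decoder.
patch = np.random.randint(0, 256, (16, 16, 3), dtype=np.uint8)
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
frame = fill_with_texture(frame, (600, 200, 256, 512), patch)
```

A real synthesizer would also vary the pattern so that the filled area does not look repetitive, which is exactly what the inpainting step is for.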
Another interesting method is super-resolution based video coding. To put it simply, a high resolution image is reconstructed from multiple sequential low resolution images. During this process, high frequency modeling as well as spatial-temporal interpolation is performed. Interpolation means reconstructing the correct value of an unknown image pixel located between known pixels, either in space (left/right/top/bottom) or in time (previous/next frame).
Learning based video coding is perhaps even more interesting. The encoder analyzes a video sequence in which one or more objects are moving. This yields information about the size, location and motion of the objects. Using computer graphics methods, the encoder creates a model of each object and sends the model and its animation information to the receiver's decoder. This is sufficient for low resolution video. To make it suitable for HD video, some additional information containing the residual pixel signal is sent. This is simply the difference between the model and the real image processed by the encoder.
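The model-plus-residual idea can be sketched in a few lines. The frames here are placeholders; in reality 'model_frame' would be rendered from the transmitted object models and animation data, not random numbers:

```python
import numpy as np

def encode_residual(real_frame, model_frame):
    """Encoder side: the residual is simply the difference between the
    camera frame and the frame rendered from the object models."""
    return real_frame.astype(np.int16) - model_frame.astype(np.int16)

def decode_frame(model_frame, residual):
    """Decoder side: render the transmitted models, then add the residual
    to recover (an approximation of) the original HD frame."""
    return np.clip(model_frame.astype(np.int16) + residual, 0, 255).astype(np.uint8)

# Illustrative frames only
real = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
model = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
residual = encode_residual(real, model)
restored = decode_frame(model, residual)
assert np.array_equal(restored, real)
```

The better the model matches the real scene, the smaller the residual and the fewer bits need to be transmitted.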
Apart from the new concepts described above, there are also further improvements of the methods and algorithms used so far. AVS2 will take advantage of: super-macroblock prediction, Adaptive Block-size Transform (ABT), directional transform, advanced motion vector prediction and Rate Distortion Optimization Quantization (RDOQ).
We can say that up to the second generation (AVS and MPEG-4), video coding standards relied mainly on exploiting various imperfections of the human eye to achieve a high compression ratio. AVS2 takes a step further. Some elements of the video will in fact be computer animations or computer generated textures resembling real things. This is another step away from transmitting the original video: what you see on your monitor is a brand new, artificially created video which looks like the original but is in reality re-created using only some parts of the original video.
Compressed video had already moved away from a 1:1 transmission, as in the old analog times when the copy was identical to the original. Now even less of the original is needed to re-create it.
Will AVS2 achieve a similar compression ratio improvement over AVS as HEVC has demonstrated over MPEG-4? Can it save up to 50% of the bandwidth? Some scientific papers report up to 37% bandwidth reduction. This may change because the standard is not finalized yet and very few test results have been published. So let's see where all this will eventually end.

