Kanwar Ahmad Mustafa
Department of Electronics and Communication
The University of Lahore
1-Km Thokar Raiwand Road Lahore, Pakistan
Abstract—Signal compression is concerned with reducing the amount of data that must be stored or transmitted, so that higher effective transmission speeds can be achieved. Redundant data are removed during compression and restored during decompression. Several techniques accomplish this; they fall into the broad categories of lossless and lossy compression and can be applied to text, video, audio, and other data.
Keywords—Lossy Compression, Lossless Compression, Huffman Algorithm, DCT, JPEG.

I. INTRODUCTION
Compression is a process by which the size of data is reduced. There are two main types of data compression: lossless and lossy. These are further divided into different methods, such as Huffman coding, run-length encoding, and Lempel-Ziv. JPEG file compression is applied with the help of the DCT matrix. Sometimes the given data contain portions that carry no relevant information, or that restate or repeat known information; such data are said to contain redundancy. The rest of this paper is organized as follows: Section II briefly describes compression and its principles, Section III describes lossless compression, Section IV describes lossy compression, and Section V concludes.
II. DATA COMPRESSION

Data compression is the representation of an information source (e.g., a data file, a speech signal, an image, or a video signal) as accurately as possible using the fewest number of bits. In other words, it is about storing and sending fewer bits. Although many methods are used for this purpose, they can in general be divided into two broad categories: lossless and lossy methods. Compression is possible because information usually contains redundancies, that is, information that is repeated: recurring letters, numbers, or pixels. File compression programs remove this redundancy.
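To see how removing repetition shrinks data, the following sketch (an illustration of my own, using Python's standard zlib library rather than any method from this paper) compresses a highly redundant byte string and a redundancy-free one of the same length:

```python
import os
import zlib

# Highly redundant input: one symbol repeated 1000 times.
redundant = b"A" * 1000
# Essentially redundancy-free input of the same length.
random_bytes = os.urandom(1000)

print(len(zlib.compress(redundant)))     # a few dozen bytes at most
print(len(zlib.compress(random_bytes)))  # roughly 1000 bytes: nothing to remove
```

The repetitive string collapses to a tiny fraction of its size, while the random bytes barely compress at all, because there is no redundancy to remove.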
Fig 1. Types of Data Compression
There are three main data redundancies exploited in image compression: coding redundancy, inter-pixel redundancy, and psycho-visual redundancy.

A. Coding Redundancy:
The ideas behind coding redundancy come from information theory and are not limited to images; they apply to any digital information, so we speak of "symbols" instead of "pixel values" and of "sources" instead of "images". Instead of a natural binary code, where each symbol is encoded with a fixed-length code word, a variable-length code exploits the non-uniform probabilities of the symbols: the more frequent symbols are assigned short bit strings and the less frequent symbols longer ones. Compression is best when this redundancy is high. Two common methods are Huffman coding and arithmetic coding.
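The variable-length idea behind Huffman coding can be sketched in a few lines (an illustrative implementation of my own, not code from this paper): repeatedly merge the two least frequent subtrees, then read the code words off the resulting tree.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Assign short bit strings to frequent symbols, longer ones to rare symbols."""
    freq = Counter(text)
    # Heap entries are (frequency, tiebreaker, tree); a tree is either a
    # symbol or a (left, right) pair. The tiebreaker keeps comparisons valid.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # the two least frequent subtrees...
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next_id, (left, right)))  # ...are merged
        next_id += 1

    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):          # internal node: recurse both ways
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                                # leaf: record the finished code word
            codes[tree] = prefix or "0"      # lone-symbol edge case
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("abracadabra")
print(codes)  # 'a' (5 of the 11 symbols) receives the shortest code word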
B. Inter-pixel Redundancy:
This type of redundancy is related to the inter-pixel correlations within an image: much of the visual contribution of a single pixel is redundant and can be predicted from the values of its neighbors.
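One common way to exploit this correlation is predictive (difference) coding. The sketch below (my own illustration, not from this paper) replaces each pixel with its difference from the left neighbor, producing small numbers that a variable-length code can store cheaply, and shows the step is perfectly reversible:

```python
# A smooth 1-D "scanline": neighboring pixel values differ only slightly.
scanline = [100, 101, 103, 104, 104, 106, 108, 109]

# Keep the first pixel, then store only each pixel's difference from its
# left neighbor. The differences are small numbers that a variable-length
# code can represent in far fewer bits than the raw 8-bit values.
diffs = [scanline[0]] + [b - a for a, b in zip(scanline, scanline[1:])]
print(diffs)  # [100, 1, 2, 1, 0, 2, 2, 1]

# The mapping is invertible, so this redundancy removal loses nothing.
restored = [diffs[0]]
for d in diffs[1:]:
    restored.append(restored[-1] + d)
assert restored == scanline
```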
C. Psycho-visual Redundancy:
This concept is related to the visual perception of the eye: our eyes do not respond to all visual information. Although the eye is one of the best sensors in the world, it is not sensitive to small variations in information. For example, if the illumination in a room changes slightly, the eye will not perceive it. In fact, when you look at a gray-scale image containing small variations in gray level, your eye does not respond to them, and the brain discards them as well. This means there is no need to display or store small variations in an image that the eye cannot perceive.
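Quantization is the usual way to discard imperceptible detail. As a rough sketch (my own illustration with made-up pixel values, not from this paper), dropping the lowest bits of each 8-bit gray level reduces 256 levels to 32 while keeping every pixel within a few gray levels of the original:

```python
# Quantize 8-bit gray levels from 256 down to 32 levels by keeping the top
# 5 bits. Variations the eye cannot perceive are discarded for good (lossy),
# so fewer bits per pixel need to be stored.
pixels = [12, 13, 14, 120, 121, 200, 203]
quantized = [(p >> 3) << 3 for p in pixels]
print(quantized)  # [8, 8, 8, 120, 120, 200, 200] — nearby levels collapse

# Every quantized value stays within 7 gray levels of the original.
assert all(abs(p - q) <= 7 for p, q in zip(pixels, quantized))
```

Unlike the difference coding above, this step is not invertible; the discarded detail cannot be recovered, which is what makes the method lossy.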
III. LOSSLESS COMPRESSION

In lossless data compression, the integrity of the data is preserved. The original data and the data after compression and decompression are exactly the same because, in these methods, the compression and decompression algorithms are exact inverses of each other: no part of the data is lost in the process.
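This round-trip property is easy to demonstrate with Python's standard zlib module (a generic Lempel-Ziv-based compressor, used here only as an illustration, not a method from this paper):

```python
import zlib

original = b"Lossless compression must restore the data exactly." * 20
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

assert restored == original             # integrity preserved, bit for bit
assert len(compressed) < len(original)  # the repeated text was redundant
```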