Image Compression

Compression Assignment
Kanwar Ahmad Mustafa
Department Of Electronics and Communication
The University of Lahore
1-Km Thokar Raiwand Road Lahore, Pakistan
punjabians50@hotmail.com

Abstract—Signal compression is concerned with reducing the amount of data to be stored or transmitted, so that more efficient transmission of data can be achieved. Redundant data is removed during compression and restored during decompression. Several techniques accomplish this; they fall into the lossless and lossy categories and can be used to compress text, video, audio, and other data.

Keywords—Lossy Compression, Lossless Compression, Huffman Algorithm, DCT, JPEG.

I. INTRODUCTION
Compression is a process by which the size of data is reduced. There are two main types of data compression: lossless and lossy. These are further divided into different methods, among them Huffman coding, run-length coding, and Lempel-Ziv coding. JPEG file compression is applied with the help of the discrete cosine transform (DCT). Sometimes the given data contains portions that carry no relevant information, or that restate or repeat known information; such data is said to contain redundancy. The paper is organized as follows: Section II briefly describes compression and its principles, Section III describes lossless compression, Section IV describes lossy compression, and Section V concludes.

II. DATA COMPRESSION
Data compression is the representation of an information source (e.g., a data file, a speech signal, an image, or a video signal) as accurately as possible using the fewest bits. In other words, data compression is about storing and sending a smaller number of bits. Although many methods are used for this purpose, in general they can be divided into two broad categories: lossless and lossy methods. Compression is possible because information usually contains redundancies, that is, information that is repeated. Examples include recurring letters, numbers, or pixels; file compression programs remove this redundancy [1], [6].

Fig 1. Types of Data Compression
There are three main types of data redundancy exploited in image compression: coding redundancy, inter-pixel redundancy, and psycho-visual redundancy.

A. Coding Redundancy:

The concepts behind coding redundancy come from information theory and are not limited to images; they apply to any digital information, so we speak of "symbols" instead of "pixel values" and of "sources" instead of "images". Instead of a natural binary code, where every symbol is encoded with a fixed-length code word, a coder can exploit the non-uniform probabilities of the symbols and use a variable-length code, assigning the more frequent symbols short bit strings and the less frequent symbols longer bit strings. Compression is best when this redundancy is high. Two common methods are Huffman coding and arithmetic coding; a minimal Huffman sketch follows.
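As an illustration, here is a minimal sketch of Huffman coding in Python. The example string and its symbol frequencies are made up for illustration; the paper itself presents no code.

    import heapq
    from collections import Counter

    def huffman_codes(data):
        """Build a variable-length (Huffman) code table for the symbols in data."""
        freq = Counter(data)
        # Heap entries are (frequency, tie-breaker, leaf-or-subtree); the integer
        # tie-breaker keeps comparisons from ever reaching the third element.
        heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        next_id = len(heap)
        while len(heap) > 1:
            f1, _, left = heapq.heappop(heap)    # take the two least frequent nodes
            f2, _, right = heapq.heappop(heap)
            heapq.heappush(heap, (f1 + f2, next_id, (left, right)))  # merge them
            next_id += 1
        codes = {}
        def walk(node, prefix):
            if isinstance(node, tuple):          # internal node: branch on 0/1
                walk(node[0], prefix + "0")
                walk(node[1], prefix + "1")
            else:                                # leaf: record the code word
                codes[node] = prefix or "0"
        walk(heap[0][2], "")
        return codes

    print(huffman_codes("aaaaabbbc"))   # {'c': '00', 'b': '01', 'a': '1'}

Note that the frequent symbol 'a' receives a one-bit code while the rarer symbols receive two bits, which is exactly the variable-length assignment described above.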

B. Inter-pixel Redundancy:

This type of redundancy arises from the inter-pixel correlations within an image. Much of the visual contribution of a single pixel is redundant and can be predicted from the values of its neighbors, as the sketch below illustrates.
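To make this concrete, here is a minimal sketch assuming a simple left-neighbor predictor, one common way to expose inter-pixel redundancy (the paper does not prescribe a specific predictor, and the sample row is a made-up example). Replacing each pixel with its difference from the previous pixel concentrates the values near zero, which a variable-length coder can then compress well.

    def delta_encode(row):
        """Replace each pixel with its difference from the left neighbor."""
        return [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

    def delta_decode(residuals):
        """Invert the prediction: a running sum restores the original row."""
        row = [residuals[0]]
        for r in residuals[1:]:
            row.append(row[-1] + r)
        return row

    row = [100, 101, 101, 102, 104, 104, 103]   # neighboring pixels are similar
    residuals = delta_encode(row)
    print(residuals)                   # [100, 1, 0, 1, 2, 0, -1] -- mostly small values
    assert delta_decode(residuals) == row       # the transform itself loses nothing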

C. Psycho-visual Redundancy:

This concept relates to the visual perception of the eye: our eyes do not respond equally to all visual information. Although the eye is one of the best sensors in the world, it is not very sensitive to small variations in information. For example, if the illumination in a room changes slightly, the eye will not perceive it. In fact, when you look at a picture containing small gray-level variations, the eye does not respond to them, and the brain discards them. This means there is no need to store variations in an image that the eye cannot perceive, as the quantization sketch below suggests.
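A minimal sketch of this idea, assuming a uniform quantizer with an arbitrary example step size of 8 gray levels (the paper does not specify a quantizer or step size):

    def quantize(pixels, step=8):
        """Round each gray level to the nearest multiple of `step`,
        discarding variations too small for the eye to notice."""
        return [round(p / step) * step for p in pixels]

    row = [120, 121, 123, 122, 120, 119]    # small gray-level variations
    print(quantize(row))                    # [120, 120, 120, 120, 120, 120]

This is the intuition behind the quantization stage of JPEG, where the DCT coefficients the eye is less sensitive to are quantized more coarsely.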

III. LOSSLESS COMPRESSION

In lossless data compression, the integrity of the data is preserved. The original data and the data after compression and decompression are exactly the same because, in these methods, the compression and...
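To illustrate the exact-recovery property, here is a minimal run-length encoding sketch (run-length coding is one of the lossless methods named in the introduction; the data is a made-up example):

    def rle_encode(data):
        """Collapse runs of repeated symbols into (symbol, count) pairs."""
        encoded = []
        for sym in data:
            if encoded and encoded[-1][0] == sym:
                encoded[-1][1] += 1
            else:
                encoded.append([sym, 1])
        return encoded

    def rle_decode(encoded):
        """Expand (symbol, count) pairs back into the original sequence."""
        return "".join(sym * count for sym, count in encoded)

    original = "aaaabbbcca"
    packed = rle_encode(original)
    print(packed)                           # [['a', 4], ['b', 3], ['c', 2], ['a', 1]]
    assert rle_decode(packed) == original   # decompression restores the data exactly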


References:
[1]. Null, L., and Lobur, J. The Essentials of Computer Organization and Architecture.
[2]. Abramson, N. 1963. Information Theory and Coding. McGraw-Hill, New York.
[3]. Ash, R. B. 1965. Information Theory. Interscience Publishers, New York.
[4]. Jain, A. K. 1989. Fundamentals of Digital Image Processing. Prentice Hall, New Jersey.
[5]. Cormack, G. V., and Horspool, R. N. 1984. Algorithms for Adaptive Huffman Codes. Inform. Process. Lett. 18, 3 (Mar.), 159-165.
[6]. Cortesi, D. 1982. An Effective Text-Compression Algorithm. BYTE 7, 1 (Jan.), 397-403.
[7]. Gonzalez, R. C., and Wintz, P. 1977. Digital Image Processing. Addison-Wesley, Reading, Mass.
[8]. McIntyre, D. R., and Pechura, M. A. 1985. Data Compression Using Static Huffman Code-Decode Tables. Commun. ACM 28, 6 (June), 612-616.
[9]. Rao, K. R., and Yip, P. 1990. Discrete Cosine Transform: Algorithms, Advantages, Applications. Academic Press.