A Novel Efficient Block-Based Segmentation Algorithm for Compound Image Compression


ABSTRACT
A compound image is a combination of text, graphics, and natural images. Compression and transmission of compound images are essential processes in real-time applications, where segmentation plays a crucial role. Compound image transmission for real-time applications requires a compression technique that not only attains a high compression ratio but also has low complexity, a high PSNR value, and the required level of security. Several approaches have been proposed in the past for the segmentation and compression of compound images. This paper proposes a novel method for block-based compound image compression. Pictorial blocks and text/graphics blocks are first separated from the compound image. The pictorial blocks are compressed using the discrete Haar wavelet transform, while the colors of text/graphics blocks are mapped to primary colors using a color quantization algorithm and the resulting index values are compressed using the lossless Huffman coding technique. The compressed image is then encrypted using the Advanced Encryption Standard (AES) algorithm. The proposed model provides several benefits: low complexity, a high compression ratio, and a high PSNR value. Moreover, it ensures that transmission is fast and highly secure. Experimental results show that the proposed block-based segmentation algorithm provides better results than most other image compression techniques.

Keywords: Compound image compression, Haar wavelet transform, Huffman coding, Advanced Encryption Standard (AES)
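
The preview does not show the exact criterion used to separate pictorial blocks from text/graphics blocks. As a rough, hypothetical sketch of this segmentation step in Python, a common block-based heuristic counts the distinct colors in each tile: text and graphics are rendered from a small palette, so a low color count suggests a text/graphics block. The tile size and threshold below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def split_blocks(image, size=16):
    """Yield (row, col, tile) for the non-overlapping size x size tiles of an image."""
    h, w = image.shape[:2]
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            yield r, c, image[r:r + size, c:c + size]

def classify_block(block, color_threshold=16):
    """Label a (H, W, 3) uint8 RGB block as 'text' or 'picture'.

    Assumed heuristic (not necessarily the paper's): text/graphics
    blocks contain few distinct colors, pictorial blocks contain many.
    """
    distinct = len(np.unique(block.reshape(-1, 3), axis=0))
    return "text" if distinct <= color_threshold else "picture"
```
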
1. INTRODUCTION
The Internet, fax, and mobile phones are among the most common communication media in use today. The information exchanged over these media is in digital format, and such digital documents can be transmitted in a fraction of a second. High compression ratios and low bit rates are required during transmission to avoid expense and …
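
The abstract states that pictorial blocks are compressed with the discrete Haar wavelet transform. For orientation only, the sketch below performs a single level of the orthonormal 2D Haar decomposition on an even-sized block; the decomposition depth, quantization, and coefficient coding used in the paper are not shown in this preview.

```python
import numpy as np

def haar2d_level(block):
    """One level of the orthonormal 2D Haar transform of an even-sized block.

    Returns a coefficient matrix with the approximation (LL) subband in the
    top-left quadrant and the detail (LH, HL, HH) subbands elsewhere.
    """
    x = block.astype(np.float64)
    # Transform rows: scaled pairwise sums (low-pass) and differences (high-pass).
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    x = np.hstack([lo, hi])
    # Transform columns the same way.
    lo = (x[0::2, :] + x[1::2, :]) / np.sqrt(2)
    hi = (x[0::2, :] - x[1::2, :]) / np.sqrt(2)
    return np.vstack([lo, hi])
```

For smooth pictorial content most of the energy concentrates in the LL quadrant, so the detail coefficients can be quantized coarsely; this is where the lossy compression gain for pictorial blocks comes from.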



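For text/graphics blocks, the abstract describes mapping colors to primary colors via color quantization and then Huffman-coding the resulting index values. The sketch below assumes a small illustrative palette and a textbook Huffman construction; the paper's actual palette and code layout are not given in this preview.

```python
import heapq
from collections import Counter

import numpy as np

# Hypothetical primary-color palette; the paper's palette is not shown here.
PALETTE = np.array([[0, 0, 0], [255, 255, 255],
                    [255, 0, 0], [0, 255, 0], [0, 0, 255]], dtype=np.int32)

def quantize(block):
    """Map each pixel of a (H, W, 3) block to the index of its nearest palette color."""
    pixels = block.reshape(-1, 3).astype(np.int32)
    # Squared Euclidean distance from every pixel to every palette entry.
    dist = ((pixels[:, None, :] - PALETTE[None, :, :]) ** 2).sum(axis=2)
    return dist.argmin(axis=1)

def huffman_codes(indices):
    """Build a Huffman code table {symbol: bit string} from symbol frequencies."""
    freq = Counter(int(i) for i in indices)
    if len(freq) == 1:                      # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie-breaker, partial code table).
    heap = [(f, n, {s: ""}) for n, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    n = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)     # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, n, merged))
        n += 1
    return heap[0][2]
```

A block is then emitted as the concatenation of the code words of its quantized indices, together with whatever side information (code table or symbol frequencies) the decoder needs to rebuild the code.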

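The final step in the abstract encrypts the compressed image with the Advanced Encryption Standard. The key size and mode of operation are not specified in this preview; the sketch below assumes AES in CBC mode with PKCS7 padding, implemented with the third-party Python `cryptography` package, and prepends a random IV so the receiver can decrypt.

```python
import os

from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_bitstream(data: bytes, key: bytes) -> bytes:
    """Encrypt a compressed bitstream with AES-CBC (key: 16/24/32 bytes).

    Mode and padding are illustrative assumptions, not the paper's choices.
    """
    iv = os.urandom(16)                      # fresh random IV per message
    padder = padding.PKCS7(128).padder()     # pad to the 16-byte AES block size
    padded = padder.update(data) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + enc.update(padded) + enc.finalize()
```

Decryption reverses the steps: split off the 16-byte IV, decrypt, and strip the PKCS7 padding.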