Activity Tracking 3

Page history last edited by sunny9312 11 years, 2 months ago

Chan Lok Sun (SID: 1155029396)

Chen Ka Yee (SID: 1155030664)

Our topic is:

Image compression via wavelets (linear transforms)

 

 

INTRODUCTION

With technology developing rapidly and living standards improving, almost every household in a modern city has one or more computers. Every day, people upload different types of data to the web. We often find ourselves dealing with vast amounts of information, especially digital information (Facebook, Instagram), and this can present difficulties. So how can we store digital information more efficiently? In practice, wavelet image compression is one way out.
COMPRESSION STEPS

The steps needed to compress an image are as follows:

  1. Digitize the source image into a signal s, which is a string of numbers.
  2. Decompose the signal into a sequence of wavelet coefficients w.
  3. Use thresholding to modify the wavelet coefficients from w to another sequence w'.
  4. Use quantization to convert w' to a sequence q.
  5. Apply entropy coding to compress q into a sequence e.
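Step 2 is where the wavelet enters. As an illustration only (the page does not specify which wavelet is used), here is a minimal sketch of one level of the Haar transform, the simplest wavelet: pairwise averages form the coarse part of the coefficient sequence w, and pairwise differences form the detail part. Real codecs use normalized, multi-level, two-dimensional transforms.

```python
# One level of an (unnormalized) Haar wavelet transform: a hypothetical sketch.
# Averages capture the coarse shape of the signal; details capture local change.

def haar_step(s):
    """Decompose a signal of even length into averages followed by details."""
    averages = [(s[i] + s[i + 1]) / 2 for i in range(0, len(s), 2)]
    details = [(s[i] - s[i + 1]) / 2 for i in range(0, len(s), 2)]
    return averages + details

def haar_inverse(w):
    """Reconstruct the original signal from averages + details."""
    half = len(w) // 2
    s = []
    for a, d in zip(w[:half], w[half:]):
        s.extend([a + d, a - d])
    return s

signal = [8, 6, 7, 3]
coeffs = haar_step(signal)
print(coeffs)                # [7.0, 5.0, 1.0, 2.0]
print(haar_inverse(coeffs))  # [8.0, 6.0, 7.0, 3.0] -- the transform is invertible
```

Note that when neighbouring samples are similar, the detail coefficients are close to zero, which is exactly what the thresholding step below exploits.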

DIGITIZATION

The digitized image can be characterized by:

- its intensity levels, i.e. scales of gray which range from 0 (black) to 255 (white), and

- its resolution, or how many pixels per square inch.

Each of the bits involved in creating an image takes up both storage space and transmission time, so a tradeoff must be made between image quality and size.
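As a toy illustration (the 4x4 intensity values below are made up), a digitized grayscale image is just a grid of intensity levels, and flattening it row by row gives the signal s of step 1:

```python
# A hypothetical 4x4 grayscale image: each entry is an intensity level,
# with 0 = black and 255 = white.
image = [
    [0,  64, 128, 255],
    [32, 96, 160, 224],
    [16, 80, 144, 208],
    [8,  72, 136, 200],
]

# Flatten row by row into the signal s from step 1 of the compression pipeline.
s = [pixel for row in image for pixel in row]
print(len(s))  # 16
```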


THRESHOLDING

In certain signals, many of the wavelet coefficients are close or equal to zero. Through a method called thresholding, these coefficients may be modified so that the sequence of wavelet coefficients contains long strings of zeros. Through a type of compression known as entropy coding, these long strings may be stored and sent electronically in much less space. 

 

Types of thresholding:

- Hard thresholding: a tolerance is selected, and any wavelet coefficient whose absolute value falls below the tolerance is set to zero. The goal is to introduce many zeros without losing a great amount of detail. The larger the chosen threshold, the more error is introduced into the process.

- Soft thresholding: a tolerance, h, is selected. If the absolute value of an entry is less than the tolerance, then that entry is set to zero. All other entries, d, are replaced with sign(d)(|d| - h). This is similar to translating the signal toward zero by the amount h.

- Quantile thresholding: a percentage p of entries to be eliminated is selected, and the smallest (in absolute value) p percent of entries are set to zero.
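The three rules above can be sketched as follows (a minimal illustration; the coefficient values are made up):

```python
# Sketches of hard, soft, and quantile thresholding of wavelet coefficients w.

def hard_threshold(w, tol):
    """Set to zero every coefficient whose absolute value is below tol."""
    return [0 if abs(d) < tol else d for d in w]

def soft_threshold(w, h):
    """Zero small coefficients; shrink the rest toward zero by h."""
    sign = lambda d: (d > 0) - (d < 0)
    return [0 if abs(d) < h else sign(d) * (abs(d) - h) for d in w]

def quantile_threshold(w, p):
    """Set to zero the smallest (in absolute value) p percent of entries."""
    k = int(len(w) * p / 100)                     # number of entries to eliminate
    order = sorted(range(len(w)), key=lambda i: abs(w[i]))
    kill = set(order[:k])                         # indices of the k smallest entries
    return [0 if i in kill else d for i, d in enumerate(w)]

w = [3, -0.5, 0.2, -4, 1]
print(hard_threshold(w, 1))       # [3, 0, 0, -4, 1]
print(soft_threshold(w, 1))       # [2, 0, 0, -3, 0]
print(quantile_threshold(w, 40))  # [3, 0, 0, -4, 1]
```

Note how soft thresholding shrinks the surviving coefficients (3 becomes 2, -4 becomes -3), which is the "translation toward zero" described above.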

 

ENTROPY CODING

Entropy coding compresses the data losslessly. The usual steps are:

- arrange the image components in a "zigzag" order;

- employ a run-length encoding (RLE) algorithm that groups similar frequencies together;

- insert length-coding zeros;

- use Huffman coding on what is left.

Entropy coding converts the integer sequence q into a shorter sequence e, with the numbers in e being 8-bit integers. Only two or three numbers are used for the coding, and the numbers that are expected to appear most often in q need the least amount of space in e.
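A minimal run-length encoding sketch of the kind described above, replacing each run of zeros with a (0, run length) pair (the output format is an illustrative choice, not the one used by any particular codec):

```python
# Run-length encode the zero runs that thresholding produces in q.

def rle_zeros(q):
    """Replace each run of zeros with a (0, run_length) pair; keep other values."""
    out, run = [], 0
    for x in q:
        if x == 0:
            run += 1               # extend the current run of zeros
        else:
            if run:
                out.append((0, run))
                run = 0
            out.append(x)          # nonzero values pass through unchanged
    if run:                        # flush a trailing run of zeros
        out.append((0, run))
    return out

q = [5, 0, 0, 0, 0, -2, 0, 0, 3]
print(rle_zeros(q))  # [5, (0, 4), -2, (0, 2), 3]
```

The longer the zero runs created by thresholding, the greater the saving from this step.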

Huffman coding

 

A source generates 4 different symbols with probabilities {0.4, 0.35, 0.2, 0.05}. A binary tree is generated from left to right by taking the two least probable symbols and combining them into a new equivalent symbol whose probability equals the sum of the two. The process is repeated until only one symbol remains. The tree can then be read backwards, from right to left, assigning different bits to different branches. The final Huffman code is:

Symbol   Code
a1       0
a2       10
a3       110
a4       111

The standard way to represent a signal made of 4 symbols is to use 2 bits/symbol, while the entropy of this source is 1.74 bits/symbol. If this Huffman code is used to represent the signal, the average length is lowered to 1.85 bits/symbol; it is still above the theoretical limit because the probabilities of the symbols differ from negative powers of two.
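The merging procedure can be sketched with a priority queue. Note that the exact bit patterns depend on how the two branches are labeled at each merge, so the sketch below may produce a different but equivalent code from the table above; the code lengths {1, 2, 3, 3}, and hence the 1.85 bits/symbol average, are the same.

```python
# Building a Huffman code for the four-symbol example above, using heapq
# to repeatedly merge the two least probable nodes.
import heapq
import itertools

def huffman_code(probs):
    """probs: dict of symbol -> probability. Returns dict of symbol -> bit string."""
    tie = itertools.count()  # tie-breaker so the heap never compares dicts
    heap = [(p, next(tie), {sym: ""}) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)  # least probable node
        p2, _, c2 = heapq.heappop(heap)  # second least probable node
        # Merge: prefix one branch's codes with "0", the other's with "1".
        merged = {sym: "0" + bits for sym, bits in c2.items()}
        merged.update({sym: "1" + bits for sym, bits in c1.items()})
        heapq.heappush(heap, (p1 + p2, next(tie), merged))
    return heap[0][2]

probs = {"a1": 0.4, "a2": 0.35, "a3": 0.2, "a4": 0.05}
code = huffman_code(probs)
avg = sum(probs[s] * len(code[s]) for s in probs)
print(code)
print(avg)  # 1.85 bits/symbol (up to floating-point rounding)
```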


QUANTIZATION

Quantization converts the sequence of floating-point numbers w' to a sequence of integers q. The simplest form rounds each number to the nearest integer; another option multiplies each number in w' by a constant k and then rounds to the nearest integer. Quantization is lossy because it introduces error into the process: the conversion of w' to q is not a one-to-one function.
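A minimal sketch of the scale-and-round variant (the constant k = 10 and the coefficient values are made up); dequantizing shows why the step is lossy:

```python
# Quantization: scale the thresholded coefficients w' by a constant k and round.
# The rounding discards information, which is what makes this step lossy.

def quantize(w, k):
    """Map floating-point coefficients to integers by scaling and rounding."""
    return [round(d * k) for d in w]

def dequantize(q, k):
    """Approximate inverse; the rounding error is not recoverable."""
    return [x / k for x in q]

w = [1.26, -0.41, 0.0, 2.87]
q = quantize(w, 10)
print(q)                  # [13, -4, 0, 29]
print(dequantize(q, 10))  # [1.3, -0.4, 0.0, 2.9] -- close to w, but not equal
```

A larger k preserves more precision at the cost of larger integers (and hence less compression), so k trades quality against size.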

 

 

 

 

 

Comments (1)

sidjaggi said

at 3:58 pm on Nov 18, 2012

again, details would be nice! what is a wavelet? how would you implement a wavelet transform using Maxima?
