In this article, a lossless compression technique modified from Huffman Coding is reconstructed and analysed. Huffman Coding encodes data using a set of code words whose lengths are inversely proportional to the frequency of occurrence of the represented data. Hankamer published a modification to Huffman Coding whose algorithm is claimed to reduce the memory required by the usual Huffman procedure (Hankamer, 1979).
While Huffman Coding achieves compression, the decoder must know the code words' probabilities in order to decode. The decoder can assume these probabilities, but small deviations in the assumption can lead to errors in the decoded data. It is therefore practical to pass the code words' probabilities to the decoder, either by pre-defining them or by bundling them with the encoded data.
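To make this concrete, the following is a minimal sketch of plain Huffman coding (not the Hankamer variant) in Python. The function and variable names are illustrative only; the point is that `decode` cannot recover the data without the same code book that `encode` used, which is why the statistics must be shared between encoder and decoder.

```python
# Minimal Huffman coding sketch: the decoder needs the encoder's code book.
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code book {symbol: bit string} from symbol frequencies."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate case: only one distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # Merge the two rarest subtrees; prefix one side's codes with '0',
        # the other's with '1'. Rare symbols thus end up with longer codes.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

def encode(data, book):
    return "".join(book[s] for s in data)

def decode(bits, book):
    """Decoding is impossible without the code book the encoder used."""
    inverse = {code: s for s, code in book.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inverse:  # the code is prefix-free: first match is a symbol
            out.append(inverse[buf])
            buf = ""
    return "".join(out)
```

For example, building the code book for `"abracadabra"` assigns the frequent symbol `a` a shorter code than the rare symbols `c` and `d`, and round-tripping through `encode` and `decode` with the shared book recovers the original string.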
In practice, the code words' probabilities are bundled as a code book containing the data statistics. As the number of code words increases, the code book grows correspondingly. The Hankamer technique reduces the code book size by shortening the entries for the less frequent code words (Dougherty, 1994).
In this report, the Hankamer technique is tested experimentally and compared with the performance of standard Huffman Coding. A new technique based on the Hankamer procedure, which could further reduce the code book size, is also proposed. This proposal is tested, analysed and compared with both conventional Huffman Coding and the Hankamer method.