Bit Packing Preprocessor

Binary formats sometimes use every bit they're given; I came across this a lot when I was working with MIDI. (If you have an int, you might as well use all 32 of those guys.)

For example, 16-bit color is often 5 bits of red, 6 bits of green, and 5 bits of blue. That means it's all packed into rrrrrggggggbbbbb, where each letter represents a bit. I wrote some Python to write the C code for me: the script takes a string like "00rrr000" and outputs C code with the appropriate shifts and masks.
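
Here's that 5-6-5 layout packed and unpacked by hand (Python just for illustration, with made-up values), to show the kind of shift-and-mask code involved:

r, g, b = 31, 40, 7                  # arbitrary 5-, 6-, and 5-bit values
packed = (r << 11) | (g << 5) | b    # rrrrrggggggbbbbb

assert ((packed >> 11) & 0x1f) == r  # top 5 bits
assert ((packed >> 5) & 0x3f) == g   # middle 6 bits
assert (packed & 0x1f) == b          # bottom 5 bits

Doing that by hand for every field is the part I wanted to automate.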

import bittools
print bittools.pattern('00rrr000', True)
    r is a 3bit number
    Packing:
    assert(r<8);
    unsigned char packed |= r<<3;
    Unpacking:
    unsigned char r = (packed & 0x3f)>>3; //packed & 0b00111111

print bittools.pattern('00000bbb')
    b is a 3bit number
    Packing:
    unsigned char packed |= b;
    Unpacking:
    unsigned char b = packed & 0x7; //packed & 0b00000111
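
All pattern() really has to work out from the string is a shift and a mask. Roughly something like this (a sketch, not the actual bittools source):

def shift_and_mask(pattern, letter):
    # '00rrr000', 'r' -> (3, 0x3f): shift by the number of bits to the
    # right of the field, and mask off everything above it (the low bits
    # fall off in the shift, which is why the mask is 0x3f rather than 0x38).
    width = len(pattern)
    shift = width - 1 - pattern.rindex(letter)
    mask = (1 << (width - pattern.index(letter))) - 1
    return shift, mask

print shift_and_mask('00rrr000', 'r')
    (3, 63)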

print bittools.necessarybits(640)
    10 bits are required to store values up to 640
    2 ** 10 = 1024

print bittools.tobinary(46)
    00101110

print bittools.frombinary('1100_1100') 
    204
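
The last three are just conveniences; they could be as small as this (again a sketch, the real source may differ):

def necessarybits(n):
    # smallest bit count whose range covers values up to n (640 -> 10)
    bits = 1
    while 2 ** bits <= n:
        bits += 1
    return bits

def tobinary(n, width=8):
    # 46 -> '00101110'
    return format(n, '0%db' % width)

def frombinary(s):
    # '1100_1100' -> 204; underscores are just visual spacers
    return int(s.replace('_', ''), 2)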


I find "00rrr000" to be a lot clearer than '(packed & 0x3f)>>3' .

The source is on GitHub here (it's not pretty yet) under the GPLv3.