The smallest addressable unit of information in a computer is a byte. The main memory can be thought of as a linear string of bytes, each with a unique address. One byte is lower than another if its address is smaller. Simple.
Endianness is what arises when we deal with multi-byte data. Consider a 32-bit (4 byte) integer. How is it to be stored in memory? Well, it appears there are two common methods. One is to store the least significant byte of the integer in the lowest address. This is called "little endian" (since the little end comes first) and is used by x86 processors. The other is to store the most significant byte of the integer in the lowest address - this is "big endian" (since the big end comes first) and is used by Motorola's processors and by SPARC. Read this for further enlightenment.
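Just to make this concrete, here's a small C++ sketch (an illustration, not code from any particular library) that stores a 32-bit value and prints its bytes in memory order, lowest address first. The value 0x01020304 is arbitrary - anything with distinguishable bytes will do.

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main()
    {
        // Store a known 32-bit value and look at its bytes in memory.
        uint32_t value = 0x01020304;
        unsigned char bytes[4];
        memcpy(bytes, &value, sizeof(value));

        // On a little-endian machine (e.g. x86) the least significant byte
        // (0x04) sits at the lowest address; on a big-endian machine the
        // most significant byte (0x01) comes first.
        printf("bytes in memory, lowest address first: %02x %02x %02x %02x\n",
               bytes[0], bytes[1], bytes[2], bytes[3]);
        printf("this machine is %s endian\n",
               bytes[0] == 0x04 ? "little" : "big");
        return 0;
    }

On an x86 machine this prints 04 03 02 01 - little endian, as expected.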
Bit endianness is a concept based on the same principle as byte endianness. A byte consists of 8 bits, each with a different "weight". If the most significant bit (MSB) is considered to be in the lowest location, it's big endian; if the least significant bit (LSB) is, it's little endian.
Now, while byte endianness is important when moving data between computers with CPUs of different architectures, bit endianness is usually much less of an issue. That's because hardly anyone cares about the "location" of a bit. Why would the regular programmer care where in its registers the CPU stores each bit?
But it does matter in certain cases - and I ran into one of them with my BitStream class. Say you want to see a file as a stream of bits - then it's very important to decide which bit in each byte to treat as first. Consider a file consisting of the string "ab". Its bytes, when viewed in a hex editor, are 0x61 0x62, which are 01100001 and 01100010 respectively. Now, which bit is the first one in the file? Do we treat the bytes as little endian, so the stream is 1000011001000110, or do we treat them as big endian, so the stream is 0110000101100010? After spending an awful amount of time thinking about it and considering it from every angle, I decided that the answer is "it depends" (how typical :-). So I built it in as an option in my BitStream class - when opening a file, you tell it whether to treat bytes MSB first or LSB first.
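To show what I mean, here's a minimal sketch (not the actual BitStream code, just an illustration) of the two bit orders applied to the "ab" example:

    #include <cstddef>
    #include <cstdio>
    #include <string>

    // Turn bytes into a string of '0'/'1' characters, taking either the
    // most significant bit of each byte first, or the least significant.
    std::string to_bit_string(const unsigned char* data, std::size_t len, bool msb_first)
    {
        std::string bits;
        for (std::size_t i = 0; i < len; ++i) {
            for (int b = 0; b < 8; ++b) {
                int shift = msb_first ? 7 - b : b;
                bits += ((data[i] >> shift) & 1) ? '1' : '0';
            }
        }
        return bits;
    }

    int main()
    {
        const unsigned char ab[] = { 0x61, 0x62 };   // the file "ab"
        printf("MSB first: %s\n", to_bit_string(ab, 2, true).c_str());
        printf("LSB first: %s\n", to_bit_string(ab, 2, false).c_str());
        // prints:
        // MSB first: 0110000101100010
        // LSB first: 1000011001000110
        return 0;
    }

Same two bytes, two completely different bit streams - which is exactly why it has to be an option.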
In my case bit order is very important, because the file is created from a communication line, from data sent by a device that couldn't care less about bytes. It sends frames of data, 320 bits long each, with the LSB first. Whatever is the 1st bit in each of its frames, I want to be my 1st bit, period. The sequence these poor bytes go through is quite lengthy. The device breaks its frames into bytes, to be able to send them with a UART (serial communication to a COM port). The PC receives the bytes (which are, by the way, also sent LSB first - this is the UART protocol) and writes them to a file. From there I read them with my BitStream (configured to take the LSB of each byte first) and can faithfully reproduce the frames.
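Roughly, the reading side amounts to something like this - a simplified sketch rather than the real BitStream code, with a made-up file name, and assuming the frames are byte-aligned in the file (which they are, since 320 bits is exactly 40 bytes):

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    const std::size_t FRAME_BITS = 320;   // each frame the device sends is 320 bits long

    int main()
    {
        FILE* f = fopen("frames.bin", "rb");   // file name invented for the example
        if (!f)
            return 1;

        std::vector<int> frame;
        frame.reserve(FRAME_BITS);

        int byte;
        while ((byte = fgetc(f)) != EOF) {
            for (int b = 0; b < 8; ++b) {
                frame.push_back((byte >> b) & 1);   // LSB of each byte first
                if (frame.size() == FRAME_BITS) {
                    // frame[0] is now bit 0 of the device's frame,
                    // frame[319] is its last bit.
                    // ... process the frame here ...
                    frame.clear();
                }
            }
        }
        fclose(f);
        return 0;
    }

With the bit order fixed like this, bit N of the device's frame always ends up at index N, no matter how the bits were packed into bytes along the way.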