I'm using a simple Hamming(7,4) code right now, applied to every single nibble. It obviously needs improvement. There is no bit re-ordering (interleaving) applied. faydrus recommended using libcorrect (Reed-Solomon or convolutional coding), but I can't get any of the example tests to run.
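For reference, here's a minimal sketch of what per-nibble Hamming(7,4) looks like. This is my own illustration of the standard textbook construction (parity bits at codeword positions 1, 2, 4), not the actual QMesh code:

```python
# Hamming(7,4): 4 data bits -> 7-bit codeword, corrects any single bit error.
# Positions 1..7 (1-indexed); parity bits sit at positions 1, 2, 4.
# No interleaving here, same as described in the post.

def hamming74_encode(nibble):
    """Encode a 4-bit value into a 7-bit codeword (returned as an int)."""
    d = [(nibble >> i) & 1 for i in range(4)]  # d0..d3
    p1 = d[0] ^ d[1] ^ d[3]  # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]  # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]  # covers positions 4,5,6,7
    # Codeword layout, positions 1..7: p1 p2 d0 p3 d1 d2 d3
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]
    return sum(b << i for i, b in enumerate(bits))

def hamming74_decode(cw):
    """Correct up to one flipped bit and return the 4-bit data value."""
    bits = [(cw >> i) & 1 for i in range(7)]
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)  # 1-indexed position of the error
    if syndrome:
        bits[syndrome - 1] ^= 1
    return bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)
```

One byte costs two codewords (14 bits), i.e. rate 4/7, and any two errors in the same codeword defeat it, which is part of why it needs improvement.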
Maybe start out with a Golay code? The coding gain is modest (~1-2dB), but it's fairly simple and doesn't require the generation of any special data structures (like LDPC does).
I've attached the Golay source code, originally from Wireshark, that I ported over to QMesh. Note that I haven't tested it yet with QMesh.
Doesn't LDPC require generating an appropriate parity matrix for every configuration (coding rate, block size) you want to use? Do you have a good way to do that?
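Yes, each (rate, block size) combination needs its own parity-check matrix. Gallager's original random construction is one simple way to generate a regular H for any size; here's a sketch (illustrative only — production codes also avoid short cycles, e.g. via PEG or protograph designs):

```python
import random

def gallager_ldpc(n, dv, dc, seed=0):
    """Random (dv, dc)-regular parity-check matrix, Gallager-style.

    n: block length, dv: column weight, dc: row weight.
    Returns H as a list of m = n*dv/dc rows, each a list of 0/1.
    Illustrative construction only; no girth optimization.
    """
    assert (n * dv) % dc == 0 and n % dc == 0
    m = n * dv // dc
    rng = random.Random(seed)
    rows_per_band = m // dv  # = n // dc rows in each horizontal band
    H = []
    for band in range(dv):
        # Each band places every column exactly once -> column weight dv.
        perm = list(range(n))
        rng.shuffle(perm)
        for r in range(rows_per_band):
            row = [0] * n
            for c in perm[r * dc:(r + 1) * dc]:
                row[c] = 1  # dc ones per row -> row weight dc
            H.append(row)
    return H
```

So it's automatable, but you'd still want to generate and vet the matrices offline rather than at runtime.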
It's probably a good idea to hammer out the frame format before doing anything else, then.
Trellis coding occurs at the data link layer and doesn't rely on block sizes, so it might be a better option if we're dealing with variable block sizes.
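To illustrate the "no fixed block size" point: a convolutional encoder just streams, emitting output bits per input bit regardless of payload length. Here's a sketch of the common rate-1/2, K=7 encoder (generator polynomials 0o171/0o133, the "Voyager"/CCSDS pair that libcorrect also implements) — my own illustration, not code from the project:

```python
# Rate-1/2, constraint length K=7 convolutional encoder.
# Generators 0o171 and 0o133 (octal), the widely used CCSDS pair.

G1, G2 = 0o171, 0o133
K = 7

def conv_encode(bits):
    """Encode a list of 0/1 bits. Emits 2 output bits per input bit,
    plus K-1 zero "tail" bits to flush the encoder back to state 0.
    Works on any input length -- no block-size constraint."""
    state = 0
    out = []
    for b in bits + [0] * (K - 1):
        state = ((state << 1) | b) & ((1 << K) - 1)
        out.append(bin(state & G1).count("1") & 1)  # parity vs. G1
        out.append(bin(state & G2).count("1") & 1)  # parity vs. G2
    return out
```

The cost is the fixed tail overhead (K-1 flush bits) per frame and a Viterbi decoder on the receive side, but the frame length itself can vary freely.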
I’m thinking about the framing and the protocol. How important is it, from an ECC standpoint, that every frame be a fixed size? (I know protocols and networking, but not the math behind FEC.)
I just posted a proposal in the other thread for a 512-bit frame, only 288 bits of which are interesting:
SYNC: 32 bits (Constant string, not worth error correcting)
Link Control: 32 bits (32 bits of a larger message sent over multiple frames.)
Voice Payloads: 4x 64-bit Codec 2 3200 frames, 256 bits total
CRC: 32 bits
FEC: 160 bits
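The proposed layout above can be sketched as a packer, just to check the arithmetic. The field sizes come straight from the proposal; the SYNC value, byte order, and CRC-32 choice are placeholders I made up, not anything agreed upon:

```python
import struct
import zlib

# 512-bit (64-byte) frame: SYNC(32) | LinkCtrl(32) | Voice(256) | CRC(32) | FEC(160)
SYNC = b"\x2d\xd4\x2d\xd4"  # 32-bit constant pattern -- placeholder value

def build_frame(link_ctrl: bytes, voice: bytes, fec: bytes) -> bytes:
    assert len(link_ctrl) == 4   # 32-bit Link Control fragment
    assert len(voice) == 32      # 4 x 64-bit Codec 2 3200 frames
    assert len(fec) == 20        # 160-bit FEC field
    # CRC over the "interesting" 288 bits (Link Control + voice);
    # using zlib's CRC-32 here purely as an example polynomial.
    crc = struct.pack(">I", zlib.crc32(link_ctrl + voice))
    frame = SYNC + link_ctrl + voice + crc + fec
    assert len(frame) * 8 == 512
    return frame
```

Note the 288 protected bits against 160 FEC bits works out to roughly a rate-0.64 code (or rate 0.6 if the CRC field is folded into a 192-bit FEC field).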
Two questions:
1) Do we really need both CRC and FEC? Can those instead be combined into a single 192-bit FEC field?
2) Can we do useful FEC on 288 bits of interesting payload with either 160 or 192 bits of FEC?