M17 Protocol Proposals
#1
Now that we have removed the CAN from the FICH and agreed to include a 4-bit CRC, we have one spare bit in each FICH. This leads to the following layout for the FICH:

 0..39 40-bits of full LSF
40..42 A modulo 6 counter (LICH_CNT) for LSF reassembly
43     Reserved (set to 0)
44..47 4-bit CRC
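As a sketch only (the struct and function names are mine, not from any spec), the 48-bit layout above could be packed and unpacked like this:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative FICH/LICH layout helpers; field and function names
 * are hypothetical, not taken from the M17 specification. */
typedef struct {
    uint8_t lsf_fragment[5]; /* bits 0..39:  40 bits of the full LSF  */
    uint8_t lich_cnt;        /* bits 40..42: modulo-6 fragment index  */
    uint8_t reserved;        /* bit  43:     reserved, set to 0       */
    uint8_t crc;             /* bits 44..47: 4-bit CRC                */
} fich_t;

void fich_pack(const fich_t *f, uint8_t out[6])
{
    memcpy(out, f->lsf_fragment, 5U);
    out[5] = (uint8_t)(((f->lich_cnt & 0x07U) << 5) |
                       ((f->reserved & 0x01U) << 4) |
                        (f->crc      & 0x0FU));
}

void fich_unpack(const uint8_t in[6], fich_t *f)
{
    memcpy(f->lsf_fragment, in, 5U);
    f->lich_cnt = (uint8_t)((in[5] >> 5) & 0x07U);
    f->reserved = (uint8_t)((in[5] >> 4) & 0x01U);
    f->crc      = (uint8_t)( in[5]       & 0x0FU);
}
```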

It takes six fragments to build a full LSF. However, when there is no encryption, which is common, the NONCE is just zero bytes and a waste. It would be better to send a shorter LSF, so that the information can be decoded from fewer frames when the initial link setup frame is missed. With no encryption the NONCE data can be removed, so only the DST, SRC, TYPE and CRC are needed: 128 bits instead of 240. With 128 bits we can send all of the data in four fragments instead of six. The use of this shorter LSF can be indicated by the reserved bit, leading to:

 0..39 40-bits of full LSF
40..42 A modulo 6 (long LSF) or modulo 4 (short LSF) counter (LICH_CNT) for LSF reassembly
43     Type of LSF, 0 = Short, 1 = Long
44..47 4-bit CRC

The long version is the same as we currently have, with six fragments. The short version is the LSF without the NONCE. In the short version there are some unused bits in the final fragment; these are ignored and should be set to zero. One question: should the CRC in the short LSF cover only the DST, SRC, and TYPE, or should it also include the missing, zero-filled NONCE? The latter would make life easier when sending this data over the network, where the NONCE will always be included.

The 4-bit CRC would use a standard polynomial, and I can provide example code for the M17 specification once I have written it. For efficiency, this CRC would be computed over the full 48 bits of the LICH with the space for the CRC set to zeroes. This allows an efficient byte-wise implementation of the CRC algorithm.
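To sketch the "CRC over all 48 bits with the CRC field zeroed" idea: the snippet below assumes, purely for illustration, a reflected CRC-4 with polynomial x^4+x+1 (reversed form 0xC) and initial value 0xF; the actual polynomial for M17 had not been published at this point.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative bitwise reflected CRC-4; the real M17 polynomial may
 * differ. Shows only the zero-the-CRC-field technique. */
static uint8_t crc4_bitwise(const uint8_t *in, uint8_t nBytes)
{
    uint8_t crc = 0x0FU;
    for (uint8_t i = 0U; i < nBytes; i++) {
        uint8_t r = (uint8_t)(crc ^ in[i]);
        for (uint8_t b = 0U; b < 8U; b++)
            r = (r & 1U) ? (uint8_t)((r >> 1) ^ 0x0CU) : (uint8_t)(r >> 1);
        crc = r;
    }
    return crc;
}

/* Compute the CRC over all 6 LICH bytes with the CRC nibble zeroed,
 * then write the result into bits 44..47 (low nibble of byte 5). */
void lich_set_crc(uint8_t lich[6])
{
    lich[5] &= 0xF0U;                            /* zero the CRC field */
    lich[5] |= (uint8_t)(crc4_bitwise(lich, 6U) & 0x0FU);
}

/* Verify by recomputing over a copy with the CRC field zeroed. */
int lich_check_crc(const uint8_t lich[6])
{
    uint8_t tmp[6];
    memcpy(tmp, lich, 6U);
    tmp[5] &= 0xF0U;
    return (crc4_bitwise(tmp, 6U) & 0x0FU) == (lich[5] & 0x0FU);
}
```

Because the CRC field is zeroed before computing, the receiver can run the same byte-wise routine over all six bytes without any bit shuffling.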

Jonathan  G4KLX
#2
What real benefit does a CRC on each LICH segment provide? The entire LSF has a CRC. The Golay code provides reasonable error correction and detection for each segment.

The algorithm I use is:

1. Golay decode the LICH, discarding if invalid.
2. Discard segment if index is invalid.
3. Write the segment into the LSF buffer, offset by index.
4. Store index in a bit field (segment received).
5. If all segments received, check LSF CRC.
6. If CRC is good, done, otherwise continue.
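The steps above could be sketched as follows, assuming step 1's Golay decode has already produced a valid 40-bit segment and its index; the names are illustrative, not from any real implementation:

```c
#include <stdint.h>
#include <stdbool.h>

#define LSF_SEGMENTS  6U   /* modulo-6 LICH_CNT               */
#define SEGMENT_BYTES 5U   /* 40 bits of LSF per LICH segment */

typedef struct {
    uint8_t lsf[LSF_SEGMENTS * SEGMENT_BYTES];
    uint8_t received;      /* bit field: one bit per segment  */
} lsf_assembler_t;

/* Steps 2-5: store one decoded segment; returns true once all six
 * segments have been seen, at which point the caller checks the
 * LSF CRC (step 5/6). */
bool lsf_segment(lsf_assembler_t *a, uint8_t idx,
                 const uint8_t seg[SEGMENT_BYTES])
{
    if (idx >= LSF_SEGMENTS)
        return false;                         /* step 2: bad index */
    for (uint8_t i = 0U; i < SEGMENT_BYTES; i++)
        a->lsf[idx * SEGMENT_BYTES + i] = seg[i];     /* step 3    */
    a->received |= (uint8_t)(1U << idx);              /* step 4    */
    return a->received == 0x3FU;                      /* step 5    */
}
```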

If the BER is so high that you cannot get a good LSF using this method, the audio is going to be crap as well. A CRC just adds an additional step. The only improvement is that it avoids overwriting a potentially good segment with a known bad one. But the Golay code already provides reasonable protection against that.

I would like to see a quantitative analysis of the improvement adding such a CRC provides.

Rob WX9O

I prefer to use the reserved bit for "other data" where the "other data" is responsible for indicating its type.

We do not actually need to use the reserved bit if we change the spec to say that when encryption is not used, the sender MUST send a short superframe. We can then just inspect the flags field to determine the expected superframe type.

If we have a short superframe, the superframe CRC should be the same as the LSF's, and the NONCE must be set to 0's. Proposal: the only difference between the short superframe and the LSF is that the NONCE bytes are omitted and must be set to 0 by the receiver.

Rob WX9O
#3
This is the one that I'm working with:

Code:

const uint8_t CRC_TABLE[] = {
    0x0, 0x7, 0xe, 0x9, 0x5, 0x2, 0xb, 0xc, 0xa, 0xd, 0x4, 0x3, 0xf, 0x8, 0x1, 0x6,
    0xd, 0xa, 0x3, 0x4, 0x8, 0xf, 0x6, 0x1, 0x7, 0x0, 0x9, 0xe, 0x2, 0x5, 0xc, 0xb,
    0x3, 0x4, 0xd, 0xa, 0x6, 0x1, 0x8, 0xf, 0x9, 0xe, 0x7, 0x0, 0xc, 0xb, 0x2, 0x5,
    0xe, 0x9, 0x0, 0x7, 0xb, 0xc, 0x5, 0x2, 0x4, 0x3, 0xa, 0xd, 0x1, 0x6, 0xf, 0x8,
    0x6, 0x1, 0x8, 0xf, 0x3, 0x4, 0xd, 0xa, 0xc, 0xb, 0x2, 0x5, 0x9, 0xe, 0x7, 0x0,
    0xb, 0xc, 0x5, 0x2, 0xe, 0x9, 0x0, 0x7, 0x1, 0x6, 0xf, 0x8, 0x4, 0x3, 0xa, 0xd,
    0x5, 0x2, 0xb, 0xc, 0x0, 0x7, 0xe, 0x9, 0xf, 0x8, 0x1, 0x6, 0xa, 0xd, 0x4, 0x3,
    0x8, 0xf, 0x6, 0x1, 0xd, 0xa, 0x3, 0x4, 0x2, 0x5, 0xc, 0xb, 0x7, 0x0, 0x9, 0xe,
    0xc, 0xb, 0x2, 0x5, 0x9, 0xe, 0x7, 0x0, 0x6, 0x1, 0x8, 0xf, 0x3, 0x4, 0xd, 0xa,
    0x1, 0x6, 0xf, 0x8, 0x4, 0x3, 0xa, 0xd, 0xb, 0xc, 0x5, 0x2, 0xe, 0x9, 0x0, 0x7,
    0xf, 0x8, 0x1, 0x6, 0xa, 0xd, 0x4, 0x3, 0x5, 0x2, 0xb, 0xc, 0x0, 0x7, 0xe, 0x9,
    0x2, 0x5, 0xc, 0xb, 0x7, 0x0, 0x9, 0xe, 0x8, 0xf, 0x6, 0x1, 0xd, 0xa, 0x3, 0x4,
    0xa, 0xd, 0x4, 0x3, 0xf, 0x8, 0x1, 0x6, 0x0, 0x7, 0xe, 0x9, 0x5, 0x2, 0xb, 0xc,
    0x7, 0x0, 0x9, 0xe, 0x2, 0x5, 0xc, 0xb, 0xd, 0xa, 0x3, 0x4, 0x8, 0xf, 0x6, 0x1,
    0x9, 0xe, 0x7, 0x0, 0xc, 0xb, 0x2, 0x5, 0x3, 0x4, 0xd, 0xa, 0x6, 0x1, 0x8, 0xf,
    0x4, 0x3, 0xa, 0xd, 0x1, 0x6, 0xf, 0x8, 0xe, 0x9, 0x0, 0x7, 0xb, 0xc, 0x5, 0x2};

uint8_t crc4(const uint8_t* in, uint8_t nBytes)
{
    uint8_t crc = 0x0FU;

    for (uint8_t i = 0U; i < nBytes; i++)
        crc = CRC_TABLE[crc ^ in[i]];

    return crc;
}
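For what it's worth, the table above appears to be generated by the usual reflected byte-wise construction with the 4-bit polynomial x^4+x+1 (reversed form 0xC). I have not seen this stated anywhere, so treat the polynomial as my inference from the table entries:

```c
#include <stdint.h>

/* Regenerate the 256-entry table under the assumption that it is a
 * reflected CRC-4 with polynomial x^4 + x + 1 (reversed form 0xC).
 * Each entry is the index byte run through eight LSB-first steps. */
void crc4_build_table(uint8_t table[256])
{
    for (unsigned b = 0U; b < 256U; b++) {
        uint8_t r = (uint8_t)b;
        for (uint8_t i = 0U; i < 8U; i++)
            r = (r & 1U) ? (uint8_t)((r >> 1) ^ 0x0CU) : (uint8_t)(r >> 1);
        table[b] = r;
    }
}
```

With this construction, `crc = CRC_TABLE[crc ^ in[i]]` is the standard reflected byte-at-a-time update for a CRC narrower than 8 bits (the usual `crc >> 8` term vanishes because the register is only 4 bits wide).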

I think there are some holes in some of your reasoning, and some things that I violently agree with!

I agree about the sequence number, end indication and associated checksum. They do seem to be a waste of precious resources and, for most purposes, unnecessary. I have heard that the reason for their existence has to do with encryption, but the commercial DV modes have good encryption and don't see the need for a sequence number or a checksum on their payloads. I have problems with the addition of encryption anyway.

I agree about not using the FN for the end of data marker. Most DV modes use an explicit end marker, often a repeat of the header but with some special marker to indicate that it is the end of the data. Since we have a spare synchronisation vector, why not resend the link setup frame with that sync vector and label that as the end of the transmission? That'd work for both stream and packet data. This removes the need to worry about a corrupted FN.

Using the lack of a valid sync vector as an end of transmission marker is worrying. It is not uncommon to miss a sync vector or two when receiving a mobile or portable station, and so a good implementation must allow for such eventualities and not immediately mark it as the end of the transmission. In the MMDVM I allow 3 or 4 lost syncs before declaring a loss of signal. A proper end marker is needed. An implementation that declares the loss of one sync vector as being the end of a transmission will be far from optimal.

The short CRC on the LICH is very important as it provides some assurance that the preceding bits are correct. This means that the LICH_CNT and the LSF fragment are accurate enough to be placed in the correct position in the LSF reconstruction buffer. Golay is not going to tell you if there is any problem in the incoming data, so extra methods are needed to help you. That is why other modes like DMR, YSF, and NXDN do that very thing: FEC plus a checksum.

As much as I like the idea of having a smaller number of LSF fragments, I also feel that the complication is probably not worth the extra processing.

Jonathan  G4KLX
#4
> Golay is not going to tell you if there is any problem in the incoming data

Sure it will! The Golay code that we use has a parity bit, giving us 4 parity bits for this data (four codewords). All odd numbers of bit errors can be detected in each codeword, as well as all 4-bit errors (errors of 3 bits or fewer are corrected). A 24-bit codeword has to have more than 5 bit errors before a possibly bad bit is considered "good". The LICH is interleaved throughout the frame with the payload, so the odds of a burst error affecting one codeword are minimal. The BER has to be rather high for a single codeword to have 6 or more errors; that is a BER of 25% within one codeword.

If any of the four codewords fails a parity check, we know the LICH frame is invalid.

You would need 6 or more bit errors in one codeword, and no detected bit errors (fewer than 4 bit errors, or only an even number of 6 or more) in all other codewords, to have an undetected error.

If the BER is so high that the Golay correction/detection is not good enough, adding a CRC to toss out bad data is not going to help much.

CRCs are good at detecting burst errors, but all of the bits in the LICH have been interleaved with the rest of the frame, so any error of 3 bits or more that gets through is likely to be spread randomly across the 12 bits. That is not something a CRC is particularly good at detecting.

Beyond that, we have the CRC on the LSF, which also must be validated.

The worst-case scenario is that, at a high BER, it takes ever so slightly longer to decode the complete LSF.

Rob WX9O
#5
Quote:Type of LSF, 0 = Short, 1 = Long

This is not needed as it can be inferred from the "Encryption type" part of the "Type field". No encryption (0b00) -> short LSF.
#6
(03-31-2021, 04:34 PM)SP5WWP Wrote:
Quote:Type of LSF, 0 = Short, 1 = Long


This is not needed as it can be inferred from the "Encryption type" part of the "Type field". No encryption (0b00) -> short LSF.

But we need to know the LSF type in order to decode the LSF in the first place (unless you have already decoded a header correctly). It's a chicken-and-egg situation. I think that the saving from going to a short LSF probably isn't worth the effort. It would have been if the saving had been greater, but 4 fragments instead of 6 isn't that big a deal.
#7
Right, so are we dropping this idea of short and long LSF?
#8
For me yes, you can drop the idea. Not enough gain for the potential ambiguity.

