M17 Project Forum

Full Version: Packet Mode Questions
The section of the specification documentation on packet mode says puncturing uses a 7/8 matrix (1, 1, 1, 1, 1, 1, 1, 0), but the Physical Layer section discusses two different puncturing matrices that are much longer (P1 for the LSF and P2 for all other frames). Which is actually used for packet-mode frames? I assume a packet-mode LSF uses P1, as described.

The specs also say a packet-mode super-frame has 798 bytes of payload, but I can't figure out how regular frames fit into this evenly or if subsequent frames include LICH chunks like with stream-mode.

Finally, I gather that LICH refers to the data in the LSF, but what does LICH actually stand for?
Those two longer puncturing matrices are for the stream mode.




Quote:The specs also say a packet-mode super-frame has 798 bytes of payload, but I can't figure out how regular frames fit into this evenly or if subsequent frames include LICH chunks like with stream-mode.


We have to ask WX9O :-)



LICH was initially Link Information CHannel.
Puncturing for packet mode is different from stream mode.  Puncture matrix for packet is {1,1,1,1,1,1,1,0}.  25 bytes + 6 bits + 4 flush bits = 210 bits, coded to 420 bits; 7/8 puncturing leaves 368 bits.

Edit: I did not read the question very carefully.  The answer is, yes, the LSF in packet mode uses the same P1 puncture matrix.  The 7/8 puncture is only for the packet data frames.  The only difference in the LSF of packet mode is that the packet/stream bit indicates packet, and the basic/encapsulated flag is set appropriately.
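The bit budget above can be checked with a small sketch (my own illustration, not reference code from the spec), applying the 7/8 pattern cyclically to the rate-1/2 coded output:

```python
# Bit budget for an M17 packet frame, per the numbers quoted above.
PUNCTURE_P3 = [1, 1, 1, 1, 1, 1, 1, 0]  # 7/8 puncture pattern for packet frames

payload_bits = 25 * 8     # 25 data bytes
metadata_bits = 6         # frame number / byte count and EOF flag
flush_bits = 4            # flush the convolutional encoder
type1_bits = payload_bits + metadata_bits + flush_bits  # 210 bits

coded_bits = type1_bits * 2  # rate-1/2 convolutional code -> 420 bits

# Puncturing: coded bit i survives only when pattern[i % 8] == 1.
kept = sum(PUNCTURE_P3[i % len(PUNCTURE_P3)] for i in range(coded_bits))
print(type1_bits, coded_bits, kept)  # 210 420 368
```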
There are no LICH chunks in packet frames. Packet frames are well-defined in their own right. See "Table 13 Bit fields of packet frame". An important point is that a packet frame does not end on a byte boundary. The payload consists of 25 bytes, followed by 6 bits of metadata (frame number/byte count and EOF bit).

https://m17-protocol-specification.readt...ket-format

The physical layer should not discuss puncture matrices IMO, as they are not part of the physical layer. The only things that are part of the physical layer are the modulation type, symbol mapping, preamble, sync words, and raw frame size. Convolutional coding and puncturing belong in the data link layer. The P1 puncture matrix applies to all modes, as it is used for the link setup frame, which is required for all modes. The P2 puncture matrix is a stream mode data link layer item. It does not apply to packet mode. The puncture matrix for packet (P3) is a packet data link layer item.
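To illustrate what a puncture matrix actually does (a hedged sketch with made-up names, not spec reference code): the pattern repeats over the coded bit stream, and positions matching a 0 are simply dropped before transmission.

```python
def puncture(bits, pattern):
    """Keep coded bit i only when pattern[i % len(pattern)] is 1."""
    return [b for i, b in enumerate(bits) if pattern[i % len(pattern)]]

P3 = [1, 1, 1, 1, 1, 1, 1, 0]    # packet-mode 7/8 pattern from the spec
coded = [0] * 420                # stand-in for 420 rate-1/2 coded bits
print(len(puncture(coded, P3)))  # 368 bits go on the air
```

The receiver does the reverse (depuncturing), re-inserting erasures at the dropped positions before Viterbi decoding.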
Thanks for the responses!

I'm still confused by the packet mode superframe though. Table 12 says it has a payload of 798 bytes. What is the payload? Based on the stream mode superframe, I would guess the packet superframe payload is regular packet frames, but they don't seem to fit evenly.
(01-26-2021, 09:55 PM)akbat Wrote: Thanks for the responses!

I'm still confused by the packet mode superframe though. Table 12 says it has a payload of 798 bytes. What is the payload? Based on the stream mode superframe, I would guess the packet superframe payload is regular packet frames, but they don't seem to fit evenly.

The raw packet superframe consists of data and a 2-byte CRC.  That totals 800 bytes in the superframe.  800 bytes is exactly 32 25-byte frames.  Each superframe is split into 25-byte chunks (or 200 bits, as described below that table).

An encapsulated frame starts with a 1..n byte type identifier, followed by data, then 2 bytes of CRC.  That, too, must fit within the total limit of 800 bytes (32 frames).
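The arithmetic above can be sketched as follows (a minimal illustration with hypothetical names, not spec reference code): append the CRC to the payload and cut the result into 25-byte data-link frames.

```python
# Hypothetical helper: build a raw packet superframe and split it into
# 25-byte chunks for the data link layer.
def split_superframe(payload: bytes, crc: bytes) -> list:
    assert len(payload) + len(crc) <= 800, "superframe limit exceeded"
    superframe = payload + crc
    return [superframe[i:i + 25] for i in range(0, len(superframe), 25)]

# A maximal raw superframe: 798 payload bytes + 2 CRC bytes = 800 bytes,
# which splits evenly into 32 frames of 25 bytes each.
frames = split_superframe(bytes(798), b"\x00\x00")
print(len(frames), len(frames[0]))  # 32 25
```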
Wait a second, I've been assuming that frames are packed into superframes as they get TXed and a superframe gets unpacked into frames on the RX end.
Is the process actually that we put some data into a superframe first, then split it up into frames for TXing? And the RX side would re-combine the frames into a superframe to recover the data?
(02-01-2021, 06:17 AM)akbat Wrote: Wait a second, I've been assuming that frames are packed into superframes as they get TXed and a superframe gets unpacked into frames on the RX end.
Is the process actually that we put some data into a superframe first, then split it up into frames for TXing? And the RX side would re-combine the frames into a superframe to recover the data?

That was an incorrect assumption.  Start by understanding the physical layer and the data link layer.  All M17 frames start with an 8-symbol sync word followed by 184 symbols.  This is true for all M17 frame types.

The M17 frame is unpacked into 368 bits, which are FEC encoded.  The sync word used indicates the M17 frame type and FEC coding used.  The packet sync word indicates that the frame is 7/8 punctured, has 25 data bytes and a 6-bit frame/byte count field.
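The frame geometry above can be sanity-checked with a quick sketch (my own illustration, assuming 4FSK's two bits per symbol, which the thread does not state explicitly):

```python
# Every M17 frame: 8-symbol sync word + 184 payload symbols, 4FSK.
BITS_PER_SYMBOL = 2        # 4FSK carries one of four tones per symbol
SYNC_SYMBOLS = 8
PAYLOAD_SYMBOLS = 184

payload_bits = PAYLOAD_SYMBOLS * BITS_PER_SYMBOL
print(payload_bits)        # 368 -- matches the punctured frame size
```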

The M17 packet protocol is split between data link layer (convolutional coding of the packet frame, frame counter, puncturing) and the application layer (construction of packet superframe and splitting of superframe into M17 data link layer packet frames).  Packet data gets packed into a superframe consisting of the type, packet data, CRC.  (Raw frames omit the data type information.) This is then split across data link layer frames.

The intention is that raw packet frames can serve as a drop-in replacement for other packet protocols; this is what is being used for testing APRS over M17 today.  The encapsulated version is intended for constructing more complex network protocols, such as IP, 6LoWPAN, etc.