I think it is time I fleshed out my thoughts for packet format.
Packet data starts with an LSF with the packet bit set to 1. It is then followed by 1..32 data payload frames. Each data payload frame is preceded by a unique sync word, TBD (3423?).
Each data payload frame consists of 25 bytes of data, a 6-bit frame counter / end-of-frame field, and 4 flush bits (210 bits total). These are convolutionally encoded at rate 1/2, generating 420 bits of coded data. This is punctured using a 7/8 puncture matrix, leaving 368 bits.
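The bit budget above is easy to sanity-check. The actual 7/8 puncture matrix is not specified yet, so the 8-bit pattern below (keep 7 bits, drop 1) is only a hypothetical one that happens to produce the right count:

```python
# Frame bit budget: 25 data bytes + 6-bit counter field + 4 flush bits.
DATA_BITS = 25 * 8
COUNTER_BITS = 6
FLUSH_BITS = 4
FRAME_BITS = DATA_BITS + COUNTER_BITS + FLUSH_BITS  # 210

# Rate-1/2 convolutional coding doubles the bit count (dummy bits here).
coded = [0] * (FRAME_BITS * 2)  # 420 bits

# Hypothetical 7/8 puncture pattern: keep 7 coded bits, drop the 8th.
PUNCTURE = [1, 1, 1, 1, 1, 1, 1, 0]
punctured = [b for i, b in enumerate(coded) if PUNCTURE[i % len(PUNCTURE)]]

print(FRAME_BITS, len(coded), len(punctured))  # 210 420 368
```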
The first N bytes of the first frame identify the frame contents, where N is typically 1. The content type indicators are reserved words which must be added to the spec to avoid conflicts. The content type identifier uses a UTF-8-style variable-length encoding to allow a large number of possible identifiers. In practice, this will be a 1-byte value for quite some time.
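A UTF-8-style identifier can be decoded the same way a UTF-8 code point is: the count of leading 1 bits in the first byte gives the sequence length, and each continuation byte carries 6 payload bits. This is one plausible reading of the scheme, not a pinned-down spec:

```python
def decode_content_type(data: bytes) -> tuple[int, int]:
    """Decode a UTF-8-style variable-length identifier.

    Returns (value, bytes consumed). One plausible reading of the
    UTF-8-style scheme mentioned above; not a pinned-down spec.
    """
    first = data[0]
    if first < 0x80:                       # single-byte identifier (common case)
        return first, 1
    n = 0                                  # leading 1 bits = total sequence length
    while first & (0x80 >> n):
        n += 1
    value = first & (0x7F >> n)
    for b in data[1:n]:
        value = (value << 6) | (b & 0x3F)  # 6 payload bits per continuation byte
    return value, n

print(decode_content_type(b"\x01"))        # (1, 1) -- a 1-byte content type
```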
The final two bytes contain a CCITT-16 CRC.
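For reference, a bitwise CRC-16/CCITT (polynomial 0x1021) looks like this; the 0xFFFF initial value is the common "CCITT-FALSE" variant and is an assumption here, since the exact parameters are not spelled out above:

```python
def crc16_ccitt(data: bytes, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16/CCITT, polynomial 0x1021, MSB-first.

    init=0xFFFF is the common CCITT-FALSE variant; the exact
    parameters for this packet format are an assumption.
    """
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

print(hex(crc16_ccitt(b"123456789")))  # 0x29b1 (CCITT-FALSE check value)
```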
The result is that each packet transmission can contain up to 797 data bytes (32 frames × 25 bytes, less the content type byte and the two CRC bytes). These are sent in 34 frame periods (preamble, LSF, 32 data frames), for a maximum duration of 1.36 seconds.
The 6-bit field is used as a frame counter, with the high bit set to 0. The last frame has the high bit set to 1, and the remaining 5 bits indicate the number of bytes in the frame, including the two CRC bytes.
Beyond this, it is up to upper level layers to define the contents of the frame.
Packet types already identified:
APRS
AX.25 (non-APRS)
Text message (UTF-8 encoding)
Also, CSMA/CA is used to determine if/when it is possible to send a packet.
https://en.wikipedia.org/wiki/Carrier-se..._avoidance
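A minimal sketch of the listen-before-transmit idea, assuming random binary exponential backoff; the slot time, retry limit, and function names are illustrative, not from the spec:

```python
import random
import time

def csma_ca_send(channel_busy, transmit, max_attempts=8, slot_s=0.040):
    """Try to send a packet with carrier sensing and random backoff.

    channel_busy: callable returning True while the channel is occupied.
    transmit: callable that sends the packet.
    All parameters here are illustrative, not from the spec.
    """
    for attempt in range(max_attempts):
        if not channel_busy():
            transmit()                   # channel clear: send immediately
            return True
        # Busy: wait a random number of slots, doubling the window each try.
        time.sleep(random.randint(0, 2 ** attempt) * slot_s)
    return False                         # gave up; caller decides what to do

sent = csma_ca_send(channel_busy=lambda: False, transmit=lambda: None)
print(sent)  # True when the channel is immediately clear
```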