M17 frame description
May I also suggest to stick to a higher bitrate for audio? codec2 at 1300 or even 700 bps sounds great compared to SSB, but it is absolutely not an analog FM killer.
Starting to like the new format! (v3)
The header can indeed be spread over different frames.
But maybe one or a few bits of the link control field should be specified for multiplexing.

That way you can spread a larger piece of data across the frames in between voice frames.
And if you make parts of the header optional, it can save a lot.

- As long as no 'destination address' is specified, the frames will by default be regarded as broadcast.
- As long as no type for a substream is given, it will be assumed to be the default codec2 mode.

This would allow the simplest implementation to just send voice data in e.g. stream 0 with only a source address added in the link control field.
E.g. with bit 0 selecting stream 0 or 1,
followed by 7 bits of 'header identifier' (128 different header fields available),
out of the 32-bit link control you still have 24 bits available.

That would mean that you could send a source identifier for voice on stream 0 as:
bit 0: 0 (stream 0)
bits 1-7: 'src address 1st part'
bits 8-31: first 24 bits of source

and in the next frame:
bit 0: 0 (stream 0)
bits 1-7: 'src address 2nd part'
bits 8-31: second 24 bits of source
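The packing described above can be sketched in a few lines of C, following the layout proposed here (1 stream bit, 7 header-identifier bits, 24 payload bits). The function name and the field IDs are illustrative only, not from any published spec:

```c
#include <stdint.h>

/* Hypothetical header-field identifiers (7 bits, 128 possible). */
#define HDR_SRC_PART1 0x01
#define HDR_SRC_PART2 0x02

/* Pack one 32-bit link control word:
 * bit 0     = stream number (0 or 1)
 * bits 1-7  = header identifier
 * bits 8-31 = 24 bits of field payload */
static uint32_t lc_pack(uint8_t stream, uint8_t header_id, uint32_t payload24)
{
    return ((uint32_t)(stream & 1u))
         | ((uint32_t)(header_id & 0x7Fu) << 1)
         | ((payload24 & 0xFFFFFFu) << 8);
}
```

Sending a 48-bit source address then takes two consecutive voice frames: `lc_pack(0, HDR_SRC_PART1, hi24)` followed by `lc_pack(0, HDR_SRC_PART2, lo24)`.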

If you want a data frame spliced in on stream 1 you would start with:
bit 0: 1 (stream 1)
bits 1-7: 'start new packet' (to allow data packets bigger than 256 bits)
bits 8-31: length (total bytes to expect)

.... some more frames (with other header fields) or other stream ....

bit 0: 1 (stream 1)
bits 1-7: 'last frame of packet'
bits 8-31: CRC of total packet (16 bits is probably enough, as each frame already has a 32-bit CRC)
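The start/end bracketing for a spliced-in data packet could look like this sketch. The header IDs are placeholders, and CRC-16/CCITT-FALSE is just one reasonable choice for the 16-bit packet CRC:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative header identifiers, not from any published spec. */
#define HDR_PKT_START 0x10
#define HDR_PKT_END   0x11

/* CRC-16/CCITT-FALSE (check value for "123456789" is 0x29B1). */
static uint16_t crc16_ccitt(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)((uint16_t)data[i] << 8);
        for (int b = 0; b < 8; b++)
            crc = (uint16_t)((crc & 0x8000) ? (crc << 1) ^ 0x1021 : crc << 1);
    }
    return crc;
}

/* Opening word on stream 1: bits 8-31 carry the total byte count. */
static uint32_t lc_pkt_start(uint32_t total_bytes)
{
    return 1u | ((uint32_t)HDR_PKT_START << 1)
              | ((total_bytes & 0xFFFFFFu) << 8);
}

/* Closing word: bits 8-23 carry the 16-bit CRC of the whole packet. */
static uint32_t lc_pkt_end(const uint8_t *pkt, size_t len)
{
    return 1u | ((uint32_t)HDR_PKT_END << 1)
              | ((uint32_t)crc16_ccitt(pkt, len) << 8);
}
```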

If you have fields like:
- 'src 1st part': source address
- 'src 2nd part': source address
- 'dst 1st part': destination address
- 'dst 2nd part': destination address
- 'type': 16-bit Ethernet type
you can reassemble it into an Ethernet frame, and your OS (via e.g. a tap interface in Linux) has automatic support for it.
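Once both address parts and the type field have arrived, reassembly is just a matter of laying the fields out in Ethernet order. A minimal sketch (the struct and function names are mine, not from any spec); the resulting frame could then be written to a Linux tap device:

```c
#include <stdint.h>
#include <string.h>

/* The standard 14-byte Ethernet header. */
struct eth_hdr {
    uint8_t dst[6];   /* destination MAC   */
    uint8_t src[6];   /* source MAC        */
    uint8_t type[2];  /* EtherType, big-endian on the wire */
};

/* Assemble the header from fields recovered out of successive
 * link control words. */
static void eth_build(struct eth_hdr *h,
                      const uint8_t dst[6], const uint8_t src[6],
                      uint16_t ethertype)
{
    memcpy(h->dst, dst, 6);
    memcpy(h->src, src, 6);
    h->type[0] = (uint8_t)(ethertype >> 8);   /* network byte order */
    h->type[1] = (uint8_t)(ethertype & 0xFF);
}
```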

So we have 2 streams and we can use either one of them, or even both alternately?

I'm quoting myself :)
Quote:How about punctured Golay code? It's 1/2 by itself.
Probably a bad idea. It would mean deleting 6 bits for every 24-bit codeword. Besides, it's a systematic code.
Sorry for double posting.

I have changed the M17_ANL software a little bit to make it compatible with the v3 frame format. I'm sending 125 frames; this is how it looks demodulated:

[image: 125 demodulated v3 frames]

Preamble is present only at the beginning of the transmission. There are some spikes visible between frames (a brief moment of background noise). I don't know whether the Si4463 can transmit packets without pauses in between. We should probably use the ADF7021. The other advantage of using the ADF chip is that it can probably be hacked to allow analog voice transmission. More info on that:
(12-05-2019, 01:48 PM) SP5WWP wrote: @pe1rxq:
So we have 2 streams and we can use either one of them or even both alternatively?

Yes, it depends a bit on the total bandwidth available whether both at the same time is useful, but one obvious way to use it would be voice in one stream and the leftover bandwidth for data frames, or at the very least identification and maybe authentication. This will always work if there is slightly more bandwidth available than would be needed for just voice. It would be trivial to add position reports using e.g. FPRS in between the voice data.

And of course if you have nothing to say you can use the full bandwidth for data  ;D
We need to treat this like a real network stack:
  1. Physical layer: How to send 1s and 0s.  RF modulation type, symbol rate, bits per symbol, etc.  This tells us how many bits per second we have to play with.
  2. Data Link layer: Turn 1s and 0s into data.  Common framing to all data types.  Current proposal (v3) is to have two Data Links, one for Packets and one for Stream.  Though I'm coming around to @pe1rxq's idea of using Ethernet framing for everything.  Ethernet doesn't include any FEC though, so we'd either need to add that to the on-air protocol and strip it before bridging to any other network, or add it to the Application layer protocols where it matters.
  3. Application: What to do with the data. Voice, data stream, ID beacons (including optional location, doubles as APRS), control messages, etc.

@SP5WWP: have you nailed down a Physical Layer yet?  What are the relative SNRs between a Layer 1 at 6400bps vs 9600bps?  How high can we go?  These are the kinds of questions we need to ask here.

The on-air bit rate will dictate the limitations we have to work within for the rest.  With the efficient stream protocol documented in the v3 PDF, we need at least a continuous 6400bps data rate from Layer 1; we can't really go any less efficient, like wrapping everything in an Ethernet header.  If we can get 9600bps or more, then we have more options.  See below:

What would a voice stream look like wrapped in Ethernet?
  • Preamble and Start Of Frame: 8 bytes (preamble and sync together.)
  • Destination: 6 bytes (yes, Destination first so receivers can make an early decision whether they care to continue listening or not.)
  • Source: 6 bytes
  • EtherType: 2 bytes (N <= 1500, packet length for arbitrary data. N > 1500 is message type, length is implied.)  For a voice payload, we'd define tables where N > 1500 that specify:
    • The CODEC and bitrate, and number of CODEC frames per packet.
    • Any other data in the frame.
    • The FEC used, if not added to the packet.
  • Payload: N<=1500 bytes
    • For a voice payload, N = CODEC frame size * number of CODEC frames per packet + any other data + FEC, if here instead of Ethernet.
  • CRC: 4 bytes
  • FEC: 20 bytes (Do we do this here, or as part of the application payload?  It's not standard for Ethernet, but it's only relevant to the on-air protocol anyway, so it can be stripped before any routing happens.  Can we do FEC on variable-length packets?)
  • 20 bytes of FEC for every 32 bytes of voice payload, regardless of whether it's part of the payload or the Ethernet frame
  • CODEC2 3200: An 8 byte (64 bit) CODEC frame every 20ms
If we go with 4 CODEC frames per packet (like in the v3 specification), then this is a 78 byte packet every 80ms, which is 7800 bps.  If we double that and do 8 CODEC frames per packet, then it's 130 bytes every 160ms, which is 6500bps.  12 CODEC frames per packet, 182 bytes every 240ms, 6067bps.
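The arithmetic above can be checked with a small helper. This sketch uses only the numbers quoted in this post (8-byte preamble/sync, 6+6 byte addresses, 2-byte EtherType, 4-byte CRC, 20 bytes of FEC per 32 bytes of voice payload); none of these values were final:

```c
/* Sketch of the packet-size arithmetic above. All constants are the
 * values quoted in this post, not a finalized spec. */

/* Bytes on air for n CODEC2 3200 frames (8 bytes per frame). */
static unsigned pkt_bytes(unsigned n_frames)
{
    unsigned payload = 8 * n_frames;          /* voice bytes            */
    unsigned fec = 20 * (payload / 32);       /* 20 B FEC per 32 B voice */
    return 8 + 6 + 6 + 2 + payload + 4 + fec; /* preamble..CRC + FEC    */
}

/* Average bit rate: one packet every (20 ms * n_frames), rounded down. */
static unsigned bps(unsigned n_frames)
{
    return pkt_bytes(n_frames) * 8u * 1000u / (20u * n_frames);
}
```

This reproduces the figures in the post: 4 frames per packet gives 78 bytes / 7800 bps, 8 gives 130 bytes / 6500 bps, 12 gives 182 bytes / roughly 6067 bps.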

As you go up in the number of CODEC frames per packet, you gain over-air efficiency, but you increase the receive audio latency, buffer size requirements, etc.  I don't think more than 240ms of latency would be a good user experience.

But in all three of these cases, we could fit this in a 9600 bps Layer 1, and have a bit of room to spare.
73 de KR6ZY
Forgot to mention: keeping the voice data as packets also makes it more appropriate to continue using the Si4463, which, from your description @SP5WWP, sounds like it assumes it's transmitting packets and not continuous streams.

Though, I do like the idea of making hardware that is capable of doing FM as well. 
73 de KR6ZY
Quote: Keeping the voice data as packets also makes it more appropriate to continue using the Si4463, which, from your description @SP5WWP, sounds like it assumes it's transmitting packets and not continuous streams.

So, it is okay right now? Those pauses made me panic.

Quote: have you nailed down a Physical Layer yet?  What are the relative SNRs between a Layer 1 at 6400bps vs 9600bps?  How high can we go?  These are the kinds of questions we need to ask here.

We can copy a portion of the Physical Layer from the DMR standard. We should also hit 9600bps. M17 would be a continuous 4FSK stream with a preamble (for the PA ramp-up) and frames starting with a sync word.
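In a continuous 4FSK stream, every pair of bits becomes one of four frequency-deviation symbols. A minimal sketch, with an assumed dibit-to-symbol table (the actual M17 mapping was not fixed at the time of this post):

```c
#include <stdint.h>

/* Assumed dibit-to-symbol table, indexed by dibit value 00..11,
 * in deviation units of the inner symbol spacing. */
static const int8_t dibit_to_sym[4] = { +1, +3, -1, -3 };

/* Convert one byte into four 4FSK symbols, MSB-first. */
static void byte_to_4fsk(uint8_t b, int8_t out[4])
{
    for (int i = 0; i < 4; i++)
        out[i] = dibit_to_sym[(b >> (6 - 2 * i)) & 0x3];
}
```

At 9600 bps this yields 4800 symbols per second, with the preamble sent as a ramp-up pattern before the first sync word.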
I spent the last couple hours reading over the Si4463 datasheet [1] and the Packet Handler application note [2].  It really looks like that chip is designed assuming you'll be transmitting packets in a well defined format.  If you use the Packet Handler at all, you must define which two bytes in the header are the length field, and the receiver will only read that many bytes. ([2] Section 4.3.4, page 20)

[1] Section 4.3.2, page 26 talks about FIFO mode (read: using the SPI bus to send and receive data instead of direct de/modulation using a GPIO pin, which is called Direct mode).  It's not well documented, but it keeps using "if enabled" when talking about the Packet Handler, which suggests to me that you can use FIFO mode WITHOUT enabling the Packet Handler.  It sounds like you can turn off Packet Handling, switch to TX mode, and just keep feeding data to the FIFO.  I think this is our streaming mode.

Unfortunately, my $3.50 modules haven't arrived from China yet, so I can't verify this myself.  @SP5WWP are you able to try the v3 packet format this way and see whether those gaps in the RF go away?

73 de KR6ZY
I'm really leaning toward a two-mode protocol: Packet Mode using Ethernet-like frames, and Stream Mode using something similar to v3 for voice frames.  The default/idle mode will be Packet, and you'll switch to Stream Mode by receiving a Link Control packet that specifies the parameters of the stream.

One potential problem with this is: If you turn your radio on in the middle of a stream, your receiver will default to Packet mode, never see a preamble, and therefore never trigger an interrupt that data is coming in.  To fix this, we can either program the receivers to turn off Packet Mode every so often to listen for a stream, or we can modify the stream protocol to occasionally send a preamble and sync to wake up receivers in Packet mode, maybe at the beginning of a super frame.  I think I kinda like that idea.
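The "resend a sync at each super frame" idea could be sketched like this. The sync bytes, super frame length, and frame size here are all hypothetical placeholders, not anything from the v3 draft:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Placeholder values; none of these are fixed by the v3 draft. */
#define SYNC0 0x5D
#define SYNC1 0xF8          /* hypothetical 2-byte sync word      */
#define FRAMES_PER_SF 6     /* frames per super frame (assumed)   */
#define FRAME_LEN 48        /* bytes per stream frame (assumed)   */

/* Copy frames into 'out', prepending the sync word at each super
 * frame boundary so a receiver idling in Packet mode can acquire
 * a stream that is already in progress. Returns bytes written. */
static size_t emit_stream(const uint8_t *frames, unsigned n_frames,
                          uint8_t *out)
{
    size_t w = 0;
    for (unsigned i = 0; i < n_frames; i++) {
        if (i % FRAMES_PER_SF == 0) {   /* super frame boundary */
            out[w++] = SYNC0;
            out[w++] = SYNC1;
        }
        memcpy(out + w, frames + (size_t)i * FRAME_LEN, FRAME_LEN);
        w += FRAME_LEN;
    }
    return w;
}
```

The overhead is tiny (2 bytes per super frame here), and a late-joining receiver only has to wait at most one super frame before it sees a sync.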
73 de KR6ZY
