Bug #6193
Hard coded limit on buffered mpegts data is too low.
0%
Description
There's a hardcoded limit in tvheadend/src/input/mpegts/mpegts_input.c on mpegts ingress that looks like it was based on assumptions from the early era of digital TV, and needs to be increased or made user-configurable. 50 MB is too small to absorb pipeline glitches as bandwidth expectations increase, and is already too small for some full-mux and 4K use.
History
Updated by Flole Systems about 2 years ago
- Status changed from New to Invalid
It is not too small. Receiving 32 complete DVB-C frequencies with a total bandwidth of almost 1.5 Gbit/s works perfectly fine. A 256QAM DVB-C channel carries around 50 Mbit/s and is received perfectly well with the current buffer.
Updated by J Blanc about 2 years ago
SAT>IP receivers do not guarantee a constant mpegts bitrate; they may deliver it in bursts, or padded with zeros. Internet-provisioned TV can have a far higher bitrate than OTA.
I have seen "mpegts: too much queued input data (over 50MB) for SAT>IP DVB-S Tuner, discarding new" errors when implementing a transcoding setup the system is otherwise capable of: a bursty, padded SAT>IP ingress combined with spikes of system latency trips the 50 MB queue limit and drops packets.
Updated by Flole Systems about 2 years ago
J Blanc wrote:
SAT>IP receivers do not guarantee a constant mpegts bitrate; they may deliver it in bursts, or padded with zeros.
Then they are in violation of the specification which explicitly states:
RTP packets shall not be buffered by SAT>IP servers and shall not be delivered in a single burst. As soon as an RTP packet is filled with up to seven MPEG-2 TS packets, the RTP packet shall be released by the server. Buffering of RTP packets by the server will lead to increased packet jitter and may result in the malfunctioning of clients. SAT>IP servers shall therefore send packets at a constant rate, not in bursts, and packets shall be evenly paced.
In that case you should contact the manufacturer of the SAT>IP Server and ask them to fix their server as it is in violation of the specification.
Updated by J Blanc about 2 years ago
Three concerns:
1) The RTP stream is not identical to an MPEGTS stream; it includes retransmission and flow control that can indeed result in the MPEGTS stream being delivered unevenly. The specification you quote only says that a SAT>IP server shall not coalesce more than seven mpegts packets into the same RTP packet, and shall not keep an RTP TX buffer. BUT this does not mean that those RTP packets will be received evenly paced; that is entirely up to network conditions. Further, it says nothing about how much padding will occur, or about other transmission flow concerns. The SAT>IP specification can't possibly mandate a steady rate of delivery, because that is outside the control of the SAT>IP device! Network bridges routinely alter flow rates and even drop packets, requiring retransmission.
2) Hardware, or even broadcast transmissions, being out of spec is not something the end user is responsible for. Tolerance and adaptation for out-of-spec situations already exist in many other parts of TVH.
3) Again, Internet/xDSL-provisioned TV can have much higher bursts of traffic than OTA.
Updated by Flole Systems about 2 years ago
Well, show me the math for the value you propose. You should calculate the buffer size based on the maximum normally expected bandwidth and a reasonable time (in seconds) for the buffer.
If the end user buys hardware that's out of spec, they can contact the manufacturer/seller and demand a fix. If they don't fix it, they can return the faulty device. Increasing a buffer means increasing the memory footprint, and I see no reason to do that, but I haven't yet seen calculations showing that the current size isn't enough.
My 32-Channel-DVB-C-Tuner is also running over SAT>IP and I don't see any issues with that. It is running a proper implementation though and follows the specification.
Updated by saen acro about 2 years ago
The math is simple.
But the containers carrying the data differ,
as do the transfer protocols.
Next is the MTU problem:
a 1500-byte MTU cannot be carried over the internet without fragmentation.
(xDSL, with degradation along the path at your operator's head-end, was used 15 years ago and is gone forever.)
Updated by Flole Systems about 2 years ago
Well then show me the math: where are the calculations that prove the current value isn't enough?
An RTP packet from a SAT>IP server can't be 1500 bytes anyway; had you attempted the calculation you would have noticed that. And I can very well transfer 1500 bytes over the internet without fragmentation, but all of that is irrelevant to this bug report.
Updated by saen acro about 2 years ago
Comparison of unicast and multicast is irrelevant.
In a DVB stream there is a factor called the GOP, which includes I, P and B frames.
Without buffering there is a "slow start" in unicast or a "green start" in multicast.
Packet backlog is a workaround which will not work without buffering.
/* queue packets when buffer requests more data */
#define PULL_MAX_RATE 100000000 /* 100 Mbit */
#define PULL_MIN_RATE 1000000 /* 1 Mbit */
#define PULL_MIN_PCR 5 /* 5 ms */
#define PULL_MAX_PCR 100 /* 100 ms */
#define PULL_DURATION 4000 /* total TS duration no more than 4 seconds */
#define PULL_BENCH_COUNT 2500 /* max. 2500 PCR measurements */
#define PULL_LOW_THRESH 2 /* buffer won't dequeue last 2 blocks */
A simple search will point you to the open-source code of software where this is solved.
Updated by Flole Systems about 2 years ago
No, buffering is irrelevant for that: buffering won't magically cause packets to appear after you tune, no matter how big you make the buffer. In fact a larger buffer would cause an even longer delay if it had to fill first; and if the backlog principle is used, it has no effect either, as the packets with those frames in them don't magically appear in the buffer. Ideally the backlog is 0. It is not a workaround; it is a valid approach to a reader/writer problem.
Tvheadend doesn't even need to worry about different types of frames, as it only forwards them without decoding/re-encoding them, and when it comes to transcoding it is not the tuner input buffer that buffers the transcoder input.
Please stop talking about things you don't understand; it's super annoying. I asked you to show me the math: take your values and calculate the buffer size from them. I want to see numbers and calculations that establish what the buffer size needs to be. You said it's simple, and indeed it is, so show it: put the numbers into the formula and calculate the buffer size.
Updated by J Blanc about 2 years ago
If, as you say, there is no way for this to happen, then how can I have seen the "mpegts: too much queued input data (over 50MB) for SAT>IP DVB-S Tuner, discarding new" errors that I have? Please explain how I experienced these errors if it is impossible for TVH's mpegts queue to overflow.
Please stop demanding theoretical "maths" when the problem is demonstrated in the real world. The queue is currently too small for real-world conditions, and this is only likely to get worse as IP-provisioned television expands. Further, the SAT>IP hardware I use is fully certified as SAT>IP compliant, so it would also be helpful if you stopped blaming the hardware maker.
If you are dead set against increasing the queue length beyond your theoretical maximum for ordinary conditions, then you need to address things on the network ingress. Implement flow control on network ingress so you don't accept more RTP packets than you are willing to handle the contents of, which allows normal retransmission to occur instead of dropping packets on the floor. However, that won't fix future problems caused by increased rates in the MPEGTS stream: DVB-S2 is already capable of pushing 58.8 Mbit/s, and as interest in 4K broadcasts increases, the "50 MB is good enough!" assumption becomes false.
Incidentally, you haven't posted any reason why the current magic number of 50 MB is used as the maximum queue length before TVH starts dropping packets. Or why newest-drop is chosen as the drop strategy, since that seems unwise considering how the data is consumed.
Updated by Flole Systems about 2 years ago
J Blanc wrote:
If, as you say, there is no way for this to happen, then how can I have seen the "mpegts: too much queued input data (over 50MB) for SAT>IP DVB-S Tuner, discarding new" errors that I have? Please explain how I experienced these errors if it is impossible for TVH's mpegts queue to overflow.
I'm not saying it can't happen: of course if the CPU is too slow, or if you do transcoding and the hardware can't keep up, you can manage to fill the buffer. Or if Tvheadend locks up and the buffer is never emptied at all. But it cannot happen under normal operating conditions.
Please stop demanding theoretical "maths" when the problem is demonstrated in the real world. The queue is currently too small for real-world conditions, and this is only likely to get worse as IP-provisioned television expands. Further, the SAT>IP hardware I use is fully certified as SAT>IP compliant, so it would also be helpful if you stopped blaming the hardware maker.
A buffer size has to be calculated; we don't just pick one out of thin air. You don't want to show the calculations for the proper buffer size; that's fine, then it stays as it is. I didn't blame anything on the hardware: you came up with a scenario where the server sends bursts, so I explained that hardware sending bursts is in violation of the specification. I never said your hardware is at fault or violating the specification. If the network is causing that behaviour then it needs to be fixed at the network layer; jitter in the range of multiple seconds is not considered acceptable, and that is not a Tvheadend issue either.
If you are dead set against increasing the queue length beyond your theoretical maximum for ordinary conditions, then you need to address things on the network ingress. Implement flow control on network ingress so you don't accept more RTP packets than you are willing to handle the contents of, which allows normal retransmission to occur instead of dropping packets on the floor.
Or you need a faster CPU that processes the packets faster. RTP retransmissions aren't supported by most SAT>IP servers at all, and as it's UDP based there are no retransmissions on the lower layers either.
However, that won't fix future problems caused by increased rates in the MPEGTS stream: DVB-S2 is already capable of pushing 58.8 Mbit/s, and as interest in 4K broadcasts increases, the "50 MB is good enough!" assumption becomes false.
There are no problems if you have a sufficiently fast CPU. If your CPU is too slow it doesn't matter how big the buffer gets; it just won't be able to keep up. That "assumption" (which is backed by calculations) will not become false in the near future. Again, show me the math: just saying "there will be a problem" without any kind of proof is not enough. The fact that you haven't shown, and don't want to show, any calculations already tells me that you haven't calculated anything and that your claim of future problems is not backed by anything.
Incidentally, you haven't posted any reason why the current magic number of 50 MB is used as the maximum queue length before TVH starts dropping packets. Or why newest-drop is chosen as the drop strategy, since that seems unwise considering how the data is consumed.
That's how it is currently implemented, and I don't have to justify it. You can go through the commits and previous bug reports/feature requests and try to find out why it was implemented the way it was, if you're interested. You want to change it, and I want an explanation and proof that your change is necessary; so far you haven't even come up with a value you want to change it to (or at least you haven't posted it yet). You don't want to explain and calculate it, that's fine, but then the change is not happening. It's as simple as that; you don't even need to open a PR, as it will not be merged without proper calculations.
Updated by J Blanc about 2 years ago
Your repeated confusion between a buffer and a queue, and your belief that bursty traffic can be "fixed at the network layer", make it clear that you do not understand the problems related to networking. At high enough bitrates a queue can fill up from a few ms of jitter at a time, because those ms eventually add up.
Please provide the calculation you are using that shows the current arbitrary fixed queue value is fine.
Please provide your definition of "normal operating conditions".
Please provide your definition of the "faster CPU" needed.
Please provide any simulations of traffic under multiple real-world conditions you have performed to show that 50 MB is a suitable hard cutoff for queue length.
"The maths" of queue filling is much more complicated than you seem to think, and theoretical ideas do not trump real-world observations. (As a sample, here's the simplest paper on queueing maths: https://web.mst.edu/~gosavia/queuing_formulas.pdf)
I suggest you are strongly underestimating the headroom needed in the queue to allow for unusual but occurring incidents.
Updated by Flole Systems about 2 years ago
I'm not the one having problems: I push 1.5 Gbit/s of SAT>IP traffic into Tvheadend without hitting that limit (and it moves through multiple switches and a firewall, and is split over multiple links in places, so this is not even an ideal connection between server and client); you're the one who apparently has issues at much lower bitrates. No matter how often you tell me that it doesn't work, I won't believe you, because I know that it does work and that 1.5 Gbit/s is more than 99.9% of users will ever push into Tvheadend.
Again: I do not have to show that 50 MB is sufficient. You want it changed, so you have to show that 50 MB is not sufficient and, more importantly, what would be. You apparently don't want to do that, so it's not happening.
Feel free to update this once you have changed your mind and have come up with calculations and a proposal for what size is adequate.