Feature #5428
Feature #4933: Sat-IP: Allow only one Stream per Tuner
Limit number of channels per socket connection
Description
I already saw this being discussed, but not exactly in the way I am proposing it.
I use the Sat>IP protocol on the LAN and also over the WAN. When tvheadend pulls from a tvheadend that's not on its network, I usually get, from time to time and depending on the server (tvheadend/satipaxe/minisatip), CC errors, and the streams start to go up and down. This happens because the protocol's design uses addpids and delpids to add or remove channels from the socket connection (I believe that's how it works). So a single socket can be transporting a single channel, or even all the channels in the transponder if the server allows it.
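To illustrate the mechanism: the client keeps one RTSP stream open and issues PLAY requests with addpids/delpids query parameters to change which PIDs ride on that single socket. A minimal sketch of how such URLs are built (the host and PID numbers here are made up for illustration):

```python
def play_url(stream_url, addpids=(), delpids=()):
    """Build a SAT>IP RTSP PLAY URL that changes the PID list of an
    already-established stream via addpids/delpids query parameters."""
    params = []
    if addpids:
        params.append("addpids=" + ",".join(str(p) for p in addpids))
    if delpids:
        params.append("delpids=" + ",".join(str(p) for p in delpids))
    return stream_url + ("?" + "&".join(params) if params else "")

# Switch a second channel onto the same connection, dropping the first one's PIDs:
print(play_url("rtsp://192.168.1.10/stream=1", addpids=[201, 202], delpids=[101, 102]))
# → rtsp://192.168.1.10/stream=1?addpids=201,202&delpids=101,102
```

This is why everything shares the fate of one TCP connection: all the added PIDs travel over the same socket.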
Everything runs smoothly while the connection stays below a certain MB/s; after that, the problems start.
To prove my concept (and I know it's completely against the protocol and will eat much more CPU because of the multiple parsers), I forced tvheadend to push 10 channels using Sat>IP as the source (minisatip) all day long, and the streams didn't last more than 1-2 hours without stopping and restarting. On the other hand, I forced another tvheadend instance to grab exactly the same 10 channels, but using HTTP as the input from the same Sat>IP server (minisatip), and 9 out of 10 stayed alive all day.
All tests were done over a fiber WAN with 4x more capacity than the bandwidth needed for the test.
I also did another test using two instances of tvheadend, each grabbing only half of the channels with the Sat>IP protocol, and the results were very close to the HTTP test.
I think this allows us to conclude that tvheadend could have a setting for the maximum number of channels that can be pulled from a single socket; when that limit is exhausted, a second (split) connection should be created inside the same tuner.
Thanks
History
Updated by Jaroslav Kysela almost 6 years ago
- Status changed from New to Rejected
- Parent task set to #4933
Dup of #4933 .
It would probably be much better to look at what's wrong with satip for your config.
Updated by catalin toda almost 6 years ago
I believe the problem here is TCP.
10 connections each uploading about 2MB/s are going to recover from packet loss much faster than one TCP connection uploading 20MB/s.
While a bigger buffer could partially solve part of the problem, multiple streams would be the better approach.
Yes, this would mean more processing for both minisatip and tvheadend, and on some devices that might be too much, but basically the request is to do one stream per channel.
Thank you
Updated by catalin toda almost 6 years ago
Or consider that if you are losing a packet every 1000M transferred, that will affect just one connection and not all of them. When you upload at 20MB/s over one connection, every packet loss puts pressure on the buffers minisatip has. So far I have not been able to identify any bug on the minisatip side in the logs I have analyzed.
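A rough way to quantify this is the standard Mathis approximation for steady-state TCP throughput, BW ≈ MSS·C / (RTT·√p), solved for the loss rate p a flow can tolerate. The MSS and RTT values below are assumptions for illustration, not measurements from this setup:

```python
import math

def mathis_loss_limit(throughput_bps, mss=1460, rtt=0.03, c=math.sqrt(3 / 2)):
    """Max random loss rate a single TCP flow can sustain while still
    achieving the given throughput (bytes/s), per the Mathis approximation
    BW ~= MSS * C / (RTT * sqrt(p)). MSS/RTT here are assumed values."""
    return (mss * c / (rtt * throughput_bps)) ** 2

one_big = mathis_loss_limit(20e6)   # one connection carrying 20 MB/s
ten_small = mathis_loss_limit(2e6)  # each of ten connections carrying 2 MB/s
print(f"tolerable loss, 1 x 20 MB/s: {one_big:.2e}")
print(f"tolerable loss, 10 x 2 MB/s: {ten_small:.2e}")
```

With these assumptions, each of the ten small flows tolerates a loss rate 100x higher than the single big flow, which matches the observation that the split and HTTP tests survived where the single SAT>IP socket did not.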
Updated by Joe User almost 6 years ago
Ricardo Rocha wrote:
All tests were done over a fiber WAN with 4x more capacity than the bandwidth needed for the test.
WAN bandwidth is not the same as LAN bandwidth and is far from guaranteed. There can be delays which cause problems for real-time data delivery.
The SAT>IP protocol was never meant for WAN usage, it was designed for LOCAL distribution. It was also not meant to be a conduit between two servers.
You can see the specification here: [[http://www.satip.info/resources]]
Creating a custom (non-standard) SAT>IP protocol just for tvheadend is probably not a good idea.
Updated by Jaroslav Kysela almost 6 years ago
It's not about a custom standard. We could just open a new session to the satip server for another service with a limited PID list. But the problem is that too many subsystems in tvheadend expect that the mux is received in one stream. Period. There's a similar issue with the CAM CI: the decoding cards are usually limited to one or two services, so the others should run on another tuner.
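A sketch of what splitting could look like on the client side: partition one mux's PID list into chunks and open one SAT>IP session per chunk on the same tuner. This is a hypothetical helper, not tvheadend code; the base URL, tuning parameters, and PID numbers are made up:

```python
def split_pid_sessions(pids, max_pids,
                       base="rtsp://192.168.1.10/?src=1&fe=1&freq=11362&msys=dvbs2"):
    """Partition one mux's PID list into chunks of at most max_pids and
    build one SAT>IP SETUP URL per chunk (one RTSP session per group)."""
    chunks = [pids[i:i + max_pids] for i in range(0, len(pids), max_pids)]
    return [base + "&pids=" + ",".join(str(p) for p in c) for c in chunks]

# Six service PIDs, at most four per session -> two sessions on the same tuner:
for url in split_pid_sessions([101, 102, 201, 202, 301, 302], max_pids=4):
    print(url)
```

The hard part, as noted above, is not building these URLs but the tvheadend internals that assume one mux arrives on one stream.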
Updated by Joe User almost 6 years ago
Making more TCP connections will only make overall performance worse, although it would reduce the chance of all streams failing at once.
As catalin toda pointed out, part of the problem is using TCP. I did a quick test using UDP and had much better overall performance; although there were some lost/late packets which caused some continuity errors, there was no need to restart any channels.