Feature #4517
SAT>IP: Improve the TCP transport
100%
Description
Hi,
I'm working on improving the TCP transport. This is a continuation of issue #4226, so I'd appreciate feedback on my ideas:
- The first point is to adjust the size of the RTP 'chunks' to the TCP payload 'chunks'. Currently, RTP_TCP_PAYLOAD=(87*188+12+4)=16,372 bytes and RTP_BUFSIZE=(256*1024)=262,144 bytes, so it's impossible to write one RTP chunk to the TCP socket with a single call.
- The second point is to align the socket writes to the chunk boundaries. In the function "rtp.c@satip_rtp_tcp_data()" it will be necessary to repeat the call to "tvh_write()" if the written length is less than the chunk's size.
- The third idea is to add some kind of intelligent packet discarding. If the socket buffer is completely full, then some packets need to be discarded. My initial idea is to remove one full RTP chunk, but I'm not sure if this is the best option.
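The retry loop proposed in the second point could look like the sketch below, assuming a blocking socket. Here write_all() is a hypothetical stand-in for the real "tvh_write()" helper; it keeps calling write() until the whole interleaved chunk is on the wire or a hard error occurs.

```c
#include <assert.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>

/* Write the whole buffer, retrying on short writes and EINTR.
 * Returns 0 on success, -1 on a hard error. */
static int write_all(int fd, const void *buf, size_t len)
{
  const char *p = buf;
  while (len > 0) {
    ssize_t r = write(fd, p, len);
    if (r < 0) {
      if (errno == EINTR)
        continue;            /* interrupted by a signal: retry */
      return -1;             /* hard error (or EAGAIN on a non-blocking fd) */
    }
    p   += r;                /* short write: advance past what was sent */
    len -= (size_t)r;        /* and retry with the remainder */
  }
  return 0;
}
```

This only solves the blocking case; on a non-blocking socket, EAGAIN would additionally need the queue/discard handling discussed in the third point.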
Please, can you share your opinion?
Thank you!
Files
History
Updated by Jaroslav Kysela over 7 years ago
The first point is adjust the size of RTP 'chunks' to the TCP_PAYLOAD 'chunks'. Now, RTP_TCP_PAYLOAD=(87*188+12+4)=16,372 bytes; and RTP_BUFSIZE=(256*1024)=262,144 bytes. So it's impossible to write one RTP chunk with only one call in the TCP socket.
16372 < 262144, so you can write about 16 chunks to the kernel's buffer
The second point is to improve the write in the socket to the 'chunks' boundaries. In the function "rtp.c@satip_rtp_tcp_data()" will be necessary to repeat the call to "tvh_write()" if the written length is less than the chunk's size.
The third idea is add some kind of intelligent packets discarding. If the buffer of the socket is completely full, then is required to discard some packets. My initial idea is remove one full RTP chunk, but I'm not sure if this is the best option.
Yes, non-blocking I/O should be used, and the data should be queued and/or discarded later when the socket has had no room for new data for some time. But the RTSP code might write to the socket, too. Both 'senders' must write/queue in sync.
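One way the queue-then-discard idea could be sketched (this is not Tvheadend's actual code; all names and sizes are illustrative): a bounded queue of whole chunks shared by the senders, where a full queue drops the oldest RTP chunk in one piece, so chunk boundaries stay intact.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define QLEN 16  /* illustrative queue depth */

typedef struct {
  unsigned char data[16372];  /* one full interleaved chunk */
  size_t len;
} chunk_t;

typedef struct {
  chunk_t q[QLEN];
  size_t head, count;         /* ring buffer of whole chunks */
} chunk_queue_t;

/* Enqueue a chunk; if the queue is full, discard the oldest whole
 * chunk first. Returns 1 if a chunk was dropped, 0 otherwise. */
static int enqueue_chunk(chunk_queue_t *cq, const void *buf, size_t len)
{
  int dropped = 0;
  if (cq->count == QLEN) {              /* full: drop one whole chunk */
    cq->head = (cq->head + 1) % QLEN;
    cq->count--;
    dropped = 1;
  }
  chunk_t *c = &cq->q[(cq->head + cq->count) % QLEN];
  memcpy(c->data, buf, len);
  c->len = len;
  cq->count++;
  return dropped;
}
```

Keeping the queue granularity at whole chunks is what makes the two senders easy to keep in sync: RTSP replies and RTCP can be enqueued between RTP chunks without ever splitting an interleaved frame.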
Updated by Mono Polimorph over 7 years ago
Hi,
Jaroslav Kysela wrote:
The first point is adjust the size of RTP 'chunks' to the TCP_PAYLOAD 'chunks'. Now, RTP_TCP_PAYLOAD=(87*188+12+4)=16,372 bytes; and RTP_BUFSIZE=(256*1024)=262,144 bytes. So it's impossible to write one RTP chunk with only one call in the TCP socket.
16372 < 262144, so you can write about 16 chunks to the kernel's buffer
Maybe I've misinterpreted the code. RTP_TCP_PAYLOAD defines the maximum size of an interleaved chunk over the TCP socket, and RTP_BUFSIZE is the size of the TS packet buffer (188*N) in memory. Then, how much data do you try to write? I had imagined it was the equivalent of the buffer size, and if that is true then 262,144 > 16,372, so you need to write to the socket 16 times to complete one write!
Jaroslav Kysela wrote:
The second point is to improve the write in the socket to the 'chunks' boundaries. In the function "rtp.c@satip_rtp_tcp_data()" will be necessary to repeat the call to "tvh_write()" if the written length is less than the chunk's size.
The third idea is add some kind of intelligent packets discarding. If the buffer of the socket is completely full, then is required to discard some packets. My initial idea is remove one full RTP chunk, but I'm not sure if this is the best option.

Yes, non-blocking I/O should be used and the data should be queued and/or discarded later when the socket has no room for new data after some time. But RTSP code might write to the socket, too. Both 'senders' must write/queue in sync.
The first problem isn't the interleaved mode. The RTSP code can write when a stream chunk is completed. It's only required to set some priorities: RTSP has the maximum priority, stream=0 (RTP) has the lowest priority, and stream=1 (RTCP) has a medium priority.
Furthermore, while streaming, no RTSP commands are sent by the server. So the problem of the stream stalling because the socket buffer is full is not related to the priorities; some data simply needs to be discarded.
So, do you agree to implement the priorities first?
Updated by Jaroslav Kysela over 7 years ago
Mono Polimorph wrote:
Hi,
Jaroslav Kysela wrote:
The first point is adjust the size of RTP 'chunks' to the TCP_PAYLOAD 'chunks'. Now, RTP_TCP_PAYLOAD=(87*188+12+4)=16,372 bytes; and RTP_BUFSIZE=(256*1024)=262,144 bytes. So it's impossible to write one RTP chunk with only one call in the TCP socket.
16372 < 262144, so you can write about 16 chunks to the kernel's buffer
Maybe I've misinterpreted the code. The RTP_TCP_PAYLOAD defines the maximum size of an interlaved chunk over the TCP socket. And the RTP_BUFSIZE is the size of the TS packets (188*N) in memory. Then, how much data you try to write? I had imagined it was the equivalent of the buffer size. And if this is true then 266144 > 16372, so you need to write 16 times to the socket to complete one write!
OK, I also made a mistake. To clarify: RTP_BUFSIZE is used only for UDP and it defines the in-kernel buffer for UDP packets. The default TCP buffer size can be obtained with 'cat /proc/sys/net/ipv4/tcp_wmem' (the second value). TVH does not change this buffer size through SO_SNDBUF at the moment - we should probably do it.
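Enlarging the send buffer with SO_SNDBUF could look like the sketch below (the helper name and the requested size are illustrative, not Tvheadend code). Note that Linux doubles the requested value to account for bookkeeping overhead and caps it at net.core.wmem_max, so it is worth reading the value back.

```c
#include <assert.h>
#include <sys/socket.h>
#include <unistd.h>

/* Request a larger send buffer for the fd.
 * Returns the size actually granted by the kernel, or -1 on error. */
static int set_sndbuf(int fd, int bytes)
{
  if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof(bytes)) < 0)
    return -1;
  /* Linux doubles the requested value; read back what was granted. */
  int actual = 0;
  socklen_t len = sizeof(actual);
  if (getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &actual, &len) < 0)
    return -1;
  return actual;
}
```

This would be called once per RTP-over-TCP session, right after accepting the RTSP connection, before any interleaved data is written.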
Updated by Mono Polimorph over 7 years ago
Jaroslav Kysela wrote:
OK, I made also mistake, to clarify: RTP_BUFSIZE is used only for UDP and it defines the in-kernel buffer for UDP packets. The default TCP buffer size can be obtained using 'cat /proc/sys/net/ipv4/tcp_wmem' - second value. TVH does not change this buffer size through SO_SNDBUF at the time - we should probably do it.
On my server this value defaults to '16384', just 12 bytes more than RTP_TCP_PAYLOAD.
I'll do some testing, increasing this value at the system level. However, I suggest changing the TCP buffer size with the SO_SNDBUF sockopt when using RTP over TCP.
But in any case, some logic for discarding data from the buffer when it has been full for some time, instead of stopping the stream, will be necessary.
Thank you for your support!
Updated by Mono Polimorph over 7 years ago
Hi,
I think I have discovered the problem!
The TCP connection stalls just after 30 seconds of playback. You can tune any program under any conditions: after just 30 seconds the session on the "master" (remote) server expires! This only happens with interleaved RTP over TCP.
The message log in the master server is:
8A075BF8/52: session closed (timeout)
Then the question is: Where is the problem, in the SAT>IP client part or in the SAT>IP server part?
A tcpdump shows that, with interleaved RTP over TCP, a keepalive OPTIONS command (with the corresponding response) is sent 15 seconds after the PLAY command. But after another 15 seconds no new OPTIONS appears... the stream just stops. Perhaps the problem is in the client...
Updated by Mono Polimorph over 7 years ago
Hi,
I think the problem is in the server part.
I did this test:
- Using one SAT>IP client, tune one radio channel in a MUX with more than one radio program (radio has a low bitrate, which keeps a tcpdump of the interleaved RTP stream manageable).
- After 10 seconds, change to a different radio channel in the same MUX.
- After another 10 seconds, change back to the first radio.
- After 8 seconds, change to another radio again... at 28 seconds in total.
- At second 30 the stream stops... even though the PLAY commands are sent and confirmed!
So the problem isn't in the client part. It's the server part that doesn't reset the timer.
Updated by Mono Polimorph over 7 years ago
Hi,
I solved this problem!
Here the patch:
---
 src/satip/rtsp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/satip/rtsp.c b/src/satip/rtsp.c
index d71c502..236c68b 100644
--- a/src/satip/rtsp.c
+++ b/src/satip/rtsp.c
@@ -237,7 +237,7 @@ rtsp_session_timer_cb(void *aux)
 static inline void
 rtsp_rearm_session_timer(session_t *rs)
 {
-  if (!rs->shutdown_on_close) {
+  if ((!rs->shutdown_on_close) || (rs->rtp_peer_port == RTSP_TCP_DATA)) {
     pthread_mutex_lock(&global_lock);
     mtimer_arm_rel(&rs->timer, rtsp_session_timer_cb, rs,
                    sec2mono(RTSP_TIMEOUT));
     pthread_mutex_unlock(&global_lock);
--
1.7.10.4
As you can see, the problem is that the server doesn't rearm the session timer when using interleaved RTP over TCP.
Please, commit this patch!
Updated by Mono Polimorph over 7 years ago
Hi,
Before closing this issue (the commit is done), please consider also committing this suggestion:
http://www.tvheadend.org/issues/4226#note-48
-#define RTP_TCP_PAYLOAD (87*188+12+4)    /* cca 16kB */
+#define RTP_TCP_PAYLOAD ((7*6*188)+12+4) /* less than 8kB (FFMpeg and others have troubles with >8kB) */
The reason is quite simple: some tools (like FFmpeg and VideoLAN) have an RTP buffer of 8,192 bytes, so you need to send a payload smaller than 8 KB. If not, the client will suffer errors.
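As a sanity check on that limit (a hypothetical helper, not Tvheadend code): an interleaved frame is a 4-byte RTSP interleave header plus a 12-byte RTP header plus N 188-byte TS packets, and the whole frame must fit in the client's 8,192-byte buffer.

```c
/* Largest number of 188-byte TS packets whose interleaved frame
 * (4-byte RTSP header + 12-byte RTP header + N*188) still fits
 * in a client buffer of the given size. */
int max_ts_packets(int client_buf)
{
  return (client_buf - 12 - 4) / 188;
}
```

For an 8,192-byte buffer this gives 43 packets, while the patch uses 42 (7*6), giving 42*188+12+4 = 7,912 bytes; 42 is presumably preferred because it keeps the count a multiple of the 7 TS packets used per RTP datagram in the UDP case.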
Another option is to add a configuration value for selecting the RTP_TCP_PAYLOAD size (but I don't recommend this).
Do you agree?
Updated by Mono Polimorph over 7 years ago
Mono Polimorph wrote:
You agree?
Hi,
Is there any reason for not making TVH compatible with FFmpeg and VideoLAN?
Without this change to the RTP_TCP_PAYLOAD size, these tools can't receive the stream over the TCP RTSP socket.
Updated by Mono Polimorph over 7 years ago
Hi,
This simple change improves compatibility with FFmpeg and VideoLAN.
-#define RTP_TCP_PAYLOAD (87*188+12+4)    /* cca 16kB */
+#define RTP_TCP_PAYLOAD ((7*6*188)+12+4) /* less than 8kB (FFMpeg and others have troubles with >8kB) */
Any reason not to incorporate it?
Updated by Mono Polimorph over 7 years ago
- File SAT-IP-Make-RTP-over-TCP-compatible-with-FFmpeg.diff SAT-IP-Make-RTP-over-TCP-compatible-with-FFmpeg.diff added
Hi,
I attach a simple diff file to apply this patch!
Please, see this:
- The current implementation uses a payload size of: 87*188 + 12 + 4 = 16372
- The version compatible with FFmpeg has a size of: 7*6*188 + 12 + 4 = 42*188 + 12 + 4 = 7912
- And 87 = (42*2) + 3.
Any idea or reason not to commit this patch?
Updated by Jaroslav Kysela over 7 years ago
- Status changed from New to Fixed
- % Done changed from 0 to 100
Applied in changeset commit:tvheadend|219dc8e5dcd68a63e008f17684945c992013c779.
Updated by Jaroslav Kysela over 7 years ago
I added the configurable TCP RTP data payload size in v4.3-471-g219dc8e5d.
Updated by Mono Polimorph over 7 years ago
Jaroslav Kysela wrote:
I added the configurable TCP RTP data payload size in v4.3-471-g219dc8e5d.
Great! Thank you! It works very well, and it's a better idea than my proposal.
Updated by Mono Polimorph over 7 years ago
Jaroslav Kysela wrote:
I added the configurable TCP RTP data payload size in v4.3-471-g219dc8e5d.
Hi,
I found one bug: when TVH is restarted, the payload is reset to 7896 bytes.
I feel this line of code is wrong:
http://github.com/tvheadend/tvheadend/commit/219dc8e5dcd68a63e008f17684945c992013c779#diff-e50b9e103329909d35f0c98f03b10a46R934
Instead of using it as a default value, you're hardcoding the value despite what is in the config.
Please, can you fix it?
Thank you!
Updated by Mono Polimorph over 7 years ago
Jaroslav Kysela wrote:
Fixed v4.3-484-g442a87183 .
Thank you!
Fixed and closed.