Bug #6158

Bug #5860: Constant crash on unsubscribe

CRASH using http streaming

Added by Pablo R. over 2 years ago. Updated over 2 years ago.

Status: Invalid
Priority: Normal
Assignee: -
Category: -
Target version: -
Start date: 2022-04-07
Due date:
% Done: 0%
Estimated time:
Found in version: 4.3-1986
Affected Versions:

Description

Hello, this is not the first time I have suffered this crash during HTTP streaming. I hope this helps to identify the error.

CRASH: Signal: 11 in PRG: /usr/bin/tvheadend (4.3-1986~g09a2c71ab) [7595b689faa8f0d42544113025d3fdd0e74f22ce] CWD: /
CRASH: Fault address 0x28 (Address not mapped)
CRASH: Loaded libraries: linux-vdso.so.1 /usr/lib/x86_64-linux-gnu/libdvbcsa.so.1 /usr/lib/x86_64-linux-gnu/libssl.so.1.1 /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1 /lib/x86_64-linux-gnu/libz.so.1 /usr/lib/x86_>
CRASH: Register dump [23]: 00007fbe7003ff7000676e696d61657200007fc12605c6a300007fbe7000008000007fc112dd8a2e00007fc112dd8a2f00007fc112dd8af000007fbcc33f8d4000007fbe7003ff9d000055e84457bb8100007fbcc33f842000007fb>
CRASH: STACKTRACE
CRASH: /home/tr/tvheadend/src/trap.c:176 0x55e8443d0360 0x55e844211000
CRASH: ??:0 0x7fc1261f83c0 0x7fc1261e4000
CRASH: /home/tr/tvheadend/src/webui/webui.c:307 0x55e844454db4 0x55e844211000
CRASH: /home/tr/tvheadend/src/tcp.c:1148 (discriminator 4) 0x55e844382c3b 0x55e844211000
CRASH: /home/tr/tvheadend/src/api/api_status.c:92 0x55e84440e81c 0x55e844211000
CRASH: /home/tr/tvheadend/src/api.c:102 0x55e84440d53c 0x55e844211000
CRASH: /home/tr/tvheadend/src/webui/webui_api.c:47 0x55e844460636 0x55e844211000
CRASH: /home/tr/tvheadend/src/api.c:102 0x55e84440d53c 0x55e844211000
CRASH: /home/tr/tvheadend/src/http.c:1277 0x55e84438afb2 0x55e844211000
CRASH: /home/tr/tvheadend/src/http.c:1352 0x55e84438b48d 0x55e844211000
CRASH: /home/tr/tvheadend/src/http.c:1431 0x55e84438b77c 0x55e844211000
CRASH: /home/tr/tvheadend/src/http.c:1569 0x55e84438c518 0x55e844211000
CRASH: /home/tr/tvheadend/src/http.c:2054 0x55e84438d87a 0x55e844211000
CRASH: /home/tr/tvheadend/src/http.c:2105 0x55e84438da92 0x55e844211000
CRASH: /home/tr/tvheadend/src/tcp.c:724 0x55e8443818b5 0x55e844211000
CRASH: /home/tr/tvheadend/src/tvh_thread.c:91 0x55e844378a19 0x55e844211000

History

#1

Updated by Pablo R. over 2 years ago

Investigating further, I see that the first line of the crash stack trace points to this code:

  if (hc->hc_proxy_ip) {
    tcp_get_str_from_ip(hc->hc_proxy_ip, buf, sizeof(buf));
    htsmsg_add_str(m, "proxy", buf);
  }

It so happens that I am using the HTTP proxy option on the server; it receives requests from an external nginx proxy. So I think it may be related to this part of the code.
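
For reference (this says nothing about Tvheadend's actual struct layout), a fault address as small as 0x28 with "Address not mapped" usually means a struct member at a small offset was read through a NULL or already-cleared pointer, rather than through a wild pointer. A minimal sketch with a hypothetical struct layout:

  #include <stddef.h>
  #include <stdio.h>

  /* Hypothetical layout: if 'peer' sits at offset 0x28, then reading it
   * through a NULL struct pointer faults at address 0x28
   * ("Address not mapped"), which matches the pattern in the crash dump. */
  struct http_connection {
    char  pad[0x28];
    void *peer;
  };

  int main(void)
  {
    printf("offset of peer: %#zx\n", offsetof(struct http_connection, peer));
    /* struct http_connection *hc = NULL;
     * void *p = hc->peer;   <- would fault at address 0x28 */
    return 0;
  }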

#2

Updated by Flole Systems over 2 years ago

This issue seems to have something to do with the status page being opened and (this is just guessing) a client closing the connection at the wrong moment. Is there any "unsubscription" message in the log right before the crash? Was there only a single HTTP stream active, or did you have multiple streams active at the same time? Same question for the status page: I assume you had that one open?

How often does this happen for you? Every time you use HTTP streaming? Or maybe just 1 out of 100 times?

#3

Updated by Pablo R. over 2 years ago

Flole Systems wrote:

This issue seems to have something to do with the status page being opened and (this is just guessing) a client closing the connection at the wrong moment. Is there any "unsubscription" message in the log right before the crash? Was there only a single HTTP stream active, or did you have multiple streams active at the same time? Same question for the status page: I assume you had that one open?

How often does this happen for you? Every time you use HTTP streaming? Or maybe just 1 out of 100 times?

Yes, I had 2 unsubscriptions in the same second the crash happened. I had 6 streams open at that moment.
I didn't have the tvh status page open, because I read the data externally via /api/status/subscriptions.

It happens to me very rarely, about once every three weeks. There does not seem to be a clear pattern; sometimes it happens with more connections and sometimes with fewer.

#4

Updated by Flole Systems over 2 years ago

I would guess that it's related to the unsubscriptions and your API requests. If an unsubscription happens at the exact moment the status API is requested, it probably leads to this issue. If I have some time I'll see if I can somehow reproduce this.
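
The suspected interleaving can be sketched outside of Tvheadend: one thread frees a per-connection field during unsubscription/teardown while another thread serving the status API still reads it without a common lock. Everything below (conn_t, conn_proxy_ip, the thread split) is a hypothetical illustration of that race, not the actual Tvheadend code paths:

  /* Minimal data-race sketch: build with  cc -pthread race.c  */
  #include <pthread.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  typedef struct conn {
    char *conn_proxy_ip;      /* freed by the teardown thread */
  } conn_t;

  static conn_t conn;

  static void *teardown_thread(void *arg)
  {
    (void)arg;
    usleep(1000);             /* let the status thread start reading */
    free(conn.conn_proxy_ip); /* unsubscription / connection close */
    conn.conn_proxy_ip = NULL;
    return NULL;
  }

  static void *status_thread(void *arg)
  {
    (void)arg;
    /* Unsafe: checks the pointer, then uses it, without a lock shared
     * with the teardown path. If teardown runs in between, this is a
     * use-after-free (or a NULL dereference once the field is cleared). */
    for (int i = 0; i < 100000; i++) {
      if (conn.conn_proxy_ip) {
        char buf[64];
        snprintf(buf, sizeof(buf), "%s", conn.conn_proxy_ip);
      }
    }
    return NULL;
  }

  int main(void)
  {
    conn.conn_proxy_ip = strdup("192.168.1.10");
    pthread_t t1, t2;
    pthread_create(&t1, NULL, status_thread, NULL);
    pthread_create(&t2, NULL, teardown_thread, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
  }

Built with -fsanitize=thread, this kind of interleaving is typically flagged as a data race even on runs where it does not crash, which may help when trying to reproduce the issue.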

#5

Updated by Flole Systems over 2 years ago

  • Status changed from New to Invalid
  • Parent task set to #5860

Duplicate of #5860
