Debugging

If you're going to be regularly trying development versions of Tvheadend or need to report a crash or deadlock then you should really read this page!

If you are investigating problems within Tvheadend then it's worth being familiar with tools such as gdb and valgrind or clang, although these are not covered here.

However, one thing that can be useful when investigating crashes within Tvheadend is to ensure that coredumps are generated; this allows post-mortem analysis in gdb without having to actually run Tvheadend within gdb.

You can enable this temporarily by running:

ulimit -c unlimited

To make this permanent, put this somewhere in your shell environment setup (.bashrc, .profile, etc.).

Firstly, I'd recommend that if you're specifically trying to investigate an issue you consider running Tvheadend manually, rather than as a service, as documented here.

Logging

I'd strongly recommend that if you're specifically trying to investigate a crash or other problem in Tvheadend you enable debugging:

  • -s will output debug info to syslog
  • --debug allows you to specify which subsystem to debug (TODO: add more info)
  • --trace allows you to enable trace (more in-depth) logging on specific subsystems

You can also get Tvheadend to log to its own file using:

-l FILE
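
For example, a single invocation combining these options might look like this (the subsystem names below are purely illustrative - use whichever subsystems you're interested in, and whatever log path suits you):

tvheadend -s --debug linuxdvb,mpegts --trace mpegts -l /tmp/tvheadend-debug.log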

You may also modify the debug settings via the web GUI as an admin user, under Configuration/Debugging. Note that this information is not saved; it only applies at run-time (to the currently running process).

  • Debug log path - filename to store log
  • Debug trace - enable traces
  • Debug subsystems - comma separated list of subsystems
  • Trace subsystems - comma separated list of subsystems

Trace support must be compiled into the tvheadend binary - see Traces.
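
If your binary was built without trace support you'll need to rebuild it; a minimal sketch, assuming your checkout's configure script exposes the --enable-trace switch, would be:

./configure --enable-trace
make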

Basic crash debug

You may run tvh directly in gdb using the command:

gdb --args /the standard tvh command line/

(gdb) run
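
As a concrete sketch (the binary path and options here are only examples - substitute your usual command line):

gdb --args /usr/bin/tvheadend -u hts -g video -s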

Or attach gdb to the running process:

gdb tvheadend pid

(gdb) continue

You may need to replace tvheadend with the full path to the binary and you will need to replace pid with the PID of the running process. To find that run:

ps -C tvheadend
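
For example, the two steps can be combined in one command (assuming the binary lives in /usr/bin and pidof is available):

gdb /usr/bin/tvheadend $(pidof tvheadend)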

Once you have gdb attached grab a stack trace from every thread using the following command:

(gdb) set logging on
(gdb) set pagination off
(gdb) bt full

Note: "set logging on" will cause GDB to write its output to a file, by default this will be gdb.txt in the current directory.

Enabling coredumps

If you need to investigate a problem in a running process you can always attach gdb later (see below). If you need to trap crashes, you can configure your system to generate a core file and then retrospectively analyse this with gdb.

If you're running manually you should enable coredumps in your environment:

ulimit -c unlimited

I'd recommend you enable this permanently by putting this command in your shell initialisation scripts (.bashrc, etc.).
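
For example, for bash:

echo "ulimit -c unlimited" >> ~/.bashrc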

If you're running as a daemon then you should use the -D command line option; this will enable coredumps from the daemon. If you start Tvheadend via sysvinit, upstart, etc., then you will need to put this in the configuration file, e.g.:

TVH_ARGS="-D" 

Finally, it's probably worth changing the coredump file name format; personally I use the following configuration:

echo core.%h.%e.%t | sudo tee /proc/sys/kernel/core_pattern
echo 0 | sudo tee /proc/sys/kernel/core_uses_pid

Or put the following in /etc/sysctl.conf:

kernel.core_pattern = core.%h.%e.%t
kernel.core_uses_pid = 0
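
Settings in /etc/sysctl.conf are only read at boot, so to apply them immediately run:

sudo sysctl -p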

If you're using a system like Ubuntu that uses apport (and cripples the ability to change the core format) just set core_uses_pid=1 instead.

Note: coredumps are (by default) stored in the current working directory. To make it possible for the daemon to write files, the current working directory is set to /tmp when using -D, so check there for core files.
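
With the core_pattern suggested above the files will be named core.<hostname>.tvheadend.<timestamp>, so something like this will find them:

ls -l /tmp/core.*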

To verify that you have everything configured properly you can use the -A option to force a crash on startup. Do this from the command line or add to /etc/default/tvheadend:

TVH_ARGS="-D -A" 

Note: remember to remove the option after you've tested it!

Processing the core file

Once you have a core file you can start up gdb with that coredump, just as if you'd caught the crash while running under gdb:

gdb tvheadend core

You may need to replace tvheadend and core above with the proper paths.
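
For example, with the core_pattern suggested above and a daemon started with -D, it might look like this (the paths are illustrative):

gdb /usr/bin/tvheadend /tmp/core.myhost.tvheadend.1480934627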

For most crashes the most useful information is the backtrace; this provides a stack trace showing where the code crashed and the stack contents at the time of the crash:

(gdb) set logging on
(gdb) set pagination off
(gdb) bt full
#0  0x00007f5b10cc1425 in __GI_raise (sig=<optimised out>)
    at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
        resultvar = 0
        pid = <optimised out>
        selftid = 7517
#1  0x00007f5b10cc4b10 in __GI_abort () at abort.c:120
        act = {__sigaction_handler = {sa_handler = 0, sa_sigaction = 0}, 
          sa_mask = {__val = {18446744073709551615 <repeats 16 times>}}, 
          sa_flags = 0, sa_restorer = 0}
        sigs = {__val = {32, 0 <repeats 15 times>}}
#2  0x000000000040744e in main (argc=<optimised out>, argv=<optimised out>)
    at src/main.c:810
        i = <optimised out>
        set = {__val = {16386, 0 <repeats 15 times>}}
        adapter_mask = <optimised out>
        log_level = <optimised out>
        log_options = <optimised out>
        log_debug = <optimised out>
        log_trace = <optimised out>
        buf = "/tmp\000\000\000\000\360\350\364\023[\177\000\000\000\320\365\023[\177\000\000t\n\327\023[\177\000\000\370\271\311\020[\177\000\000\017\000\000\000\000\000\000\000:\000\000\000\000\000\000\000h\344\364\023[\177\000\000.N=\366\000\000\000\000\236\022\327\023[\177\000\000\300\304S\205\377\177\000\000.\000\000\000\000\000\000\000 \305S\205\377\177\000\000\377\377\377\377\000\000\000\000\264\352\310\020[\177\000\000\250\354\310\020[\177\000\000\360\304S\205\377\177\000\000\360\350\364\023[\177\000\000@\256\311\020[\177", '\000' <repeats 18 times>"\340, \346\364\023[\177\000\000\000\320\365\023[\177\000\000\231,@\000\000\000\000\000\370\271\311\020[\177\000\000\340\033@\000\000\000\000\000\000\000\000\000\001\000\000\000\021\b\000\000\001", '\000' <repeats 11 times>, " \266\370\023[\177\000\000`\305S\205\377\177\000\000.N=\366\000\000\000\000\340\346\364\023[\177\000\000\200\305S\205\377\177\000\000"...
        opt_help = 0
        opt_version = 0
        opt_fork = 1
        opt_firstrun = 0
        opt_stderr = 0
        opt_syslog = 0
        opt_uidebug = 0

Note: "set logging on" will cause GDB to write its output to a file, by default this will be gdb.txt in the current directory.

However, I'd strongly recommend that you keep a copy of the tvheadend binary and the core file in case further analysis is required.

Dead or Live Lock

If Tvheadend appears to die but the process is still running, then it's quite possible that the process is deadlocked (or possibly livelocked). The best way to help investigate such a problem is to get a full stack trace from every thread in the system.

First attach gdb to the running process:

gdb tvheadend pid

(gdb) continue

You may need to replace tvheadend with the full path to the binary and you will need to replace pid with the PID of the running process. To find that run:

ps -C tvheadend

Once you have gdb attached grab a stack trace from every thread using the following command:

(gdb) set logging on
(gdb) set pagination off
(gdb) thread apply all bt full

Note: "set logging on" will cause GDB to write its output to a file, by default this will be gdb.txt in the current directory.

It might also be useful to generate a core file for good measure:

(gdb) generate-core-file
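
By default gdb writes the core as core.<pid> in its current directory; you can also give it an explicit name, e.g.:

(gdb) generate-core-file /tmp/tvheadend-deadlock.core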

This information may give an indication as to why things are locked; often two threads are stuck trying to lock a mutex (probably each holds the lock the other is waiting for).

Reporting a crash (or lock)

If you're going to report a crash (or lockup) then please try to provide the above information, including a debug log (or whatever logging you have), a core file and the tvheadend binary, and basic information about the platform (distribution, version and architecture) you're running on.

Memory leaks or corruption

These problems can be really difficult to track down. There are basically two tools which may help to discover memory leaks or memory corruption.

Valgrind

It is very slow, but it may be usable for problems which are triggered every time:

valgrind --leak-check=full --show-reachable=yes /tvh_command_line/
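
As a concrete sketch (the binary path and options are only examples; --log-file keeps the report for later):

valgrind --leak-check=full --show-reachable=yes --log-file=/tmp/valgrind-tvh.log /usr/bin/tvheadend -u hts -g video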

clang

There is an address sanitizer in the clang toolkit. The binary must be rebuilt using the clang compiler and libraries:

ARGS="/your_configure_arguments/" 
SANITIZER=address
export CFLAGS="-fsanitize=$SANITIZER" 
export LDFLAGS="-fsanitize=$SANITIZER" 
./configure $ARGS --disable-pie --enable-ccdebug python=python3 cc=clang ld=clang nowerror
make -j4
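
Then run the rebuilt binary in the foreground as usual; when the sanitizer detects a problem it prints a report to stderr and aborts. To send the reports to files instead you can set ASAN_OPTIONS (the paths below are only examples):

ASAN_OPTIONS=log_path=/tmp/tvh-asan ./build.linux/tvheadend -u hts -g video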
