Debugging » History » Version 1

Adam Sutton, 2013-12-29 00:27

h1. Debugging

If you're going to be regularly trying development versions of Tvheadend, or need to report a crash or deadlock, then you should really read this page!

If you are investigating problems within Tvheadend then it's worth being familiar with tools such as gdb and valgrind, although these are not covered here.

However, one thing that can be useful when investigating crashes within Tvheadend is to ensure that coredumps are generated. This allows post-mortem analysis in gdb without having to actually run Tvheadend within gdb.

You can enable this temporarily by running:

<pre>
ulimit -c unlimited
</pre>

To make this permanent, put the command somewhere in your shell environment setup (.bashrc, .profile, etc.).

Firstly, I'd recommend that if you're specifically trying to investigate an issue then you should consider running Tvheadend manually, rather than as a service, as documented [[Development|here]].

h2. Logging

If you're specifically trying to investigate a crash or other problem in Tvheadend, I'd strongly recommend that you enable debugging:

* *-s* will output debug info to syslog
* *--debug* allows you to specify which subsystems to debug (TODO: add more info)
* *--trace* allows you to enable trace (more in-depth) logging on specific subsystems

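As an example, a debug run might be started with something like the following; the subsystem names here are purely illustrative, so check the output of tvheadend --help for the ones your build actually understands:

<pre>
# subsystem names below are examples only
tvheadend -s --debug linuxdvb --trace mpegts
</pre>
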
You can also get Tvheadend to log to its own file using:

<pre>
-l FILE
</pre>

h2. Enabling coredumps

Although you can run Tvheadend within gdb, personally I never bother. If you need to investigate a problem with a running process you can always attach later (see below), and if you need to trap crashes you can configure your system to generate a core file and then retrospectively analyse it with gdb.

If you're running manually, you should enable coredumps in your environment:

<pre>
ulimit -c unlimited
</pre>

I'd recommend you enable this permanently by putting this command in your shell initialisation scripts (.bashrc etc.).

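For example, assuming you use bash, something like the following will do it (adjust to taste for your shell of choice):

<pre>
echo "ulimit -c unlimited" >> ~/.bashrc
</pre>
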
If you're running as a daemon then you should use the -D command line option, which will enable coredumps from the daemon. If you start Tvheadend using sysvinit, upstart, etc. then you will need to put this in the configuration file, e.g.:

<pre>
TVH_ARGS="-D"
</pre>

Finally, it's probably worth changing the coredump file format; personally I use the following configuration:

<pre>
echo core.%h.%e.%t | sudo tee /proc/sys/kernel/core_pattern
echo 0 | sudo tee /proc/sys/kernel/core_uses_pid
</pre>

Or put the following in /etc/sysctl.conf:

<pre>
kernel.core_pattern = core.%h.%e.%t
kernel.core_uses_pid = 0
</pre>

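Settings in /etc/sysctl.conf are normally applied at boot; to apply them immediately you can reload the file with:

<pre>
sudo sysctl -p
</pre>
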
If you're using a system like Ubuntu that uses apport (and cripples the ability to change the core format), just set core_uses_pid=1 instead.

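You can check whether apport has taken over coredump handling by inspecting the current pattern; a value starting with a pipe (|) means the kernel hands the dump to that helper rather than writing a core file directly:

<pre>
cat /proc/sys/kernel/core_pattern
</pre>
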
Note: coredumps are (by default) stored in the current working directory. To make it possible for the daemon to write files, the current working directory is set to /tmp when using -D, so check there for core files.

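With the core_pattern suggested above you'd therefore expect to find files named something like /tmp/core.&lt;hostname&gt;.tvheadend.&lt;timestamp&gt;, for example:

<pre>
ls -l /tmp/core.*
</pre>
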
To verify that you have everything configured properly you can use the -A option to force a crash on startup. Do this from the command line, or add it to /etc/default/tvheadend:

<pre>
TVH_ARGS="-D -A"
</pre>

Note: remember to remove the option after you've tested it!

h2. Processing the core file

Once you have a core file you can start up gdb with that coredump, just as if you'd caught the crash while running under gdb:

<pre>
gdb tvheadend core
</pre>

You may need to replace _tvheadend_ and _core_ above with the proper paths.

For most crashes the most useful information is the backtrace; this provides a stack trace showing where the code crashed and the stack information at the time of the crash:

<pre>
(gdb) set logging on
(gdb) set pagination off
(gdb) bt full
#0  0x00007f5b10cc1425 in __GI_raise (sig=<optimised out>)
    at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
        resultvar = 0
        pid = <optimised out>
        selftid = 7517
#1  0x00007f5b10cc4b10 in __GI_abort () at abort.c:120
        act = {__sigaction_handler = {sa_handler = 0, sa_sigaction = 0}, 
          sa_mask = {__val = {18446744073709551615 <repeats 16 times>}}, 
          sa_flags = 0, sa_restorer = 0}
        sigs = {__val = {32, 0 <repeats 15 times>}}
#2  0x000000000040744e in main (argc=<optimised out>, argv=<optimised out>)
    at src/main.c:810
        i = <optimised out>
        set = {__val = {16386, 0 <repeats 15 times>}}
        adapter_mask = <optimised out>
        log_level = <optimised out>
        log_options = <optimised out>
        log_debug = <optimised out>
        log_trace = <optimised out>
        buf = "/tmp\000\000\000\000\360\350\364\023[\177\000\000\000\320\365\023[\177\000\000t\n\327\023[\177\000\000\370\271\311\020[\177\000\000\017\000\000\000\000\000\000\000:\000\000\000\000\000\000\000h\344\364\023[\177\000\000.N=\366\000\000\000\000\236\022\327\023[\177\000\000\300\304S\205\377\177\000\000.\000\000\000\000\000\000\000 \305S\205\377\177\000\000\377\377\377\377\000\000\000\000\264\352\310\020[\177\000\000\250\354\310\020[\177\000\000\360\304S\205\377\177\000\000\360\350\364\023[\177\000\000@\256\311\020[\177", '\000' <repeats 18 times>"\340, \346\364\023[\177\000\000\000\320\365\023[\177\000\000\231,@\000\000\000\000\000\370\271\311\020[\177\000\000\340\033@\000\000\000\000\000\000\000\000\000\001\000\000\000\021\b\000\000\001", '\000' <repeats 11 times>, " \266\370\023[\177\000\000`\305S\205\377\177\000\000.N=\366\000\000\000\000\340\346\364\023[\177\000\000\200\305S\205\377\177\000\000"...
        opt_help = 0
        opt_version = 0
        opt_fork = 1
        opt_firstrun = 0
        opt_stderr = 0
        opt_syslog = 0
        opt_uidebug = 0
</pre>

Note: "set logging on" will cause GDB to write its output to a file; by default this will be gdb.txt in the current directory.

However, I'd strongly recommend that you keep a copy of the tvheadend binary and the core file in case further analysis is required.

h2. Dead or Live Lock

If Tvheadend appears to die but the process is still running, then it's quite possible that the process is deadlocked (or possibly livelocked). The best way to help investigate such a problem is to get a full stack trace from every thread in the system.

First attach gdb to the running process:

<pre>
gdb tvheadend pid
</pre>

You may need to replace _tvheadend_ with the full path to the binary, and you will need to replace _pid_ with the PID of the running process. To find that, run:

<pre>
ps -C tvheadend
</pre>

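Alternatively, if your system provides it, pidof gives you just the PID:

<pre>
pidof tvheadend
</pre>
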
Once you have gdb attached, grab a stack trace from every thread using the following command:

<pre>
(gdb) set logging on
(gdb) set pagination off
(gdb) thread apply all bt full
</pre>

Note: "set logging on" will cause GDB to write its output to a file; by default this will be gdb.txt in the current directory.

It might also be useful to generate a core file for good measure:

<pre>
(gdb) generate-core-file
</pre>

This information may give an indication as to why things are locked; often two threads are stuck trying to lock a mutex (probably each holds the lock the other one wants).

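If you want to dig a little further yourself, glibc's pthread_mutex_t records the thread currently holding the lock, so from gdb you can usually see who owns a contested mutex. This is only a sketch and the internal field names depend on your glibc version: select the frame that is blocked inside pthread_mutex_lock (the frame number will vary), print the owner field of its mutex argument, and match the resulting LWP id against the thread list:

<pre>
(gdb) frame 1
(gdb) print mutex->__data.__owner
(gdb) info threads
</pre>
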
h2. Reporting a crash (or lock)

If you're going to report a crash (or lockup) then please try to provide the above information, including a debug log (or whatever logging you have) and a core file. If you're not using a pre-built tvheadend package, please also provide the binary and basic information about the platform you're running on (distribution, version and architecture).