Hello everyone, with each channel change (with HW transcoding via NVENC), more and more GPU memory is consumed by NVENC. Since when this has been the case I unfortunately cannot say, as I last tested this release a year ago on different hardware.
In version 4.2.7 one can clearly see that the GPU RAM is released again after each channel switch. At around 2500 MB of usage, the following message appears in the TVH log: "libav: AVHWDeviceContext: Failed to initialize VAAPI connection: -1 (unknown libva error)." Until TVH is restarted, no transcoding is possible at all. In my opinion, the previous transcode process is simply not being closed properly.
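For illustration, here is a minimal sketch (not Tvheadend's actual code; the struct and function names are invented) of the per-stream teardown an FFmpeg-based hardware transcoder has to perform. If the AVBufferRef holding the hardware device context is never unreferenced, every channel change leaves a dead device context behind on the GPU, which would produce exactly this kind of creeping memory usage:

/*
 * Hypothetical per-stream teardown for an FFmpeg hardware transcoder.
 * Both the encoder context and the hardware device reference must be
 * released; leaking either keeps GPU memory allocated.
 */
#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>

struct hw_session {                 /* invented name, for illustration */
    AVBufferRef    *hw_device_ctx;  /* CUDA/VAAPI device, one per stream */
    AVCodecContext *enc_ctx;        /* NVENC encoder bound to that device */
};

static void hw_session_close(struct hw_session *s)
{
    /* Free the encoder first; it holds references into the hardware
     * frame pool that lives in GPU memory. */
    avcodec_free_context(&s->enc_ctx);

    /* Drop our reference to the device context. Only when the last
     * reference is gone does libavutil destroy the device and the
     * driver release the memory that nvidia-smi reports. */
    av_buffer_unref(&s->hw_device_ctx);
}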
Thanks and regards
Attached is output from nvidia-smi:
Tue Dec 11 18:01:36 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.77                 Driver Version: 390.77                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro P5000        Off  | 00000000:65:00.0 Off |                  Off |
| 34%   56C    P0    46W / 180W |   1499MiB / 16278MiB |      1%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     66068      C   /usr/bin/tvheadend                          1487MiB |
+-----------------------------------------------------------------------------+

Tue Dec 11 18:01:43 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.77                 Driver Version: 390.77                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro P5000        Off  | 00000000:65:00.0 Off |                  Off |
| 34%   56C    P0    44W / 180W |   1507MiB / 16278MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     66068      C   /usr/bin/tvheadend                          1495MiB |
+-----------------------------------------------------------------------------+

Tue Dec 11 18:01:44 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.77                 Driver Version: 390.77                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro P5000        Off  | 00000000:65:00.0 Off |                  Off |
| 34%   56C    P0    44W / 180W |   1640MiB / 16278MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     66068      C   /usr/bin/tvheadend                          1628MiB |
+-----------------------------------------------------------------------------+
It's been a year, and I still have the same issue. When using an HTSP client, the memory is not freed when streaming stops; it keeps stacking up until the server goes down. With an M3U playlist + simple XMLTV it works like a charm.
Any ideas? If you need more info, do not hesitate to come back to me.
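To put numbers on it, here is a small standalone monitor (a sketch; it assumes the NVML header and libnvidia-ml from the driver package are installed; compile with gcc nvml_mon.c -lnvidia-ml) that polls the used GPU memory once per second, so the growth can be correlated with each HTSP channel change:

#include <stdio.h>
#include <unistd.h>
#include <nvml.h>

int main(void)
{
    /* Initialize NVML and grab the first GPU. */
    if (nvmlInit() != NVML_SUCCESS) {
        fprintf(stderr, "NVML init failed\n");
        return 1;
    }
    nvmlDevice_t dev;
    if (nvmlDeviceGetHandleByIndex(0, &dev) != NVML_SUCCESS) {
        fprintf(stderr, "GPU 0 not found\n");
        nvmlShutdown();
        return 1;
    }

    /* Print used GPU memory once per second; stop with Ctrl-C. */
    for (;;) {
        nvmlMemory_t mem;
        if (nvmlDeviceGetMemoryInfo(dev, &mem) == NVML_SUCCESS)
            printf("GPU memory used: %llu MiB\n", mem.used / (1024 * 1024));
        sleep(1);
    }
}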
I think that's just an overlay on top of the "normal" NVIDIA sources:
"Please note that NvPipe acts as a lightweight synchronous convenience layer around the NVIDIA Video Codec SDK and doesn't offer all high-performance capabilities. If you're looking for ultimate encode/decode performance, you may want to consider using NvCodec directly."
Therefore: normal IPTV: destroy -> a -> no problem. HTSP IPTV: destroy -> b -> not complete -> problem.
In version 4.2, a new subprocess was spawned for each stream. I found that better, because destroying the subprocess cleaned everything up.
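As an illustration of why that model cannot leak across channel changes (my sketch, not the actual 4.2 code): when a process exits, the NVIDIA driver releases every GPU allocation that process made, so even a sloppy teardown inside the child is wiped out the moment the stream stops:

#include <signal.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork one transcoder child per stream. */
static pid_t start_stream_transcoder(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: placeholder arguments; the real transcode command
         * with NVENC options would go here. */
        execlp("ffmpeg", "ffmpeg", "-version", (char *)NULL);
        _exit(127);  /* exec failed */
    }
    return pid;      /* parent remembers the pid for this stream */
}

/* Stop a stream: kill the child and reap it. All of its GPU memory
 * is returned to the driver on exit, leaks included. */
static void stop_stream_transcoder(pid_t pid)
{
    kill(pid, SIGTERM);
    waitpid(pid, NULL, 0);
}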
Sorry to disappoint you, but nobody is actively working on Tvheadend at the moment. There are a few changes now and then, but other than that not much work is being done.
Let's start with yourself: why don't you care? Why aren't you spending time investigating this issue? It seems this issue is not that important if you don't put effort into it. That's totally fine with me, but if your issue isn't important to you, it's not important to me either (especially since I can't reproduce it and I am not affected by it at all).
Are you using NVENC with TVH? I think not, so you can't reproduce it. The problem is I don't know which source file is responsible for transcoding and channel switching.
If you don't even want to spend the time to update to the latest version, you really shouldn't complain that nobody is working on it. You don't even try to fix the issue, so I won't either.
Also, no exact Tvheadend version or debug log was presented either...
An old driver is in use, and Tvheadend is running in Docker. Once someone can reproduce this on the latest Tvheadend version with the latest NVIDIA driver on a native system, I can consider reopening this.
Both codecs have the same problem. I don't think this is a problem with the codec; I think it's a problem with the opening and closing of the subprocess.
Ronny M. wrote:
saen acro wrote:
Rene Wagler wrote:
Glad I'm not alone... but why doesn't Flole Systems have this bug?
Because no one reports bugs correctly: what OS/kernel is used, what hardware platform, what devices/drivers are used, etc.
Valgrind debugging, for example.
OS: Debian (10.5)
Kernel: 4.19.132-1
Hardware: Supermicro motherboard, Intel(R) Xeon(R) Silver 4110 CPU
GPU: Nvidia Quadro P5000
Driver Version: 450.57
NVIDIA-SMI: 450.57
CUDA Version: 11.0
Docker: No
ffmpeg: static from the TVH build
TVH Version: 4.3-1857