I have been running for some time a small single-stream video streaming server. When I saw Owncast mentioned here my immediate thought was to switch over to it, because it has features beyond my homegrown solution (based on nginx-rtmp and a Python/Flask application which performs "management" functions such as authentication of the publisher and injecting an "idle" animation when there is no publisher). For reasons largely of laziness I run this on a cheap VPS instead of on the larger machine I have colo'd.

Unfortunately I immediately ran into huge performance issues with Owncast compared to my solution, with input bitrates stuck at 200-300 kbps. I haven't looked into it exhaustively, but I think the main problem is that Owncast seems to encode the video n+1 times, where n is the number of desired quality levels (the extra encoding pass on the published stream smooths out publishing problems), while my solution sends out the original published stream as the high-quality variant and so encodes only n-1 times, with a couple of additional steps to put the same streams into new containers. Of course I have also set ffmpeg/libx264 to very high performance options, and I think Owncast is less conservative here. Really just an FYI for anyone else looking at making a switch or doing something similar. Video encoding is CPU-intensive, so if you want to run it on a small/inexpensive VPS, it's helpful to 1) architect to absolutely minimize the number of encoding passes required (even when new containers are needed), and 2) provide fine-tuning of the encoder settings, since just the change from "fast" to "veryfast" with libx264 makes a pretty big difference in CPU time without a very noticeable difference in the video output.

Another issue with Owncast is that it has a choppy/unreliable response from the video player to the actual stream starting and stopping. We're both using the same frontend video player, and I think it's a combination of limitations of that video player and inherent limitations of HLS/MPEG-DASH (the "download a playlist over and over again" model of HLS has some intrinsic issues when generation of the playlist starts and stops). I almost solved this in my own solution by 1) making it so that, in theory, there is always a video stream, because the server "plays a video to itself" when there is no publisher, and 2) a very lazy fix of having the frontend page auto-refresh periodically when playback fails to start. These are far from perfect, and there is still some choppiness/buffering/repeated playback of a small segment when publishing starts and stops (and thus the management server starts/stops feeding the idle animation).

I think the way to really fix these problems is to write a custom demuxer for ffmpeg that uses a small buffer of a "technical difficulties" still (or whatever) to feed into the processing chain when the stream publisher (whether internal or external) fails to deliver a packet for a certain time period, so that generation of the HLS playlist never halts long enough for the video player to stall. A custom demuxer is on my list of things to do, but I'm not much of a C++ person, so it's a little daunting for me. I'm currently feeding an ffmpeg encoder with named pipes of uncompressed media streams. This is kind of an off-label use that ffmpeg handles but isn't entirely happy about, and in particular it requires some hacking to get ffmpeg to never think the stream ended, even when the container says so.

Perhaps one day I will publish my solution, but it's extremely janky right now and has a lot of hard-coded config. I also think it's fairly easy to come up with on your own with some basic Linux knowledge and a willingness to spend an evening messing around with ffmpeg, pipes, different containers, and just seeing what it lets you get away with.
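For anyone curious what the "pass the published stream through untouched" approach looks like, here is a rough sketch of a single ffmpeg invocation that copies the incoming stream as the top HLS variant and only re-encodes the lower rendition. This is not the author's actual config; the input URL, bitrate, and paths are placeholders, and a real setup needs keyframe-aligned input for the copied variant to segment cleanly.

```shell
# Command sketch (illustrative values): variant 0 is a straight remux of the
# published stream (-c copy, no encoding pass); variant 1 is the only
# libx264 encode, using a fast preset to keep CPU use down on a small VPS.
ffmpeg -i rtmp://localhost/live/stream \
  -map 0:v -map 0:a -c:v:0 copy -c:a:0 copy \
  -map 0:v -map 0:a -c:v:1 libx264 -preset veryfast -b:v:1 800k -c:a:1 aac \
  -f hls -var_stream_map "v:0,a:0 v:1,a:1" \
  -master_pl_name master.m3u8 \
  -hls_time 2 -hls_list_size 6 /srv/hls/stream_%v.m3u8
```

The design point is the one the comment makes: for n quality levels this runs n-1 encodes instead of n+1, because the highest rendition is just a container change.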
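The named-pipe trick has one well-known wrinkle that matches the "ffmpeg thinks the stream ended" problem described above: a FIFO delivers EOF to the reader as soon as the last writer closes it. A classic workaround is to keep a dummy writer attached so the reader survives gaps between publish sessions. A minimal sketch, with `cat` standing in for the long-running ffmpeg encoder and `printf` standing in for publish sessions (all names here are illustrative):

```shell
rm -f /tmp/stream.pipe /tmp/stream.out
mkfifo /tmp/stream.pipe

# Dummy writer: holds the write end open so the reader never sees EOF
# just because one publish session ended.
sleep 5 > /tmp/stream.pipe &
HOLD=$!

# Long-running reader, standing in for the ffmpeg encoder process.
cat /tmp/stream.pipe > /tmp/stream.out &
READER=$!

# Two "publish sessions" come and go; the reader outlives both.
printf 'session 1\n' > /tmp/stream.pipe
printf 'session 2\n' > /tmp/stream.pipe

# Only when the holder releases the write end does the reader get EOF.
kill $HOLD
wait $READER
```

Without the `HOLD` writer, the reader would exit after "session 1", which is exactly the stream-ended behavior the comment describes hacking around.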