FreeTube is fantastic, and since it started including support for SponsorBlock, it has become my main way of consuming YouTube content. The only downside is that there is no built-in way to synchronize your setup across computers. By the power of open source, let's fix that!
Ingredients
Two or more computers
Software that syncs files from your filesystem (I use Syncthing)
Symbolic links
If you have a sync solution already set up, syncing your settings is quite simple. FreeTube has a directory that contains all its settings, which can be added to your chosen sync software, then linked into the correct locations. In my case, I wanted to sync from the Linux Flatpak version of the app that runs on my personal computer to my MacBook, and to my work computer, which runs Windows. It's also important to note that Syncthing does not follow symbolic links (to avoid infinite recursion issues), so the actual FreeTube settings folder needs to live in your Syncthing directory and be linked out - not the other way around. If your chosen syncing solution syncs directories in-place, then just point it at the correct locations.
Linux Flatpak settings directory: ~/.var/app/io.freetubeapp.FreeTube/config/FreeTube
This will synchronize all FreeTube settings, watch history and subscriptions across all linked computers.
Note that the operands are flipped between Linux/macOS ln and Windows mklink; *nix ln uses target link_name while Windows mklink uses link_name target (which is great fun when you do this relatively often and always mess up the order).
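To make the order concrete, here's the move-then-link pattern using throwaway demo paths - substitute the real FreeTube config directory from above and your actual Syncthing folder:

```shell
# Demo of the move-then-link pattern with throwaway paths. On *nix, ln -s
# takes the target first, then the link name.
mkdir -p /tmp/freetube-demo/config/FreeTube /tmp/freetube-demo/Sync
mv /tmp/freetube-demo/config/FreeTube /tmp/freetube-demo/Sync/FreeTube
ln -s /tmp/freetube-demo/Sync/FreeTube /tmp/freetube-demo/config/FreeTube
readlink /tmp/freetube-demo/config/FreeTube  # -> /tmp/freetube-demo/Sync/FreeTube

# On Windows, the operands are flipped (run in an elevated cmd):
#   mklink /D "link_name" "target"
```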
After some testing with this, I also recommend adding these files to the Syncthing ignore list on each synced client (.stignore in the root of the Syncthing directory):
These are session lock and cache files used by FreeTube that will cause sync errors if they are not ignored. Since they are ignored, though, I recommend always cleanly closing FreeTube on each computer after you're done using it - using it on more than one computer at the same time WILL cause sync conflicts.
I came across an interesting Bash issue today, as I was trying to restore a zstd-compressed CloneZilla Partclone image to a raw file in order to extract some data from it. For some reason, none of the solutions on the internet worked, and searching for the error message turned up no useful results. This was the command line I had constructed:
Notice the glob at the end of the only argument to zstdcat. This only gave me:
Partclone v0.3.17 http://partclone.org
Starting to restore image (-) to device (image.img)
This is not partclone image.
Partclone fail, please check /var/log/partclone.log !
partclone kept saying This is not partclone image no matter what I did. I did some sanity checking with the commands:
Wait... What? Notice how .ab is the first part and .aa is the last. Obviously, .aa needs to come first! The parts need to be zstdcated in order for partclone to recognize the file, as I assume there's a file signature/magic bytes at the start of the raw file contained within the archives.
My question was, why is my alphabetization seemingly broken? According to a web search, Bash globs will always return an ordered/alphabetized list. I was able to start the restore process by manually entering the list of files in order instead of globbing, but curiosity got the better of me and I had an inkling this was related to my system locale.
Observe how LC_COLLATE=nb_NO.UTF-8. I have my system language set to English, but most other locale settings set to Norwegian. In Norwegian, Aa/aa is a common substitution for Å/å, the last letter of the Norwegian alphabet. The sorting algorithm, in its infinite wisdom, seems to have decided that a file extension of *.aa should be sorted at the very end because of this, which breaks the argument list.
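You can reproduce the effect with throwaway files (the second comparison only shows the broken order if the nb_NO locale is actually generated on your system):

```shell
# Create files mimicking split-archive suffixes, then compare glob expansion
# order under C collation vs. the Norwegian locale.
mkdir -p /tmp/collate-demo && cd /tmp/collate-demo
touch part.aa part.ab part.ac
LC_COLLATE=C bash -c 'echo part.*'            # part.aa part.ab part.ac
LC_COLLATE=nb_NO.UTF-8 bash -c 'echo part.*'  # .aa may sort last, if the locale is installed
```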
To fix this, I set LC_COLLATE to C by issuing:
$ sudo localectl set-locale LC_COLLATE=C
This worked for about three seconds before KDE decided that my opinion is wrong, and promptly overwrote it, resurrecting nb_NO.UTF-8 like a digital zombie.
In KDE's System Settings > Region & Language section, there are a few locale related settings, but nothing about sorting. Presumably, it uses one of the other fields to assume the value of LC_COLLATE, and you know what they say about assuming.
To actually fix it, I added export LC_COLLATE="C" to my ~/.bashrc, which seems to work, and persists between terminal sessions.
It's been nearly a year since I set up a Mac Mini as my one-stop shop for all multimedia related tasks in the bedroom. It's generally worked well: streaming media, games, general web browsing and regular PC usage have all been smooth. You're rolling around in bed, playing video games, watching YouTube, or maybe the freshest episode of Smiling Friends. But then you fall asleep, and the fun abruptly stops.
"I'm gonna wreck it!" - Apple, probably
Sometimes, after I've fallen asleep while using it, the computer will still be on and fully active in the morning, soaking up all those precious jiggawatts, directly out of my wallet.
Why, I hear you ask? Maybe there is a mouse connected that causes jiggle when you shuffle around in your sleep?
No, only a USB remote and a keyboard + touchpad combo device (the Logitech K400 Plus, which is absolute fucking garbage[1]) are connected.
How can this be, when the system is set to sleep after 1 hour of inactivity? Surely, Thou Be Trippin'?
Well, as with many things Apple, it is a defect by design: Apple lets software overrule this user setting, without notifying the user, and without letting the user change this behaviour. This means that since EmulationStation - the launcher I'm using - is running in the background and is considered a game, it inhibits system sleep. I've looked all over, and there is no way in current-day macOS to let me, the user, owner, administrator and fucking Dom Top of this machine, ignore what the system thinks and just Go The Fuck To Sleep after 1 hour, no matter what. That's a huge defect, but dealing with huge defects is the bread and butter of a technical person trying to make an Apple product cooperate.
"I'm gonna fix it!" - Me
I'm not gonna spin a yarn and complain any more about Apple today, even if I could fill pages with that kind of content. I'm a Solutions™ kinda guy, not an Apple Sux™ kinda guy.
Here's how to fix it: using Hammerspoon, an application for automating macOS, along with a simple Lua script. I had never used Hammerspoon before this, but I've written a ton of Lua (for video games), so getting started was easy enough.
Install Hammerspoon, start it, give it the appropriate permissions, set it to run at boot, and edit your ~/.hammerspoon/init.lua to contain the following:
local function handlePowerEvent(event)
    if event == powerWatcher.systemDidWake or event == powerWatcher.screensDidWake then
        print("System woke up, restarting idle monitoring")
        idleTimer:start()
    elseif event == powerWatcher.systemWillSleep or event == powerWatcher.screensDidSleep then
        print("System going to sleep, suspending monitoring.")
        idleTimer:stop()
    end
end

local function checkSleep()
    local idleTime = hs.host.idleTime()
    if idleTime < sleepThreshold then
        print("Idle for " .. idleTime .. " secs")
    else
        print("Idle time exceeded threshold, going to sleep")
        hs.caffeinate.systemSleep()
    end
end

sleepThreshold = 3600 -- seconds
powerWatcher = hs.caffeinate.watcher
-- Keep a global reference to the watcher so it isn't garbage collected
sleepWatcher = powerWatcher.new(handlePowerEvent)
sleepWatcher:start()
idleTimer = hs.timer.new(60, checkSleep)
idleTimer:start()
Save the file, click the Hammerspoon icon on your menu bar, then "Reload Config", and boom, you're in business.
The script is extremely simple. It sets up a timer that runs every 60 seconds and checks how long the system has been idle. If the idle time is over the set threshold (3600 seconds here - 1 hour), it tells the computer to go to sleep immediately, come hell or high water. The script additionally monitors the sleep and wake events in order to stop and start the timer, so it doesn't run while the computer is asleep (yeah, that can happen). It also prints some log messages to the Hammerspoon console, which you can see by clicking the Hammerspoon icon and clicking "Console".
And that's all there is to it. Apple has a long way to go when it comes to user friendliness, and definitely needs to provide an option for "going to sleep no matter what the system or any piece of software has to say about the present state of things". I'm not going to be in charge of wording the toggle, though.
[1] The Logitech K400 Plus, also known as The Frustrationator 400, is e-waste that they charge money for. The touch pad comes with an infuriating acceleration that can't be turned off, which makes navigating the user interface of the computer an exercise that all but ensures that you go to bed angry. They told me I shouldn't do that, but here we are. It also looks dumb, feels cheap, and makes me nostalgic for the simpler times when I didn't own the Logitech K400 Plus.
I recently purchased a double pack of wireless microphones (specifically, these ones) to replace my ageing and faltering wired one. I am very happy with the audio quality, their ease of use and their range, but for my specific use case, the battery life (around 3 hours) leaves a little to be desired. I mainly use these while hanging out with a friend over the internet while sharing my screen, and we'll watch movies and TV shows together. That can last a couple of hours or even most of the day, and at some point the battery for the microphone will run out.
The first time this happened, it took me a little while to understand what was going on, as there was no beep or any indication from the microphone to signal its demise. Apparently the LED on the device will blink when it's low on battery, but that's impossible to see when it's clipped to my shirt right underneath my chin.
But, through the magic of buying two of them, the solution is easy — swap the depleted mic with a freshly charged one, then recharge the depleted mic while you discharge the fresh one. Still, the problem of not knowing when to do that persists.
The code
That's right, we're writing more Bash. Normally, you could just set a timer for your phone or your computer, but we're watching content that has sound whilst wearing headphones. What I wanted was a solution that could pause all playing media and tell me to swap my mic out, so that's exactly what I wrote:
mic-change.sh:
#!/usr/bin/env bash
matches=$(playerctl -l)
IFS=$'\n'
read -r -d '' -a matchedPlayers <<< "$matches"
numPlayers=${#matchedPlayers[@]}
((numPlayers--)) # To use zero based indexing
for i in $(seq 0 "$numPlayers"); do
    currentPlayer=${matchedPlayers["$i"]}
    status=$(playerctl -p "$currentPlayer" status)
    if [[ "$status" == "Playing" ]]; then
        playerctl -p "$currentPlayer" play-pause
        if [[ "$currentPlayer" == "kodi" ]]; then
            sleep 1
        fi
    fi
done
mplayer "/opt/sfx/mic-change.mp3"
How to use
Run the script when you turn on your microphone, and re-run it whenever you swap your mic. Examples:
Run a sleep then the script from the terminal: sleep 9000; /opt/scripts/mic-change.sh
Put the sleep at the start of the script (after the shebang) and run it
Use a timer app that can launch scripts when the timer finishes, like KClock
(9000 seconds is 2h 30m)
Script explanation
This script uses playerctl to pause all currently playing media players via the MPRIS D-Bus specification. Most media players for Linux support this natively. Kodi, my media player of choice, does not, but support can easily be added through an addon. The script goes through every currently registered media player and checks whether it is currently playing. If it is, it pauses it. If it is Kodi, it waits a second before doing anything else, since Kodi has a ~1 sec delay when pausing before audio stops playing (and an annoying corresponding ~1 sec delay before audio starts playing again once you unpause). Finally, it plays the sound defined in the last line of the script using mplayer, which in my case is a TTS voice named Onyx telling me to "Change your mic, motherfucker."
If you'd like to get the same microphones, which are very good despite the relatively short battery life, you can get them on AliExpress here.
The product links in this article are affiliate links. If you buy something using them, I may earn a small commission at no extra cost to you.
The past few months, I've spent some time setting up a room in my house to be a pretty sweet space dedicated to retro-gaming. This includes a vintage PC, a CRT TV, and a multitude of classic games consoles hooked up to four RCA/AV switches. This works well, and to play the console we want, we can just flip a couple of switches instead of having to unplug and shuffle around literally 8 different sets of RCA cables.
Thing is, when you're playing games, especially during couch coop, things tend to happen. Some of these things, depending on the types of games you play, can be utterly hilarious, and worth saving for posterity. With the last two generations of consoles I own - the PS5, PS4 and the Nintendo Switch - you can just press a single button on the controller to save recent gameplay as a file to the system storage, which is awesome. When you also enjoy playing on 20-30 year old game consoles, this part becomes a bit trickier.
Emergent technologies
Enter: The "Retrocorder" - the dedicated computer I've set up to continuously record all retro gaming gameplay as it happens.
For reference, this is what my setup looked like before Retrocorder:
(If I stick with 3-port switches, I can only get one more console before I have to add another level to the hierarchy 😳)
Since all the signals involved here are analog, you'd think there would be some degradation at each step. That may be true, but none of it is perceptible to me in terms of image or sound quality, and the cables between the switches are short, minimizing signal loss.
Hardware
If you want to follow along at home, here's a parts list:
Spare PC with a processor powerful enough to record and encode 480p/576p video in real time (using h264 this roughly means any CPU from the last 10 or so years)
The above links are affiliate links. If you buy something using them, I may earn a small commission at no extra cost to you.
Using these parts, we can split the RCA cables right before the TV, and lead one end to the TV, and the other into our capture card, like so:
As you can see from the chart, the splitters are inserted between the first switch and the TV. This way, no latency is introduced to the TV, so we can play our games normally without any changes to our gameplay. We can also swap game consoles mid-session and have no interruptions to the video recording, since the video is captured from the first switch.
Regarding the 3-6x RCA cables listed: 3 of them go from the first split of the RCA Y splitter to the TV. As for the other 3 - depending on how your splitters and AV2HDMI box look, you may or may not need them. I was able to plug the second split of the RCA Y splitter directly into the AV2HDMI box without using the extra 3 cables - your mileage may vary.
The reason I recommend the HDMI capture card over the RCA/AV capture card is that AV capture cards tend to have issues switching between NTSC and PAL video signals. I mix NTSC and PAL signals all the time, as I own consoles and games of both regions. My CRT TV supports both, which isn't a given for every CRT TV, so make sure yours does before you mix signals and end up with unusable video files. The HDMI route isn't immune to this issue either, but the AV2HDMI converter box seems to handle it a lot better, and appears to send the same signal to the capture card regardless of input. I also experienced horrible buzzing noises during recording when using my AV capture card, presumably due to interference, but that might be down to a faulty card that's been stored in a box for 10 years or more.
The AV2HDMI converter box works great, and will detect a change in signal automatically. In my testing and usage, I have found only one combination of consoles/game regions that will throw it for a loop - playing NTSC games on a PAL GameCube (via Swiss). This outputs a black and white, wavy signal from the AV2HDMI box, presumably because the GameCube is outputting an esoteric (or out-of-sync) signal that it doesn't quite understand. The TV shows the signal just fine, however. Playing PAL games on the same console works fine, so I've solved this issue by just using PAL games instead. The PS2, where I also mix NTSC and PAL games on a PAL console, does not exhibit this problem, and both regions display fine when captured.
Software
Now that all the hardware is plugged together, it's time to make this as seamless as possible using software.
For the operating system, I use my default go-to for projects such as these: The latest Kubuntu LTS. KDE is my favourite desktop environment, I'm very familiar with it at this point, and the Ubuntu LTS base provides a solid foundation.
The hardware of the PC is modest, as this is an ancient PC that I used as a daily driver 10 or so years ago, only upgraded with an SSD:
CPU: AMD FX-6100 Six-core @ 3.300GHz
GPU: AMD ATI Radeon HD 7870 GHz Edition
RAM: 8 GB
SSD: 128 GB
After installation, we can start installing some software, and setting some desired options.
obs-cli - Make sure you get the one from pschmitt - the other ones I've found are all defective or outdated
Now, set up your scene in OBS. At the bottom, add a new video source.
I had to pick YU12 to get the correct image for my setup, but yours may be different. The AV2HDMI converter box outputs a 720p or 1080p signal, configurable by a physical switch on the unit itself. I set mine to 720p, and the recording resolution to 576p, as no console in my setup outputs anything higher than that over AV anyway.
Now that that's out of the way, we just need to configure a few things about the recording environment. In OBS, go to File > Settings, then Output, and set Output Mode to Advanced. Click the Recording tab, then set your desired settings:
My changes from the default:
I've used /opt/retrocord as the folder to store all my recordings.
I've checked "Generate File Name without Space"
I've set the video encoder to x264 as it is light on CPU and produces files of a manageable size
I've enabled "Automatic File Splitting"
I've set "Split Time" to 5 min
The reason I've enabled automatic file splitting is twofold: to not end up with files of gargantuan sizes, and to let me delete older files in the directory when it becomes too large, without having to split files and review what I want to save.
Click OK. Next, choose Tools in the menu bar, then click WebSocket Server Settings. Check Enable WebSocket server, then click Show Connect Info. Note down the Server Port and the Server Password, as you'll be needing it soon. Close the connection info window and click OK in the WebSocket Server window.
Scripting scripts
Your recording environment should now be ready to use. The next step is to automate this, so you don't need to manually interact with OBS to make it do its thing. I accomplish this using two scripts: a login script and a logout script.
/opt/scripts/retrocorder-start.sh:
#!/usr/bin/env bash
obs --startrecording &
/opt/scripts/retrocorder-stop.sh:
#!/usr/bin/env bash
obs-cli --host localhost --port 4455 --password abcde12345 record stop
sleep 1
kill -s TERM "$(pidof obs)"
sync
In retrocorder-stop.sh, you need to change two values in the second line: the port and the password that you noted down earlier (4455 and abcde12345 in the example above). The reason we need obs-cli in the first place and the OBS WebSocket Server to be running, is that while OBS can let you send it an argument to start recording, it has no such argument to stop the recording in progress (for some god-awful reason).
As you might have noticed, this setup might end up with a full disk after enough usage, so we're gonna have to deal with that with another script. This time, we're gonna set up a cron job to be run once an hour, to prune the oldest videos in the recording directory, once a size threshold has been exceeded:
/opt/scripts/retrocorder-prune.sh:
#!/usr/bin/env bash
cd /opt/retrocord || exit
limitBytes=$((50*1024*1024*1024)) # 50 GiB
currentDirSize="$(du -bs | awk '{print $1}')"
while [[ "$currentDirSize" -gt "$limitBytes" ]]; do
    purgeFile=$(ls -rt *.mkv | head -n 1)
    rm "$purgeFile"
    currentDirSize="$(du -bs | awk '{print $1}')"
done
This script deletes the oldest files in a directory until the total directory size is less than 50 GiB. Change the second line of the script to point to your recording directory, and the third line to reflect how much disk space you'd like to allocate to recordings. I've set mine to 50 GiB, as that is plenty, and leaves lots of headroom on the 128 GB SSD.
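To sanity-check the pruning behaviour without touching real recordings, you can run the same loop against throwaway files with a small made-up limit:

```shell
# Demo: three fake 600 KiB "recordings" with ascending timestamps, pruned
# down to a made-up 1 MiB limit. The oldest files should be deleted first.
mkdir -p /tmp/prune-demo && cd /tmp/prune-demo
for i in 1 2 3; do
    truncate -s 600K "clip$i.mkv"
    touch -d "2024-01-0$i" "clip$i.mkv"  # give them distinct mtimes
done
limitBytes=$((1*1024*1024)) # 1 MiB demo threshold
currentDirSize="$(du -bs | awk '{print $1}')"
while [[ "$currentDirSize" -gt "$limitBytes" ]]; do
    purgeFile=$(ls -rt *.mkv | head -n 1)  # oldest file first
    rm "$purgeFile"
    currentDirSize="$(du -bs | awk '{print $1}')"
done
ls *.mkv   # only clip3.mkv should remain
```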
On my setup, 5 mins of recording equals around 100 MiB. This means that I can record > 42 hours of gameplay before the script starts purging - more than enough time to save any clips I want to keep!
Quick maffs
50 GiB * 1024 MiB per GiB = 51,200 MiB allocated
51,200 MiB / 100 MiB per 5 mins = 512 files of 100 MiB each
512 files * 5 minutes per file = 2560 minutes
2560 minutes ≈ 42.67 hours ≈ 1.78 days
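The same numbers, sanity-checked with shell arithmetic (note that 50 GiB is 51,200 MiB):

```shell
echo $(( 50 * 1024 ))       # MiB allocated: 51200
echo $(( 50 * 1024 / 100 )) # 100 MiB files that fit: 512
echo $(( 512 * 5 ))         # minutes of footage: 2560
echo $(( 2560 / 60 ))       # whole hours: 42
```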
Finally, put the script in your crontab. First, edit your crontab:
$ crontab -e
Then append a new line at the end of the file:
0 * * * * /opt/scripts/retrocorder-prune.sh
Save and close the file (if the editor is nano, press Ctrl+X, then Y, then Enter).
This will make the script run once an hour.
Setting settings
Note: These settings are for KDE Plasma. There are most likely equivalents for these settings in all other major DEs.
Once the above scripts are created, in Plasma, go to System Settings > Startup and Shutdown > Autostart. Click Add at the bottom of the window, then pick Add Login Script. Navigate to and pick /opt/scripts/retrocorder-start.sh. Now do the same for the logout script: Click Add > Add Logout Script, then navigate to and pick /opt/scripts/retrocorder-stop.sh. This will automatically start and stop recording when you log in and out of the computer.
To make this completely automatic, you'll also need to make sure you're automatically logged in to the computer. Also in Startup and Shutdown, pick Login Screen (SDDM), then click the button labeled Behaviour on the bottom left. Next, check the box next to Automatic log in, then choose your user and session on the same line. Click Apply.
Also in Startup and Shutdown, pick Desktop Session, then uncheck the box next to Logout Screen - Show. This makes sure when you request a shutdown, it is done immediately.
The next destination is still in System Settings, under Power Management this time (called Energy Saving in earlier versions of Plasma). Uncheck all checkboxes, then check Button events handling. In the drop-down box When power button pressed, pick Shut down.
Lastly, if you wish to access this computer over SSH, install and enable openssh-server (on Kubuntu, sudo apt install openssh-server will also enable and start the service).
This allows you to log in remotely via SSH. Additionally, it lets you use FISH to easily copy files from the Retrocorder to your main machine; SSH-enabled servers can be accessed in Dolphin by using the fish: URI scheme in the address bar:
fish://192.168.0.123/
You could also set up an NFS or SMB share, but that's out of scope for this post.
Headless chicken
Note: This section mostly applies to desktop PCs. If you're using a laptop, you're more or less done.
At this point you should be in a state where everything is automatic. Starting with the PC off, when you press the power button, the computer will boot, log you in, start OBS and start recording. Once an hour, your recording directory will be checked, and if it's too big, the oldest files will be deleted. OBS will keep recording until you hit the power button. Once you hit the power button, OBS will stop recording, close, the disks will sync, and the computer will turn off.
Wouldn't it be cool if you didn't need that pesky monitor, keyboard and mouse?
In most cases, if you don't have a display attached, the computer will not boot to a graphical environment. There are two ways to fix this: either by creating a dummy display (Xorg only), or by getting a physical dummy connector that fools your computer into thinking a display is attached. There are dummy connectors available for all sorts of display connectors, but this post will focus on the software solution, as it works great for me.
This following solution only works on Xorg. I don't know if Wayland has an equivalent method of making a dummy display, but I'm sure you could find something by searching the web.
If you're not on *ubuntu, xorg.conf might live somewhere else, such as /etc/X11/xorg.conf. In many cases, it doesn't exist and must be created - searching the web is your friend again here.
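For reference, here's a minimal sketch of a dummy 1280x720 xorg.conf; this assumes the xserver-xorg-video-dummy package is installed, and the modeline is a standard CVT-generated one, so treat it as a starting point rather than gospel:

```
Section "Device"
    Identifier "DummyDevice"
    Driver     "dummy"
    VideoRam   256000
EndSection

Section "Monitor"
    Identifier   "DummyMonitor"
    HorizSync    28.0-80.0
    VertRefresh  48.0-75.0
    Modeline "1280x720_60.00" 74.50 1280 1344 1472 1664 720 723 728 748 -hsync +vsync
EndSection

Section "Screen"
    Identifier   "DummyScreen"
    Device       "DummyDevice"
    Monitor      "DummyMonitor"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
        Modes "1280x720_60.00"
    EndSubSection
EndSection
```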
Save this file, which configures a dummy 1280x720 display. In my experience, increasing the resolution doesn't do anything; the dummy display seems to max out at that resolution.
Restart your computer, and recording should now start, even without a display attached! It will also enable you to remote control the desktop using NoMachine or equivalent remote control software.
You should now have a fully automated recording solution for your retro gaming setup! :) The only thing you need to do now is press the power button to turn the computer on and start recording, and press it again to shut down when you're done.
Here's a sample of gameplay recorded from my Retrocorder. It's by no means perfect, but it's more than good enough for my purposes - saving fun or memorable bits of gameplay. This clip is me playing the 2006 PS2 game Black:
YouTube truncates this to 480p, so if you want the source file, you can get it here!
Closing thoughts
The first time I tried booting my PC after setting up the dummy display, it would not start up, much to my annoyance. Turns out this motherboard is one of those that halts the boot process if a keyboard isn't connected - easily fixed by changing the BIOS settings not to halt on keyboard errors.