Groundhog Day in Present Day

Toliver had me watch a YouTube video today that demonstrated how carburetors work through the construction of a transparent one. It was really good, but then YouTube recommended another video, a deleted scene from the movie Groundhog Day, that got me thinking.

If you haven’t seen the movie, you absolutely should. It holds up, although I admit that it’s been a little while since I last saw it. Bill Murray does an excellent job of playing a weatherman who ends up reliving the same day over and over again. (I bet the day is an easy guess.) After enjoying the concept at first, he runs out of things to do fairly quickly. The deleted scene plays it out fairly well, showing that he’s basically memorized all the things he could.

But the movie came out in 1993. The Internet wasn’t all that ubiquitous at the time, although I was on it, because of course I was. I’m a big nerd. How we live has changed quite a lot in the almost 30 years since.

I started thinking about what it would be like to reboot the movie, as seems to be so popular lately. Imagine the premise of the movie set in the present day. It didn’t take me all that long to conclude that the presence of the Internet would change everything, and could invalidate most of the premise of the movie itself.

Think about how much content you could consume now compared to back then. Streaming movies and shows, eBooks, online learning platforms. You could learn all there is to know about so many subjects. Sure, you might get bored, especially since you wouldn’t be able to travel all that far (because of the reset once you fell asleep), but you’d get to retain your knowledge. That’s a big plot point in the original movie: our hero gets wiser over time.

The biggest downside I can see is that you’d always be stuck at that same point in time. Just like in the movie, where the same games played out to the same scores, the current events would always be the same. This would greatly influence the algorithms that suggest content to you. You could find new stuff, sure, but you’d have to go actively seek it out. Since you don’t lose your memories, this might not be that big of a deal.

Luckily, I think this movie is quite safe from getting rebooted. There wouldn’t be much point given how things have changed in the world, and the original concept was so unique that it can stand on its own.

Also, RIP Harold Ramis.

Ford Maverick

Ford released the details on their new Maverick compact pickup truck today. Honestly, I barely remembered they were even working on it. I had basically dismissed it after being a little disappointed with the way the reintroduced Ranger came about.

This was a huge mistake on my part.

40 mpg with the hybrid engine, and a starting price under $20,000. Using the Ford vehicle builder, I spec’d out a fairly well-outfitted Lariat trim and only hit about $33,000. Truck utility, sedan comfort and features. Having the ability to haul things every once in a while, while keeping most of the properties of a sedan or crossover, is going to make a huge dent in the market. Ford has done a pretty good job of hitting the greatest common denominator of the options most people would need.

Built on the C2 platform and using tried-and-true hybrid and EcoBoost options. They’ve most likely killed the Hyundai Santa Cruz before that truck’s pricing even got announced. Ford has been trying to make their platforms reusable since the days of the World Car, but it seems they’ve finally landed on the right combination.

Sure, it’s not a full-sized truck, and many of the folks who have one of those probably don’t see the need for this. They’re hauling stuff regularly, or towing big trailers. But there’s a reason people liked Subaru Bajas and Chevy El Caminos. There’s a reason a lot of folks were excited about the Pontiac G8 Sport Truck rumors based on the Holden Commodore, up until GM killed Pontiac.

I hope to see an option added to select AWD with the hybrid engine. Not sure why this wasn’t an option out of the gate, considering AWD is all the hotness right now. There’s a reason why more and more manufacturers are going this direction, especially for areas that get real winter weather.

But more importantly, if they add a plug-in hybrid option in the future to allow you to do a few miles all electric? Crazypants. And sure, the PHEV option could add cost, but if you couple that to electric vehicle incentives? Most folks who could use this vehicle are only hauling stuff every once in a while. Zipping around town for basic errands like it’s a Prius Prime? Sounds pretty compelling to me.

For almost the last 20 years, I’ve preferred sedans. I was worried when Ford announced they were moving to only trucks and SUVs, but it looks like there was method to their madness. Between the Maverick, their recently introduced F-150 Lightning electric truck, and even the new Bronco, they’re making some impressive moves.

Using MoCA to Extend Ethernet Networks

I’ve made some pretty significant upgrades to my home network in the past few years. As the homelab bug started to bite again, I’ve begun transitioning back to using hard-wired connections where possible. I swapped out my older wireless equipment with UniFi equipment for better control, and started running cabling where I could.

My current living situation prevents me from making major modifications. I either had to configure the UniFi system to use wireless mesh and backhaul the traffic, or run cables all over the place somehow. Yuck.

I started looking at alternatives I remembered from a while ago, including Ethernet over Powerline and Ethernet over Coax. Each of these has progressed a lot further than I had expected. Luckily for me, there were already Coax cable drops exactly where I wanted to run my equipment, so going that route seemed best.

I ended up selecting a solution from goCoax. For under $200, I was able to acquire three of the adapters (WF-803M), which was exactly what I needed. I ended up spending more time tracing and labeling the existing cables than I did getting the adapters up and running. Because I had access to all of the Coax drops, I was able to reconnect everything to suit my needs. Below is a diagram I worked up in Mermaid; any unlabeled connections are Ethernet.
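
A simplified version of that layout, with placeholder device names standing in for my actual gear, looks something like this in Mermaid:

graph TD
    router["UniFi gateway"] --- moca1["goCoax adapter 1"]
    moca1 ---|Coax| moca2["goCoax adapter 2"]
    moca1 ---|Coax| moca3["goCoax adapter 3"]
    moca2 --- ap["Switch and access point"]
    moca3 --- lab["Homelab switch"]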

The marketing site for goCoax boasts a data rate of up to 2.5 Gbps. I haven’t extensively tested this, nor will I; I have no need. I don’t see any obvious latency, and since I’m limited to gigabit for wired (and obviously less for wireless), it doesn’t really matter.

The biggest downside I can think of for this solution is that you can’t use Power over Ethernet (PoE), for the fairly obvious reason that it’s not really Ethernet when you’re sending the bits over Coax. That’s fine; I have the needed PoE injectors and battery backup units, so it doesn’t affect my use case.

The only thing I haven’t extensively researched is the security of the goCoax devices themselves. Specifically, if I’m using these things and the onboard bonding firmware/software is out of date, is there an attack vector there? Although it’s a concern, it hasn’t been enough to make me dig deeper or disconnect them.

Jekyll and VS Code Remote Containers

I really, really enjoy using static web site generation systems like Jekyll.

However, I generally hate getting them running after I pave over my desktop machine.

I started out with Jekyll back when GitHub first introduced GitHub Pages. It was great, but I was using Windows at the time and there were a lot of hoops to jump through. Even after moving to Linux, I found that Jekyll requires installing things I only use for the websites (I generally don’t do much Ruby development otherwise). Since I regularly pave over my desktop, that makes adding new content to the site a bit of a hurdle. A side effect was that it basically afforded me a great excuse to not add content.

I converted the site over to use Netlify a few years ago. This allows for a wide variety of tooling to generate the site, including alternatives such as Hugo, which I use at work. But there is a cost to switching away from Jekyll, specifically that the content needs to be adapted to the new templates and such. The overall look and feel of the site isn’t anything breathtaking, but I don’t really wish to rip that particular bandage off right now.

So, as with so many things, containers to the rescue! Microsoft announced the addition of container development tooling to VS Code back in 2019. I played around with it a little bit at the time, but didn’t quite grasp how it could make my life better. Wow, do I wish I had dug deeper! Lots of additional details about using remote containers in VS Code are available as well.

I did a little bit of searching and came across a few kind souls who apparently had very similar thoughts before I did, too. The nerve! Specifically: Carlos Mendible, Steve Hocking, and Allison Thackston.

The final result ended up being fairly straightforward:

  • Ensure Docker is installed for Linux, or Docker Desktop for Windows/macOS. Microsoft details this in their install instructions.
  • Add the Remote Containers extension from the marketplace.
  • Add an appropriate Dockerfile and devcontainer.json file to the repo; see my commit here. (A rough sketch of what these files can look like follows this list.)
  • Add a task to the .vscode folder that defines how to start Jekyll. I actually did this first; you can see that commit as well. I also had to update .gitignore to stop excluding that folder. I don’t remember why I had it ignored, but I for sure want it checked in.
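
If you just want the gist without digging through those commits, a minimal sketch of the files looks something like the following. The Ruby base image, forwarded port, and serve flags are placeholders rather than copies of my actual setup; match them to your own Gemfile and config.

.devcontainer/devcontainer.json:

{
  // Build from the Dockerfile next to this file and forward Jekyll's default serve port
  "name": "Jekyll",
  "build": { "dockerfile": "Dockerfile" },
  "forwardPorts": [4000],
  "postCreateCommand": "bundle install"
}

.devcontainer/Dockerfile:

# Placeholder base image; pin whatever Ruby version your Gemfile expects
FROM ruby:3
RUN gem install bundler

.vscode/tasks.json:

{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "Serve Jekyll site",
      "type": "shell",
      "command": "bundle exec jekyll serve --host 0.0.0.0 --drafts",
      "problemMatcher": []
    }
  ]
}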

Using the extension

Adding the extension gives you a cute little green panel in the lower right-hand corner. When you open a folder that the extension thinks can be containerized, it will prompt you to reopen the folder in a container. After opening in the container, the green panel lets you know that you’re connected to a remote container, and clicking it gives you additional options in the Command Palette, including running the configured tasks.

And, uh, that’s about it. I was able to clone the repo for this site to a fresh Linux box, and after a few minutes of restoration, everything just worked with zero monkeying around. I didn’t test on a Windows machine, but I have very little doubt that it does what it says on the tin.
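
On a fresh box, the whole flow is roughly this (the repo URL is a placeholder):

# Clone the site and open it in VS Code
git clone https://github.com/example/my-jekyll-site.git
cd my-jekyll-site
code .

# VS Code offers to "Reopen in Container"; accept, let the image build,
# then run the Jekyll task from the Command Palette and hit the forwarded port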

I’ve only begun to scratch the surface on where I can plug this into my workflows, but I’m pretty excited to dig deeper into the possibilities that the remote containers feature of VS Code offers.

Paving Over Computers

I rebuild my various computers regularly, possibly even far too often. It’s actually pretty rare nowadays that I have a daily driver machine whose operating system was installed more than 6 months prior.

A decent portion of this comes from using Windows for so long. Although things appear to be getting better, it still seems like things get bogged down after you’ve gone through a few of the bigger OS updates. I do want things to stay updated, but I always find it annoying when the resulting system seems slower than ever. Not to mention that the updates always manage to come at the most inopportune times, because of course they do.

Since switching to Linux desktop usage full-time a few years ago, I’ve definitely done a bit of distro hopping. The advent of cloud services and other shifts in thinking make it easier to think of machines as completely ephemeral. I first heard it described this way by Casey Liss on Accidental Tech Podcast a while back, and I thought it perfectly encapsulated the approach I’ve taken to my personal computers for at least the last 5 years or so. All of my data files are either backed up in multiple places, in the cloud (ugh, so cliche but also true), or easily recoverable.

Running my systems this way allows me to take the trusty “Nuke and Pave” approach. Any time I feel like a change, or if things aren’t running right, I can destroy the installed OS (nuke) and start over with a clean install (pave). Thinking on it, it seems to me that this tried-and-true methodology of systems recovery helped, in part, to set the stage for the “Cattle, Not Pets” approach to cloud-based systems deployment.

When it comes time to reinstall the OS for a machine, I first determine if I have any files that need to be saved. If I do, I get them copied to a temporary holding location. I also make note of those files so I can automate that process in the future. Then, I grab the installation media I’ve created so I can re-install from scratch.
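
That “note” doesn’t need to be anything fancier than a one-liner; the paths and destination below are placeholders, not my actual layout:

# Copy the handful of directories worth keeping to a holding spot before the pave
rsync -avh ~/Documents ~/Projects ~/.ssh /mnt/nas/pave-backup/$(hostname)/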

Since the advent of USB boot drives for reinstalls, I’ve always kept a few around, each one labeled with whatever OS I need. For example, there’s always a Windows 10 drive available, as well as one with whatever Linux flavor I’m using on my desktop at the time. This way I always have the ability to recover from something catastrophic.

However, this hasn’t been without drawbacks. When I need to rebuild a server, I end up hunting around for an empty USB stick (or one I can temporarily press into service as such) and loading it. When I flip to a new OS, I need another USB drive. Sometimes it seems like I’ve been re-imaging USB drives constantly. It’s annoying, but just the cost of doing business.

Enter Ventoy. Paraphrased from the website:

Ventoy is an open source tool to create a bootable USB drive for image files.

With Ventoy, you don’t need to format the disk over and over, you just need to copy the image files to the USB drive and boot them directly.

Ohmygoodness why didn’t I find this sooner? It’s like dependency injection for disk images via USB boot drives. Load the ISO to the USB drive, select from the menu, and you’re golden? I tested it out, and it does what it says on the tin. Below is a screenshot of Ventoy running in Linux KVM, booted from my USB drive with the following command:

kvm -hdb /dev/sdb

The installation instructions on the site are fairly comprehensive, providing an EXE for Windows usage, a standard script for Linux, and even a neat little Web UI that can be used on Linux as well. Provide a standard USB disk, run the install, and copy the needed ISOs over to the drive.
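
On Linux, the whole process boils down to a few commands. The release version, device name, and mount point below are placeholders; double-check which device is actually the USB drive before running the install, because it wipes the disk.

# Unpack a Ventoy release and install it to the USB drive (destructive!)
tar xzf ventoy-1.0.xx-linux.tar.gz
cd ventoy-1.0.xx
sudo sh Ventoy2Disk.sh -i /dev/sdX

# Then copy ISOs onto the drive's data partition
cp ~/isos/*.iso /media/$USER/Ventoy/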

So now, my new simplified plan is to have two identical USB drives with Ventoy installed. I’ll download the needed source ISOs and keep them in my homelab storage, then copy them to each of the Ventoy drives. Remember the golden rule, kids:

Two is One, and One is None.

As an added bonus, in the case of something excessively horrible happening to my local machine, I can always fire up a desktop instance from the Live USB so I can still do my jobby job. Even more impressive is the ability to create and define persistent storage as well. Woot!