Using MoCA to Extend Ethernet Networks

I’ve made some pretty significant upgrades to my home network in the past few years. As the homelab bug started to bite again, I began transitioning back to hard-wired connections where possible. I swapped out my older wireless gear for UniFi equipment for better control, and started running cabling where I could.

My current living situation prevents me from making major modifications. I either had to configure the UniFi system to use wireless mesh and backhaul the traffic, or run cables all over the place somehow. Yuck.

I started looking at alternatives I remembered from a while ago, including Ethernet over Powerline and Ethernet over Coax. Both have progressed a lot further than I expected. Luckily for me, there were already Coax cable drops exactly where I wanted to run my equipment, so going that route seemed best.

I ended up selecting a solution from goCoax. For under $200, I was able to acquire three of the adapters (WF-803M), which was exactly what I needed. I ended up spending more time tracing and labeling the existing cables than I did getting the adapters up and running. Because I had access to all of the Coax drops, I was able to reconnect everything to suit my needs. Below is a diagram I worked up in Mermaid; any unlabeled connections are Ethernet.

The marketing site for goCoax boasts a data rate of up to 2.5 Gbps. I haven’t extensively tested this, nor will I; I have no need. I don’t see any blatant latency, and since I’m limited to gigabit for wired and obviously less for wireless, it doesn’t really matter.

The biggest downside I can think of for this solution is that you can’t use Power over Ethernet (PoE), for the fairly obvious reason that it’s not really Ethernet when you’re sending the bits over Coax. That’s fine, I have the needed PoE injectors and battery backup units, so it doesn’t affect my use case.

The only thing I haven’t extensively researched is the security of the goCoax devices themselves. Specifically, if I’m using these things and the onboard bonding firmware/software is out of date, is there an attack vector there? Although it’s a concern, it hasn’t been enough to make me dig deeper or disconnect them.

Jekyll and VS Code Remote Containers

I really, really enjoy using static site generators like Jekyll.

However, I generally hate getting them running after I pave over my desktop machine.

I started out with Jekyll back when GitHub first introduced GitHub Pages. It was great, but I was using Windows at the time and there were a lot of hoops to jump through. Even after moving to Linux, I found that Jekyll requires installing things I only use for the websites (e.g., I generally don’t do much Ruby development). Since I regularly pave over my desktop, that makes adding new content to the site a bit of a hurdle. A side effect was that it basically afforded me a great excuse not to add content.

I converted the site over to use Netlify a few years ago. This allows for a wide variety of tooling to generate the site, including alternatives such as Hugo, which I use at work. But there is a cost to switching away from Jekyll, specifically that the content needs to be adapted to the new templates and such. Now, the overall look and feel of the site isn’t anything breathtaking, but I don’t really wish to rip this particular bandage off right now.

So, as with so many things, containers to the rescue! Microsoft announced the addition of container development tooling to VS Code back in 2019. I played around with it a little bit, but didn’t quite grasp how it could make my life better. Wow, do I wish I had dug deeper! Lots of additional details about using remote containers in VS Code are available as well.

I did a little bit of searching, and came across a few kind souls who apparently had very similar thoughts before I did. The nerve! Specifically: Carlos Mendible, Steve Hocking, and Allison Thackston.

The final result ended up being fairly straightforward:

  • Ensure Docker is installed for Linux, or Docker Desktop for Windows/macOS. Microsoft details this in their install instructions.
  • Add the Remote Containers extension from the marketplace.
  • Add an appropriate Dockerfile and devcontainer.json file to the repo; see my commit here.
  • Add a task to the .vscode folder that defines how to start Jekyll. I actually did this first; you can see that commit as well. I also had to update .gitignore so that folder is no longer excluded. I don’t remember why I was ignoring it, but I definitely want it checked in.
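For reference, a minimal devcontainer.json along these lines might look like the sketch below. The base image, forwarded port, and extension list here are assumptions for illustration, not necessarily what my commit contains:

```json
// .devcontainer/devcontainer.json -- a minimal sketch; the Dockerfile
// reference, port, and extension are illustrative assumptions.
{
  "name": "Jekyll",
  "build": { "dockerfile": "Dockerfile" },
  "forwardPorts": [4000],
  "customizations": {
    "vscode": {
      "extensions": ["rebornix.ruby"]
    }
  }
}
```

With a file like this in place, the Remote Containers extension knows how to build and attach to the container when you open the folder.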
Using the extension  
Adding the extension gives you this cute little green panel in the lower right hand corner:
When you open a folder that the extension thinks can be containerized, it will prompt you:
After opening in the container, the green panel lets you know that you’re connected to a remote container:
Clicking the green panel gives you additional options in the command palette:
Running the configured tasks from the Command Palette:
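The task definition itself is short. A sketch of a .vscode/tasks.json that serves the site might look like the following; the label and command arguments are illustrative assumptions, and my actual commit may differ:

```json
// .vscode/tasks.json -- a sketch; label and arguments are illustrative.
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "Serve Jekyll site",
      "type": "shell",
      "command": "bundle exec jekyll serve --livereload",
      "isBackground": true,
      "problemMatcher": []
    }
  ]
}
```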

And, uh, that’s about it. I was able to clone the repo for this site to a fresh Linux box, and after a few minutes of restoration, everything just worked with zero monkeying around. I didn’t test on a Windows machine, but I have very little doubt that it does what it says on the tin.

I’ve only begun to scratch the surface on where I can plug this into my workflows, but I’m pretty excited to dig deeper into the possibilities that the remote containers feature of VS Code offers.

Paving Over Computers

I rebuild my various computers regularly, possibly even far too often. It’s actually pretty rare nowadays that I have a daily driver machine whose operating system was installed more than 6 months prior.

A decent portion of this comes from using Windows for so long. Although things appear to be getting better, it still seems things get bogged down after you’ve gone through a few of the bigger OS updates. I do want things to be updated, but I always find it annoying when the resulting system seems slower than ever. Not to mention that the updates always manage to come at the most inopportune times, because of course they do.

Since switching to Linux desktop usage full-time a few years ago, I’ve definitely done a bit of distro hopping. The advent of cloud services and other shifts in thinking makes it easier to think of machines as completely ephemeral. I first heard it described this way by Casey Liss on Accidental Tech Podcast a while back, and I thought it perfectly encapsulated the approach I’ve taken to my personal computers for at least the last 5 years or so. All of my data files are either backed up in multiple places, in the cloud (ugh, so cliche but also true), or easily recoverable.

Running my systems this way allows me to take the trusty “Nuke and Pave” approach. Any time I feel like a change, or if things aren’t running right, I can destroy the installed OS (nuke) and start over with a clean OS install (pave). Thinking on it, it seems to me that this tried-and-true methodology of systems recovery helped, in part, to set up the “Cattle, Not Pets” approach to cloud-based systems deployment.

When it comes time to reinstall the OS on a machine, I first determine if I have any files that need to be saved. If I do, I copy them to a temporary holding location. I also make note of those files so I can automate that process in the future. Then, I grab the installation media I’ve created so I can re-install from scratch.
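That note-and-copy step is easy to script. Here’s a minimal sketch; the file list below is hypothetical, so substitute whatever you actually find yourself saving:

```shell
#!/bin/sh
# Sketch: copy a known list of files to a holding spot before a pave.
# FILES is a hypothetical example list, not my actual backup set.
BACKUP_DIR="${HOME}/pave-backup"
FILES=".bashrc .gitconfig .ssh/config"

mkdir -p "$BACKUP_DIR"
for f in $FILES; do
    if [ -e "$HOME/$f" ]; then
        # Recreate the relative path so restores are unambiguous
        mkdir -p "$BACKUP_DIR/$(dirname "$f")"
        cp -a "$HOME/$f" "$BACKUP_DIR/$f"
    fi
done
echo "Saved to $BACKUP_DIR"
```

Each pave, the list grows a little, and the manual step shrinks a little.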

Since the advent of USB boot drives for reinstalls, I’ve always kept a few around, each one labeled with whatever OS I need. For example, there’s always a Windows 10 one available, as well as one with whatever Linux flavor I’m using on my desktop at the time. This way I always have the ability to recover from something catastrophic.

However, this hasn’t been without drawbacks. When I need to rebuild a server, I end up hunting around for an empty USB stick (or one I can temporarily press into service as such) and loading it. When I flip to a new OS, I need another USB drive. Sometimes it seems like I’ve been reformatting USB drives a lot. It was annoying, but just the cost of doing business.

Enter Ventoy. Paraphrased from the website:

Ventoy is an open source tool to create a bootable USB drive for image files.

With Ventoy, you don’t need to format the disk over and over, you just need to copy the image files to the USB drive and boot them directly.

Ohmygoodness why didn’t I find this sooner? It’s like dependency injection for disk images via USB boot drives. Load the ISO to the USB drive, select from the menu, and you’re golden? I tested it out, and it does what it says on the tin. Below is a screenshot of Ventoy running in Linux KVM, booted from my USB drive with the following command:

kvm -hdb /dev/sdb

The installation instructions on the site are fairly comprehensive, providing an EXE for Windows usage, a standard script for Linux, and even a neat little Web UI that can be used on Linux as well. Provide a standard USB disk, run the install, and copy the needed ISOs over to the drive.
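As a sketch of the Linux flow: the version number and device path below are placeholders, and the install wipes the target drive, so double-check the device before running anything.

```shell
# Extract a downloaded Ventoy release (version number is a placeholder)
tar xzf ventoy-1.0.99-linux.tar.gz
cd ventoy-1.0.99

# Install Ventoy onto the USB stick -- DESTRUCTIVE, verify /dev/sdX first!
sudo sh Ventoy2Disk.sh -i /dev/sdX

# Afterwards, just copy ISOs onto the data partition Ventoy created
# (mount point is an example)
cp ~/isos/*.iso /mnt/ventoy/
```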

So now, my new simplified plan is to have two identical USB drives with Ventoy installed. I’ll download the needed source ISOs, and load them into my homelab storage. Then I’ll load the ISOs to each of the Ventoy USB drives so I can use them. Remember the golden rule, kids:

Two is One, and One is None.

As an added bonus, in the case of something excessively horrible happening to my local machine, I can always fire up a desktop instance from the Live USB so I can still do my jobby job. Even more impressive is the ability to create and define persistent storage as well. Woot!

My Technology Interests

I started out with a Commodore VIC-20 when I was really young (8 or 9, maybe?), and I was hooked. I transcribed a kaleidoscope program from a “Learning to Program” book. It was awesome, but there was a fatal flaw: once it started, I couldn’t figure out how to stop it. I hadn’t saved it to the attached cassette tape drive (and I can’t even remember if that was an option, actually). I remember leaving the computer running for as long as I possibly could, until my parents finally told me to shut it off. I was sad and happy at the same time; sad I lost the program (that I typed, didn’t create), but happy to be able to start working on new stuff. Eventually as I got older, we moved up to the 64, and even got to work on friends’ 128 systems. We ended up getting a modem, and I subscribed to services like Q-Link and Prodigy over time. Q-Link eventually became AOL, but that wasn’t my fault.

Later on, my Dad brought home a CompuAdd PC build kit. This came complete with a 40 MB hard drive AND a matching tape backup drive. That was amazing. It also marked my shift to PCs, and I never looked back. I glommed on to DOS, the menuing systems, everything. I eventually branched out to BBS networks, FidoNet (1:280/5!), and the like. By either 1991 or 1992, I had found my way to the Internet, courtesy of a Johnson County Community College uplink.

I had access to Apple computers through some of the schools I went to. In middle school, we didn’t have a computer lab. Instead, they had a small fleet of Apple machines on metal rolling desks. On computer day, they’d roll in all the desks and plug them in. We didn’t have much latitude in what we could do during computer time. I mainly remember three things about Apple at the time:

  1. Using Logo to move the turtle around.
  2. Playing Oregon Trail.
  3. Realizing that since they plugged one desk into the next, if you positioned your hand so that you were touching both desks, you’d feel the electrical current flowing through your hand. That was neat!

For whatever reason, those remembrances caused me to always think of Apple as being primarily for education. Obviously, this changed over time, but I didn’t necessarily change my view. Plus, once Apple moved to hardware that you couldn’t upgrade, I just didn’t see a ton of value in spending time in that ecosystem.

The introduction of Windows was pretty awesome (for the time). It reminded me of GEOS on the Commodore 64, but much more powerful. By this time, I realized I wanted to do this for a career, but I still wasn’t sure how to translate that. I stumbled through a few things as I was trying to find my way. It was awesome; the technology just kept getting better and better. I never really stopped to think about it, but now I realize I always liked figuring out how things worked as well as creating programs of various complexity.

I had a little experience with Unix-based systems, but I wasn’t all that drawn to them. Since I had mainly worked on systems with a GUI, systems that were terminal based seemed powerful, yet limiting at the same time. I had respect for them, but after using a VAX at UWGB, I wondered when (and if) that technology would ever progress.

Enter Linux. Around that time, Linux was starting to take off. I was intrigued by it, but it was awfully finicky. Device compatibility was terribly difficult to determine, and most answers to problems seemed to be “write your own drivers, and you’ll be all set”. There was potential, but since it wasn’t as polished, I opted to build my career on Microsoft tech.

I made a career, launched a consulting company, and even ran my own servers at home. After a while, I burned out a little bit. The boys were young, and I decided to wrap things up. All of the Web 2.0 companies were taking off, offering hosted services; maybe I could just coast for a while, becoming a user for the most part.

Right around this time, the iPhone was released and looked like a game changer. I had no interest in the Mac platform, but man, could Apple make phones look attractive. I just wanted things to work; I didn’t want to have to mess around with them. That’s why I really enjoyed my iPhone (and still do).

Cloud was starting to heat up, which of course was the underpinning of the companies that let me take my foot off the gas. I found this fascinating from the start, but the companies I was working for weren’t even considering cloud at the time. This meant I needed to build my own personal projects, so I generated little things to keep me plugged in.

Finally, over the past two or three years, I have found myself getting interested in the nuts and bolts of technology again. I have been tinkering with a bunch of stuff, both on-prem and in the cloud, but I haven’t been documenting much here. Time to change that. I’m planning to look back across all of the topics I’ve raised here, as well as document some of the projects I’ve been working on.

Modern Society, Old Tech

Society Depends on Old Tech

So much of the technology infrastructure that modern society depends on was created long before security was a primary concern. In many cases, how technology is used now is far different from what was originally envisioned or designed. While this is great for today’s users, many of the things we have come to depend on have vital flaws that can be exploited. What’s worse, we know that these exploits exist, but only limited progress has been made on the work required to protect these critical systems.

When many of these technologies were first created or developed, the focus was aimed squarely at proving whether something could even work. Of course, this makes sense, as so much of what we take for granted today was the subject of science fiction not so long ago.

Development and Consumers

Companies burned large quantities of cash to unlock the potential of these products, so they pushed to get them to market and profitable as quickly as possible. Adding security features would have required additional time and money; it also might have slowed adoption of the technology or product. Sometimes, however, the spec is right but the implementation is fouled. I believe an excellent example of this can be seen in Bluetooth pairing: how many times have you seen a default PIN of 0000 on a device?

Governments, especially their military components, have a lot of incentive to keep things secure. However, so much of the technology that was originally built for military use is now being used by consumers in a way that they don’t even realize is insecure.

For consumers, technology generally succeeds only once people find it easy enough to use and adopt. This presents a challenge for those creating new products and technologies: finding the correct balance between ease of use and underlying security. Too many times we see security being downplayed to simplify usability, but this is a recipe for disaster. The best options are those that deliver security without sacrificing usability.

The Future

Nowadays, we are seeing shifts towards a world where security is considered during the beginning stages of projects, instead of as an afterthought. This is excellent news, but it doesn’t mean we’re in the clear yet. Any software developer knows that there are always bugs in code. Some of them can be catastrophic, but only when extremely specific scenarios occur. Even so, any product or technology that attempts to reduce security holes from the start is already in a better place than most.

Plans for updating and replacing existing technologies need to be created, and their implementation needs to begin quickly to allow time for consumers to adopt them. Some technologies that are impacted and need to be updated include GPS, cellular telephony, and the electrical grid. It is quite obvious that these systems are critical to everyday life in the modern era. In some cases, inroads are being made to secure them. In others, known vulnerabilities continue to exist without repair.

Much like our physical infrastructure, we must invest in and maintain these systems to ensure they continue to operate. We can choose to pay the cost now, which is admittedly quite expensive, or we will find ourselves with no choice but to pay even more in the future… possibly after something disastrous has happened.