My Technology Interests

I started out with a Commodore VIC-20 when I was really young (8 or 9, maybe?), and I was hooked. I transcribed a kaleidoscope program from a “Learning to Program” book. It was awesome, but there was a fatal flaw: once it started, I couldn’t figure out how to stop it. I hadn’t saved it to the attached cassette tape drive (and I can’t even remember if that was an option, actually). I remember leaving the computer running for as long as I possibly could, until my parents finally told me to shut it off. I was sad and happy at the same time: sad that I’d lost the program (one I’d typed, not created), but happy to be able to start working on new stuff. Eventually, as I got older, we moved up to the 64, and I even got to work on friends’ 128 systems. We ended up getting a modem, and over time I subscribed to services like Q-Link and Prodigy. Q-Link eventually became AOL, but that wasn’t my fault.

Later on, my Dad brought home a CompuAdd PC build kit. This came complete with a 40 MB hard drive AND a matching tape backup drive. That was amazing. It also marked the shift for me to PCs, and I never looked back. I glommed on to DOS, the menuing systems, everything, eventually branching out to BBS networks, FidoNet (1:280/5!), and the like. By either 1991 or 1992, I had found my way to the Internet, courtesy of a Johnson County Community College uplink.

I had access to Apple computers through some of the schools I went to. In middle school, we didn’t have a computer lab. Instead, they had a small fleet of Apple machines on metal rolling desks. On computer day, they’d roll out all the desks and plug them in. We didn’t have much latitude in what we could do during computer time. I mainly remember three things about Apple at the time:

  1. Using Logo to move the turtle around.
  2. Playing Oregon Trail.
  3. Realizing that since they plugged one desk into the next, if you positioned your hand so that you were touching both desks, you’d feel the electrical current flowing through your hand. That was neat!

For whatever reason, those memories always caused me to think of Apple as being used primarily for education. Obviously, this changed over time, but my view didn’t necessarily change with it. Plus, once Apple moved to hardware you couldn’t upgrade, I just didn’t see a ton of value in spending time in that ecosystem.

The introduction of Windows was pretty awesome (for the time). It reminded me of GEOS on the Commodore 64, but it was much more powerful. By this time, I realized I wanted to do this for a career, but I still wasn’t sure how to translate that. I stumbled through a few things as I tried to find my way. It was awesome; the technology just kept getting better and better. I never really stopped to think about it then, but now I realize I always liked figuring out how things worked as much as creating programs of various complexity.

I had a little experience with Unix-based systems, but I wasn’t all that drawn to them. Since I had mainly worked on systems with a GUI, systems that were terminal based seemed powerful, yet limiting at the same time. I had respect for them, but after using a VAX at UWGB, I wondered when (and if) that technology would ever progress.

Enter Linux. Around that time, Linux was starting to take off. I was intrigued by it, but it was awfully finicky. Device compatibility was terribly difficult to determine, and most answers to problems seemed to be “write your own drivers, and you’ll be all set”. There was potential, but since it wasn’t as polished, I opted to build my career on Microsoft tech.

I made a career of it, launched a consulting company, and even ran my own servers at home. After a while, I burned out a little bit. The boys were young, and I decided to wrap things up. All of the Web 2.0 companies were taking off, offering hosted services. Maybe I could just coast for a while, becoming a user for the most part.

Right around this time, the iPhone was released and looked like a game changer. I had no interest in the Mac platform, but man, could Apple make phones look attractive. I just wanted things to work; I didn’t want to have to mess around with things. That’s why I really enjoyed my iPhone (and still do).

Cloud was starting to heat up, which of course was the underpinning of the companies that let me take my foot off the gas. I found this fascinating from the start, but the companies I was working for weren’t even considering cloud at the time. This meant I needed to build my own personal projects, so I generated little things to keep me plugged in.

Finally, over the past two or three years, I have found myself getting interested in the nuts and bolts of technology again. I have been tinkering with a bunch of stuff, both on-prem and in the cloud, but I haven’t been documenting much here. Time to change that. I’m planning to look back across all of the topics I raised here, as well as document some of the projects I’ve been working on.

Modern Society, Old Tech

Society Depends on Old Tech

So much of the technology infrastructure that modern society depends on was created long before security was even considered a primary concern. In many cases, how technology is used now is far different from what was originally envisioned or designed. While this is great for today’s users, many of the things we have come to depend on have vital flaws that can be exploited. What’s worse, we know these exploits exist, yet only limited progress has been made on the work required to protect these critical systems.

When many of these technologies were first created or developed, the focus was aimed directly at proving whether something could even work. Of course, this makes sense, as so much of what we take for granted today was the subject of science fiction not so long ago.

Development and Consumers

Companies burned large quantities of cash to unlock the potential of these products, so they pushed to get them to market and profitable as quickly as possible. Adding security features would have required additional time and money; it also might have slowed adoption of the technology or product. Sometimes, however, the spec is right but the implementation is fouled. I believe an excellent example of this can be seen in Bluetooth pairing: how many times have you seen a default PIN of 0000 for a device?
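To make the point concrete, here’s a minimal sketch of the kind of check a pairing tool could run. The PIN values are real, widely documented defaults; the function name is my own invention for illustration, not any actual Bluetooth API.

```python
# Known factory-default PINs that ship on countless legacy Bluetooth devices.
COMMON_DEFAULT_PINS = {"0000", "1111", "1234", "9999"}

def is_weak_pin(pin: str) -> bool:
    """Flag a legacy pairing PIN that is a known default or trivially guessable."""
    return pin in COMMON_DEFAULT_PINS or len(set(pin)) == 1

print(is_weak_pin("0000"))  # True -- the spec allows better, but this ships anyway
print(is_weak_pin("8472"))  # False
```

A four-digit PIN is weak to begin with, but shipping the same default on every unit makes the implementation the problem, not the spec.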

Governments, especially their military components, have a lot of incentive to keep things secure. However, so much of the technology that was originally built for military use is now being used by consumers in a way that they don’t even realize is insecure.

For consumers, technology generally succeeds only once people find it easy enough to use and adopt it. This presents a challenge for those creating new products and technologies: finding the correct balance between ease of use and underlying security. Too many times we see security downplayed in order to simplify usability, but this is a recipe for disaster. The best options are those that make the secure choice the easy choice.

The Future

Nowadays, we are seeing shifts towards a world where security is considered during the beginning stages of projects, instead of as an afterthought. This is excellent news, but it doesn’t mean we’re in the clear yet. Any software developer knows that there are always bugs in code. Some of them can be catastrophic, but only when extremely specific scenarios occur. Even so, any product or technology that attempts to reduce security holes from the start is already in a better place than most.

Plans for updating and replacing existing technologies need to be created, and their implementation needs to begin quickly to allow time for consumers to adopt them. Some technologies that are impacted and need to be updated include GPS, cellular telephony, and the electrical grid. It is quite obvious that these systems are critical to everyday life in the modern era. In some cases, inroads are being made to secure them. In others, known vulnerabilities continue to exist without repair.

Much like our physical infrastructure, we must invest in and maintain these systems to ensure they will continue to operate. We can choose to pay the cost now, which is admittedly quite expensive, or we will find ourselves with no choice but to pay even more in the future… possibly after something disastrous has happened.

Doubling Down

Simply by adding an article now, I’ve doubled what I posted to this site last year.

I’ve been writing so many papers for school that I sometimes find it difficult to sit down and construct posts for this site. I’ve been able to square that by considering that nobody really reads this drivel anyway.

Based on what I’m currently thinking, the next year might very well get interesting. I’ve gone back to my interests in cloud computing and other nerdy things, and I’m trying to push through to finish earning my bachelor’s degree 15+ years too late. What for? Is it simply to check something off of a list? Or am I trying to make a statement? I’m honestly unsure.

Anyway, Toliver is pushing me to write more here, and simply by including his name in this post I’ve fanned his potentially narcissistic flames.

I’ve got a place where I can post words, and it’s time I start using it.

Amazon and AWS

There have been a lot of rumblings over the past year or two suggesting that Amazon needs to spin off Amazon Web Services (AWS). These rumblings have waxed and waned as various pundits attempt to prognosticate on what Amazon is going to do next.

I find it increasingly difficult to parse the suggestions given in articles when trying to match them to the reality of the world today.

Amazon will decide to split off AWS, because it makes a lot of sense and market forces will dictate it.

Scott Galloway, in “The business school prof who predicted Amazon would buy Whole Foods now says an AWS spinoff is inevitable”

While I completely understand that spin-offs are designed to unlock value in a company, I don’t think it’s nearly that simple in this case. There is far more to it than just making the numbers work as part of a business case. If you read the backstory as chronicled on TechCrunch, the idea behind AWS launched from integration nightmares the company had experienced. Amazingly, this was 15 years ago already, long before most developers were even thinking of integration at this scale.

The feasibility of a spin-off is not the question here. Of course AWS could be spun off into a separate corporate entity, and it absolutely would do quite well. The reason it seems unlikely to me is that Amazon would lose a lot of the flexibility it currently enjoys from operating the AWS platform.

The principals at AWS most assuredly look at the needs of all of their customers; just a casual glance at the list of product announcements from AWS re:Invent 2018 should provide ample evidence of their intent. Ensuring the ease of integration as well as a convenient ability to quickly harness a large number of tools continues to help businesses of all sizes.

But I’d be shocked to learn that Amazon proper doesn’t have the ability to put its thumb on the scale and push feature development within AWS development timelines. That alone suggests the value of retaining control of AWS hasn’t been fully considered, and that the cost of an AWS spin-off is higher than previously calculated. Considering how Jeff Bezos approaches, well, everything, it seems a stretch that he would consider relinquishing control of a well-run division of Amazon, when the company itself depends so heavily on it, unless an unexpected hardship were to occur.

The conclusion that Amazon and AWS aren’t co-dependent seems quite short-sighted when considering the technical aspects. Sometimes the math is only part of the equation, and further investigation is required.

In the end, arguably the most compelling reason to split up – and the most meaningful end goal that can’t be achieved in another way – is to avoid government regulation.

John Divine, U.S. News & World Report, “Should Amazon Split Up? 3 Pros and Cons”

The idea of avoiding government regulation is an interesting one, but I doubt it’s a concern the company will need to face in the near future. It seems much more plausible that an entity like Facebook would need to worry about this. The Department of Justice took on Microsoft with little to show for it; for all of the bluster of the day, Amazon seems well positioned to avoid the scrutiny of U.S. regulators.

Of increasing concern could be the European governments with the implementation of GDPR, but AWS is well ahead of this. It’s always possible that Amazon could run afoul of the GDPR privacy rules, but a company with resources like Amazon should have that well in hand. Furthermore, while I haven’t read GDPR in its entirety, it seems more likely that Amazon would be charged with hefty fines than find itself burdened by regulation it can’t keep up with.

Ponzi Schemes Need Docs Too

Documentation in code is extremely important, even if developers hate doing it. We’ve all been there, stuck debugging some confusing code that has zero code comments. It made sense to the dev at the time, but they’ve long since moved on and you’re stuck supporting that bad boy.

GitHub recently released the results of their Open Source Survey, which polled active users to better understand how they were using the software. One of the primary insights they learned?

"Documentation is highly valued, but often overlooked."

I just recently finished listening to Ponzi Supernova. This podcast provides some interesting backstory around the Bernie Madoff investment scandal that he confessed to in late 2008.

I won’t give away many details from the podcast, as it was very well done (and you should go listen to it immediately). But I couldn’t help but reflect on one very important point. In the podcast, it was suggested that the code comments from the application(s) used to generate the fraudulent transaction statements and other corroborating documents confirmed that the trading programs were specifically constructed to target or avoid ongoing audit activity.

That caught my attention, so I did some searching. Sure enough, I came across an article that detailed that the RPG programs included code comments specific enough to convince a non-technical jury that the application was indeed built and subsequently manipulated in a way to pass various audits:

So the pair resorted to what any normal RPG programmers would do: They added comments to the code.

"The programmers nicely commented the code, which made explaining some things easier, because they said this is what they’re doing," Diedrich says. The jury didn’t have to try to read the code. They said ‘This is how we’re generating these numbers.'"

Perez and O’Hara also added comments to ensure their audit preparation was up to snuff. "There were comments in the code that indicated, for this kind of audit we need this kind of information," Diedrich says. "The code would say, ‘We don’t need this for this audit,’ so they commented it out from the code at times, then they would put it back in for the other audits."

So, there you have it. Code comments are important to everyone, because you never know when you’ll be involved in a high-stakes Ponzi scheme designed to defraud people of over $65 billion.